We are using a Solmeta GPS geotagger to record yaw, pitch and roll for oblique images. I have found numerous equations online for converting yaw, pitch and roll to omega, phi and kappa, but none have been successful in properly georeferencing the image locations. Has anyone used this tool and had luck with converting the values to create a frames file for ortho mapping? What is the most appropriate way to convert these values to omega, phi and kappa? Yaw is recorded as 0 = N, 90 = E, etc.; pitch as 0 = level, - = left tilt, + = right tilt; roll as 0 = level, + = up, - = down. Thanks in advance for your help.
Hi Stacie,
While searching for that topic turns up a lot of PhD theses and highly technical documents where I do not understand the math 🙂, there is an ArcGIS solution for getting the frames oriented: creating a mosaic dataset with the Frame Camera raster type. You need to prepare a frames table and a camera table (or add the fields of the camera table to the frames table) ... and there, yaw, pitch and roll are supported!
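As a rough sketch of the table preparation, a frames table can be assembled as a simple CSV before loading it into the mosaic dataset. The field names below follow the frames table schema commonly described for the Esri Frame Camera raster type (Raster, PerspectiveX/Y/Z, Omega, Phi, Kappa, CameraID); verify them against the documentation for your ArcGIS version, and note the values here are placeholder illustrations, not real data:

```python
import csv

# Hypothetical example row: image path, projected camera position,
# and orientation angles. All values are illustration placeholders.
frames = [
    {"Raster": r"C:\images\IMG_0001.jpg",
     "PerspectiveX": -16976222.0,   # projected X (e.g., Web Mercator)
     "PerspectiveY": 8100000.0,     # projected Y
     "PerspectiveZ": 300.0,         # camera height in meters
     "Omega": 0.0, "Phi": 0.0, "Kappa": 0.0,
     "CameraID": "Canon7D"},
]

fieldnames = ["Raster", "PerspectiveX", "PerspectiveY", "PerspectiveZ",
              "Omega", "Phi", "Kappa", "CameraID"]
with open("frames.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(frames)
```

The CSV can then be referenced when adding rasters to the mosaic dataset with the Frame Camera raster type.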
Hope this helps
Thanks for your response, Gunter! Yes, I've probably found those same highly technical articles! Part of what has been unclear is that the frames table calls for kappa, phi and omega, whereas the Solmeta returns heading, pitch and roll. From what I've been able to gather from the technical articles, these may not be identical, but I have not found anything in the Esri documentation defining these fields, so I don't know whether it is safe to assume they are. I've tried entering the roll, pitch and heading values into the kappa, phi and omega fields, but was not successful in creating a reasonable orthorectified surface from the images. I now have better data from the field to test with, but am not confident I am entering the correct values for kappa, phi and omega.
Stacie
We realize our documentation is not complete, and we are working on revising this documentation right now. I will send you a private message about this.
For general discussion, note that ArcGIS uses the standard photogrammetric angles kappa, phi and omega, measured relative to a projected coordinate system (e.g., UTM, State Plane). Even within those standards there are different definitions (see this classic paper, http://www.hochschule-bochum.de/fileadmin/media/fb_v/veroeffentlichungen/baeumker/baheimesoeepe.pdf, which includes two), and the order of rotations changes the rotation matrix. Unfortunately, the definitions for roll/pitch/yaw (RPY) are even less "standard".
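To make the rotation-order point concrete, here is a quick numpy check (the angles are arbitrary illustration values) showing that composing the same three elementary rotations in a different order yields a different matrix:

```python
import numpy as np

def rot_x(a):
    """Counterclockwise rotation about the X axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

omega, phi, kappa = np.radians([5.0, 10.0, 15.0])
# Same three elementary rotations, two different orders:
M1 = rot_x(omega) @ rot_y(phi) @ rot_z(kappa)
M2 = rot_z(kappa) @ rot_y(phi) @ rot_x(omega)
print(np.allclose(M1, M2))  # False: the order matters
```

This is why a published YPR→OPK formula only works if its rotation order and sign conventions match the instrument's.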
I looked very briefly at http://solmeta.com/pic/download/Geotagger%20Pro%202%20User%20Manual%20V1.0.pdf but I do not see definitions for how the angles are defined by Solmeta.
Your definitions above do not sound correct: you describe pitch as left/right tilt, but I assume this is just a typo and you meant roll. If you can double-check details such as which coordinate system is used (I assume lat/long, not projected coordinates?), the origins (I assume pitch = 0 when aimed at the horizon?), and the rotation directions, we should be able to advise. Note that in photogrammetry, omega, phi and kappa are all measured counterclockwise (i.e., by the right-hand rule); your description above indicates pitch and roll are measured counterclockwise, but yaw is measured clockwise.
Cody B.
Thank you for your response, Cody! Yes, I reversed the descriptions for pitch and roll; the correct descriptions are as follows: yaw is recorded as 0 = N, 90 = E, etc.; roll as 0 = level, - = left tilt, + = right tilt; pitch as 0 = level (at horizon), + = up, - = down. I converted our location from our test images to a projected coordinate system (Web Mercator) based on the documentation online, but our data are collected in the field as lat/long (WGS-84).
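For reference, the lat/long → Web Mercator step can be checked with the spherical Web Mercator forward formula (this is what EPSG:3857 computes; the coordinates below are hypothetical coastal-Alaska illustration values, not real survey data):

```python
import math

R = 6378137.0  # WGS-84 semi-major axis; spherical Web Mercator uses it as the sphere radius

def wgs84_to_web_mercator(lon_deg, lat_deg):
    """Forward spherical Web Mercator (EPSG:3857) projection."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Hypothetical camera position in coastal Alaska:
x, y = wgs84_to_web_mercator(-152.5, 58.6)
```

In practice you would let the ArcGIS Project tool (or a library such as pyproj) do the reprojection; the formula is only shown to make clear what the projected X/Y values represent.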
Stacie
Thanks. I believe you'll be able to get acceptable results by converting RPY to OPK, but I'm not yet certain. Toward that goal, this will only be a partial answer, but here are some of the details you'll need to consider:
This will be a work in progress - we'll need to double-check each assumption (and test as we go). Can you clarify your last sentence? You said "converted our location" - did you mean "...and orientation..." or did you mean XYZ location values only?
Also, you haven't mentioned details about the camera model - do you have parameters for the principal point, focal length, and any indication of lens distortion? Note that you can proceed with estimates (nominal focal length, principal point at the image center, zero distortion) just to be sure the other parameters are working as expected, but your accuracy will be limited. You'll also need a DEM, but you can use the ArcGIS Online world terrain for that.
Can you tell us more about your project and objectives? Are you capturing images from an airborne platform, lens aimed at nadir? Do you have parallel flightlines, overlapping images? And last: what version of ArcGIS are you using?
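One way to structure the RPY→OPK experiment in code is sketched below. This is not the definitive conversion: it follows the general approach of the Bäumker & Heimes paper linked above (compose a rotation matrix from the instrument angles, apply an axis swap between the navigation frame and the photogrammetric frame, then read omega/phi/kappa back out of the combined matrix), but the `C_swap` matrix and the rotation order are assumptions that depend on sensor mounting and must be validated against known test images:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def matrix_to_opk(M):
    """Extract (omega, phi, kappa) in degrees, assuming
    M = rot_z(kappa) @ rot_y(phi) @ rot_x(omega)."""
    phi = np.arcsin(-M[2, 0])
    omega = np.arctan2(M[2, 1], M[2, 2])
    kappa = np.arctan2(M[1, 0], M[0, 0])
    return np.degrees([omega, phi, kappa])

def ypr_to_opk(yaw_deg, pitch_deg, roll_deg):
    """Sketch of a yaw/pitch/roll -> omega/phi/kappa conversion.

    Sign conventions assumed (from this thread): yaw clockwise from
    north, pitch + = up from the horizon, roll + = right tilt.
    C_swap is a HYPOTHETICAL axis swap between the navigation frame
    and the photogrammetric mapping frame; it depends on how the
    geotagger is mounted and must be checked against test images.
    """
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    # Yaw is negated because it is measured clockwise, while the
    # elementary rotations above are counterclockwise (right-hand rule).
    R_body = rot_z(-y) @ rot_y(p) @ rot_x(r)
    C_swap = np.diag([1.0, -1.0, -1.0])  # assumption: 180-degree flip about X
    return matrix_to_opk(C_swap @ R_body)
```

The useful property to test first is that `matrix_to_opk` round-trips its own convention; once that holds, `C_swap` and the rotation order are the knobs to adjust until images with known orientations land correctly.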
Cody B.
Thanks for these suggestions, Cody! I'll work on testing them out tomorrow and reporting back. A few other follow-up comments:
--For "right tilt", I mean tilting the top of the camera towards the right, so the bottom right of the camera is lower than the top left of the camera, or rotating clockwise when looking through the viewfinder.
--For my last sentence, I meant projected instead of converted. We originally had lat/long for our location in WGS-84; I reprojected to Web Mercator.
--We are using a Canon 7D. I was able to extract the focal length from the images' EXIF data. I calculated A0, A1, A2, B0, B1 and B2 using the pixel-to-microns value for the camera and the equation I found here: http://proceedings.esri.com/library/userconf/devsummit16/papers/dev_int_237.pdf. I estimated the principal point as the center of the image; based on sample data I found online, I assumed the center is expressed as (0, 0).
--Our original test images were taken from a fixed location on a building, looking down at the ground. Our real data are images of harbor seal haulout locations in Alaska, collected from an aircraft. We use a Canon 7D with a Solmeta attached (for heading, pitch and roll). The camera is not fixed to the plane; rather, it is hand-held, and photos are only taken when harbor seal haulout locations are surveyed. There is a fair amount of overlap at some sites, while at other sites there may be only one image. Our final product will be a number of orthorectified surfaces (one for each haulout) or a single surface (with large gaps between haulout locations), depending on what works best for counting/digitizing harbor seals from the images. Currently, individual seals are counted and the location is just the geotagged location where the image was taken. We would like to improve our counting to get georeferenced locations of at least the image, if not of the individual seals within the haulout.
--I am currently using ArcGIS Pro 2.0.
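For the A0..B2 coefficients mentioned above, one common convention for the image-to-film affine (column/row in pixels to film coordinates in microns, principal point at the image center, film x to the right and film y up) can be sketched as follows. The Canon 7D image size and the ~4.3 µm pixel pitch are nominal published values, and the sign/origin conventions are assumptions to verify against the dev summit paper linked in the thread:

```python
# Image-to-film affine: x = A0 + A1*col + A2*row, y = B0 + B1*col + B2*row,
# film coordinates in microns, origin at the principal point (assumed to be
# the image center), film y pointing up.
width_px, height_px = 5184, 3456        # Canon 7D image dimensions
pixel_um = 22300.0 / width_px           # ~4.30 um/px (22.3 mm sensor width)

A1, A2 = pixel_um, 0.0                  # film x grows with column
B1, B2 = 0.0, -pixel_um                 # film y shrinks as row grows (rows count downward)
A0 = -pixel_um * (width_px - 1) / 2.0   # shift so the center pixel maps to x = 0
B0 = pixel_um * (height_px - 1) / 2.0   # ...and to y = 0

def image_to_film(col, row):
    """Apply the affine to a pixel position (col, row)."""
    return (A0 + A1 * col + A2 * row, B0 + B1 * col + B2 * row)

# Sanity check: the center pixel should land on the principal point (0, 0).
cx, cy = image_to_film((width_px - 1) / 2.0, (height_px - 1) / 2.0)
```

A quick sanity check like the one at the end (center pixel maps to (0, 0), top-left corner maps to negative x / positive y) is an easy way to confirm the signs before loading the values into the camera table.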