Exploring Aerial Photogrammetry using Bundler and Meshlab

This weekend I started exploring aerial photogrammetry using Bundler and Meshlab. The first few Google searches I did while researching aerial photogrammetry turned up KAP (Kite Aerial Photography) enthusiasts who have used free photogrammetry tools like Microsoft PhotoSynth and Synth Export, Autodesk 123D Catch, or the open source program Bundler (SfM) to create surveys of archeological sites.

My goal was to use aerial photos my brother and I captured with our EasyStar model airplane to create DEMs (digital elevation models) of our local scenery. The aerial photos I used to test Bundler were taken at a 400-foot altitude with a model airplane flying at 35 kilometers per hour. I used a Canon PowerShot SD780IS camera with CHDK and the countdown intervalometer script to trigger the photos.

After a few hours of tinkering with software, reading manuals, and Google searching, I successfully created and textured my first DEM.

This is a screenshot in Autodesk Maya of the first photogrammetry model I created using Bundler and MeshLab.

Sample Bundler Aerial Images

A set of three vertical aerial photos taken with an EasyStar

I have included a ZIP archive with the three aerial photos from this tutorial. You can use these images to follow along with your copy of the Bundler photogrammetry software.

You can download the aerial photos here:

bundler_cliff_sample_images.zip (14 MB)

How I Created my First Digital Elevation Model

I spent a few hours this weekend compiling my own Mac OS X 10.7 Lion build of Bundler. After half a day of fighting with gfortran and Lion / Xcode 4.2 issues and modifying countless makefiles, the build finally worked. Then I noticed that the standard version of the SIFT keypoint detector (by David Lowe) only works on Linux and Windows.

While searching for a SIFT replacement I came across a precompiled version of Bundler for Mac OS X that also included a modified copy of SiftGPU. Thanks Ivan!

You can download the Mac OS X build of Bundler (SfM) from the thread on the Photogrammetry Forum.

The precompiled version of Bundler for Mac OS X has to be placed in your home directory at:

~/bundler/

You then need to edit your ~/.profile and add Bundler to your system path.

PATH=/opt/local/bin:/opt/local/sbin:~/bundler:~/bundler/bin:$PATH
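If you want to test the change before editing ~/.profile, the same path can be set for a single terminal session. This is a minimal sketch; the $HOME form is equivalent to the ~ shorthand:

```shell
# Add the Bundler directories to the search path for the current shell
# session; appending the PATH line shown above to ~/.profile makes the
# change permanent for new terminal windows.
export PATH="$HOME/bundler:$HOME/bundler/bin:$PATH"

# Confirm the bundler entries are now on the path:
echo "$PATH" | tr ':' '\n' | grep bundler
```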

Bundler is started from the terminal by navigating to the folder where your images are located and running the RunBundler.sh script. It is important that the EXIF data is present in the JPEG images so you don't have to manually enter the camera parameters.

After a little while RunBundler will finish running and tell you the conversion is complete.

After RunBundler processes the images you need to edit the file pmvs/prep_pmvs.sh to define the value BUNDLER_BIN_PATH for each image project:

BUNDLER_BIN_PATH=~/bundler/bin

It is important to use a plain text editor like TextWrangler so you don’t add any rich text formatting to the prep_pmvs.sh script. Also make sure there aren’t any spaces between the equals sign and the ~/bundler/bin part of the path or the script will report errors.
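If you would rather avoid a text editor entirely, the assignment can also be made with sed. The sketch below works on a stand-in file so you can see the effect safely; point the same command at pmvs/prep_pmvs.sh inside your image directory to edit the real script:

```shell
# Create a stand-in for pmvs/prep_pmvs.sh with an empty assignment:
printf 'BUNDLER_BIN_PATH=\n' > prep_pmvs_demo.sh

# Rewrite the assignment line; note there are no spaces around the '=':
sed 's|^BUNDLER_BIN_PATH=.*|BUNDLER_BIN_PATH=~/bundler/bin|' prep_pmvs_demo.sh > prep_pmvs_demo.sh.tmp
mv prep_pmvs_demo.sh.tmp prep_pmvs_demo.sh

# Show the result:
grep '^BUNDLER_BIN_PATH' prep_pmvs_demo.sh
```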

It only takes a moment to edit the prep_pmvs.sh script. You need to add the BUNDLER_BIN_PATH to the script file.

Next you need to run the following commands in your terminal from inside your image directory:

sh pmvs/prep_pmvs.sh

cmvs pmvs/

genOption pmvs/

Exporting a PLY file

If you have a single view cluster you can export it to a .ply file using:

pmvs2 pmvs/ option-0000

If you have a larger dataset with multiple view clusters you could use this command instead:

sh pmvs/pmvs.sh
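For repeat runs, the whole command sequence from this tutorial can be collected into a small script. This is only a sketch: it writes the script to a file, and it assumes RunBundler.sh, cmvs, genOption, and pmvs2 are all on your PATH when you later run it from inside an image directory:

```shell
# Save the full Bundler -> CMVS -> PMVS2 sequence as a reusable script.
cat > run_bundler_pipeline.sh <<'EOF'
#!/bin/sh
set -e                      # stop at the first failing step
RunBundler.sh               # match features and solve camera positions
sh pmvs/prep_pmvs.sh        # undistort images and generate vis.dat
cmvs pmvs/                  # cluster the views
genOption pmvs/             # write the option-* files
pmvs2 pmvs/ option-0000     # dense reconstruction -> pmvs/models/option-0000.ply
EOF
chmod +x run_bundler_pipeline.sh
```

Run it from inside your image folder with sh run_bundler_pipeline.sh; for a multi-cluster dataset swap the last line for sh pmvs/pmvs.sh.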

Here is a screenshot of the output from the prep_pmvs script.

When you run the command pmvs2 pmvs/ option-0000 you will get a PLY formatted 3D model named option-0000.ply as the output.

After the pmvs2 program finishes the final Bundler PLY point cloud file is output inside the image folder at the path:

pmvs/models/option-0000.ply
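A quick sanity check on the output is to read the ASCII header of the PLY file; the element vertex line reports how many points were recovered. The sketch below builds a tiny three-point PLY by hand so the commands can be tried anywhere; substitute pmvs/models/option-0000.ply to inspect your real output:

```shell
# Write a minimal ASCII PLY file with three vertices:
cat > demo.ply <<'EOF'
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
end_header
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
EOF

# This header line reports the point count of any PLY file:
grep '^element vertex' demo.ply
```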

Creating the Reconstructed Polygon Mesh Surface

I used Meshlab to convert the Bundler generated point cloud into a 3D triangulated polygon mesh. Then I exported the geometry in the Wavefront .OBJ mesh format.

Everything starts in MeshLab with a new empty project.

You need to import the Bundler PLY file using the Import Mesh... menu item.

The Bundler PLY file is located in your image directory under pmvs/models/option-0000.ply

The Bundler suite outputs a coloured point cloud from your aerial photos. In this tutorial I am going to use MeshLab to convert the point cloud into a triangulated polygonal surface.

MeshLab needs to calculate the point set normals before a surface can be reconstructed.

The default settings work well for computing the point set normals.

MeshLab makes it quick and easy to generate a surface reconstruction using the Poisson algorithm.

When generating the surface from the point set I used an Octree Depth of 10, a Solver Divide of 8, and the default values of 1 for both the Samples per Node and Surface Offsetting attributes.
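Those same settings can be captured in a MeshLab .mlx filter script for batch processing with meshlabserver (meshlabserver -i in.ply -o out.ply -s poisson.mlx). The parameter names below are my reading of what MeshLab 1.3.x writes out and are an assumption; rather than trusting this sketch, save a script from your own session with Filters > Show current filter script and compare:

```xml
<!DOCTYPE FilterScript>
<FilterScript>
  <!-- Poisson surface reconstruction with the values used in this tutorial;
       parameter names are unverified assumptions for MeshLab 1.3.x -->
  <filter name="Surface Reconstruction: Poisson">
    <Param type="RichInt" name="OctDepth" value="10"/>
    <Param type="RichInt" name="SolverDivide" value="8"/>
    <Param type="RichFloat" name="SamplesPerNode" value="1"/>
    <Param type="RichFloat" name="Offset" value="1"/>
  </filter>
</FilterScript>
```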

The reconstructed surface is visible in white over the point set data.

This is the reconstructed surface viewed with an electron microscope surface shader.

You can copy the colors from the point set onto the vertices of the reconstructed surface using the Vertex Attribute Transfer feature.

The key to using the Vertex Attribute Transfer dialogue is to enable the Transfer Color checkbox. Then set the Source Mesh to the imported PLY file and the Target Mesh to the MeshLab reconstructed Poisson mesh.

This is the reconstructed polygon mesh with the transferred vertex color data applied.

MeshLab has quite a few output formats to choose from. I selected the Wavefront .OBJ geometry format for this cliff reconstruction example.

This is a screenshot in Autodesk Maya of the first photogrammetry model I created using Bundler and MeshLab.

Meshlab and WebGL

When I started experimenting with a Bundler aerial imaging workflow I did several tests with WebGL and ASCII .ply files using the open source XB PointStream library.

Here is a link to the first interactive aerial mosaic I visualized with WebGL:
http://www.andrewhazelden.com/projects/bundler/

The point cloud was generated from an 800 meter long strip of aerial images. If you want to play with the point cloud data in MeshLab you can download the ASCII .ply file here:
http://www.andrewhazelden.com/projects/bundler/bundler.ply (15 MB).

Changing MeshLab Point Sizes

When you are working with point cloud data sets in MeshLab it can be helpful to scale the size of the point cloud dots to make the scene easier to work with. This is most evident when you change from editing point clouds of scanned objects to viewing point clouds of large landscapes.

This is a view of the PLY formatted bundler.ply point cloud showing the ocean shoreline in West Dover.

Open the MeshLab Options window by selecting the Tools > Options... menu item. On Mac OS X the controls are under the Meshlab > Preferences menu item.

Scroll down until you see the Appearance:: pointSize option. Double click on the value field to change the size of the dot used for each of the displayed points in your MeshLab project.

The pointSmooth option can be used to render your point cloud dots as either a square shape or a circular dot shape. With pointSmooth set to OFF you get a square point cloud dot shape in the viewport.

When you double click on the pointSmooth value in the options window, MeshLab will open a simple dialogue with a single checkbox control. Turning the checkbox ON or OFF will enable or disable point smoothing.

With pointSmooth turned ON you will see circular dots for each of the points in your point cloud data set.

Deleting Points in Meshlab

When I’m working with point cloud data sets in MeshLab I find it a lot easier to delete the unwanted points from my project before I create a polygon surface reconstruction. Cleaning the point cloud data up now will save a lot of time later because you won’t have to spend time trying to fix the polygon mesh topology.

The Select Vertices tool is used to choose a group of points in your MeshLab project. The tool works with a rectangular shaped selection region.

The next step is to box select a group of points you want to remove from your MeshLab project.

The Delete Vertices button will remove the unwanted points from your project. It is a good idea to save a version of your MeshLab project file before you start deleting parts of your point cloud. This is especially important because MeshLab lacks an undo command.

This is a screenshot of the MeshLab scene after I deleted a group of points from the point cloud.

 

25 comments
  1. Andrew, awesome work! This is really cool!! I dig people who use the command line. 😀 I have a half broken implementation of osmBundler going on DM. Would you mind if I follow your workflow and try and incorporate this when I get some time? Thanks for posting!

  2. Hi JP.

    Thanks for the feedback. Feel free to incorporate the MeshLab surface reconstruction workflow from the tutorial in your pipeline.

    Regards,
    Andrew

  3. I found your work was fantastic!! Although I tried your workflow using Kermit’s sample images to get a final PLY point cloud file in pmvs/models folder, no PLY file was generated. I appreciate very much if you could point out what was wrong in my steps. My steps were below:

    kawata-MacBookPro:~ YKawata$ cd /Users/YKawata/demo-images
    kawata-MacBookPro:demo-images YKawata$ $PATH
    -bash: /opt/local/bin:/opt/local/sbin:/Users/YKawata/bundler:/Users/YKawata/bundler/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/Users/YKawata/ImageMagick-6.7.5/bin:/opt/local/bin:/opt/local/sbin:/Users/YKawata/bundler:/Users/YKawata/bundler/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:
    kawata-MacBookPro:demo-images YKawata$ RunBundler.sh
    mkdir: ./prepare: File exists
    0
    Image list is list_tmp.txt
    [Extracting exif tags from image ./00000000.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000001.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000002.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000003.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000004.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000005.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000006.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000007.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Extracting exif tags from image ./00000008.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    [Resolution = 640 x 480]
    [Found 9 good images]
    [- Matching keypoints (this can take a while) -]
    [BundlerMatcher]
    [Sift Feature extracted]
    [Sift Key files saved]
    [Sift Feature matched]
    mkdir: bundle: File exists
    [- Running Bundler -]
    [- Done -]
    [- Running Bundle2PMVS -]
    [ReadBundleFile] Bundle version: 0.300
    [ReadBundleFile] Reading 9 images and 44 points…
    [GetJPEGDimensions] File ./00000000.jpg: ( 640 , 480 )
    [GetJPEGDimensions] File ./00000001.jpg: ( 640 , 480 )
    [GetJPEGDimensions] File ./00000005.jpg: ( 640 , 480 )

    @@ Conversion complete, execute “sh pmvs/prep_pmvs.sh” to finalize
    @@ (you will first need to edit prep_pmvs.sh to specify your bundler path,
    @@ so that the script knows where to find your
    @@ RadialUndistort and Bundle2Vis binaries)
    kawata-MacBookPro:demo-images YKawata$ sh pmvs/prep_pmvs.sh
    [ReadBundleFile] Bundle version: 0.300
    [ReadBundleFile] Reading 9 images and 44 points…
    Undistorting image ./00000000.jpg
    Undistorting image ./00000001.jpg
    Undistorting image ./00000005.jpg
    [WriteBundleFile] Writing 3 images and 44 points…
    Running Bundle2Vis to generate vis.dat

    [ReadBundleFile] Bundle version: 0.300
    [ReadBundleFile] Reading 3 images and 44 points…
    Num visible: 112
    Num cameras: 3
    @@ Sample command for running pmvs:
    pmvs2 pmvs/ pmvs_options.txt
    – or –
    use Dr. Yasutaka Furukawa’s view clustering algorithm to generate a set of options files.
    The clustering software is available at http://grail.cs.washington.edu/software/cmvs
    kawata-MacBookPro:demo-images YKawata$ cmvs pmvs/
    Reading bundle…3 cameras — 44 points in bundle file
    ***********
    3 cameras — 44 points
    Reading images: ***
    Set widths/heights…done 0 secs
    done 0 secs
    slimNeighborsSetLinks…done 0 secs
    mergeSFM…************resetPoints…done
    Rep counts: 44 -> 5 0 secs
    setScoreThresholds…done 0 secs
    sRemoveImages… ***
    Kept: 0 1 2

    Removed:
    sRemoveImages: 3 -> 3 0 secs
    slimNeighborsSetLinks…done 0 secs
    Cluster sizes:
    3
    Adding images:
    0
    Image nums: 3 -> 3 -> 3
    Divide:
    done 0 secs
    2 images in vis on the average
    kawata-MacBookPro:demo-images YKawata$ genOption pmvs/
    kawata-MacBookPro:demo-images YKawata$ pmvs2 pmvs/option-0000
    Usage: pmvs2 prefix option_file

    ————————————————–
    level 1 csize 2
    threshold 0.7 wsize 7
    minImageNum 3 CPU 4
    useVisData 0 sequence -1
    quad 2.5 maxAngle 10.0
    ————————————————–
    2 ways to specify targetting images
    timages 5 1 3 5 7 9 (enumeration)
    -1 0 24 (range specification)
    ————————————————–
    4 ways to specify other images
    oimages 5 0 2 4 6 8 (enumeration)
    -1 24 48 (range specification)

  4. Hello yoshiyuki,
    The problem you are facing starts right at the bundler stage :
    ————————————————–
    .
    .
    [Extracting exif tags from image ./00000000.jpg]
    [Focal length = 0.000mm]
    [Couldn’t find CCD width for camera ]
    [CCD width = 0.000mm]
    .
    .

    Bundler uses the focal length information from the EXIF tags of the images / from a perl file (bundler-v0.4-source/bin/extract_focal.pl) for doing point cloud generation. If the focal length information is missing, your bundler outputs will be very sparse.

    I suggest you manually find out and enter the focal length information into the perl file (since your EXIF tags seem to miss it) in either of the two ways:
    i) In case you can find focal length information about your camera online / in the camera manual.
    ii) Perform camera calibration (http://robots.stanford.edu/cs223b04/JeanYvesCalib/index.html#links).

  5. Hi Yoshiyuki.

    Thanks for your comments. I hope you have been able to resolve your bundler issues with the help of Pratyush’s tip.

    I have included a ZIP archive with the three aerial photos from this tutorial. You can use these images to follow along with your copy of the Bundler photogrammetry software.

    You can download the aerial photos here:
    bundler_cliff_sample_images.zip (14 MB)

    Regards,
    Andrew Hazelden

  6. Hi Pratyush and Andrew.
    Thanks for your replies and messages.
    I retried the steps with cautions. I got a final PLY point cloud file in pmvs/models folder. Both cliff-image PLY and kermit-image PLY were generated!!
    Many thanks for your concerns and Best Regards,
    Yoshiyuki

  7. Cool!

    I want to share my 3D with Skywalker in Thailand but I create 3D with 123D Catch. You can see in YouTube

    Raweeharn

  8. Thanks for the great tutorial! There is one minor glitch, though, which took me quite a while to find out: With a larger dataset, “pmvs2 pmvs/ option-0000” will give you only ONE of the view clusters. Instead, you have to issue the command “sh pmvs/pmvs.sh”, so that all of the clusters are processed (of course, you could also process them one by one).

    Still, many thanks for the tutorial again!

    Best regards,

    Nils

  9. Hi Nils.

    Thanks for the tip on using the “sh pmvs/pmvs.sh” command with Bundler. I will have to try that out on my next photogrammetry experiment.

    Regards,
    Andrew Hazelden

  10. Hi Homme.

    Thanks for the link to the MicMac tools. This is the first time I have heard of the software.

    In the latest version of MeshLab the release notes indicate that a photo texture projection mode has been included. I haven’t had a chance to explore it yet but it should be interesting.

    MeshLab 1.3.2 Release Notes:
    https://sourceforge.net/apps/mediawiki/meshlab/index.php?title=Release_Notes_1.3.2

    The feature is listed as:
    Color Projection: Raster colors can be now perspective-projected onto a mesh.

  11. Cool! I’ll have to check out the new MeshLab functionality.

    I haven’t investigated MicMac much, but after checking out the SVN repository and building everything I managed to get the Draix-Drone-Village example working (http://www.micmac.ign.fr/svn/micmac_data/trunk/ExempleDoc/Draix-Drone-Village/) which seems to be the best example for UAV based processing. It all looks good, with the potential to create fully automated orthorectified photos and DEMs.

    With this kind of functionality it would be possible to kick start the OpenAerialMap project again, fed with data from amateur UAV flights.

  12. First of all, thank you so much for this amazing tutorial. I just wanted to contribute by saying that I ran into some problems (Segmentation Exceptions) with an old white MacBook when running RunBundler.sh; everything was sorted by using newer hardware (in the form of my girlfriend's MacBook Pro).

    But my question is, are there any good free alternatives to Maya for doing the mapping? I cannot pay that price!

    Thanks!

  13. Hi Luca.

    As an alternative to mapping textures with Maya you can now use Meshlab with the new “raster camera” feature. The raster camera tools allow you to project high resolution photo textures onto polygon meshes.

    Mr P.’s Meshlab Tutorials page on YouTube has several tutorials covering Raster layers and Color texture projection with the raster camera.

    Here are a few links to get you started:

    MeshLab Basics: Raster Layers

    Set Raster Cameras

    Raster Layers: Parameterization and Texturing from rasters

    Raster Layers: Set Raster Camera

    Color Projection: Mutual Information, Basic

  14. Hi. Today I updated this blog post with a new MeshLab tip on changing the size and shape of the point cloud dots used in the viewport. This is helpful when you switch from viewing sparse landscape scenes to a project with a high resolution scan of an object.

  15. Thanks to your tutorial I had a go myself with some testdata from somewhere. The latest version of VisualSfM also exports camera positions, so Meshlab can reproject the image data back onto the mesh in any resolution you prefer.

    I made a tutorial myself using this technique:

  16. Hi Andrew,
    I am a beginner and I want to generate a DEM from my images of 3D model in lab. How do I run bundler on Windows? Do I need to calibrate the camera before taking photos?

  17. Hi Sirkhail.

    When you are starting out with photogrammetry you should start with a simple set of images that are properly exposed, in-focus, and have enough texture detail to make the photogrammetry process easy. As long as the original EXIF data is present, Bundler should be able to process your images without any extra image calibration.

    Have you checked out the Everything Related to Photogrammetry forums?

    Regards,
    Andrew

  18. Hi Andrew,
    I have used the pipeline for surface reconstruction, but my MeshLab crashes. I have MeshLab 1.3.1, so any help in this regard would be appreciated.

    Thanks in advance

Comments are closed.