We supported the last mission using a combination of three different techniques:
1) two-dimensional photomapping
For fast documentation we used traditional photomapping, applying an evolution of the Corte Inferiore method based on the single software QuantumGIS 2.2.
Every evening we were able to finish the daily documentation. The picture below shows an example of a photomosaic composed of 11 images.
2) orthophoto and 3D model (top view) using MicMac
To improve the quality and accuracy of the documentation, we decided to take zenithal pictures and process them in the MicMac suite. The data was acquired keeping the camera vertical and shooting at every step while moving along a line. After the first line, we moved one step forward and repeated the operation, proceeding in the opposite direction. The picture below shows the point cloud (Apero result) with the position of every single shot (the black arrows represent the direction of movement).
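The acquisition pattern described above is a serpentine (boustrophedon) walk over the layer. As a quick illustration, here is a minimal Python sketch that generates such a grid of camera positions; the step size and grid dimensions are hypothetical examples, not measurements from the excavation.

```python
# Sketch of the serpentine acquisition path: walk a line shooting at every
# step, move one step forward, then walk back in the opposite direction.
# Step size and grid dimensions are made-up example values.

def serpentine_positions(n_lines, shots_per_line, step=0.5):
    """Return (x, y) camera positions in metres for a serpentine walk."""
    positions = []
    for line in range(n_lines):
        xs = range(shots_per_line)
        if line % 2 == 1:  # odd-numbered lines are walked in reverse
            xs = reversed(xs)
        for x in xs:
            positions.append((x * step, line * step))
    return positions

path = serpentine_positions(n_lines=3, shots_per_line=4)
print(path)
```

Plotting these positions reproduces the zig-zag pattern of the black arrows in the Apero point cloud.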
We processed the images using the ground-geometry approach of Malt. The results are an orthophoto and a DEM of the layer. The picture below shows the difference between a 2D photomosaic and the orthophoto.
The picture below shows the high resolution of the DEM (the white target is a square with a side of 3 cm).
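A rough way to sanity-check that resolution is to estimate the ground sampling distance (GSD) of a nadir shot from the camera geometry. The sketch below does this with hypothetical camera values (sensor width, image width, focal length, shooting height), not the actual camera used on site.

```python
# Rough ground-sampling-distance (GSD) estimate for a nadir (vertical) shot.
# All camera values below are hypothetical examples.

def ground_sampling_distance(sensor_width_mm, image_width_px, focal_mm, height_m):
    """Metres on the ground covered by one pixel: (sensor / focal) * height / width."""
    return (sensor_width_mm * height_m) / (focal_mm * image_width_px)

# e.g. a 23.5 mm wide sensor, 6000 px images, 35 mm lens, camera 1.5 m above the layer
gsd = ground_sampling_distance(23.5, 6000, 35.0, 1.5)
print(f"GSD: {gsd * 1000:.2f} mm/px")
print(f"A 3 cm target spans about {0.03 / gsd:.0f} px")
```

With numbers of this order the 3 cm target covers well over a hundred pixels, which is consistent with the detail visible in the DEM.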
Due to the long processing time, we were not able to finish all the elaborations during the excavation.
We want to thank Hansjörg Ragg (REDcatch GmbH) for his help in finding the best workflow in MicMac.
3) 3D model (360 degrees) using Python Photogrammetry Toolbox
The ground-geometry approach of MicMac is not the best choice when the layer has a complex shape, characterized by different horizontal and vertical faces. That is why we decided also to take pictures for Python Photogrammetry Toolbox (Bundler + CMVS + PMVS2). The picture below shows the difference between the MicMac point cloud (ground geometry) and the PPT point cloud.
The data acquisition is really simple: just take pictures from many points of view (picture below), paying attention to cover all the different faces of the layer (each face should appear in at least three images).
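The "at least three images per face" rule of thumb can be checked with a few lines of Python before leaving the trench; the image names and face labels below are made up for illustration.

```python
from collections import Counter

# Hypothetical record of which faces of the layer each photo covers.
coverage = {
    "IMG_01": ["top", "north"],
    "IMG_02": ["top", "north", "east"],
    "IMG_03": ["top", "east"],
    "IMG_04": ["top", "north", "east"],
}

counts = Counter(face for faces in coverage.values() for face in faces)
under_covered = [f for f in ("top", "north", "east") if counts[f] < 3]
print("Faces needing more shots:", under_covered)
```

An empty list means every face is seen by at least three images, which is what the reconstruction pipeline needs.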
The raw point clouds were processed in CloudCompare (cleaning and meshing) and MeshLab (meshing, texturing and referencing). The picture below shows the final result for a broken jar.
We were able to process all of this documentation during the excavation.
The three types of documentation are perfectly compatible with the schedule of an archaeological excavation. The best way to work is to acquire the PPT dataset first, then the 2D photomosaic, and finally the MicMac images (this has to be the last step because it requires walking over the layer!). The GCPs are the same for all three techniques.
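Sharing the GCPs also makes it easy to bring the unscaled SfM models into real-world units: the ratio between a distance measured on site between two GCPs and the same distance in model coordinates gives the scale factor. A minimal sketch, with entirely made-up coordinates:

```python
import math

# Hypothetical: two GCPs surveyed on site (metres) and the same two points
# picked in the unscaled SfM model (arbitrary model units).
gcp_a, gcp_b = (102.40, 35.10, 9.80), (103.10, 35.60, 9.85)
mod_a, mod_b = (0.120, 0.400, 1.000), (0.470, 0.650, 1.025)

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Scale factor that converts model units into metres.
scale = dist(gcp_a, gcp_b) / dist(mod_a, mod_b)
print(f"scale factor: {scale:.3f}")
```

This is only the scaling step; the full referencing (rotation and translation as well) is what MeshLab's alignment tools perform against the same GCPs.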