Thursday, 20 June 2013
Kinect - Infrared prospections
Despite what I wrote at the end of this post, it seems that the Kinect is not really the best option for documenting archaeological underground structures, or for any other situation where it is necessary to work in darkness.
I had already tested the hardware and the software (RGBDemo) at home, simulating the light conditions of an underground environment, and the result was that the Kinect managed to scan some parts of an object (a small table) in 3D, with great difficulty.
My hope was that the Kinect's infrared sensors would be enough to record object geometries even in darkness, which actually happened. The problem is that RGBDemo probably also needs RGB values (from the normal camera) to work properly. Without colour information the final 3D model is obviously black (as you can see below), but, and this is the real difficulty, the software seems to lose a fundamental parameter for tracking the object being documented, so the operation becomes too slow and, in most cases, it is impossible to record a whole scene. In other words, the documentation process often stops, after which it is necessary to start again, or simply to save several partial scans of the scene and reassemble them later.
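For context, the depth stream alone really is enough to recover geometry: each depth pixel can be back-projected into a 3D point with the standard pinhole camera model, no RGB needed (the difficulty described above is about *tracking* between frames, not about a single depth frame). Here is a minimal sketch of that back-projection; the intrinsic values are typical published figures for the Kinect depth camera, not calibrated ones, and the toy depth map is made up for illustration:

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics (illustrative, not calibrated)
FX, FY = 594.0, 591.0   # focal lengths in pixels
CX, CY = 339.5, 242.7   # principal point

def depth_to_points(depth_m):
    """Back-project an (H, W) depth map in metres to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

# Toy example: a flat "wall" 2 m away, seen by a 4x4 patch of pixels
depth = np.full((4, 4), 2.0)
points = depth_to_points(depth)
print(points.shape)                     # (16, 3)
```

Each frame therefore yields a valid (if colourless) cloud; it is stitching those frames together that RGBDemo apparently struggles with in the dark.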
However, before discarding the Kinect as an option for 3D documentation in darkness, I wanted to run one more experiment in a real archaeological excavation and, some weeks ago, I found the right test area: an ancient family tomb inside a medieval church.
As you can see in the movie below, the structure was partially damaged, with a small hole on the north side. The hole was big enough to insert the Kinect into the tomb, so I could try to get a fast 3D overview of the inside, partly to understand its real extent (which was not identifiable from the outside).
As I expected, recording the 3D characteristics of such a dark room was problematic, but I got all the information I needed to estimate the real perimeter. I guess RGBDemo worked better on this occasion because of the ray of light that entered the underground structure and illuminated a small spot on the ground, giving the software a good reference point from which to track all the surrounding areas.
Since the poor quality of the video makes it difficult to judge the (low) resolution of the 3D reconstruction, you can get a better idea from this other short clip, where the final point cloud is loaded in MeshLab.
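Incidentally, getting a point cloud like this into MeshLab is straightforward: MeshLab opens ASCII PLY files directly, so any set of XYZ points can be dumped to disk with a few lines of code. A minimal sketch (the file name and the toy points are made up for illustration; a real scan would pass in the exported cloud instead):

```python
def write_ply(path, points):
    """Write an iterable of (x, y, z) points as an ASCII PLY file for MeshLab."""
    points = list(points)
    with open(path, "w") as f:
        # PLY header: format, vertex count, and the three float properties
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

# Toy cloud: the 4 corners of a 1 m square
write_ply("scan.ply", [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
```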
This new test of the Kinect in a real archaeological excavation seems to confirm that this technology is not (yet?) ready for documentation in the complete absence of light. However, the most remarkable result of the experiment was the use of one of RGBDemo's tools, which displays the raw infrared input on a monitor. This option proved to be a good prospection instrument for exploring and monitoring the inside of the burial structure without any other invasive methodology. As you can see in the screenshot, it is possible to see the condition of the tomb's interior and to recognise some of the objects lying on the ground (e.g. wooden planks or human bones), but of course this could also have been done simply with a normal endoscope and some LED lights (as we did on this occasion).
However, here it is possible to compare what the Kinect's normal RGB sensor is able to "see" in darkness with what its infrared sensors can do:
This experiment was possible thanks to the support of Gianluca Fondriest, who helped me in every single step of the workflow.