As mentioned in previous articles, we are working on the Taung Project, which involves the reconstruction of a 2.5-million-year-old fossil: not just reconstructing the face with soft tissue, but reconstructing the entire skull as well.
The most important aspect of this project is the technology used, because all the results will be shared with the community. And by ‘community’ we mean everyone.
This article describes the techniques used to recover the missing parts of the Taung child's skull.
It's important to state at this point that all members of the Arc-Team work hard in their own professions, so at times one of us will publish an article before another, whenever there is free time to share our knowledge. Having said that, this article was written during someone's free time, in the hope that it might be useful to others who read this blog. Below you'll find a description of how the skull was scanned in 3D.
Describing the process
The skull was scanned in great detail by Luca Bezzi, and the model was prepared for import into Blender.
Unfortunately (or fortunately, for the nerds), a significant part of the skull was missing, as indicated by the purple lines. For a complete reconstruction, the missing parts needed to be recovered.
How can this be solved?
One option was to use CT scans of primates to reconstruct the missing parts of the mandible and other areas. Naturally, the CT scans chosen were those of infant and juvenile primates, since the Taung child was itself a juvenile.
You can find the tomographies at this link; they can be used for research purposes. To download the files, you'll have to create an account.
But besides the difference in overall size, Australopithecus did not have such large canines.
…so the primate mesh had to be edited and made compatible with the Taung cranium.
Why didn't we use the Decimate tool? Because the computer (a Core i5) often crashed when it was used.
Why didn’t we make a manual reconstruction of the mesh? To avoid a subjective reconstruction.
How was this solved?
A 'fake' tomography had to be created in order to reconstruct a clean mesh in InVesalius. How? We know that when you illuminate an object, its surface reflects the light, while the inside remains totally dark, since no light reaches it.
Blender lets you set the camera's clipping distance, so you can configure the camera so that its near clipping plane "cuts" through space and reveals the inside of objects. By moving this plane step by step through the model and rendering each step, you get a sequence of cross-section images, much like CT slices.
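To make this concrete, here is a minimal sketch of that slicing idea, meant to be run from Blender's Python console. The slice count, sweep depth, and output path are assumed values for illustration, not the exact settings used on the Taung model:

```python
# Minimal sketch of the "fake tomography" trick: sweep the camera's near
# clipping plane through the skull and render one image per step, producing
# a stack of cross sections. All values below are assumptions.
import bpy

scene = bpy.context.scene
cam = scene.camera.data          # the active camera, aimed at the skull

SLICES = 200                     # how many cuts to render (assumed)
START = 0.1                      # clip distance where the sweep begins (assumed)
DEPTH = 0.2                      # total depth to sweep, in Blender units (assumed)

for i in range(SLICES):
    # push the near clipping plane deeper: everything between the camera
    # and this plane is cut away, exposing the dark interior of the mesh
    cam.clip_start = START + i * (DEPTH / SLICES)
    scene.render.filepath = "/tmp/taung_slices/slice_%04d.png" % i
    bpy.ops.render.render(write_still=True)
```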
Using the Python script IMG2DCM, the image sequence was converted into a DICOM sequence, which was imported into InVesalius and reconstructed as a 3D mesh.
With IMG2DCM it is possible to set the distance between the DICOM slices manually, but in this case the conversion was done with the default values (which leaves the model flattened along one axis), and the mesh was simply rescaled later on.
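For those curious about what such a conversion involves, here is a rough sketch using the pydicom library. This is NOT the actual IMG2DCM script; the function name, paths, and spacing value are hypothetical, and it simply illustrates the kind of metadata an image-to-DICOM step has to write:

```python
# Illustrative sketch only (not IMG2DCM): convert one grayscale PNG slice
# into a DICOM file with pydicom. Names, paths and values are assumptions.
import numpy as np
from PIL import Image
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SERIES_UID = generate_uid()   # shared by every slice of the series
STUDY_UID = generate_uid()
SC_STORAGE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture SOP class

def png_to_dicom(png_path, dcm_path, index, spacing=1.0):
    pixels = np.asarray(Image.open(png_path).convert("L"))  # 8-bit gray

    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SC_STORAGE
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = FileDataset(dcm_path, {}, file_meta=meta, preamble=b"\0" * 128)
    ds.is_little_endian = True
    ds.is_implicit_VR = False
    ds.SOPClassUID = SC_STORAGE
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "CT"
    ds.StudyInstanceUID = STUDY_UID
    ds.SeriesInstanceUID = SERIES_UID
    ds.InstanceNumber = index

    # slice geometry: the z position sets the distance between slices
    ds.ImagePositionPatient = [0.0, 0.0, index * spacing]
    ds.SliceThickness = spacing
    ds.PixelSpacing = [1.0, 1.0]

    # pixel data description for an 8-bit grayscale image
    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.tobytes()

    ds.save_as(dcm_path)

# one DICOM file per rendered slice, e.g.:
# for i in range(200):
#     png_to_dicom("/tmp/taung_slices/slice_%04d.png" % i,
#                  "/tmp/taung_dicom/slice_%04d.dcm" % i, i)
```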
The reconstructed mesh is then imported and rescaled to match the original model.
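The rescaling itself takes only a couple of lines in Blender's Python console. A hedged sketch, assuming both meshes are already imported and using hypothetical object names:

```python
# Hedged sketch: match the InVesalius mesh to the size of the original
# scan by comparing bounding-box heights. Object names are hypothetical.
import bpy

original = bpy.data.objects["taung_scan"]      # photogrammetry model
recon = bpy.data.objects["invesalius_mesh"]    # fake-tomography mesh

factor = original.dimensions.z / recon.dimensions.z
recon.scale = (factor, factor, factor)
```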
You can download the Collada file (3D) here.
See you there… a big hug!