Automating landmark placement on 3D meshes in morphometrics through a Deep Learning approach
Abstract
Despite the growing development of morphometric approaches based on deformable registration of 3D surfaces, either directly or indirectly via dense pseudo-landmark templates, landmark labelling of 3D objects remains a current and routine task in geometric morphometric analyses. However, manual labelling is tedious and time-consuming, highly prone to intra- and inter-observer variability, requires a high level of expertise, and is becoming incompatible with the increasing throughput of imaging technologies. Methods to automate the process have been developed, mainly based on the deformable registration of 3D images, surfaces or point sets. Whereas overall shapes are qualitatively well described by these algorithms, biases are observed in both the localization and the variance-covariance structure of landmarks. Some learning-based attempts have been made to correct these biases, but they have so far been limited to specific anatomical parts of particular organisms.
In this work, we aim to develop a versatile approach for learning landmark positioning that generalizes automated landmark placement independently of the vertebrate species and the specific skeletal element under study. The developed pipeline starts from an approximate prediction of landmarks obtained from a global registration of a reference model onto the object, a step which remains specific, and uses these predictions to subset the surface. The resulting local 3D surface is then parametrized in 2D and colorized to enhance certain geometric features (ridges, flaws, hollows…) using differential geometry and ambient lighting algorithms. The resulting images, largely generic across bone structures, are associated with the manual landmark positions to train different Convolutional Neural Network (CNN) algorithms. Our results are promising, with landmark predictions closer to the manual positioning than those of current deformable registration algorithms.
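To make the pipeline concrete, the sketch below illustrates one plausible realization of the patch-extraction and CNN stages, assuming the trimesh, NumPy and PyTorch libraries. The PCA tangent-plane projection and the discrete mean-curvature colorization are simplified stand-ins for the 2D parametrization and differential-geometry/ambient-lighting features described in the abstract; all function names (e.g. `local_patch_image`, `LandmarkCNN`) are illustrative and not from the paper.

```python
# Minimal sketch: crop the mesh around an approximate landmark, flatten the
# patch to 2D, colorize it with curvature, and regress the refined landmark
# position with a small CNN. Hypothetical names; not the authors' code.
import numpy as np
import trimesh
import torch
import torch.nn as nn

def local_patch_image(mesh, approx_lm, radius=10.0, res=64):
    """Subset the surface near `approx_lm`, project it to 2D and rasterize
    a curvature-colored image of resolution res x res."""
    # 1) Subset: keep vertices within `radius` of the approximate landmark.
    v = mesh.vertices
    pts = v[np.linalg.norm(v - approx_lm, axis=1) < radius]
    # 2) Parametrize: project onto the local tangent plane (first two PCA axes).
    centered = pts - pts.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ axes[:2].T                      # 2D patch coordinates
    # 3) Colorize: discrete mean curvature as a per-point scalar feature,
    #    a simple proxy for the ridge/hollow-enhancing features of the paper.
    curv = trimesh.curvature.discrete_mean_curvature_measure(
        mesh, pts, radius / 5.0)
    # 4) Rasterize to a single-channel image.
    img = np.zeros((res, res), dtype=np.float32)
    ij = ((uv - uv.min(axis=0))
          / np.ptp(uv, axis=0).max() * (res - 1)).astype(int)
    img[ij[:, 1], ij[:, 0]] = curv
    return img

class LandmarkCNN(nn.Module):
    """Tiny CNN regressing the 2D landmark position within the patch image."""
    def __init__(self, res=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * (res // 4) ** 2, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```

At inference time, the 2D position predicted by the network would be mapped back onto the 3D surface by inverting the tangent-plane projection, yielding the refined landmark on the mesh.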