Journal articles

Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection

Abstract: Since most computer vision approaches are now driven by machine learning, the current bottleneck is image annotation, a time-consuming task usually performed manually after image acquisition. In this article, we assess the value of various egocentric vision approaches for performing joint acquisition and automatic image annotation, rather than the conventional two-step process of acquisition followed by manual annotation. The approach is illustrated with apple detection in challenging field conditions. Using eye-tracking systems, we demonstrate high performance in automatic apple segmentation (Dice coefficient of 0.85), apple counting (88% probability of good detection, with a 0.09 true-negative rate), and apple localization (a shift error of less than 3 pixels). This is obtained simply by applying the areas of interest captured by the egocentric devices to standard, unsupervised image segmentation. We stress in particular the time saved by using such eye-tracking devices on head-mounted systems to jointly perform image acquisition and automatic annotation: a speed-up of more than 10-fold over classical image acquisition followed by manual annotation is demonstrated.
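The Dice coefficient reported above is a standard overlap metric for binary segmentation masks. As a point of reference (not the authors' code), a minimal sketch of how it is typically computed with NumPy:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two 4x4 masks of 8 pixels each, overlapping on 4 pixels.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

A Dice coefficient of 0.85, as reported for apple segmentation, thus indicates substantial pixel-level agreement between the automatically annotated masks and the ground truth.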

https://hal.inrae.fr/hal-02953007
Contributor: Olivier Dupre
Submitted on: Tuesday, September 29, 2020 - 5:31:34 PM
Last modification on: Thursday, February 18, 2021 - 3:32:56 AM



Citation

Salma Samiei, Pejman Rasti, Paul Richard, Gilles Galopin, David Rousseau. Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection. Sensors, MDPI, 2020, 20 (15), pp.4173. ⟨10.3390/s20154173⟩. ⟨hal-02953007⟩
