AcousticIA, a deep neural network for multi-species fish detection using multiple models of acoustic cameras
Abstract
Acoustic cameras, or imaging sonars, are high-potential devices for many applications in aquatic ecology, notably fisheries management and population monitoring. However, extracting high-value information from such data without an operator reading the entire data set, a time-consuming task, remains a challenge. Moreover, acoustic imaging, with its low signal-to-noise ratio, is a demanding testing ground for new approaches, especially deep learning techniques. We present here a novel approach that combines a convolutional neural network (CNN) with classical computer vision (CV) techniques to detect fish passages in acoustic video streams. The pipeline pre-treats the acoustic images to localise the signals of interest and improve detection performance. The YOLOv3-based model was trained on multi-species fish data recorded by the two most widely used models of acoustic cameras, the DIDSON and the ARIS, including species of high ecological interest such as Atlantic salmon and European eel. Pre-treatment of the images greatly improves model performance, increasing the F1-score from 0.52 to 0.69. The resulting model gives satisfactory results, detecting almost 80% of fish passages while minimising the false-positive rate. On a validation data set of 40 h of video containing around 1,800 fish passages, efficiency increases with fish size, reaching a recall above 95% for Atlantic salmon. Conversely, the model is much less efficient at detecting eels on ARIS videos than on DIDSON data (31% vs 75% recall).
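To make the two-stage idea concrete, the sketch below pairs a classical CV pre-treatment (background subtraction to localise moving echoes in the low signal-to-noise acoustic frames) with YOLOv3 inference through OpenCV's DNN module. This is a minimal illustration under assumed conditions, not the authors' implementation: the config and weight file names (`yolov3-fish.cfg`, `yolov3-fish.weights`), the subtractor parameters, and the confidence threshold are all placeholders.

```python
# Hypothetical sketch of the CV pre-treatment + YOLOv3 detection pipeline.
# File names and parameters are illustrative, not the authors' actual code.
import cv2
import numpy as np

# Stage 1: classical CV pre-treatment -- suppress the static acoustic
# background so only moving echoes (candidate fish) remain in the frame.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def pretreat(frame: np.ndarray) -> np.ndarray:
    """Return the frame with the static background suppressed."""
    mask = bg_subtractor.apply(frame)
    # Small morphological opening removes isolated speckle noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(frame, frame, mask=mask)

# Stage 2: YOLOv3 inference via OpenCV's DNN module (placeholder files).
net = cv2.dnn.readNetFromDarknet("yolov3-fish.cfg", "yolov3-fish.weights")

def detect(frame: np.ndarray, conf_threshold: float = 0.5):
    """Run the detector on a pre-treated frame; return candidate boxes."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    boxes = []
    for output in outputs:
        for det in output:
            # Each row: [cx, cy, bw, bh, objectness, class scores...],
            # with coordinates normalised to the frame size.
            if det[5:].max() > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes
```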