Extracting meaningful video segments using a movement detection algorithm applied to dairy cow behavior study and welfare monitoring.

Authors: T. Gisiger and M. Cellier and E. Vasseur and A. B. Diallo

Date: 2024-07-01

Status: Published



Precision dairy farming is essential to creating a food production system that is sustainable and respects animal welfare and the environment.
This approach requires gathering many hours of video with still cameras, which are then used for research, welfare monitoring and the training of analysis tools. Extracting the video segments carrying the most meaningful information would allow larger fractions of the recordings to be studied while retaining the maximum number of observations. This can be framed as a movement detection problem or, alternatively, as an object detection problem solved with traditional or deep learning techniques. However, the latter requires training detectors in a cluttered farm environment, which can prove challenging.
Here, we propose an algorithm that estimates cow movement robustly, without the need for object detection or training. The resulting movement indices, paired with an independently set movement threshold, can then be used to partition videos into episodes where the cow is either immobile or displaying relevant movements and behaviours. This approach takes advantage of typical cow behaviour features and allows video sections with repetitive or little to no movement to be factored out.
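The abstract does not spell out how the movement indices are computed, but the general scheme it describes, a per-frame movement index compared against an independent threshold to split a video into "still" and "moving" episodes, can be sketched as follows. This is a minimal illustration assuming a simple mean-absolute-difference index between consecutive grayscale frames, not the authors' actual algorithm; the function names are hypothetical.

```python
import numpy as np

def movement_index(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference between two consecutive grayscale
    frames (an assumed, simple stand-in for the paper's movement index)."""
    diff = frame.astype(np.float32) - prev_frame.astype(np.float32)
    return float(np.mean(np.abs(diff)))

def partition_episodes(indices, threshold):
    """Partition a sequence of per-frame movement indices into contiguous
    episodes, labelled 'moving' (index >= threshold) or 'still'.

    Returns a list of (label, start_frame, end_frame) tuples, with
    end_frame exclusive."""
    episodes = []
    start = 0
    moving = indices[0] >= threshold
    for i, value in enumerate(indices[1:], start=1):
        state = value >= threshold
        if state != moving:
            episodes.append(("moving" if moving else "still", start, i))
            start, moving = i, state
    episodes.append(("moving" if moving else "still", start, len(indices)))
    return episodes

# Example: a short synthetic index sequence with one burst of movement.
indices = [0.0, 0.1, 5.0, 6.0, 0.2]
episodes = partition_episodes(indices, threshold=1.0)
# -> [('still', 0, 2), ('moving', 2, 4), ('still', 4, 5)]
```

Episodes labelled 'still' can then be discarded before manual labelling, which is the time saving the experiments below measure.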
The experimental setting consists of five 15-minute videos and focuses on measuring the extent to which discarding episodes with little to no movement speeds up the labelling of behaviours by animal science experts.
This approach will enable more complex experiments, novel angles of investigation and larger datasets for studying cow behaviour and interaction with the environment, as well as for welfare status monitoring.