Tracking guided actions recognition for cows

Yun Liang, Xiaoming Chen

Quant. Biol., 2022, Vol. 10, Issue 4: 351-365. DOI: 10.15302/J-QB-022-0291
RESEARCH ARTICLE


Abstract

Background: Cow actions are important indicators of cow health and well-being. By monitoring the actions of individual cows, we can prevent disease and realize modern precision cattle rearing. However, traditional monitoring of cow actions is usually conducted through video recording or direct visual observation, which is time-consuming and laborious and often leads to misjudgement due to subjective bias or negligence.

Methods: This paper proposes a method of cow action recognition based on tracked trajectories to automatically recognize and evaluate the actions of cows. First, we construct a dataset of 60 videos describing the common actions in the daily life of cows, providing the basic data for designing our recognition method. Second, eight well-known trackers are used to track the targets and obtain their temporal and spatial information. Third, after studying and analysing the tracked trajectories of different cow actions, we design a rigorous and effective constraint method to realize action recognition.
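As a rough illustration of the idea, the sketch below shows how per-frame bounding boxes produced by any of the trackers could be reduced to a trajectory and tested against simple motion and posture constraints. The thresholds, the three action labels, and the function name classify_action are hypothetical examples introduced here for illustration only; they are not the authors' actual constraint method. The only assumption carried over from the text is that a tracker supplies per-frame bounding boxes whose temporal evolution is then checked against constraints.

```python
import numpy as np

def classify_action(boxes, fps=25.0,
                    walk_speed_thresh=15.0,   # px/s, hypothetical threshold
                    lie_aspect_thresh=1.6):   # width/height ratio, hypothetical
    """Coarsely label one tracked cow trajectory.

    boxes: array of shape (T, 4) holding per-frame (x, y, w, h) produced by a
    tracker (e.g., KCF or SiamFC); only the temporal evolution of the boxes is used.
    """
    boxes = np.asarray(boxes, dtype=float)
    centres = boxes[:, :2] + boxes[:, 2:] / 2.0          # box centres per frame
    speeds = np.linalg.norm(np.diff(centres, axis=0), axis=1) * fps
    mean_speed = speeds.mean() if speeds.size else 0.0   # average displacement rate
    aspect = (boxes[:, 2] / boxes[:, 3]).mean()          # mean width/height of the box

    if mean_speed > walk_speed_thresh:                   # sustained horizontal motion
        return "walking"
    if aspect > lie_aspect_thresh:                       # elongated, low silhouette
        return "lying"
    return "standing"
```

Any of the eight trackers mentioned above could supply the boxes argument; the constraint method described in the paper operates on the same kind of trajectory information rather than on fixed thresholds like these.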

Results: Extensive experiments demonstrate that our method performs favourably in detecting the actions of cows, and the proposed dataset largely satisfies farmers' needs for action evaluation.

Conclusion: The proposed tracking-guided action recognition provides a feasible way to maintain and promote cow health and welfare.

Author summary

People often use cow actions to achieve scientific feeding and improve cow welfare, thereby promoting the quality and productivity of cattle. However, traditional monitoring of cow actions is usually conducted through video recording or direct visual observation, which is time-consuming and laborious and often leads to misjudgment due to subjective bias or negligence. We propose a method of cow action recognition based on tracked trajectories to automatically recognize and evaluate the actions of cows. The method provides a feasible way to maintain and promote cow health and welfare.


Keywords

public framework / actions recognition / visual tracking / precision agriculture / cows health

Cite this article

Yun Liang, Xiaoming Chen. Tracking guided actions recognition for cows. Quant. Biol., 2022, 10(4): 351‒365 https://doi.org/10.15302/J-QB-022-0291


ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (No. 61772209), the Science and Technology Planning Project of Guangdong Province (Nos. 2019A050510034 and 2019B020219001), the Production Project of the Ministry of Education of China (No. 201901240030), the College Students' Innovation Special Project of China (No. 202010564026), and the Guangzhou Key Laboratory of Intelligent Agriculture (No. 201902010081).

COMPLIANCE WITH ETHICS GUIDELINES

The authors Yun Liang and Xiaoming Chen declare that they have no conflict of interest or financial conflicts to disclose. All procedures performed in studies involving animals were in accordance with the ethical standards of the institution or practice at which the studies were conducted, and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

OPEN ACCESS

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

RIGHTS & PERMISSIONS

© 2022 The Author(s). Published by Higher Education Press.