Article Type: Research Article




Title

Development of a machine vision system for periodic evaluation of honey production efficiency by deep learning method

Authors

  • Mohammad Shojaaddini 1
  • Ashkan Moosavian 2
  • Sakineh Babaei 3

1 Faculty Member, Department of Agricultural Engineering, Shahriar Technical Faculty of Agriculture, Technical and Vocational University

2 Faculty Member, Department of Agricultural Engineering, Shahriar Technical Faculty of Agriculture, Technical and Vocational University

3 Specialist, Department of Agricultural Engineering, Shahriar Technical Faculty of Agriculture, Technical and Vocational University

Abstract

This study was conducted to provide an intelligent, rapid assessment of colony status in terms of honey production efficiency during the foraging period, and to present a method based on a machine vision system. Using deep learning, the comb frame was first detected, and then the geometric, textural, and color pattern of honey was identified; the percentage of comb area occupied by honey was then calculated. To this end, an imaging test of bee colonies with a digital camera was designed and performed such that cells in various states were present on the combs. In the image analysis stage, a convolutional neural network with the YOLOv5 algorithm and a semantic segmentation method were used. The results showed that the proposed intelligent system detects the comb frame against the surrounding background of the image with an accuracy of more than 88%. Honey-related areas in each comb were identified with about 83% accuracy and roughly 240 times faster than an expert beekeeper. These results were confirmed against concurrent manual counting by a skilled beekeeper. Given the increased estimation speed, reduced human error, and the consequently shorter disruption of colony activity, the proposed method can be a suitable alternative to the traditional framing technique for periodic inspections and estimation of honey production efficiency.
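Once the detector has localized the comb frame and the segmentation step has labeled honey pixels, the area-percentage computation reduces to a pixel count inside the detected box. The sketch below illustrates only that final step; the class labels, the `honey_area_percentage` name, and the box format are illustrative assumptions, not the authors' implementation:

```python
def honey_area_percentage(mask, frame_box):
    """Percentage of honey-labeled pixels inside the detected comb frame.

    mask:      2D list of per-pixel class labels from the semantic
               segmentation step (0 = background, 1 = honey); illustrative.
    frame_box: (x_min, y_min, x_max, y_max) of the comb frame as returned
               by the frame detector, in pixel coordinates.
    """
    x0, y0, x1, y1 = frame_box
    total = honey = 0
    for row in mask[y0:y1]:          # restrict counting to the frame box
        for label in row[x0:x1]:
            total += 1
            honey += (label == 1)    # True counts as 1
    return 100.0 * honey / total if total else 0.0


# Toy 4x4 mask in which honey occupies the top-left 2x2 block.
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(honey_area_percentage(mask, (0, 0, 4, 4)))  # 25.0
```

In practice the mask would come out of the network as an array and the count would be vectorized; the explicit loop is only meant to make the area-ratio definition concrete.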

Keywords

  • Deep Learning
  • Honey Production Efficiency
  • Machine Vision
  • Semantic Segmentation Method
  • YOLOv5 Algorithm