[1] 马军.我国皮革产品出口现状及竞争力分析[J].中国皮革,2023,52(5):1-3. MA J.Export situation and competitiveness of Chinese leather products[J].China Leather,2023,52(5):1-3.(in Chinese)
[2] Huan Y,Ren G C,Su X Y,et al.A versatile end effector for grabbing and spreading of flaky deformable object manipulation[J].Mechanical Sciences,2023,14(1):111-123.
[3] 刘书磊,任工昌,桓源,等.双臂拉伸铺展皮革的方法设计与算法实现[J].皮革科学与工程,2023,33(1):21-25. LIU S L,REN G C,HUAN Y,et al.Method design and algorithm realization of stretching leather with two arms[J].Leather Sci and Eng,2023,33(1):21-25.(in Chinese)
[4] Onoro-Rubio D,Lopez-Sastre R J.Towards perspective-free object counting with deep learning[C].Amsterdam:European Conference on Computer Vision,2016:615-629.
[5] Xu M,Ge Z,Jiang X,et al.Depth information guided crowd counting for complex crowd scenes[J].Pattern Recognition Letters,2019,125:563-569.
[6] Chen S W,Shivakumar S S,Dcunha S,et al.Counting apples and oranges with deep learning:A data-driven approach[J].IEEE Robotics and Automation Letters,2017,2(2):781-788.
[7] Girshick R.Fast R-CNN[C].Santiago:Proceedings of the IEEE International Conference on Computer Vision,2015:1440-1448.
[8] Ren S,He K,Girshick R,et al.Faster R-CNN:Towards real-time object detection with region proposal networks[J].Advances in Neural Information Processing Systems,2015,28:91-99.
[9] Li J,Liang X,Shen S,et al.Scale-aware Fast R-CNN for pedestrian detection[J].IEEE Transactions on Multimedia,2017,20(4):985-996.
[10] Liu W,Anguelov D,Erhan D,et al.SSD:Single shot multibox detector[C].Cham:European Conference on Computer Vision,2016:21-37.
[11] Wong A,Shafiee M J,Li F,et al.Tiny SSD:A tiny single-shot detection deep convolutional neural network for real-time embedded object detection[C].Toronto:15th Conference on Computer and Robot Vision(CRV),2018:95-101.
[12] Wang X,Hua X,Xiao F,et al.Multi-object detection in traffic scenes based on improved SSD[J].Electronics,2018,7(11):302.
[13] Zhai S,Shang D,Wang S,et al.DF-SSD:An improved SSD object detection algorithm based on DenseNet and feature fusion[J].IEEE Access,2020,8:24344-24357.
[14] Redmon J,Divvala S,Girshick R,et al.You only look once:Unified,real-time object detection[C].Las Vegas:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:779-788.
[15] Redmon J,Farhadi A.YOLO9000:Better,faster,stronger[C].Honolulu:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2017:6517-6525.
[16] Redmon J,Farhadi A.YOLOv3:An incremental improvement[EB/OL].(2018-04-08)[2022-08-12].https://arxiv.org/abs/1804.02767.
[17] Bochkovskiy A,Wang C Y,Liao H Y M.YOLOv4:Optimal speed and accuracy of object detection[EB/OL].(2020-04-23)[2022-08-12].https://arxiv.org/abs/2004.10934.
[18] 段洁利,王昭锐,邹湘军,等.采用改进YOLOv5的蕉穗识别及其底部果轴定位[J].农业工程学报,2022,38(19):122-130. DUAN J L,WANG Z R,ZOU X J,et al.Recognition of bananas to locate bottom fruit axis using improved YOLOv5[J].Transactions of the Chinese Society of Agricultural Engineering,2022,38(19):122-130.(in Chinese)
[19] 杨秋妹,陈淼彬,黄一桂,等.基于改进YOLOv5n的猪只盘点算法[J].农业机械学报,2023,54(1):251-262. YANG Q M,CHEN M B,HUANG Y G,et al.Pig counting algorithm based on improved YOLOv5n[J].Transactions of the Chinese Society for Agricultural Machinery,2023,54(1):251-262.(in Chinese)
[20] Fu L,Yang Z,Wu F,et al.YOLO-Banana:A lightweight neural network for rapid detection of banana bunches and stalks in the natural environment[J].Agronomy,2022,12(2):391.
[21] 祁宣豪,智敏.图像处理中注意力机制综述[J].计算机科学与探索,2023:1-20. QI X H,ZHI M.A review of attention mechanisms in image processing[J].Journal of Frontiers of Computer Science&Technology,2023:1-20.(in Chinese)
[22] Hou Q,Zhou D,Feng J.Coordinate attention for efficient mobile network design[EB/OL].(2021-03-04)[2022-08-12].https://arxiv.org/abs/2103.02907.
[23] Zhang Y,Ren W,Zhang Z,et al.Focal and efficient IOU loss for accurate bounding box regression[EB/OL].(2022-07-16)[2022-08-12].https://arxiv.org/abs/2101.08158.
[24] Elfwing S,Uchibe E,Doya K.Sigmoid-weighted linear units for neural network function approximation in reinforcement learning[J].Neural Networks,2017,107:3-11.