Research Article | Peer-Reviewed

Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach

Received: 17 October 2023     Accepted: 2 November 2023     Published: 17 November 2023
Abstract

People who cannot speak or hear can communicate through sign language. For those with hearing impairments, sign language is an effective way to express thoughts and feelings, and its vocabulary, grammar, and associated lexicons are well defined. This study focuses on Signed Afaan Oromo: the lack of sign language detection for Afaan Oromo is the main problem our society faces in this area. The study concentrates on translating static word-level, alphabet, and number signs into their equivalent Afaan Oromo text. Video frames serve as the system's input, and Afaan Oromo text is its final output. To address the research objectives, data covering 90 classes at the alphabet, number, and word levels were collected from five special-needs instructors, supported by an experiment and a literature review. To train the model, preprocessing such as frame extraction, resizing, labeling, and data splitting was performed with Roboflow, and the images were converted into YOLO format. The experimental results show that gestures can be recognized and classified quickly and effectively with a medium-sized dataset. Promising predictions on images, webcam streams, and video files indicate that the YOLOv5 algorithm has a good chance of detecting signs in real time. We trained and tested the model on a Signed Afaan Oromo dataset. The YOLOv5s model achieved accuracy of 90%, recall of 92.5%, mAP of 93.2% at 0.5 IoU, and mAP of 71.5% at 0.5:0.95 IoU, which is suitable for real-time gesture translation.
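The pipeline the abstract describes (frame extraction, preprocessing, and YOLOv5s detection on each frame) can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' released code: the weights file name (best.pt), the input video name, the confidence threshold, and the frame-sampling rate are all assumptions for the example. For the reported metrics, IoU(A, B) = |A ∩ B| / |A ∪ B| measures the overlap between a predicted and a ground-truth box; mAP at 0.5 IoU scores detections at that single overlap threshold, while mAP at 0.5:0.95 averages mAP over the thresholds 0.50, 0.55, ..., 0.95.

    # Minimal sketch (assumptions noted above): extract frames from a signing
    # video, run a trained YOLOv5s detector on each sampled frame, and print
    # the predicted sign class, i.e. the Afaan Oromo label.
    import cv2
    import torch

    # Load custom weights through the official Ultralytics torch.hub entry point.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
    model.conf = 0.5  # confidence threshold; 0.5 is an assumed value

    cap = cv2.VideoCapture('signed_afaan_oromo.mp4')  # hypothetical input video
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % 5 == 0:  # sample every 5th frame (assumed rate)
            # YOLOv5 expects RGB input; OpenCV decodes frames as BGR.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            detections = model(rgb).pandas().xyxy[0]
            for _, det in detections.iterrows():
                # 'name' holds the class label for the detected sign.
                print(f"frame {frame_idx}: {det['name']} ({det['confidence']:.2f})")
        frame_idx += 1
    cap.release()

Loading the weights via torch.hub keeps the sketch independent of a local clone of the YOLOv5 repository; the same model object also accepts single images or webcam frames, matching the three input modes evaluated in the study.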

Published in American Journal of Artificial Intelligence (Volume 7, Issue 2)
DOI 10.11648/j.ajai.20230702.12
Page(s) 40-51
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2023. Published by Science Publishing Group

Keywords

Signed Language, Deep Learning, Computer Vision, CNN, YOLOv5

Cite This Article
  • APA Style

    Negash Tesso, D., Fikadu Dinsa, E., & Fikadu Kenani, H. (2023). Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach. American Journal of Artificial Intelligence, 7(2), 40-51. https://doi.org/10.11648/j.ajai.20230702.12


  • ACS Style

    Negash Tesso, D.; Fikadu Dinsa, E.; Fikadu Kenani, H. Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach. Am. J. Artif. Intell. 2023, 7(2), 40-51. doi: 10.11648/j.ajai.20230702.12


  • AMA Style

    Negash Tesso D, Fikadu Dinsa E, Fikadu Kenani H. Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach. Am J Artif Intell. 2023;7(2):40-51. doi: 10.11648/j.ajai.20230702.12


  • @article{10.11648/j.ajai.20230702.12,
      author = {Diriba Negash Tesso and Etana Fikadu Dinsa and Hawi Fikadu Kenani},
      title = {Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach},
      journal = {American Journal of Artificial Intelligence},
      volume = {7},
      number = {2},
      pages = {40-51},
      doi = {10.11648/j.ajai.20230702.12},
      url = {https://doi.org/10.11648/j.ajai.20230702.12},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20230702.12},
     year = {2023}
    }
    


  • TY  - JOUR
    T1  - Signed Language Translation into Afaan Oromo Text Using Deep-Learning Approach
    AU  - Diriba Negash Tesso
    AU  - Etana Fikadu Dinsa
    AU  - Hawi Fikadu Kenani
    Y1  - 2023/11/17
    PY  - 2023
    N1  - https://doi.org/10.11648/j.ajai.20230702.12
    DO  - 10.11648/j.ajai.20230702.12
    T2  - American Journal of Artificial Intelligence
    JF  - American Journal of Artificial Intelligence
    JO  - American Journal of Artificial Intelligence
    SP  - 40
    EP  - 51
    PB  - Science Publishing Group
    SN  - 2639-9733
    UR  - https://doi.org/10.11648/j.ajai.20230702.12
    VL  - 7
    IS  - 2
    ER  - 


Author Information
  • Diriba Negash Tesso, Department of Computer Science, College of Engineering and Technology, Wallaga University, Nekemte, Ethiopia

  • Etana Fikadu Dinsa, Department of Computer Science, College of Engineering and Technology, Wallaga University, Nekemte, Ethiopia

  • Hawi Fikadu Kenani, Department of Computer Science, College of Engineering and Technology, Wallaga University, Nekemte, Ethiopia
