Conf_Paper_1_ICEANS.pdf (1.01 MB)

Unimpeded Walking with Voice Guidance on Raspberry Pi Platform

Conference contribution
Posted on 2022-12-07, 21:52, authored by Cenk Berkan Deligoz, Feride Seymen, Erdem Bayhan, Mustafa Namdar, Arif Basgumus

In this study, object and distance detection with the Faster R-CNN and SSD MobileNetV2 architectures on the Raspberry Pi platform, tracking of tactile paving surfaces with the Hough transform, and voice transmission of the resulting object-detection information to the visually impaired person via the Google Text-to-Speech (gTTS) library are studied. In the proposed approach, 8291 images were used to create a data set of 11 different object classes that visually impaired individuals may encounter. The two deep learning architectures were trained in a computer environment and then transferred to the Raspberry Pi platform for object detection. High-FPS object detection was achieved with the SSD MobileNetV2 model by using the Google Coral USB Accelerator. An accuracy of 95% was obtained with the Faster R-CNN model, and 93% with the SSD MobileNetV2 model. Edge detection of tactile paving surfaces was performed successfully using the Hough transform at the appropriate angle and color tone. The distance between the visually impaired person and each object was measured, and the names of the detected objects were conveyed to the person audibly.
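The tactile-paving tracking step described above relies on Hough line detection. A minimal sketch of the underlying voting scheme in pure NumPy follows; the angle binning, synthetic input, and all parameters here are illustrative assumptions, and the paper's own preprocessing, thresholds, and color filtering are not reproduced:

```python
import numpy as np

def hough_dominant_angle(edge_mask, n_thetas=180):
    """Minimal Hough line voting: every edge pixel votes for each
    (rho, theta) bin it could lie on; the strongest bin gives the
    dominant line's normal angle in degrees (one bin per degree).
    Illustrative sketch only -- not the paper's implementation."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.deg2rad(np.arange(n_thetas))
    diag = int(np.ceil(np.hypot(*edge_mask.shape)))
    # Line model: rho = x*cos(theta) + y*sin(theta); shift rho so the
    # accumulator index is always non-negative.
    rhos = np.round(xs[:, None] * np.cos(thetas) +
                    ys[:, None] * np.sin(thetas)).astype(int) + diag
    acc = np.zeros((2 * diag + 1, n_thetas), dtype=int)
    cols = np.broadcast_to(np.arange(n_thetas), rhos.shape)
    np.add.at(acc, (rhos, cols), 1)  # accumulate the votes
    _, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return int(theta_idx)

# Synthetic edge mask with a vertical line at x = 30: its normal is
# horizontal, so the dominant Hough angle is 0 degrees.
mask = np.zeros((100, 100), dtype=bool)
mask[:, 30] = True
print(hough_dominant_angle(mask))  # 0
```

In a real pipeline one would feed an edge map of the camera frame (e.g. from a Canny detector) into an optimized implementation such as OpenCV's `cv2.HoughLinesP` rather than this demonstration loop.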

Funding

This study was supported within the scope of the TUBITAK 2209-A research projects support program for university students, under application number 1919B012110467.
