Unimpeded Walking with Voice Guidance on the Raspberry Pi Platform
In this study, object detection and distance estimation are performed with the Faster R-CNN and SSD MobileNetV2 architectures on the Raspberry Pi platform, tactile paving surfaces are tracked with the Hough transform, and the resulting object information is conveyed to the visually impaired user as speech via the Google Text-To-Speech (gTTS) library. For the proposed approach, a data set of 8291 images covering 11 object classes that visually impaired individuals may encounter was created. The two deep learning architectures were trained in a desktop computer environment and then transferred to the Raspberry Pi platform for object detection. Using the Google Coral USB Accelerator, the SSD MobileNetV2 model performed object detection at a high frame rate. The Faster R-CNN model achieved an accuracy of 95%, while the SSD MobileNetV2 model achieved 93%. Edge detection of tactile paving surfaces was performed successfully with the Hough transform at the appropriate angle and color tone. Finally, the distance between the visually impaired person and each detected object was measured, and the object names were announced audibly.
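As an illustration of the line-detection step described above, the Hough transform votes each edge pixel into a (rho, theta) accumulator and reads off dominant line orientations, which is how the angle of a tactile paving edge can be recovered. The sketch below is a minimal plain-NumPy version for clarity; the actual system would more likely use an OpenCV routine such as `cv2.HoughLinesP`, and the function name and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def hough_dominant_angle(edges, n_theta=180):
    """Vote binary edge pixels into a (rho, theta) accumulator and
    return the angle (in degrees) of the strongest detected line.
    Illustrative helper; not the paper's implementation."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # angles 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                       # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int)
        np.add.at(acc, (rhos + diag, theta_idx), 1)  # cast one vote per pixel
    _, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return float(np.rad2deg(thetas[theta_idx]))

# Synthetic edge map containing a single vertical line at x = 10
img = np.zeros((50, 50), dtype=np.uint8)
img[:, 10] = 1
print(hough_dominant_angle(img))  # → 0.0 (theta = 0 means a vertical line)
```

In practice the edge map fed into the transform would first be restricted to the color tone of the tactile paving (e.g. a yellow mask followed by Canny edge detection), so that only paving edges cast votes, and the reported angle can then be spoken to the user via gTTS.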