Vision Navigator: A Smart and Intelligent Obstacle Recognition Model for Visually Impaired Users
Table 1
Hardware and software components of the Vision Navigator.
System Requirements
Description
Arduino board
Controls, processes, and generates all inputs and outputs. It receives echo signals from the ultrasonic sensor, which trigger it to check whether an obstacle is present and to take further action. On detection it generates an immediate alert through a buzzer, produces a caption for the image captured by the camera, and converts that caption into speech played through an audio device.
Ultrasonic sensor
Determines the distance to a target obstacle by emitting ultrasonic sound waves and converting the reflected echo into an electrical signal.
IR sensor
Detects infrared radiation, which helps in sensing obstacles in the surroundings.
Vibrator
Used to notify the user about any obstacle present.
Bluetooth module
Used to send and receive signals between two devices over a wireless medium.
Push buttons
Used to switch on the microcontroller board.
Water sensor
Used to notify the user of the presence of any water body.
Audio module
Conveys image captions to the user in the form of audio. It receives an audio signal from the Arduino once the caption for the image has been converted into audio using a text-to-speech algorithm.
Camera modules
Act as eyes for visually impaired people. Each time the ultrasonic sensor detects an obstacle, the camera modules capture a picture that is sent to the board for processing and caption generation.
Buzzers
Used in this system for an immediate alert: the buzzer is triggered whenever the ultrasonic sensor detects an obstacle.
Plastic stick bodies
Form the outer body of the Smart-fold Cane.
Python
Programming interface to implement the model.
TensorFlow
Open-source machine learning platform.
Text-to-speech API
Application to map the obstacle text into speech to notify users.
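The ultrasonic sensor entry above relies on a standard echo-timing calculation: the sensor reports the round-trip time of the emitted pulse, and the obstacle distance is half that travel time multiplied by the speed of sound. A minimal sketch in Python (the document's stated programming interface); the 343 m/s figure assumes air at roughly 20 °C, and the constant and function names are illustrative, not taken from the paper:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s in air at ~20 °C (assumed)

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert an ultrasonic round-trip echo time (microseconds) to distance (cm).

    The pulse travels to the obstacle and back, so the one-way
    distance is half the total path length covered by the sound.
    """
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

# Example: a 1000 µs round-trip echo corresponds to 17.15 cm.
```

On the actual board, the echo duration would come from timing the ultrasonic sensor's echo pin; the conversion itself is the same arithmetic shown here.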
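Taken together, the table describes a simple control loop: the ultrasonic and IR readings drive the buzzer, vibrator, and camera capture, while the water sensor raises its own notification. A hedged sketch of that decision logic in Python; the distance threshold, function name, and field names are assumptions for illustration, not values from the paper:

```python
from dataclasses import dataclass

OBSTACLE_THRESHOLD_CM = 100  # assumed alert distance; not specified in the paper

@dataclass
class Alerts:
    buzzer: bool = False         # immediate audible alert
    vibrator: bool = False       # haptic notification of an obstacle
    capture_image: bool = False  # trigger the camera + captioning pipeline
    water_warning: bool = False  # water-body presence notification

def decide_alerts(distance_cm: float, ir_detected: bool,
                  water_detected: bool) -> Alerts:
    """Map raw sensor readings to the actuator actions described in Table 1."""
    obstacle = distance_cm < OBSTACLE_THRESHOLD_CM or ir_detected
    return Alerts(
        buzzer=obstacle,
        vibrator=obstacle,
        capture_image=obstacle,  # camera fires each time an obstacle is detected
        water_warning=water_detected,
    )
```

Keeping this mapping in one pure function makes the sensor-to-actuator behavior easy to test independently of the hardware.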