Research Article

Vision Navigator: A Smart and Intelligent Obstacle Recognition Model for Visually Impaired Users

Table 1

Hardware and software components of vision navigator.

System Requirement: Description

Arduino board: Controls and processes all inputs and outputs. It receives echo signals from the ultrasonic sensor, checks whether an obstacle is present, and triggers further actions. It raises an immediate alert through a buzzer, generates a caption for the image captured by the camera, and converts that caption into speech played through an audio device.
Ultrasonic sensor: Determines the distance to a target obstacle by emitting ultrasonic sound waves and converting the reflected echo into an electrical signal.
IR sensor: Detects infrared radiation, which helps in sensing obstacles in the surroundings.
Vibrator: Notifies the user when an obstacle is present.
Bluetooth module: Sends and receives signals between two devices over a wireless medium.
Push buttons: Used to switch on the microcontroller board.
Water sensor: Notifies the user of the presence of any body of water.
Audio module: Conveys image captions to the user as audio. It receives an audio signal from the Arduino once the image caption has been converted into audio format using a text-to-speech algorithm.
Camera modules: Act as eyes for visually impaired users. Each time the ultrasonic sensor detects an obstacle, the camera modules capture a picture that is sent to the board for processing and caption generation.
Buzzers: Provide an immediate alert; a buzzer is triggered whenever the ultrasonic sensor detects an obstacle.
Plastic stick bodies: Act as the outer body of the Smart-fold Cane.
Python: Programming interface used to implement the model.
TensorFlow: Open-source machine learning platform.
Text-to-speech API: Application that maps obstacle text into speech to notify the user.
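The obstacle-detection logic described for the ultrasonic sensor and buzzer above can be sketched in Python (the project's stated implementation language). This is an illustrative sketch, not the authors' firmware: the function names, the 100 cm alert threshold, and the room-temperature speed of sound are assumptions introduced here for clarity.

```python
# Illustrative sketch of ultrasonic ranging as described in Table 1.
# The echo pulse travels to the obstacle and back, so the round-trip
# time is halved when converting to distance.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature (assumed)

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert a round-trip echo time in microseconds to a distance in cm."""
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def obstacle_detected(echo_duration_us: float, threshold_cm: float = 100.0) -> bool:
    """Decide whether to raise an alert (buzzer/vibrator); threshold is assumed."""
    return echo_to_distance_cm(echo_duration_us) < threshold_cm
```

On the actual device this conversion would run on the Arduino board, which then drives the buzzer and triggers image capture when `obstacle_detected` is true.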