Real-Time Object Detection and Location Tracking System for the Visually Impaired

Abstract:
This project presents a Smart Guidance System for the Visually Impaired using Raspberry Pi, designed to enhance mobility and safety. The system leverages a USB camera for real-time video capture, with YOLO (You Only Look Once) object detection to identify and classify objects in the user’s environment. A GPS module provides live location data, while a GSM module sends SMS alerts with location coordinates to a designated contact during emergencies. Audio feedback through earphones informs the user about detected objects, enabling obstacle avoidance and safer navigation. The combination of AI-powered vision, real-time location tracking, and automated alerts offers a low-cost, portable, and efficient solution to improve the independence and quality of life of visually impaired individuals.

Introduction:
Navigating the environment independently poses significant challenges for visually impaired individuals. Traditional mobility aids, such as white canes and guide dogs, offer basic assistance but have limitations in detecting objects at a distance or providing real-time situational awareness. To overcome these challenges, advancements in technology can be leveraged to create smart assistance systems that improve mobility and safety for blind users. This project aims to develop a Smart Guidance System for Visually Impaired Individuals using a Raspberry Pi single-board computer, integrating AI-based object detection, GPS-based location tracking, and real-time alert mechanisms to enhance navigation capabilities. The core of the system is the YOLO (You Only Look Once) object detection algorithm, renowned for its speed and accuracy in identifying objects in real time. A USB camera continuously captures the surrounding environment, and YOLO processes the video frames to detect and classify objects.
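The detection loop described above can be sketched as follows. This is a minimal illustration, not the project's exact code: it assumes the third-party `ultralytics` and `opencv-python` packages, and the model file name `yolov8n.pt` is an illustrative choice. The `describe()` helper condenses the detected class labels into a short phrase suitable for spoken feedback.

```python
# Minimal sketch of the YOLO detection loop for the guidance system.
# Assumptions: `ultralytics` and `opencv-python` are installed; the
# model name "yolov8n.pt" is illustrative, not prescribed by the report.
from collections import Counter


def describe(labels):
    """Turn a list of detected class labels into a short spoken phrase."""
    if not labels:
        return "path clear"
    counts = Counter(labels)
    parts = [f"{n} {name}" if n > 1 else name for name, n in counts.items()]
    return "ahead: " + ", ".join(parts)


def detection_loop():
    import cv2                       # OpenCV for camera capture
    from ultralytics import YOLO     # assumed detection backend
    model = YOLO("yolov8n.pt")       # illustrative pretrained model
    cap = cv2.VideoCapture(0)        # USB camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame)[0]
        labels = [result.names[int(c)] for c in result.boxes.cls]
        print(describe(labels))      # in the real system, fed to text-to-speech
```

In the full system the printed phrase would instead be passed to a text-to-speech engine and played through the earphones.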
Using text-to-speech technology, the detected objects are conveyed as audio instructions through earphones, allowing users to be aware of their surroundings without requiring visual input. This real-time feedback significantly improves spatial awareness and obstacle avoidance, making independent navigation safer and more efficient.

In addition to object detection, the system integrates a GPS module to provide live location tracking. The user’s current location coordinates are continuously monitored, enhancing situational awareness and supporting location-based navigation. In emergency scenarios, such as encountering critical obstacles or hazards, a GSM module sends SMS alerts containing the user’s precise location to a designated contact person. This functionality offers an added layer of safety by enabling rapid response in case of unforeseen incidents.

By combining object detection, location tracking, and alert systems, this project provides a comprehensive, low-cost, and portable solution for visually impaired users. The use of a Raspberry Pi single-board computer allows for flexible customization and scalability, while open-source AI models and libraries ensure affordability. The system represents a significant step toward enhancing the quality of life and independence of visually impaired individuals, making navigation in complex environments safer and more intuitive.

Objectives:
To develop a smart guidance system for visually impaired individuals using Raspberry Pi.
To implement real-time object detection using YOLO and a USB camera.
To provide audio feedback through earphones for obstacle awareness and navigation assistance.
To integrate a GPS module for live location tracking.
To send SMS alerts with live location coordinates using a GSM module in emergency situations.
To enhance the safety and independence of visually impaired users with a cost-effective, portable solution.
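The GPS-to-SMS alert path in the objectives above can be sketched as below. This is a hedged illustration under stated assumptions: the GPS module emits standard NMEA $GPGGA sentences at 9600 baud, the SIM900A accepts the standard text-mode AT commands (AT+CMGF, AT+CMGS), and the serial port name and phone number are placeholders, not values from the report.

```python
# Sketch of the emergency-alert path: parse a $GPGGA sentence from the
# GPS receiver, compose the SMS text, and (on real hardware) push it to
# the GSM module with AT commands. Port name and phone number below are
# placeholders/assumptions.

def gpgga_to_latlon(sentence):
    """Convert a $GPGGA NMEA sentence to (lat, lon) in decimal degrees."""
    f = sentence.split(",")
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmm -> degrees
    if f[3] == "S":
        lat = -lat
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmm -> degrees
    if f[5] == "W":
        lon = -lon
    return round(lat, 6), round(lon, 6)


def compose_alert(lat, lon):
    """Build the SMS body with a map link to the user's position."""
    return f"Emergency! User needs help. Location: https://maps.google.com/?q={lat},{lon}"


def send_sms(text, number="+10000000000"):   # placeholder number
    import serial, time                      # pyserial, assumed installed
    gsm = serial.Serial("/dev/ttyS0", 9600, timeout=1)  # assumed port
    gsm.write(b"AT+CMGF=1\r")                # select SMS text mode
    time.sleep(0.5)
    gsm.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    gsm.write(text.encode() + b"\x1a")       # Ctrl+Z terminates the message
```

The parsing and message-composition helpers are pure functions, so they can be tested off-device; only `send_sms()` needs the actual SIM900A hardware.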
Problem Statement:
Visually impaired individuals face significant challenges in navigating their surroundings independently, relying primarily on traditional aids like white canes and guide dogs, which have limitations in detecting distant obstacles and providing real-time situational awareness. These conventional tools do not offer dynamic object recognition, location-based navigation, or automated alert mechanisms, increasing the risk of accidents and limiting mobility. There is a critical need for an intelligent, portable system that can detect objects, track location, and send emergency alerts while providing real-time audio feedback to guide visually impaired users safely and efficiently in complex environments.

Literature Survey:
1. Anurag Patil, Abhishek Brorse, Ajay Phad, “A Low-Cost Internet of Things-Based Navigation Aid for People with Visual Impairments,” 2023.
This paper presents a low-cost navigation assistance system for visually impaired individuals using IoT technology. The system incorporates various sensors such as ultrasonic, IR, gyroscopic, and accelerometer, along with a microcontroller (ESP32), battery, and vibration motor to provide real-time obstacle detection and navigation support. The emphasis is on creating an affordable and effective solution to improve the mobility of visually impaired users [1].

2. Himanshu Singh, Harsha Kumar, S Sabarivelan, Amit Kumar, Prakhar Rai, “Smart Assistance for Visually Impaired People with IoT,” 2023.
The authors propose a smart assistance system that uses IoT to help visually impaired people navigate safely. The system includes ultrasonic sensors for obstacle detection, a buzzer for audio alerts, and web-based servers for emergency assistance. This approach highlights the importance of real-time feedback and remote support to enhance the user's independence and safety [2].

3.
Deepa J, Maria Adeline P, Sai Madhumita S S, Pavalaselvi N, “SMART Technology for Visually Impaired: Obstacle Detection and Navigation Support,” 2023.
This study focuses on using SMART technology to provide obstacle detection and navigation support for visually impaired individuals. The system utilizes various sensors to detect obstacles and provides real-time feedback through audio signals. The integration of modern technologies aims to improve the mobility and safety of visually impaired users in both indoor and outdoor environments [3].

4. S. Mohan Kumar, Vivek Vishnudas Nemane, Ramakrishnan Raman, N. Latha, “Indoor Navigation Using IoT-BLE for People with Visual Impairments,” 2023.
This paper discusses an indoor navigation system for visually impaired individuals using IoT and Bluetooth Low Energy (BLE) technology. Unlike GPS-based systems that struggle indoors, this system provides accurate real-time location and navigation assistance within buildings. The use of BLE enables the identification of smart devices and provides detailed navigation support, enhancing the user's ability to move independently indoors [4].

5. R Arthi, Vemuri Jyothi Kiran, Anukiruthika, Mathan Krishna, Utkalika Das, “IoT-Powered Real-Time Assistive Shoes for People with Visual Impairments,” 2023.
This paper presents a prototype of an assistive shoe designed for visually impaired individuals. The shoe is equipped with ultrasonic sensors and an Arduino UNO board to detect obstacles and provide real-time alerts. This wearable technology aims to facilitate safe navigation and improve mobility for visually impaired users by offering immediate feedback on their surroundings [5].

6. Priyanka Bhosle, Prashant Pal, Vallari Khobragrade, Shashank Kumar Singh, “Assistance for Visually Impaired People with Smart Navigation Systems,” 2023.
The authors propose a smart navigation system using an ESP32 microcontroller, radar sensor, ultrasonic sensor, and GPS.
The system detects obstacles and converts the information into audio signals for the user. It also includes Bluetooth for audio navigation, making it a comprehensive and user-friendly solution for visually impaired individuals to navigate safely [6].

Block Diagram:
Fig. Block diagram – the Raspberry Pi at the centre, connected to the USB camera (object detection), earphone, GPS module, GSM module, and accelerometer.

Hardware Requirements:

Raspberry Pi:
Raspberry Pi is a series of small single-board computers (SBCs) developed in the United Kingdom by the Raspberry Pi Foundation in association with Broadcom. The Raspberry Pi project originally leaned towards the promotion of teaching basic computer science in schools and in developing countries. The original model became more popular than anticipated, selling outside its target market for uses such as robotics. It is widely used in many areas, such as weather monitoring, because of its low cost, modularity, and open design. It is typically used by computer and electronics hobbyists, due to its adoption of the HDMI and USB standards.

Fig. Raspberry Pi

Raspberry Pi is a minicomputer the size of a credit card that is interoperable with input and output hardware devices such as a monitor, a television, a mouse, or a keyboard – effectively converting the set-up into a full-fledged PC at a low cost. The first generation of computers came as massive processing systems built with vacuum tube technology. Over the years, more compact and less expensive versions of the computer sprang up. Today, we have minicomputer gadgets such as smartphones in our pockets. Even though computers have become so commonplace, they are still not widely accessible in developing countries. This imbalance in access to computers and programming technology led to the development and creation of the Raspberry Pi computer.
Raspberry Pi is a small, low-cost, single-board computer the size of a credit card that allows people from different backgrounds and levels of expertise to experience and learn computing. It is a compact motherboard developed in the United Kingdom by the Raspberry Pi Foundation, now widely accepted as a part of evolving computer technology. The minicomputer can connect with peripheral hardware devices such as a keyboard, mouse, and monitor.

Raspberry Pi is a programmable device. It comes with all the critical features of the motherboard in an average computer but without peripherals or internal storage. To set up the Raspberry Pi, you will need an SD card inserted into the provided slot. The SD card should have the operating system installed and is required for the computer to boot. Raspberry Pi computers are compatible with Linux operating systems, which keeps memory requirements low and creates an environment for diverse uses. After setting up the OS, one can connect the Raspberry Pi to output devices like computer monitors or a High-Definition Multimedia Interface (HDMI) television. Input devices like mice or keyboards should also be connected. This minicomputer’s exact use and applications depend on the buyer and can cover many functions.

Features of Raspberry Pi:

Fig. Raspberry Pi pinout

Processor (CPU): As already mentioned, Raspberry Pi models use ARM-based processors. Each generation has its own CPU and corresponding improvements. All models use Broadcom processors, typically with the prefix “BCM” (e.g. BCM2835, BCM2836, BCM2837, or BCM2711).
RAM (Memory): The amount of RAM on a Raspberry Pi varies across models. Older models have less RAM than newer ones: the lowest is 256MB, while the newest has 8GB.
GPIO (General-Purpose Input/Output): Raspberry Pi boards include a set of GPIO pins that allow for interfacing with external devices and components, making the Pi a versatile platform for hardware projects.
USB Ports: Raspberry Pi boards typically come with multiple USB ports for connecting peripherals such as keyboards, mice, external storage devices, and other USB-compatible devices.
Video Output: Most Raspberry Pi models have an HDMI port for connecting to monitors or TVs. Older models may have composite video or other types of video output.
Audio Output: Raspberry Pi boards usually have a 3.5mm audio jack for audio output. HDMI also supports audio, so sound can be transmitted through an HDMI connection as well.
Ethernet Port: Many Raspberry Pi models include an Ethernet port for wired network connectivity. However, some models rely solely on Wi-Fi for network connectivity.
Wi-Fi and Bluetooth: Some Raspberry Pi models come with built-in Wi-Fi and Bluetooth capabilities, enabling wireless network connectivity and communication with Bluetooth-enabled devices.
Storage: Raspberry Pi boards do not have built-in storage but support microSD cards for primary storage. Newer models can also boot from USB storage.
Camera and Display Ports: Certain Raspberry Pi models have dedicated ports for connecting the Raspberry Pi Camera Module and the Raspberry Pi Touchscreen Display.
OS Support: Raspberry Pi supports a variety of operating systems, including Raspbian (now called Raspberry Pi OS), Ubuntu, and other Linux distributions. It can also run a special port of Windows 10. Additionally, there are community-supported projects for running other operating systems.
Power Supply: Raspberry Pi boards typically use a micro-USB or USB-C connector for power. The power requirements vary depending on the model: the latest Raspberry Pi 5 requires 5V at 5A, while the oldest models only required 5V at 500mA.
Form Factor: Raspberry Pi boards are compact, credit card-sized single-board computers, making them suitable for a wide range of projects.
Uses of Raspberry Pi:

Learning to Program: Raspberry Pi is an excellent tool for learning programming languages such as Python, Scratch, and others. It provides a hands-on experience for beginners to develop their coding skills. If you’re using a Linux-based OS, the Pi is a great tool for learning C/C++ or web programming languages as well.
Home Automation: Raspberry Pi can be used to create a home automation system. You can control lights, appliances, and other devices using the GPIO pins or by connecting them to other home automation platforms.
Media Center: With software like Kodi or Plex, Raspberry Pi can be turned into a media center. It can stream and play media content, making it an affordable alternative to dedicated media players.
Desktop Computer: Although not as powerful as traditional desktop computers, Raspberry Pi can be used for basic computing tasks such as web browsing, word processing, and programming.
Web Server: Raspberry Pi can host simple websites or web applications using server software like Apache or Nginx. It's a great way to learn about web hosting and server management.
Network-Attached Storage (NAS): By connecting external hard drives to the Raspberry Pi, you can turn it into a personal cloud storage system for sharing and accessing files on your local network.
Robotics and DIY Projects: Raspberry Pi is widely used in robotics and DIY projects. Its GPIO pins allow you to connect and control sensors, motors, and other components for building custom projects.
Security Camera System: Using the Raspberry Pi along with a camera module, you can set up a DIY security camera system. There are various software options available for motion detection and recording.
Educational Tools: Raspberry Pi is used in educational settings to teach computer science, programming, and electronics. It provides an affordable platform for hands-on learning.
VPN Server: Raspberry Pi can be configured as a VPN (Virtual Private Network) server, allowing you to secure your internet connection and access your home network remotely.
Weather Station: With the addition of sensors, Raspberry Pi can be turned into a weather station to measure and log temperature, humidity, and other environmental data.
Gaming Console: Retro gaming enthusiasts use Raspberry Pi to build DIY gaming consoles using software like RetroPie, allowing them to play classic games on emulators.

SD Card:
The SD card must be formatted, or written to, in a special way so that the Raspberry Pi can read the data it needs to start properly.

Pi Camera:
The camera is used to capture images of the user's surroundings and connects directly to the Raspberry Pi. The captured frames are sent to the Raspberry Pi, where they are processed by the object detection model.

Fig. Pi Camera

Features of Pi Camera:
1. 15-pin ribbon cable
2. Still picture resolution of 3280 x 2464
3. Automatic 50/60Hz luminance detection
4. Automatic black level calibration
5. Automatic exposure control, white balance and band filter
6. High data capability
7. High-quality imagery
8. Operating temperature range of -20°C to 60°C
9. Supports 1080p30, 720p60 and VGA90 video modes

Applications:
Home security
Capturing HD videos and still photographs
Wildlife watching

Accelerometer:

Fig. Accelerometer

With I2C and SPI interfaces, the ADXL345 is a compact, low-power, full 3-axis MEMS accelerometer module. The ADXL345 board has an on-board level shifter and 3.3V voltage regulator, which makes interfacing with 5V microcontrollers like the Arduino easy.
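Reading the ADXL345 described above and flagging a possible fall can be sketched as follows. The register addresses (data registers starting at 0x32) and the default I2C address 0x53 come from the ADXL345 datasheet; the scale factor of roughly 3.9 mg per count applies in full-resolution mode. The `smbus2` package used in `read_axes()` is an assumed dependency for the hardware path; the conversion and detection helpers are pure Python.

```python
# Sketch of reading the ADXL345 over I2C and flagging a free-fall event.
# Register map values are from the ADXL345 datasheet; `smbus2` is an
# assumed dependency for the hardware read path.
ADDR = 0x53          # default I2C address (SDO pulled low)
SCALE = 0.0039       # ~3.9 mg/LSB in full-resolution mode


def to_signed(lo, hi):
    """Combine two data-register bytes into a signed 16-bit count."""
    v = (hi << 8) | lo
    return v - 65536 if v & 0x8000 else v


def magnitude_g(x, y, z):
    """Magnitude of the acceleration vector, converted from counts to g."""
    return (x * x + y * y + z * z) ** 0.5 * SCALE


def is_free_fall(x, y, z, threshold_g=0.4):
    """Near-zero acceleration on all axes suggests the user is falling."""
    return magnitude_g(x, y, z) < threshold_g


def read_axes(bus):
    """Read raw X, Y, Z counts (DATAX0..DATAZ1 registers at 0x32..0x37)."""
    d = bus.read_i2c_block_data(ADDR, 0x32, 6)
    return (to_signed(d[0], d[1]), to_signed(d[2], d[3]),
            to_signed(d[4], d[5]))
```

In the full system a free-fall event detected this way could trigger the GSM emergency SMS; the 0.4g threshold is an illustrative value that would need tuning on the device.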
DC supply voltage range: 3V to 6V
Integrated low-dropout (LDO) voltage regulator
Embedded voltage level converter (MOSFET-based)
Compatible with microcontrollers operating at either 3.3V or 5V
Extremely low power consumption: 40µA in active measurement mode and 0.1µA in standby mode at 2.5V
Detection capabilities: tap and double-tap recognition, free-fall detection
Supports both SPI and I2C communication protocols
Acceleration measurement range: ±16g
Typical measurement range for each axis (raw counts): X-axis: -235 to +270, Y-axis: -240 to +260, Z-axis: -240 to +270

Applications of ADXL345 Accelerometer:
Affordable, energy-efficient solutions for detecting motion and tilt
Handheld electronics and smartphones
Video game consoles and controllers
Hard drive impact protection
Camera stabilization systems
Fitness and wellness monitoring devices

ADXL345 Module Pin Configuration:
VCC (supply), GND, CS (chip select; tie high for I2C mode), INT1 and INT2 (interrupt outputs), SDO (SPI data out / I2C address select), and SDA/SCL (I2C data and clock, shared with the SPI lines). The on-board regulator allows a 3V to 6V supply; do not exceed this range.

GSM SIM900A:
The SIM900A is a readily available GSM/GPRS module, used in many mobile phones and PDAs. The module can also be used for developing IoT (Internet of Things) and embedded applications. SIM900A is a dual-band GSM/GPRS engine that works on the frequencies EGSM 900MHz and DCS 1800MHz. SIM900A features GPRS multi-slot class 10/class 8 (optional) and supports the GPRS coding schemes CS-1, CS-2, CS-3 and CS-4.

Features and Specifications:
Single supply voltage: 3.4V – 4.5V
Power saving mode: typical power consumption in SLEEP mode is 1.5mA
Frequency bands: dual-band EGSM 900MHz and DCS 1800MHz. The SIM900A can search the two frequency bands automatically; the bands can also be set by AT command.
GSM class: small MS
GPRS connectivity: GPRS multi-slot class 10 (default), GPRS multi-slot class 8 (optional)
Transmitting power: Class 4 (2W) at EGSM 900, Class 1 (1W) at DCS 1800
Operating temperature: -30ºC to +80ºC
Storage temperature: -5ºC to +90ºC
GPRS data: maximum download rate 85.6 kbps, maximum upload rate 42.8 kbps
Supports CSD, USSD, SMS and FAX
Supports microphone input and speaker output
Features keypad interface
Features display interface
Features real-time clock
Supports UART interface
Supports a single SIM card
Firmware upgrade via debug port
Communication using AT commands

Applications:
Cellular communication
Robotics
Mobile phone accessories
Servers
Computer peripherals
Automobiles
USB dongles

SIM900A GSM Module Pinout Configuration:
SIM900A is a 68-terminal device, as shown in the pin diagram. The function of each pin is described below.

Pin 1 (PWRKEY): Voltage input for PWRKEY. PWRKEY should be pulled low to power the system on or off. The user should hold the key low for a short time when powering on or off, because the system needs margin time to assert the software.
Pin 2 (PWRKEY_OUT): Connecting PWRKEY to PWRKEY_OUT for a short time and then releasing also powers the module on or off.
Pin 3 (DTR): Data Terminal Ready (serial port)
Pin 4 (RI): Ring Indicator (serial port)
Pin 5 (DCD): Data Carrier Detect (serial port)
Pin 6 (DSR): Data Set Ready (serial port)
Pin 7 (CTS): Clear To Send (serial port)
Pin 8 (RTS): Request To Send (serial port)
Pin 9 (TXD): Transmit data (serial port)
Pin 10 (RXD): Receive data (serial port)
Pin 11 (DISP_CLK): Clock for display (display interface)
Pin 12 (DISP_DATA): Display data output (display interface)
Pin 13 (DISP_D/C): Display data or command select (display interface)
Pin 14 (DISP_CS): Display enable (display interface)
Pin 15 (VDD_EXT): 2.8V output power supply
Pin 16 (NRESET): External reset input
Pins 17, 18, 29, 39, 45, 46, 53, 54, 58, 59, 61, 62, 63, 64, 65 (GND): Ground
Pin 19 (MIC_P): Microphone positive
Pin 20 (MIC_N): Microphone negative
Pin 21 (SPK_P): Speaker positive
Pin 22 (SPK_N): Speaker negative
Pin 23 (LINEIN_R): Right channel input. External line inputs are available to directly mix or multiplex externally generated analog signals, such as polyphonic tones from an external melody IC or music generated by an FM tuner IC or module.
Pin 24 (LINEIN_L): Left channel input
Pin 25 (ADC): General-purpose analog-to-digital converter
Pin 26 (VRTC): Current input for the RTC when the battery is not supplied to the system; current output for the backup battery when the main battery is present and the backup battery is at low voltage
Pin 27 (DBG_TXD): Transmit pin (serial interface for debugging and firmware upgrade)
Pin 28 (DBG_RXD): Receive pin (serial interface for debugging and firmware upgrade)
Pin 30 (SIM_VDD): Voltage supply for SIM card
Pin 31 (SIM_DATA): SIM data output
Pin 32 (SIM_CLK): SIM clock
Pin 33 (SIM_RST): SIM reset
Pin 34 (SIM_PRESENCE): SIM detect
Pin 35 (PWM1): PWM output
Pin 36 (PWM2): PWM output
Pin 37 (SDA): Serial data (I2C)
Pin 38 (SCL): Serial clock (I2C)
Pins 40 to 44 (KBR0 to KBR4) and pins 47 to 51 (KBC4 to KBC0): Keypad interface (rows and columns)
Pin 52 (NETLIGHT): Indicates network status
Pins 55, 56, 57 (VBAT): Three VBAT pins are dedicated to the supply voltage. The power supply of the SIM900A must be a single voltage source of VBAT = 3.4V to 4.5V.
It must be able to provide sufficient current in a transmit burst, which typically rises to 2A.
Pin 60 (RF_ANT): Antenna connection
Pin 66 (STATUS): Indicates working status
Pin 67 (GPIO11): General-purpose input/output
Pin 68 (GPIO12): General-purpose input/output

GPS Module:

Fig.4. GPS Module

The NEO-6MV2 is a GPS (Global Positioning System) receiver module used for navigation. The module determines its location on Earth and outputs the latitude and longitude of that position. It is part of a series of standalone GPS receivers built around the high-performance u-blox 6 positioning engine. These versatile and reasonably priced receivers offer a large selection of connectivity options and are compact (16 x 12.2 x 2.4 mm). The NEO-6 modules' small design, power, and memory options make them ideal for battery-operated devices with strict cost and space constraints. Because of its design, the NEO-6MV2 performs remarkably well even in the most challenging navigational conditions.

Applications:
Global Positioning System (GPS) usage
Mobile phones and tablets
Guidance and navigation systems
Unmanned aerial vehicles (UAVs)
Do-it-yourself (DIY) and hobbyist projects

NEO-6MV2 GPS Module Pin Configuration:
The module has four pins, which carry power and the communication interface; each is described below.

Features:
1. Standalone GPS receiver: the module operates independently as a GPS receiver, which means it does not require additional hardware for basic GPS operation.
2. Anti-jamming technology: this feature enhances the receiver's ability to maintain signal accuracy and reliability even in environments with interference or signal disruptions.
3. UART interface: the module provides a UART (Universal Asynchronous Receiver/Transmitter) interface for communication with other devices.
Additionally, you can use SPI (Serial Peripheral Interface), I²C (Inter-Integrated Circuit), and USB by soldering pins directly to the chip core.
4. Time-to-first-fix: cold start: 32 seconds, the time required to acquire satellite signals and determine location from scratch; warm start: 23 seconds, used when the GPS module has recent satellite data but is not currently active; hot start: less than 1 second, where the module quickly acquires satellite signals due to recent and active data.
5. Receiver type: the module supports 50 channels and operates on the GPS L1 frequency band. It is also compatible with various Satellite-Based Augmentation Systems (SBAS), including the Wide Area Augmentation System (WAAS), the European Geostationary Navigation Overlay Service (EGNOS), the Multi-functional Satellite Augmentation System (MSAS), and the GPS Aided Geo Augmented Navigation (GAGAN) system.
6. Maximum navigation update rate: 5 Hz, meaning the module can update its position data up to 5 times per second.
7. EEPROM with battery backup: ensures that important data, such as satellite positions and configuration settings, are retained even when the power is off.

Electrical Characteristics:
1. Default baud rate: 9600 bps (bits per second), the standard communication speed for UART data transmission.
2. Sensitivity: -160 dBm, indicating the module's ability to detect very weak GPS signals, which enhances its performance in challenging environments.
3. Supply voltage: 3.6V, the operating voltage required to power the module.
4. Maximum DC current at any output: 10 mA, the maximum current the module's output pins can supply without damage.
5. Operation limits: gravity: up to 4g of acceleration; altitude: up to 50,000 meters (50 km), making it suitable for high-altitude applications; velocity: up to 500 meters per second (approximately 1800 km/h), covering high-speed scenarios such as aircraft or high-speed vehicles.
6.
Operating temperature range: -40°C to 85°C, allowing the module to function reliably across a wide range of environmental conditions.

Earphone:
1. Driver type: dynamic, balanced armature, or hybrid drivers; affects sound quality, bass response, and overall clarity.
2. Impedance: measured in ohms (Ω), typically 16Ω to 32Ω for earphones; affects compatibility with devices and sound output. Lower impedance works well with portable devices.
3. Frequency response: typically 20Hz to 20kHz, covering the full range of human hearing; a wider range may indicate better sound reproduction.
4. Sensitivity: measured in decibels (dB), typically between 90dB and 110dB; higher sensitivity means the earphones can produce louder sound with less power.
5. Connector: usually a 3.5mm jack, but some high-end earphones come with a 2.5mm or 4.4mm balanced jack for improved sound quality. USB-C or Lightning connectors are also available on some newer models.
6. Cable length: generally between 1.2 and 1.5 meters, though this can vary; a longer cable may be more flexible for certain uses but can become tangled more easily.
7. Microphone: some wired earphones have built-in microphones for calls or voice commands.
8. Noise isolation/noise cancellation: noise isolation is passive (ear tips block external sounds); noise cancellation (Active Noise Cancellation, ANC) uses electronics to cancel out background noise, though ANC is more common in over-ear headphones.
9. Material: earbuds may have plastic, metal, or silicone parts for durability and comfort; the tips can be made from silicone, foam, or memory foam for better fit and comfort.
10. Weight: wired earphones are usually lightweight, typically weighing 10–30 grams depending on the design and materials.
11.
Compatibility: most wired earphones are compatible with smartphones, laptops, and other audio devices with a 3.5mm jack or USB-C/Lightning port (for specific models).

Software Requirements:

Telegram Bot Platform:
Telegram is about freedom and openness – its code is open for everyone, as is its API, and the platform offers a Bot API for third-party developers to create bots. Bots are simply Telegram accounts operated by software – not people – and they will often have AI features. They can do anything – teach, play, search, broadcast, remind, connect, integrate with other services, or even pass commands to the Internet of Things.

Install an operating system:
To use your Raspberry Pi, you will need an operating system. By default, Raspberry Pis check for an operating system on any SD card inserted in the SD card slot. Depending on your Raspberry Pi model, you can also boot an operating system from other storage devices, including USB drives, storage connected via a HAT, and network storage.

To install an operating system on a storage device for your Raspberry Pi, you'll need:
a computer you can use to image the storage device into a boot device
a way to plug your storage device into that computer

Most Raspberry Pi users choose microSD cards as their boot device. We recommend installing an operating system using Raspberry Pi Imager. Raspberry Pi Imager is a tool that helps you download and write images on macOS, Windows, and Linux. Imager includes many popular operating system images for Raspberry Pi. Imager also supports loading images downloaded directly from Raspberry Pi or third-party vendors such as Ubuntu.
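The Telegram Bot Platform listed under Software Requirements can be driven with plain HTTP calls; `sendMessage` is the standard Bot API method for pushing a text notification. The sketch below uses only the Python standard library; the bot token and chat id are placeholders, not values from this project.

```python
# Minimal sketch of sending an alert through the Telegram Bot API.
# The token and chat id passed in are placeholders; sendMessage is the
# standard Bot API method name.
import json
from urllib import request, parse

API = "https://api.telegram.org/bot{token}/{method}"


def build_request(token, chat_id, text):
    """Build the sendMessage URL and form-encoded request body."""
    url = API.format(token=token, method="sendMessage")
    body = parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    return url, body


def send(token, chat_id, text):
    """POST the message to the Bot API and return the parsed JSON reply."""
    url, body = build_request(token, chat_id, text)
    with request.urlopen(request.Request(url, data=body)) as resp:
        return json.load(resp)
```

In this system such a call could mirror the GSM SMS alert over the internet whenever the Pi has connectivity.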
You can use Imager to preconfigure credentials and remote access settings for your Raspberry Pi. Imager supports images packaged in the .img format as well as container formats like .zip. If you have no other computer to write an image to a boot device, you may be able to install an operating system directly on your Raspberry Pi from the internet.

Install using Imager:
You can install Imager in the following ways:
Download the latest version from raspberrypi.com/software and run the installer.
Install it from a terminal using your package manager, e.g. sudo apt install rpi-imager.

Once you've installed Imager, launch the application by clicking the Raspberry Pi Imager icon or running rpi-imager.

Fig.2. Choose device

Click Choose device and select your Raspberry Pi model from the list.

Fig.3. Select Raspberry Pi model

Next, click Choose OS and select an operating system to install. Imager always shows the recommended version of Raspberry Pi OS for your model at the top of the list.

Fig.4. Choose the OS

Connect your preferred storage device to your computer. For example, plug a microSD card in using an external or built-in SD card reader. Then, click Choose storage and select your storage device.

WARNING: If you have more than one storage device connected to your computer, be sure to choose the correct device! You can often identify storage devices by size. If you’re unsure, disconnect other devices until you’ve identified the device you want to image.

Fig.5. Select the correct device (microSD card)

Next, click Next.

Fig.6. OS customisation

In a popup, Imager will ask you to apply OS customisation. We strongly recommend configuring your Raspberry Pi via the OS customisation settings. Click the Edit Settings button to open OS customisation. If you don't configure your Raspberry Pi via OS customisation settings, Raspberry Pi OS will ask you for the same information at first boot during the configuration wizard. You can click the No button to skip OS customisation.
OS customisation:
The OS customisation menu lets you set up your Raspberry Pi before first boot. You can preconfigure:
a username and password
WiFi credentials
the device hostname
the time zone
your keyboard layout
remote connectivity

When you first open the OS customisation menu, you might see a prompt asking for permission to load WiFi credentials from your host computer. If you respond "yes", Imager will prefill WiFi credentials from the network you're currently connected to. If you respond "no", you can enter WiFi credentials manually.

The hostname option defines the hostname your Raspberry Pi broadcasts to the network using mDNS. When you connect your Raspberry Pi to your network, other devices on the network can communicate with it using <hostname>.local or <hostname>.lan. The username and password option defines the username and password of the admin user account on your Raspberry Pi. The wireless LAN option allows you to enter an SSID (name) and password for your wireless network. If your network does not broadcast an SSID publicly, you should enable the "Hidden SSID" setting. By default, Imager uses the country you're currently in as the "Wireless LAN country" setting. This setting controls the WiFi broadcast frequencies used by your Raspberry Pi. Enter credentials for the wireless LAN option if you plan to run a headless Raspberry Pi. The locale settings option allows you to define the time zone and default keyboard layout for your Pi.

Fig.7. OS customisation General details of user

The Services tab includes settings to help you connect to your Raspberry Pi remotely. If you plan to use your Raspberry Pi remotely over your network, check the box next to Enable SSH. You should enable this option if you plan to run a headless Raspberry Pi. Choose the password authentication option to SSH into your Raspberry Pi over the network using the username and password you provided in the General tab of OS customisation.
Choose Allow public-key authentication only to preconfigure your Raspberry Pi for passwordless public-key SSH authentication using a private key from the computer you're currently using. If you already have an RSA key in your SSH configuration, Imager uses that public key. If you don't, you can click Run SSH-keygen to generate a public/private key pair. Imager will use the newly-generated public key. Fig.8. OS customisation Services OS customisation also includes an Options menu that allows you to configure the behaviour of Imager during a write. These options allow you to play a noise when Imager finishes verifying an image, to automatically unmount storage media after verification, and to disable telemetry. Fig.9. OS customisation Options Write: When you've finished entering OS customisation settings, click Save to save your customisation. Then, click Yes to apply OS customisation settings when you write the image to the storage device. Finally, respond Yes to the "Are you sure you want to continue?" popup to begin writing data to the storage device. Fig.10. microSD Card data erased If you see an admin prompt asking for permission to read and write to your storage medium, it's safe to proceed. Fig.11. Writing the OS (Uploading in microSD card) Grab a cup of coffee or go for a walk. This could take a few minutes. Fig.12. Writing the OS (Uploading in microSD card) If you want to live especially dangerously, you can click cancel verify to skip the verification process. When you see the "Write successful" popup, your image has been completely written and verified. You're now ready to boot a Raspberry Pi from the storage device! Fig.13. Writing the OS Successful (Uploaded in microSD card) Next, proceed to the first boot configuration instructions to get your Raspberry Pi up and running. Set up your Raspberry Pi: After installing an operating system image, connect your storage device to your Raspberry Pi.
First, unplug your Raspberry Pi's power supply to ensure that the Raspberry Pi is powered down while you connect peripherals. If you installed the operating system on a microSD card, you can plug it into your Raspberry Pi's card slot now. If you installed the operating system on any other storage device, you can connect it to your Raspberry Pi now. Fig.14. Insert microSD card to Raspberry Pi Then, plug in any other peripherals, such as your mouse, keyboard, and monitor. Fig.15. Connect Peripherals Finally, connect the power supply to your Raspberry Pi. You should see the status LED light up when your Pi powers on. If your Pi is connected to a display, you should see the boot screen within minutes. Configuration on First Boot: If you used OS customisation in Imager to preconfigure your Raspberry Pi, congratulations! Your device is ready to use. Proceed to the next steps to learn how you can put your Raspberry Pi to good use. If your Raspberry Pi does not boot within 5 minutes, check the status LED. If it's flashing, see the LED warning flash codes for more information. If your Pi refuses to boot, try the following mitigation steps: If you used a boot device other than an SD card, try booting from an SD card. Re-image your SD card; be sure to complete the entire verify step in Imager. Update the bootloader on your Raspberry Pi, then re-image your SD card. If you chose to skip OS customisation in Imager, your Raspberry Pi will run a configuration wizard on first boot. You need a monitor and keyboard to navigate through the wizard; a mouse is optional. Fig.16. Raspberry Pi booting Bluetooth: If you're using a Bluetooth keyboard or mouse, this step will walk you through device pairing. Your Raspberry Pi will scan for pairable devices and connect to the first device it finds for each item. This process works with built-in or external USB Bluetooth adapters. If you use a USB adapter, plug it in before booting your Raspberry Pi.
Locale: This page helps you configure your country, language, time zone, and keyboard layout. Fig.17. Writing the OS Successful (Uploaded in microSD card) User: This page helps you configure the username and password for the default user account. By default, older versions of Raspberry Pi OS set the username to "pi". If you use the username "pi", avoid the old default password of "raspberry" to keep your Raspberry Pi secure. Fig.18. Create User Account WiFi: This page helps you connect to a WiFi network. Choose your preferred network from the list. Fig.19. Connect to WiFi If your network requires a password, you can enter it here. Fig.20. Enter WiFi Password Browser: This page lets you select Firefox or Chromium as your default internet browser. You can optionally uninstall the browser you don't set as default. Fig.21. Choose browser Software Updates: Once your Raspberry Pi has internet access, this page helps you update your operating system and software to the latest versions. During the software update process, the wizard will remove the non-default browser if you opted to uninstall it in the browser selection step. Downloading updates may take several minutes. Fig.22. Update Software Fig.23. Download Updates When you see a popup indicating that your system is up to date, click OK to proceed to the next step. Finish: At the end of the configuration wizard, click Restart to reboot your Raspberry Pi. Your Raspberry Pi will apply your configuration and boot to the desktop. Fig.24. Setup Completed Restart Next steps: Now that your Raspberry Pi is set up and ready to go, what's next? Recommended software: Raspberry Pi OS comes with many essential applications pre-installed so you can start using them straight away. If you'd like to take advantage of other applications we find useful, click the raspberry icon in the top left corner of the screen. Select Preferences > Recommended Software from the drop-down menu, and you'll find a package manager.
You can install a wide variety of recommended software here for free. Fig.25. Raspberry Pi Desktop YOLO and Darkflow YOLO object detector: Figure 1: A simplified illustration of the YOLO object detector pipeline. We'll use YOLO with OpenCV in this project. When it comes to deep learning-based object detection, there are three primary object detectors you'll encounter: R-CNNs and their variants, including the original R-CNN, Fast R-CNN, and Faster R-CNN; Single Shot Detectors (SSDs); and YOLO. R-CNNs are one of the first deep learning-based object detectors and are an example of a two-stage detector. 1. In the first R-CNN publication, Rich feature hierarchies for accurate object detection and semantic segmentation (2013), Girshick et al. proposed an object detector that required an algorithm such as Selective Search (or equivalent) to propose candidate bounding boxes that could contain objects. 2. These regions were then passed into a CNN for classification, ultimately leading to one of the first deep learning-based object detectors. The problem with the standard R-CNN method was that it was painfully slow and not a complete end-to-end object detector. Girshick et al. published a second paper in 2015, entitled Fast R-CNN. The Fast R-CNN algorithm made considerable improvements to the original R-CNN, namely increasing accuracy and reducing the time it took to perform a forward pass; however, the model still relied on an external region proposal algorithm. It wasn't until Girshick et al.'s follow-up 2015 paper, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, that R-CNNs became a true end-to-end deep learning object detector by removing the Selective Search requirement and instead relying on a Region Proposal Network (RPN) that is (1) fully convolutional and (2) can predict the object bounding boxes and "objectness" scores (i.e., a score quantifying how likely it is that a region of an image may contain an object).
The outputs of the RPNs are then passed into the R-CNN component for final classification and labeling. While R-CNNs tend to be very accurate, the biggest problem with the R-CNN family of networks is their speed — they were incredibly slow, obtaining only 5 FPS on a GPU. To help increase the speed of deep learning-based object detectors, both Single Shot Detectors (SSDs) and YOLO use a one-stage detector strategy. These algorithms treat object detection as a regression problem, taking a given input image and simultaneously learning bounding box coordinates and corresponding class label probabilities. In general, single-stage detectors tend to be less accurate than two-stage detectors but are significantly faster. YOLO is a great example of a single-stage detector. First introduced in 2015 by Redmon et al., their paper, You Only Look Once: Unified, Real-Time Object Detection, details an object detector capable of super real-time object detection, obtaining 45 FPS on a GPU. Note: A smaller variant of their model called "Fast YOLO" claims to achieve 155 FPS on a GPU. YOLO has gone through a number of different iterations, including YOLO9000: Better, Faster, Stronger (i.e., YOLOv2), capable of detecting over 9,000 object classes. Redmon and Farhadi are able to achieve such a large number of object detections by performing joint training for both object detection and classification. Using joint training, the authors trained YOLO9000 simultaneously on both the ImageNet classification dataset and the COCO detection dataset. The result is a YOLO model, called YOLO9000, that can predict detections for object classes that don't have labeled detection data. While interesting and novel, YOLOv2's performance was a bit underwhelming given the title and abstract of the paper. On the 156-class version of COCO, YOLO9000 achieved 16% mean Average Precision (mAP), and yes, while YOLO can detect 9,000 separate classes, the accuracy is not quite what we would desire.
Redmon and Farhadi published a new YOLO paper, YOLOv3: An Incremental Improvement (2018). YOLOv3 is significantly larger than previous models but is, in my opinion, the best one yet out of the YOLO family of object detectors. We'll be using YOLOv3 in this project, in particular, YOLO trained on the COCO dataset. The COCO dataset consists of 80 labels, including, but not limited to: people; bicycles; cars and trucks; airplanes; stop signs and fire hydrants; animals, including cats, dogs, birds, horses, cows, and sheep, to name a few; kitchen and dining objects, such as wine glasses, cups, forks, knives, spoons, etc.; and much more! I'll wrap up this section by saying that any academic needs to read Redmon's YOLO papers and tech reports — not only are they novel and insightful, they are incredibly entertaining as well. But seriously, if you do nothing else today, read the YOLOv3 tech report. It's only 6 pages, and one of those pages is just references/citations. Furthermore, the tech report is honest in a way that academic papers rarely, if ever, are. You Only Look Once [3] (YOLO) is an object detector that processes the entire image in a single pass. Classic object classifiers slide a small window across the image and run the classifier at each step to predict what is in the current window. This approach is very slow, since the classifier has to run many times to get the most certain result. YOLO [3], by contrast, divides the image into a grid of 13×13 cells; it looks at the image just once and is thus faster. Each grid cell predicts bounding boxes and the confidence of these bounding boxes. The confidence represents how certain the model is that the box contains an object. Hence, if there is no object, the confidence should be zero. An intersection over union (IOU) between the predicted box and the ground truth is also used when drawing the bounding box. As described in [3], each bounding box has 5 predictions: x, y, w, h, and confidence.
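The IOU measure mentioned above can be expressed as a short function. The following is a minimal Python sketch; the (x1, y1, x2, y2) corner representation of a box is an assumption chosen for illustration, not the exact encoding used by darknet:

```python
# Minimal IOU sketch. Boxes are assumed to be (x1, y1, x2, y2)
# corner coordinates; this representation is an illustrative choice.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Intersection area is zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

IOU is 1.0 for identical boxes, 0.0 for disjoint ones, and in between when they partially overlap; detectors typically use it both for matching predictions to ground truth and for suppressing duplicate boxes.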
The (x, y) coordinates represent the centre of the box relative to the bounds of the grid cell. The width and height are predicted relative to the whole image. Finally, the confidence prediction represents the IOU between the predicted box and any ground truth box. For every cell, the model predicts 5 bounding boxes and what is present in each. YOLO outputs a confidence score that lets us know how certain it is about its prediction. The predicted bounding box encloses the object that it has classified. The higher the confidence score, the thicker the box is drawn. Every bounding box represents a class or a label. Since there are 13×13 = 169 grid cells and each cell predicts 5 bounding boxes, we end up with 845 bounding boxes in total. It turns out that most of these boxes have very low confidence scores, so only the boxes whose final score is 55% or more are retained. Based on the needs of the application, the confidence threshold can be increased or decreased. From the paper [3], the architecture of YOLO can be retrieved, which is a convolutional neural network. The initial convolutional layers of the network extract features from the image, whereas the fully connected layers predict the output probabilities and coordinates. YOLO [3] uses 24 convolutional layers followed by 2 fully connected layers. The final output from this model is a 7×7×30 tensor of predictions. The PASCAL VOC [6] dataset is used to train this model. There is an implementation of YOLO [3] in C/C++ called darknet, with pre-trained weights and configuration (cfg) files that can be used for detection. To make the implementation more efficient on the Raspberry Pi, the TensorFlow port of darknet, called Darkflow, is used. Images are passed to this detection framework, and the output contains the 5 predictions discussed before. Darkflow outputs either an image file with bounding boxes drawn or a JSON file.
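The 55% retention rule described above can be sketched as a post-processing step. This is an illustrative sketch only: it assumes raw output rows of the form [cx, cy, w, h, objectness, class scores...] as produced by a COCO-trained YOLO-style network, the helper name filter_detections is ours (not part of darkflow or OpenCV), and combining objectness with the best class score is one common convention rather than the only one:

```python
import numpy as np

# Hypothetical post-processing sketch for YOLO/COCO output rows.
# Each row: [cx, cy, w, h, objectness, 80 class scores].
def filter_detections(rows, class_names, conf_threshold=0.55):
    kept = []
    for row in rows:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        # Final score combines objectness with the best class score
        # (one common convention; some implementations use the class
        # score alone).
        confidence = row[4] * scores[class_id]
        if confidence >= conf_threshold:
            kept.append((class_names[class_id], float(confidence)))
    return kept
```

A real pipeline would additionally apply non-maximum suppression (e.g. using the IOU measure) so that overlapping boxes for the same object are collapsed into one.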
This JSON file is parsed into a plain text file that keeps only the detected object labels and their counts, omitting the rest of the data. The objects, along with their counts, are fed into the text-to-speech unit eSpeak. Methodology: The proposed Smart Guidance System for visually impaired individuals is built around a Raspberry Pi microcontroller, which serves as the central processing unit for integrating various hardware components and software functionalities. A USB camera captures real-time video, which is processed using the YOLO (You Only Look Once) object detection algorithm to identify and classify objects in the user's environment. The detected objects are converted into audio descriptions using a text-to-speech system, providing real-time feedback through earphones to inform the user about surrounding obstacles and enhance spatial awareness. To enable location tracking, a GPS module continuously retrieves the user's current coordinates. This location data is crucial for providing situational awareness and guiding the user effectively. In addition to navigation support, the system is designed to handle emergency scenarios. When a significant obstacle or critical hazard is detected, the Raspberry Pi triggers a GSM module to send an SMS alert containing the user's live location to a pre-defined contact. This real-time communication feature ensures prompt assistance, enhancing the safety and reliability of the system. The system integrates multiple technologies in a compact, portable form. Python programming is used to implement the control logic, leveraging libraries such as OpenCV for image processing, pyttsx3 for audio feedback, and GPS/GSM libraries for location-based services. All hardware components are powered by a rechargeable power supply, ensuring usability in mobile scenarios.
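The JSON-to-speech and SMS-alert steps described above can be sketched as follows. This is a hedged illustration: the record layout ({"label": ..., "confidence": ...}) follows darkflow's JSON output format, but the phrase wording and the alert_sms helper are our assumptions. In the real system the composed phrase would be handed to eSpeak or pyttsx3, and the SMS body would be sent through the GSM module, typically by issuing AT commands (AT+CMGF=1, AT+CMGS) over a serial link:

```python
import json
from collections import Counter

# Sketch only: darkflow-style JSON records; helper names and
# message wording are hypothetical, not a fixed API.
def count_objects(json_text, min_confidence=0.55):
    """Count detected labels, omitting low-confidence detections."""
    detections = json.loads(json_text)
    return Counter(d["label"] for d in detections
                   if d["confidence"] >= min_confidence)

def speech_phrase(counts):
    """Compose the phrase handed to the text-to-speech unit."""
    if not counts:
        return "path clear"
    parts = [f"{n} {label}" for label, n in counts.items()]
    return ", ".join(parts) + " ahead"

def alert_sms(lat, lon):
    """Hypothetical emergency SMS body with the live coordinates."""
    return f"Emergency! Location: https://maps.google.com/?q={lat},{lon}"
```

Keeping these steps as pure functions separates the decision logic from the hardware, so the same code can be exercised on a desktop before being wired to the camera, GPS, and GSM modules on the Pi.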
This modular, scalable design makes the system cost-effective and adaptable for future enhancements, including advanced navigation features, voice commands, and improved detection accuracy using additional sensors or AI models. Result: The expected result of this project is a fully functional Smart Guidance System that enhances the mobility and safety of visually impaired individuals. The system will accurately detect and classify objects in real-time using YOLO-based object detection and provide clear audio feedback through earphones for obstacle avoidance. The integrated GPS module will offer continuous location tracking, and the GSM module will send timely SMS alerts with live coordinates to a designated contact during emergencies. This comprehensive solution will provide an efficient, low-cost, and portable assistive device, improving navigation independence and overall quality of life for blind users. Object detection output: Conclusion: In conclusion, the proposed Smart Guidance System for visually impaired individuals integrates real-time object detection, GPS-based location tracking, and automated SMS alerts to provide a comprehensive navigation aid. By utilizing Raspberry Pi, YOLO object detection, and GSM communication, the system enhances mobility, safety, and independence for blind users. This low-cost, portable solution addresses the limitations of traditional mobility aids by offering dynamic obstacle awareness and emergency alert capabilities. Future enhancements, such as improved detection models and additional sensory inputs, can further optimize the system's performance, contributing to more advanced and accessible assistive technologies for visually impaired communities. References [1]. Anurag Patil, Abhishek Brorse, Ajay Phad, "A Low-Cost IoT based Navigation Assistance for Visually Impaired Person," IEEE Xplore, 2023. [2].
Himanshu Singh, Harsha Kumar, S Sabarivelan, Amit Kumar, Prakhar Rai, "IoT based Smart Assistance for Visually Impaired People," IEEE Xplore, 2023. [3]. Deepa J, Maria Adeline P, Sai Madhumita S S, Pavalaselvi N, "Obstacle detection and navigation support Using SMART Technology for Visually Impaired," IEEE Xplore, 2023. [4]. S. Mohan Kumar, Vivek Vishnudas Nemane, Ramakrishnan Raman, N. Latha, "IoT-BLE Based Indoor Navigation for Visually Impaired People," IEEE Xplore, 2023. [5]. R Arthi, Vemuri Jyothi Kiran, Anukiruthika, Mathan Krishna, Utkalika Das, "Real Time Assistive Shoe for Visually Impaired People using IoT," IEEE Xplore, 2023. [6]. Priyanka Bhosle, Prashant Pal, Vallari Khobragrade, Shashank Kumar Singh, "Smart Navigation System Assistance for Visually Impaired People," IEEE Xplore, 2023. [7]. Nikhi S Patankar, Bhushan Haribhau, Prithviraj Shivaji Dhorde, Harshal Pravin Patil, "An Intelligent IoT Based Smart Stick for Visually Impaired Person Using Image Sensing," IEEE Xplore, 2023. [8]. J. Wang, L. Wu, Y. Zhang, "A Real-time Navigation System for Visually Impaired People Based on Smart Glasses," IEEE Access, vol. 8, pp. 112888-112897, 2020. [9]. A. Gupta, R. Choudhary, S. Verma, "Assistive Navigation for Visually Impaired Using Deep Learning and IoT," in Proc. IEEE Int. Conf. Comput. Intell. Data Sci. (ICCIDS), 2020, pp. 1-6. [10]. G. C. Y. Chan, H. C. So, "Smart Cane for Visually Impaired Users," IEEE Trans. Instrum. Meas., vol. 62, no. 1, pp. 66-72, Jan. 2013. [11]. J. M. Reinhardt, "A Wearable Ultrasonic Navigation System for the Blind," IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 576-581, May 2007. [12]. B. Ando, S. Baglio, V. Marletta, A. Valastro, "A Haptic Solution to Assist Visually Impaired in Mobility Tasks," IEEE Trans. Hum. Mach. Syst., vol. 45, no. 5, pp. 641-646, Oct. 2015. [13]. H. Zhu, S. Z. Fang, Z. Cao, J. Xiao, "A GPS and GSM-Based Navigation and Monitoring System for Blind People," in Proc. IEEE Int. Conf. Cyber Technol. Autom. 
Control Intell. Syst. (CYBER), Shenyang, 2012, pp. 265-270. [14]. R. Tapu, B. Mocanu, T. Zaharia, "A Smartphone-Based Obstacle Detection and Classification System for Assisting Visually Impaired People," IEEE Trans. Consum. Electron., vol. 61, no. 3, pp. 304-311, Aug. 2015. [15]. K. Kannan, R. Kalidas, "An Assistive Device for the Visually Impaired Based on IoT," in Proc. IEEE Int. Conf. Intell. Comput. Control Syst. (ICICCS), Madurai, India, 2017, pp. 1202-1205. [16]. A. R. Al-Ali, A. Zualkernan, F. Aloul, "A Mobile GPRS-Sensors Array for Air Pollution Monitoring," IEEE Trans. Instrum. Meas., vol. 57, no. 8, pp. 1543-1550, Aug. 2008.