CAPSTONE PROJECT REPORT
DURATION: JANUARY - JUNE 2023

DRIVER DROWSINESS DETECTION SYSTEM

Submitted by
Md. Fardous Al Masum 11904043
Sadman Tajwar 11901143
Udit Pratap Singh 12011478
G Kartheshwar 12014169

KC294 INT 445
Gunseerat Kaur
School of Computer Science and Engineering

DECLARATION
We hereby declare that the project work entitled "Driver Drowsiness Detection System" is an authentic record of our own work, carried out as a requirement of the Capstone Project for the award of the B.Tech degree in Information Technology from Lovely Professional University, Phagwara, under the guidance of Gunseerat Kaur, during January to June 2023. All the information furnished in this capstone project report is based on our own intensive work and is genuine.

Project Group Number:
Name: Md. Fardous Al Masum, Registration Number: 11904043
Name: Sadman Tajwar, Registration Number: 11901143
Name: Udit Pratap Singh, Registration Number: 12011478
Name: G Kartheshwar, Registration Number: 12014169

CERTIFICATE
This is to certify that the declaration statement made by this group of students is correct to the best of my knowledge and belief. They have completed this Capstone Project under my guidance and supervision. The present work is the result of their original investigation, effort and study. No part of the work has ever been submitted for any other degree at any university. The Capstone Project is fit for submission and the partial fulfilment of the conditions for the award of the B.Tech degree in Information Technology from Lovely Professional University, Phagwara.

Signature and Name of the Mentor
Gunseerat Kaur
Assistant Professor
School of Computer Science and Engineering, Lovely Professional University, Phagwara, Punjab.
Date: 26/04/2023

ACKNOWLEDGEMENT
We humbly take this opportunity to present our vote of thanks to all those guideposts who acted as guiding pillars to light our way throughout this project and led to the successful and satisfactory completion of this study. We are grateful to our HOD and our mentor for providing us with the opportunity to undertake this project at this university and for providing us with bright and innovative ideas that made our project worthwhile. We are highly thankful for their active support, valuable time and advice, whole-hearted guidance, sincere cooperation, and painstaking involvement during the study and in completing the capstone project within the stipulated time. We are thankful to all those, particularly our friends, who have been instrumental in creating a proper, healthy and conducive environment and in contributing new and fresh innovative ideas during the project; without their help, it would have been extremely difficult for us to prepare the project within a time-bound framework.

TABLE OF CONTENTS
Inner first page
Acceptance of research paper
PAC form
Declaration
Certificate
Acknowledgement
Table of Contents

INDEX
1. INTRODUCTION
   1.1 FACE RECOGNITION
2. SCOPE OF STUDY
3. EXISTING SYSTEM
   3.1 INTRODUCTION
   3.2 EXISTING METHODS
   3.3 PROBLEMS IN EXISTING SYSTEM
       3.3.1 VEHICULAR BASED DROWSINESS DETECTION PROBLEMS
       3.3.2 PHYSIOLOGICAL BASED DROWSINESS DETECTION PROBLEMS
       3.3.3 HYBRID SYSTEM PROBLEMS
   3.4 FLOW DIAGRAM OF THE PRESENT SYSTEM
   3.5 FEATURES OF THE SYSTEM WHICH HAS BEEN DEVELOPED
4. PROBLEM ANALYSIS
   4.1 PRODUCT DEFINITION
   4.2 FEASIBILITY ANALYSIS
       4.2.1 ECONOMIC FEASIBILITY
       4.2.2 TECHNICAL FEASIBILITY
       4.2.3 BEHAVIOURAL FEASIBILITY
   4.3 DISADVANTAGES OF PRESENT SYSTEM
   4.4 CHARACTERISTICS OF PRESENT SYSTEM
5. SOFTWARE REQUIREMENT
   5.1 LIBRARIES
       5.1.1 OPENCV
       5.1.2 NUMPY
       5.1.3 DLIB
       5.1.4 IMUTILS
       5.1.5 CMAKE
       5.1.6 SCIPY
   5.2 SYSTEM REQUIREMENTS
6. DESIGN
   6.1 SYSTEM DESIGN
   6.2 DETAILED DESIGN
   6.3 FLOW CHART
   6.4 PSEUDO CODE
7. TESTING
   7.1 FUNCTIONAL TESTING
   7.2 STRUCTURAL TESTING
   7.3 LEVELS OF TESTING
   7.4 INTEGRATION TESTING
   7.5 SMOKE TESTING
   7.6 IMPLEMENTATION TESTING
8. IMPLEMENTATION
   8.1 IMPLEMENTATION OF THE PROJECT
   8.2 CONVERSION PLAN
   8.3 SOFTWARE MAINTENANCE
9. PROJECT LEGACY
   9.1 CURRENT STATUS OF PROJECT
   9.2 REMAINING AREAS OF CONCERN
   9.3 TECHNICAL AND MANAGERIAL LESSONS
10. USER MANUAL
11. SOURCE CODE
12. BIBLIOGRAPHY

1 INTRODUCTION:
Driver drowsiness is a serious real-world problem that leads to numerous road accidents; it is estimated that around 20% of all road accidents are caused by drowsy driving. Driver drowsiness is caused by a variety of factors such as lack of sleep, medications, and long hours of driving. Drowsy driving is a particular concern for commercial vehicle drivers, who often drive for long hours without sufficient rest. With this concern in mind, we have developed a driver drowsiness detection system that uses machine learning, a branch of artificial intelligence, to detect driver drowsiness. Our machine learning model learns from data and uses that experience to detect the drowsiness of the driver. Machine learning involves the use of algorithms to identify patterns in data and make predictions, and the data can come from a variety of sources such as video, images, and vehicle sensors. Our system uses a camera to capture the driver's face and extract facial landmarks such as the eye positions and mouth shape. We then use an algorithm to classify the driver's state and raise an alert when drowsiness is detected. Driver drowsiness can also be predicted in other ways, such as physiological and vehicular based methods, but we are confident that our machine learning based model will perform well and give the best possible outcome for detecting driver drowsiness.

1.1 FACE RECOGNITION:
Face recognition is a part of the wide area of pattern recognition technology. Recognition, and especially face recognition, covers a range of activities from many walks of life. Face recognition is something that humans are particularly good at, and science and technology have brought many similar tasks to us. Face recognition in general, and the recognition of moving people in natural scenes in particular, requires a set of visual tasks to be performed robustly. The process mainly includes three tasks: acquisition, normalization and recognition. By acquisition we mean the detection and tracking of face-like image patches in a dynamic scene. Normalization is the segmentation, alignment and normalization of the face images. Finally, recognition is the representation and modelling of face images as identities, and the association of novel face images with known models.
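As a concrete illustration of the acquisition step described above, the following is a minimal sketch of how a face and its 68 facial landmarks can be detected with dlib and OpenCV. It assumes the publicly available pre-trained shape_predictor_68_face_landmarks.dat model (the same 68-point predictor referred to in the user manual) is present in the working directory, and the input file name is hypothetical; it is an illustrative sketch, not the project's exact source code.

import cv2
import dlib

# Frontal face detector and the 68-point facial landmark predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("driver.jpg")               # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = detector(gray, 0)                      # acquisition: detect face-like patches
for face in faces:
    shape = predictor(gray, face)              # 68 (x, y) landmark points
    for i in range(68):
        p = shape.part(i)
        cv2.circle(frame, (p.x, p.y), 1, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", frame)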
2 SCOPE OF STUDY:
For someone who is getting drowsy, a brief state of unconsciousness, called a microsleep, may occur. In these instances the driver might even still have their eyes open, but they are not in proper control of their vehicle. Exhaustion can be as bad as driving under the influence of alcohol. Research has shown that 24 hours of sleep deprivation causes the same level of impairment as a blood alcohol concentration (BAC) of 0.10%, a figure that is over the legal limit. There are safeguards in place for those whose jobs rely on long periods of driving; for example, truck drivers are forbidden from driving past 14 hours after their shift starts. For the average driver, however, there are no such safeguards. Drowsiness is a significant cause of accidents, with studies finding that loss of concentration is responsible for 25% of road accidents. Preventing drowsy drivers from getting behind the wheel is important, and being able to detect drowsy drivers and remind them to take a break if they are feeling sleepy is one way to address this issue. Drowsiness detection works to prevent accidents caused by microsleep, fatigue and lack of attention. Driver drowsiness detection systems generally come as one tool within Advanced Driver Assistance Systems (ADAS), the various programs and technologies designed to make driving safer and lessen the chances of human error resulting in catastrophic road traffic incidents. These can range from warning drivers if there is something in their blind spot to automatic emergency braking.

• Drowsiness is a serious problem: Drowsiness is a significant cause of accidents, with loss of concentration responsible for 25% of road accidents. Research has shown that drowsiness can be as bad as driving under the influence of alcohol, making it a significant concern for road safety.
• Microsleep can be dangerous: A microsleep can occur even if the driver's eyes are open while they are not in proper control of their vehicle. This state of unconsciousness can result in accidents, making it important to address the issue of drowsy driving.
• The importance of safeguards: Some industries, such as trucking, have safeguards in place to prevent drivers from driving past a certain number of hours. However, there are no such safeguards for the average driver. This emphasizes the importance of implementing systems that can detect drowsy drivers and remind them to take a break.
• Drowsiness detection as part of ADAS: Drowsiness detection is one tool that can form part of Advanced Driver Assistance Systems (ADAS), a collection of programs and technologies designed to make driving safer and reduce the chances of human error resulting in road traffic incidents.
• Prevention is better than cure: It is better to prevent drowsy driving than to deal with its consequences. Drowsiness detection systems can detect drowsy drivers and remind them to take a break, preventing accidents before they occur.

3 EXISTING SYSTEM:
3.1 INTRODUCTION:
Driver drowsiness can be detected in many ways using technology; the most common approaches are vehicular based, physiological based, behavioural based, and hybrid drowsiness detection.

3.2 EXISTING METHODS:
One existing approach is driver drowsiness detection by a hybrid method that combines vehicular based and physiological based methods. The data for this method is collected from vehicular sensors and physiological sensors, for example ECG sensors on the physiological side and acceleration and speed monitoring sensors on the vehicular side.
The data collected from this system is normalized and analysed using various machine learning algorithms, such as convolutional neural networks and the k-nearest neighbour algorithm.

Another hybrid approach combines environmental data with a vehicular or physiological based method. Environmental data can be collected from a GPS location or from weather reports; such data is helpful for detecting drowsiness when the vehicle's position frequently drifts from one lane to another or when the driver frequently misses the route to be followed. Alongside environmental data analysis, this kind of system still requires an additional vehicular based or physiological based drowsiness detection component.

3.3 PROBLEMS IN EXISTING SYSTEM:
3.3.1 VEHICULAR BASED DROWSINESS DETECTION PROBLEMS:
False positives: Vehicular-based drowsiness detection systems can generate false positives, which can be distracting and irritating for the driver. False positives can occur due to factors such as environmental noise, vehicle vibration, or sudden changes in driving conditions.
User acceptance: Some drivers may find the drowsiness detection system intrusive or annoying, leading to resistance to using the system. Lack of user acceptance can affect the overall effectiveness of the system.
Compatibility with different vehicles: Vehicular-based drowsiness detection systems may not be compatible with all types of vehicles, especially older models. This can limit the widespread adoption of the system.
Calibration and maintenance: Vehicular-based drowsiness detection systems require regular calibration and maintenance to ensure accurate performance. Failure to calibrate or maintain the system can lead to inaccurate detections or system malfunction.
Power consumption: Vehicular-based drowsiness detection systems require power to operate, which can drain the vehicle's battery. Efficient power management is critical for the system's reliability and practicality.
Ethical concerns: The use of vehicular-based drowsiness detection systems raises ethical concerns related to privacy and data collection. Ensuring that the data is collected and used ethically is critical for the successful implementation of the system.

3.3.2 PHYSIOLOGICAL BASED DROWSINESS DETECTION PROBLEMS:
Individual differences: People can exhibit different physiological responses to drowsiness, making it challenging to develop a universal physiological-based drowsiness detection system that can accurately detect drowsiness in all individuals.
Sensor placement: The placement of sensors is critical for accurate data collection in physiological-based drowsiness detection systems. Incorrect sensor placement can result in inaccurate data and reduced system performance.
User discomfort: The use of sensors and other devices for physiological measurement can cause discomfort or inconvenience to the user, which can affect the user's willingness to use the system.
Environmental factors: Environmental factors, such as temperature, humidity, and lighting, can affect the accuracy of physiological measurements, leading to false positives or false negatives in drowsiness detection.
Real-time monitoring: Real-time monitoring of physiological signals requires continuous data collection and processing, which can be challenging for portable or wearable devices that have limited processing power and battery life.
Ethical concerns: The use of physiological data for drowsiness detection raises ethical concerns related to data privacy and informed consent.
Ensuring that the data is collected and used ethically is critical for the successful implementation of physiological-based drowsiness detection systems.

3.3.3 HYBRID BASED DROWSINESS DETECTION PROBLEMS:
Data collection: Collecting physiological data such as electroencephalography (EEG), electrooculography (EOG), and electromyography (EMG) can be difficult and requires specialized equipment. This can result in limited or incomplete datasets, which can affect the accuracy of the model.
Feature selection: Choosing the right features to represent drowsiness in the hybrid model can be challenging. Different physiological and behavioural indicators may have different levels of relevance and importance, and selecting the most informative features can be difficult.
Inter-individual variability: Individuals can have different physiological responses to drowsiness, and there can be significant inter-individual variability in physiological and behavioural indicators. This can make it challenging to develop a universal model that works for everyone.
Real-time monitoring: Real-time monitoring of drowsiness can be challenging due to the need for continuous data collection and processing. The hybrid model should be designed to handle large volumes of data in real time to provide accurate and timely predictions.
Model training: Hybrid models require large datasets for training and validation, and the process of training and tuning the model can be time-consuming and computationally intensive.
Ethical concerns: The use of physiological data for drowsiness detection raises ethical concerns related to data privacy and informed consent. Ensuring that the data is collected and used ethically is critical for the successful implementation of the hybrid model.

3.4 FLOW DIAGRAM OF THE PRESENT SYSTEM:
Figure 3.1: Physiological based model flowchart (ECG/PPG signals → preprocessing: RR-interval extraction, noise filtering, resampling, normalization and labelling → feature extraction: recurrence-plot variants (BIN-RP, CONT-RP, RELU-RP) and RQA features (REC, DET, RATIO, ENTR, LMAX, LMEAN, VMAX, VMEAN, DIV, LAM) → classification with LR, KNN, SVM, RF or CNN → drowsy or awake).
Figure 3.2: Hybrid based drowsiness detection flowchart (start → receive data from the signal processing unit → if the value is high, activate the alarm).

3.5 FEATURES OF THE SYSTEM DEVELOPED:
The features of the implemented driver drowsiness detection system are:
• Eye detection, which detects drowsiness by measuring the closing and opening of the eyes using the eye aspect ratio.
• Mouth detection, which detects drowsiness by measuring the frequency of yawning using the mouth aspect ratio.
• Head nodding detection, which detects drowsiness by analysing the head posture of the driver.

4 PROBLEM ANALYSIS:
4.1 PRODUCT DEFINITION:
Many behavioural symptoms, such as the percentage of eye closure (PERCLOS), eye blinking and facial expression, have been used to detect drowsiness. One of the best known measures is PERCLOS. Its proposers found from their study grounds to encourage the creation of the "first-ever" real-time sleepiness detection sensor, which would calculate the percentage of time the eyelids are closed.
They carried out two experiments on various drowsiness detection measures, in which PERCLOS showed the greatest ability to detect drowsiness.

A study on driving fatigue detection using the steering wheel introduced a novel approach based on neuro-fuzzy logic, described as a mix of filter methods and a neuro-fuzzy wrapper method. Adaptive parameter training was considered in the initial phase. Four distinct filtering techniques, including the Fisher T-test, correlation and mutual information, were used to determine the important indexes; a fuzzy inference system was then developed whose inputs were the features selected by the four filter methods. The Adaptive Neuro-Fuzzy Inference System (ANFIS) was trained using PSO, an evolutionary optimisation approach, and classifier performance served as feedback for adjusting the fuzzy membership function settings. A support vector machine (SVM) binary classifier then used the characteristics whose significance degree exceeded a predetermined threshold value.

The use of a person's physical condition, including body temperature, respiratory rate, heart rate and pulse rate, among others, is the foundation of physiological-parameter-based systems for drowsiness detection in drivers. For example, the electroencephalogram (EEG), electrocardiogram (ECG) and heart rate (HR) are some of the physiological markers that can be used to identify drowsiness. In the EEG approach, a helmet with various wires and other components is worn, while the ECG can be used to measure heart rate. Because these parameters reflect the driver's physical state, such biological markers are thought to be more reliable and accurate in identifying drowsiness. Drowsiness detection devices based on physiological characteristics notice these alterations and warn the driver before they fall asleep or come close to it.
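To make the PERCLOS idea concrete, the following is a minimal sketch (not part of the project's source code) of how a percentage-of-eye-closure score could be computed over a sliding window of per-frame eye aspect ratio values. The window length and the 0.25 closure threshold are illustrative assumptions, not values taken from this report.

from collections import deque

EAR_CLOSED_THRESHOLD = 0.25   # assumed threshold below which the eye is treated as closed
WINDOW_SIZE = 150             # assumed window, e.g. about 5 seconds of frames at 30 fps

ear_window = deque(maxlen=WINDOW_SIZE)

def update_perclos(ear_value):
    """Add the latest eye aspect ratio and return PERCLOS over the window (0-100%)."""
    ear_window.append(ear_value)
    closed_frames = sum(1 for ear in ear_window if ear < EAR_CLOSED_THRESHOLD)
    return 100.0 * closed_frames / len(ear_window)

# Hypothetical usage: feed one EAR value per video frame.
for ear in [0.31, 0.30, 0.12, 0.10, 0.29]:
    perclos = update_perclos(ear)
print(f"PERCLOS over current window: {perclos:.1f}%")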
4.2 FEASIBILITY ANALYSIS:
4.2.1 Economic Feasibility:
The system being developed is economical both for vehicle manufacturers and for customers buying it in the aftermarket. It is cost-effective in that the camera quality does not need to be very high. The system generates its response quickly, with no noticeable delay or lag, and the results obtained are highly accurate for the given input data.

4.2.2 Technical Feasibility:
The technical requirements for the proposed system are very simple, and it does not require any additional software or hardware.

4.2.3 Behavioural Feasibility:
The system is easy to use and understand, as it has no special software requirements and does not depend on any particular operating system.

4.3 DISADVANTAGES OF PRESENT SYSTEM:
Difficulty in generating results: The existing system takes many inputs, and if some of the inputs do not meet the criteria of the training and testing values, the predicted output will not be accurate.
Not user friendly: With the existing system there is a chance of discomfort or distraction for drivers, since some proposed systems require physical devices to be worn while driving.
Not reliable: The sensors in the present system are not very reliable, as they may easily be damaged in unexpected ways.

4.4 CHARACTERISTICS OF PRESENT SYSTEM:
More reliability: The proposed system depends mainly on a camera, which can be fixed at a place where it can capture the driver's face and then does not need to be disturbed.
User friendly: The proposed system is simpler and not complicated; the user can see the output and the functioning of the system in an easy way.
Better performance: The outcome of our system is more accurate than the existing system, as our model works on real-time data and provides results based on fixed ratios and threshold points.

5 SOFTWARE REQUIREMENT:
5.1 LIBRARIES:
5.1.1 OPENCV:
OpenCV (Open Source Computer Vision) is an open-source computer vision and machine learning software library. OpenCV was developed by Intel Corporation and was later maintained by Willow Garage. It is used in both industry and academia for a wide range of applications, including object tracking, image recognition and object detection. OpenCV is written in C++ and uses the C++ Standard Template Library (STL). It is designed to be both efficient and easy to use. OpenCV is capable of processing images and videos to identify objects, faces, or even human handwriting. It also contains various image processing functions such as colour conversion, edge detection and morphological operations. OpenCV provides various machine learning algorithms such as support vector machines (SVMs), decision trees and random forests. OpenCV is widely used in applications such as facial recognition, gesture recognition, medical image processing, computer vision, robotics and biometrics. OpenCV supports a wide range of platforms and operating systems, including Windows, Linux, macOS and Android, and is compatible with many programming languages, including C/C++, Python, Java and MATLAB.

5.1.2 NUMPY:
NumPy is a Python library for scientific computing. It is the core library for scientific computing in Python and is used for a wide variety of mathematical operations. NumPy provides a high-performance multidimensional array object and tools for working with these arrays. It also provides functions and methods for linear algebra, random number generation, Fourier transforms, and more. NumPy is designed to be efficient and easy to use; its core is written in C. It provides a wide range of mathematical functions, such as numerical integration, optimization and linear algebra. NumPy is used for a variety of scientific and engineering applications, including data analysis, machine learning and image processing, in fields ranging from finance to astrophysics.

5.1.3 DLIB:
Dlib is a modern C++ library for machine learning, computer vision, and image processing. It is used in a wide range of applications, such as facial recognition, object tracking, and automatic photo retouching. Dlib is written in C++ and is designed to be easy to use and extend, with a wide range of algorithms and features. Dlib provides a set of powerful tools for building machine learning models, including pre-trained models for image classification, object detection and face recognition, as well as a library of optimization algorithms for solving complex machine learning problems. Dlib also provides tools for training, validating and testing machine learning models. Dlib is optimized for speed and performance and is designed to be highly scalable. It is used in a variety of applications, from medical imaging to autonomous vehicles, and across a wide range of industries, such as finance, marketing and advertising.

5.1.4 IMUTILS:
Imutils is a Python library for image processing.
It provides convenience functions and classes that make it easy to work with images and videos. Imutils is built on top of OpenCV, the open-source computer vision library. It is designed to be easy to use and extend, and provides a wide range of features, such as image augmentation, face detection and object tracking. Imutils provides a number of convenience functions for resizing, rotating and flipping images, as well as functions for drawing shapes and text on images. It also provides functions for detecting faces and objects in images, and functions for image segmentation. Imutils also provides functions for image manipulation, such as colour channel manipulation and histogram equalization, and can be used for video processing tasks such as object tracking and stabilization.

5.1.5 CMAKE:
CMake is an open-source, cross-platform build system generator that uses a simple scripting language to define how software is built, tested and deployed. It is designed to be platform- and compiler-independent, which means that it can be used to generate build files for a wide variety of operating systems, compilers and build environments. CMake is not a library but rather a build system generator that can be used to build libraries and applications. It generates platform-specific build files such as Makefiles for Unix-like systems, Visual Studio projects for Windows, and Xcode projects for macOS and iOS. CMake can be used to build software written in a variety of programming languages, including C, C++ and Fortran, and supports a wide range of build options and configurations, such as static and shared libraries, debug and release builds, and many other options. In this project, CMake is required because dlib is typically compiled from source when it is installed.

5.1.6 SCIPY:
SciPy is an open-source scientific computing library for the Python programming language. It provides a collection of algorithms and functions for scientific computing, optimization, signal processing and statistical analysis. SciPy is built on top of NumPy, another Python library for numerical computing.

5.2 SYSTEM REQUIREMENTS:
Table 1: System requirements for the application to run (the same processor, disk and RAM requirements apply to every supported operating system).
Operating system: Windows 11, Windows 10, Windows 8.1, Windows 8, Windows 7, Windows XP (service pack), Ubuntu 12.04 or later, Red Hat Enterprise Linux 6.x or later
Processor: Any Intel or AMD processor
Disk space: 3-4 GB for a typical installation
RAM: 1024 MB (at least 2048 MB recommended)
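The libraries listed above can be installed with pip, and a minimal sketch such as the following can be used as a quick sanity check of the environment before running the detection script. The package names are the standard PyPI names; the exact versions used in the project are not recorded in this report, so none are pinned here.

# Suggested installation (standard PyPI package names, versions unpinned):
#   pip install cmake dlib opencv-python numpy scipy imutils

# Minimal import check for the environment described in this chapter.
import cv2
import dlib
import numpy as np
import scipy
import imutils

print("OpenCV:", cv2.__version__)
print("dlib:", dlib.__version__)
print("NumPy:", np.__version__)
print("SciPy:", scipy.__version__)
print("imutils imported successfully")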
6 DESIGN:
6.1 SYSTEM DESIGN:
Step 1: Initialize the dlib face detector to detect the face of the driver, then create the facial landmark predictor.
Step 2: Initialize the video stream and the sleep-ratio landmarks to be analysed in the face area.
Step 3: Loop over the frames from the video stream and grab the indexes of the facial landmarks for the mouth.
Step 4: Grab a frame from the video stream, resize it to a smaller image and convert it to grayscale.
Step 5: Detect faces in the grayscale frame, check whether a face is found, draw a frame around the captured face and loop over the face detections.
Step 6: Determine the facial landmarks for the face region and convert them into (x, y) coordinates.
Step 7: Extract the left and right eye coordinates, compute the eye aspect ratio for both eyes and average them. If the averaged eye aspect ratio is below the blink threshold, increase the blink frame counter (a sketch of the aspect ratio helpers follows this list).
Step 8: Extract the mouth coordinates and compute the mouth aspect ratio.
Step 9: Extract the head tilt coordinates and compute and mark the key points according to the landmarks.
Step 10: Print the image points and exit if the exit condition is met.
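The eye and mouth aspect ratios referred to in Steps 7 and 8 can be written as small helper functions. The sketch below follows the usual definition based on Euclidean distances between vertical and horizontal landmark pairs, as described in the detailed design; it is an illustrative version, and the exact landmark indexing and constants in the project's own source files may differ.

from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmark points of one eye, in dlib's 68-point ordering."""
    a = dist.euclidean(eye[1], eye[5])   # first vertical distance
    b = dist.euclidean(eye[2], eye[4])   # second vertical distance
    c = dist.euclidean(eye[0], eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

def mouth_aspect_ratio(mouth):
    """mouth: 20 (x, y) points of the mouth region (landmarks 48-67)."""
    a = dist.euclidean(mouth[2], mouth[10])   # vertical distances across the lips
    b = dist.euclidean(mouth[4], mouth[8])
    c = dist.euclidean(mouth[0], mouth[6])    # horizontal mouth width
    return (a + b) / (2.0 * c)

# Hypothetical usage with two eye landmark lists and one mouth landmark list:
# ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
# mar = mouth_aspect_ratio(mouth_points)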
6.2 DETAILED DESIGN:
Install all the required libraries. Initialize dlib's face detector and create the facial landmark predictor. Initialize the video stream and the sleep ratio, and set the video stream display ratio according to the screen size.

Loop over the frames from the video stream and mark the vector points for the captured image using face_utils. Set the threshold values for the eye and mouth and the limit allowed for eye blinks. Grab the indexes of the facial landmarks for the mouth, and resize the frame using the imutils.resize function. Grab the frame from the video stream, convert it to grayscale using the cv2.cvtColor function and detect the faces in the grayscale frame.

Check whether a face was detected, draw the total number of faces on the frame using cv2, and loop over the face detections. Determine the facial landmarks for the face region, then convert the facial landmarks into x and y coordinates using NumPy.

Extract the left and right eye coordinates, then compute the eye aspect ratio, which is calculated from the Euclidean distances between the vertical eye landmarks and the horizontal eye landmarks, and return the value. Average the eye aspect ratio over both eyes, compute the convex hull for the left and right eye, visualise each of the eyes and check whether the eye aspect ratio is below the blink threshold value. If it is below the threshold, increment the blink frame counter, and if the counter reaches a sufficient number of frames, show the warning; otherwise (the eye aspect ratio is not below the threshold) reset the counter.

Compute the convex hull for the mouth and visualise the mouth frame, drawing text if the mouth is open. For the mouth, calculate the Euclidean distances between the vertical mouth landmarks and the horizontal mouth landmarks, mark the coordinates as x and y, calculate the mouth aspect ratio and return the value. Check whether the mouth aspect ratio is above the threshold value; if it is, display a warning message.

Loop over the x and y coordinates of the facial landmarks and draw the enumerated shapes according to the key landmarks, saving them to the key point list. If a value meets the mouth threshold condition, display it in green in the frame; if it does not, display it in red.

Draw the determined image points onto the person's face, using the nose tip, chin, left eye corner, right eye corner, left mouth corner and right mouth corner as the default points, and check that the matrix is a valid rotation matrix using math. Convert the rotation matrix to Euler angles and get the head tilt coordinates from the image size, the image points and the frame height. Calculate the head tilt angle in degrees from these values, calculate the starting and ending points of the two lines drawn for illustration, and return the values (a sketch of this head pose step follows the pseudo code below). Finally, press "q" to terminate the system.

6.3 FLOW CHART:
Figure 6.1: Flowchart of drowsiness detection (start → face recognition → if the eye aspect ratio crosses its threshold, show the "eye is closed" message → if the mouth aspect ratio crosses its threshold, show the yawning message → otherwise calculate the head tilt angle value → stop when "q" is pressed).

6.4 PSEUDO CODE:
Step 1: START
Step 2: Position the face according to the camera position.
Step 3: The facial landmark positions are marked around the face.
Step 4: Loop over the face detections.
Step 5: Mark the threshold value of the eye position by taking the extreme corner values of both eyes separately.
Step 6: Mark the threshold value of the mouth position by taking the extreme corner values of the mouth.
Step 7: If the eye aspect ratio crosses the eye threshold value, mark the position.
Step 8: If the mouth aspect ratio is above the mouth threshold value, increment the count, and if the count exceeds the limit, mark the position.
Step 9: Take the head pose by marking points at key face positions and calculate the rotation matrix value using NumPy with the help of the values marked at those positions.
Step 10: Return the head tilt angle.
Step 11: If "q" is pressed, terminate the loop.
Step 12: END
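The head pose step described in the detailed design can be sketched with OpenCV's solvePnP, which estimates a rotation from six 2D image points (nose tip, chin, eye corners, mouth corners) and a generic 3D face model. The following is a minimal, illustrative version; the 3D model coordinates and camera approximation are common textbook values and are assumptions, not values taken from the project's source files.

import cv2
import numpy as np

# Generic 3D reference points for nose tip, chin, eye corners and mouth corners
# (a widely used approximate face model; assumed here, not project-specific).
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye left corner
    (225.0, 170.0, -135.0),   # right eye right corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype="double")

def head_tilt_degrees(image_points, frame_width, frame_height):
    """image_points: 6x2 array of the corresponding 2D landmarks in the frame."""
    focal_length = frame_width               # rough pinhole-camera approximation
    center = (frame_width / 2, frame_height / 2)
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))           # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    rotation_matrix, _ = cv2.Rodrigues(rvec) # rotation vector -> rotation matrix
    # Euler angles (in degrees) recovered from the rotation matrix.
    sy = np.sqrt(rotation_matrix[0, 0] ** 2 + rotation_matrix[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(rotation_matrix[2, 1], rotation_matrix[2, 2]))
    yaw = np.degrees(np.arctan2(-rotation_matrix[2, 0], sy))
    roll = np.degrees(np.arctan2(rotation_matrix[1, 0], rotation_matrix[0, 0]))
    return pitch, yaw, roll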
7 TESTING:
7.1 FUNCTIONAL TESTING:
Functional testing is a type of black box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output; the internal structure of the program is rarely considered. Test case design focuses on a set of techniques for creating cases that meet the overall testing objectives. In the test case design phase, the engineer creates a series of test cases that are intended to "demolish" the software that has been built. Any software product can be tested in one of two ways:
• Knowing the specific functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function. This approach is known as black box testing.
• Knowing the internal workings of a product, tests can be conducted to ensure that the internal operation performs according to specification and that all internal components have been adequately exercised. This approach is known as white box testing.
Black box testing is designed to uncover errors. It is used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., data files) is maintained. A black box test examines some fundamental aspects of a system with little regard for the internal logical structure of the software. White box testing of software is predicated on a close examination of procedural detail. Test cases that exercise specific sets of conditions and loops test the logical paths through the software. The "state of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status.

7.2 STRUCTURAL TESTING:
Structural system testing is designed to verify that the developed system and programs work. The objective is to ensure that the product is structurally sound and will function correctly. It attempts to determine that the technology has been used properly and that, when all the component parts are assembled, they function as a cohesive unit. The quality of a product can be achieved by ensuring that the product meets the requirements, by planning and conducting the following tests at various stages:
• Unit tests at the unit level, conducted by the development team, to verify individual standalone units.
• Integration tests after two or more product units are integrated, conducted by the development team, to test the interfaces between the integrated units.
• Functional tests prior to the release to the validation manager, designed and conducted by a team independent of the designers and coders, to verify the functionality provided against the customer requirement specifications.
• Acceptance tests prior to the release to the validation manager, conducted by the development team, using any tests supplied by the customer.
• Validation tests prior to release to the customer, conducted by the validation team, to validate the product against the customer requirement specifications and the user documentation.
Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

7.3 LEVELS OF TESTING:
In order to uncover the errors present in different phases, we have the concept of levels of testing. The basic levels of testing, and the development artefacts they correspond to, are: client needs - acceptance testing; requirements - system testing; design - integration testing; code - unit testing.

7.4 INTEGRATION TESTING:
This testing process is an incremental approach to the construction of the program structure. Modules are integrated by moving downward, beginning with the main control module, and each module's subordinate structure is incorporated into the overall structure. This form of testing is performed in five steps:
1. The main control module is used as a test driver, and stubs (modules) are substituted for all components subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time.
3. Tests are conducted as each component is integrated.
4. On completing each set of tests, another stub is replaced.
5. Regression testing is performed to ensure that new errors have not been introduced.
In a well-factored program structure, decision-making occurs at the upper levels of the hierarchy and is therefore encountered first; if a major control problem exists, early recognition is essential. This is termed top-down integration testing. Bottom-up integration testing begins construction and testing with atomic modules; as the components are integrated from the bottom up, the processing required for components subordinate to a given level is always available and the need for stubs is eliminated. Low-level components are combined into clusters that perform a specific software function.
• A driver (a control program for testing) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined, moving upward in the program structure.
Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O can occur, and new control logic is invoked. These changes can cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, successful tests result in the discovery of errors, and those errors must be corrected; when software is corrected, some aspect of the software configuration is changed.
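As a concrete example of the unit-level testing mentioned above, a small test like the following could be written for the eye_aspect_ratio helper sketched in the design chapter. The module name and the landmark coordinates here are made up for illustration (chosen only to represent a wide-open and a nearly closed eye); this is not a test taken from the project.

from drowsiness_helpers import eye_aspect_ratio  # hypothetical module name

def test_open_eye_has_higher_ear_than_closed_eye():
    # Synthetic 6-point eyes: tall (open) versus flat (nearly closed).
    open_eye = [(0, 0), (1, 3), (2, 3), (3, 0), (2, -3), (1, -3)]
    closed_eye = [(0, 0), (1, 0.4), (2, 0.4), (3, 0), (2, -0.4), (1, -0.4)]
    assert eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye)

def test_ear_is_positive():
    eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
    assert eye_aspect_ratio(eye) > 0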
7.5 SMOKE TESTING:
Smoke testing is an integration testing approach that is commonly used when "shrink-wrapped" software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the project to be assessed on a frequent basis. It consists of the following steps:
• Software components that have been translated into code are integrated into a "build". A build includes all data files, libraries, reusable modules and engineered components.
• A series of tests is designed to expose errors that would keep the build from properly performing its function.
• The build is integrated with other builds and the entire product is smoke tested daily.
Validation testing is carried out prior to release to the customer, conducted by the validation team, to validate the product against the customer requirement specifications and the user documentation.

7.6 IMPLEMENTATION TESTING:
The best approach is to test each subsystem separately, as we have done in our project. It is best to test a system during the implementation stage in small sub-steps rather than in large chunks. We tested each module separately, i.e. we completed unit testing first, and system testing was done after combining and linking all the different modules with their different options, followed by thorough testing. Once each lowest-level unit has been tested, units are combined with related units and retested in combination. This proceeds hierarchically, bottom-up, until the entire system has been tested as a whole. Hence, we used the bottom-up approach for testing our system.

8 IMPLEMENTATION:
8.1 IMPLEMENTATION OF THE PROJECT:
Input processing: Input processing is a pivotal step in the driver drowsiness detection system, as it encompasses the reception, analysis and interpretation of data originating from diverse sensors and sources. The driver drowsiness detection system undertakes the following common input processing steps:
Data acquisition: Initially, the system captures data from various sensors, such as a camera to capture facial images, infrared sensors to detect the driver's eye movements, and steering wheel sensors to establish driving behaviour.
Preprocessing: Following data acquisition, the data is preprocessed to eliminate any unwanted noise and distortions. For example, the camera's images may undergo filtering to eliminate background noise, and the infrared sensor readings may be normalized to eliminate interference.
Feature extraction: After preprocessing, features are extracted from the data to accurately represent the driver's condition. This entails analysing the data to identify crucial characteristics that indicate drowsiness, such as sluggish eye movements, drooping of the head, and yawning.
Feature selection: Extracted features may be redundant or insignificant to the system's objective. Therefore, feature selection involves identifying the key features that contribute to the system's accuracy and discarding irrelevant ones.
Classification: Subsequently, after identifying relevant features, the data is classified into distinct categories based on the driver's level of drowsiness. This may entail using machine learning algorithms to train the system to differentiate between different levels of drowsiness based on the input data.
Alert generation: Lastly, based on the classification outcomes, the system generates alerts to warn the driver if they are drowsy or fatigued. These alerts may take the form of audible warnings, visual cues, or steering wheel vibrations (a minimal sketch of this alert logic follows this list).
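In the implemented system, the practical form of the classification and alert-generation steps is a simple frame-counting rule: a warning is raised only when the eye aspect ratio stays below its threshold (or the mouth aspect ratio stays above its threshold) for several consecutive frames. The sketch below illustrates that rule; the threshold and frame-count constants are illustrative assumptions rather than the exact values used in the project's code.

# Illustrative constants (the project's actual thresholds may differ).
EAR_THRESHOLD = 0.25          # eye aspect ratio below this counts as "eyes closing"
MAR_THRESHOLD = 0.7           # mouth aspect ratio above this counts as "yawning"
CONSEC_FRAMES_FOR_ALERT = 20  # how many consecutive frames trigger a warning

eye_counter = 0
yawn_counter = 0

def update_alerts(ear, mar):
    """Return a list of warning messages for the current frame."""
    global eye_counter, yawn_counter
    alerts = []

    if ear < EAR_THRESHOLD:
        eye_counter += 1
        if eye_counter >= CONSEC_FRAMES_FOR_ALERT:
            alerts.append("DROWSINESS ALERT: eyes closed")
    else:
        eye_counter = 0          # eyes open again, reset the counter

    if mar > MAR_THRESHOLD:
        yawn_counter += 1
        if yawn_counter >= CONSEC_FRAMES_FOR_ALERT:
            alerts.append("YAWNING ALERT: take a break")
    else:
        yawn_counter = 0

    return alerts

# Hypothetical usage inside the per-frame loop:
# for ear, mar in per_frame_measurements:
#     for message in update_alerts(ear, mar):
#         print(message)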
8.2 CONVERSION PLAN:
In this part of the research, we developed software that receives colour frames from the camera placed in front of the driver and calculates the coordinates of facial details. By comparing and fusing the information obtained from the assessment criteria for alertness or sleepiness with the information produced by image processing, software for detecting levels of drowsiness was developed. This method is carried out in several steps. First, the image of the driver's face taken by the camera is transferred to a processor to be processed. Next, the facial features and the location of the eyes are determined by the Viola-Jones algorithm; for convenience, the classifiers are utilized in a cascade sequence. In this method, the driver's face is detected with regard to the oval shape of the head, its hue and the eye sockets. The zone of the eyes is then recognized by considering the changes in the derivatives of the visual information (between the pupil and the white part of the eye). Finally, to recognize whether the eyes are closed or open, the images are converted into grayscale format. Then, by calculating the mean of the V component, the illumination of the images is normalized. Afterwards, the image of the eyes is converted into binary form by determining a threshold via Otsu's method; this conversion reduces the volume of data. After that, the image is divided into an upper and a lower part. When the eyes are open, due to the colour of the pupils and eyelashes, the ratio of dark pixels in the upper part of the eye is greater than in the closed-eye state, because when the eyes are closed the light-coloured eyelids cover the pupils and the eyelashes sit in the lower part. To improve the images, morphological operations (dilation and erosion) are applied to remove black spots. Then the ratio of black pixels to the total number of pixels is calculated in both the upper and lower parts. Finally, the ratio of these two amounts is used to recognize whether the eyes are open or closed.
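The grayscale conversion, Otsu thresholding and dark-pixel-ratio comparison described above can be sketched with OpenCV as follows. This is an illustrative reconstruction of the described approach, not the project's own code; the eye-region image is assumed to have already been cropped from the frame.

import cv2

def eye_open_score(eye_region_bgr):
    """Return the ratio of dark pixels in the upper half to those in the lower half
    of a cropped eye image, following the Otsu-threshold approach described above."""
    gray = cv2.cvtColor(eye_region_bgr, cv2.COLOR_BGR2GRAY)

    # Otsu's method picks the binarization threshold automatically;
    # dark pixels (pupil, eyelashes) become 0 and bright pixels become 255.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    half = binary.shape[0] // 2
    upper_dark = int((binary[:half] == 0).sum())
    lower_dark = int((binary[half:] == 0).sum())

    # When the eye is open, the pupil and lashes concentrate dark pixels in the
    # upper half, so a larger ratio suggests an open eye.
    return upper_dark / max(lower_dark, 1)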
8.3 SOFTWARE MAINTENANCE:
• Regular updates: Perform regular updates to the software to ensure that it stays current and compatible with other software and hardware.
• Bug fixes: Identify and fix any bugs that arise in the software to ensure that it functions correctly.
• Security updates: Stay up to date with security updates to ensure that the system is secure from any potential vulnerabilities.
• User feedback: Listen to user feedback to identify areas of improvement and address them through software updates.
• Backup and recovery: Regularly back up the software and data to ensure that they can be recovered in case of system failure or data loss.

9 PROJECT LEGACY:
9.1 CURRENT STATUS OF PROJECT:
The project is complete and ready for testing driver drowsiness using the driver's facial landmarks in simulation.

9.2 REMAINING AREAS OF CONCERN:
1. Drowsiness detection while actually driving a vehicle could be achieved by adding a Raspberry Pi and a camera for real-time use.
2. Integrating the system with the vehicle's ECM chip, together with a camera, would help reduce cost, and drowsiness detection could then be built in during the manufacture of the vehicle itself.

9.3 TECHNICAL AND MANAGERIAL LESSONS:
TECHNICAL LESSONS:
• Understanding the module with all its requirements.
• Gathering all the information about how it is to be developed, and coding accordingly.
• Understanding exceptions and errors when they occur, and working our way out of them.
• Designing a professional UI.
MANAGERIAL LESSONS:
• Working in a team.
• Understanding what we want to develop and, in case of doubt, taking the help of other team members.
• Adapting to a professional environment.

10 USER MANUAL:
Figure 10.1: Execution of the code on the command line.
Step 1: First, we run the source code of the project in the command prompt with the command "python Drowsiness_Detection.py".
Step 2: Execution starts by loading the facial landmark predictor, a file containing all 68 landmarks used to identify the various landmarks of the face, e.g. the eyes, mouth and head shape.
Step 3: We use a video stream with source (src) = 0 to initialize the webcam and capture frames from it.
Figure 10.2: Prediction with the eye aspect ratio.
Step 4: Here we use the eye aspect ratio (EAR) to predict whether the driver is sleeping; when the eye aspect ratio crosses the threshold value, the message "Eye closed" is shown.
Figure 10.3: Calculation of the mouth aspect ratio.
Step 5: In this step, the mouth aspect ratio (MAR) is calculated with the help of the landmark predictor; if the mouth aspect ratio is greater than the threshold value that has been set, the message "yawning" is displayed along with the mouth aspect ratio value.
Figure 10.4: Landmark predictor used to get the head pose.
Step 6: In this step we use the landmark predictor to get the head pose and convert the rotation matrix to Euler angles, so that we can calculate the tilt angle of the head and check whether the driver is looking forward and is active.
Figure 10.5: Output.
This is a simple screen output when the code is executed and the driver is active. The code prints output messages such as "1 face found", the head tilt degree and the MAR (mouth aspect ratio).

11 SOURCE CODE:
Figure 11.1: Drowsiness1.py
Figure 11.2: Drowsiness2.py
Figure 11.3: Drowsiness3.py
Figure 11.4: Drowsiness4.py
Figure 11.5: Drowsiness5.py
Figure 11.6: Eyear.py
Figure 11.7: Mouthar.py
Figure 11.8: Headpose.py
Figure 11.9: Headpose1.py
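Since the source listings above are included as screenshots, the following condensed sketch summarizes the structure of the detection loop they implement (webcam stream, dlib landmarks, aspect ratios, on-screen messages). It is a simplified, illustrative reconstruction that reuses the helper functions sketched in earlier chapters, not a copy of the files shown in the figures; the module name, thresholds and message texts are assumptions and may differ from the project's code.

import cv2
import dlib
from imutils import face_utils
from imutils.video import VideoStream

# Hypothetical helpers from the earlier sketches (EAR/MAR and alert counting).
from drowsiness_helpers import eye_aspect_ratio, mouth_aspect_ratio, update_alerts

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
(l_start, l_end) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(r_start, r_end) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(m_start, m_end) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

vs = VideoStream(src=0).start()              # webcam, as in the user manual
while True:
    frame = vs.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        shape = face_utils.shape_to_np(predictor(gray, face))
        ear = (eye_aspect_ratio(shape[l_start:l_end]) +
               eye_aspect_ratio(shape[r_start:r_end])) / 2.0
        mar = mouth_aspect_ratio(shape[m_start:m_end])
        for i, message in enumerate(update_alerts(ear, mar)):
            cv2.putText(frame, message, (10, 30 + 30 * i),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.imshow("Driver Drowsiness Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to terminate, as described
        break
vs.stop()
cv2.destroyAllWindows()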
12 BIBLIOGRAPHY:
[1] "Drowsy Driving: Avoid Falling Asleep Behind the Wheel | NHTSA." https://www.nhtsa.gov/risky-driving/drowsy-driving (accessed Apr. 05, 2023).
[2] "Road Accidents in India," 2021. Accessed: Apr. 05, 2023. [Online]. Available: www.morth.nic.in
[3] "Road Accidents in India." https://www.drishtiias.com/daily-updates/dailynews-analysis/road-accidents-in-india-4 (accessed Apr. 05, 2023).
[4] H. He et al., "A real-time driver fatigue detection method based on two-stage convolutional neural network," Elsevier. Accessed: Apr. 05, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2405896320330263
[5] R. Wang, Y. Wang, C. L., et al., "EEG-based real-time drowsiness detection using Hilbert-Huang transform," 2015 7th Int. Conference, 2015. Accessed: Apr. 05, 2023. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7334684/
[6] J. S. Bajaj, N. Kumar, and R. K. Kaushal, "AI Based Novel Approach to Detect Driver Drowsiness," ECS Trans., vol. 107, no. 1, pp. 4651-4658, Apr. 2022, doi: 10.1149/10701.4651ecst.
[7] D. Dinges and R. Grace, "PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance," 1998, doi: 10.1037/E449092008001.
[8] O. Sinha, S. Singh, A. Mitra, et al., "Development of a drowsy driver detection system based on EEG and IR-based eye blink detection analysis," Springer, 2018. Accessed: Apr. 05, 2023. [Online]. Available: https://link.springer.com/chapter/10.1007/978-981-10-7901-6_34
[9] J. S. Bajaj, N. Kumar, R. K. Kaushal, H. L. Gururaj, F. Flammini, and R. Natarajan, "System and Method for Driver Drowsiness Detection Using Behavioral and Sensor-Based Physiological Measures," Sensors, vol. 23, no. 3, p. 1292, Jan. 2023, doi: 10.3390/s23031292.
[10] S. Arefnezhad, S. Samiee, A. Eichberger, and A. N., "Driver drowsiness detection based on steering wheel data applying adaptive neuro-fuzzy feature selection," Sensors, 2019, doi: 10.3390/s19040943.
[11] G. Du, H. Wang, K. Su, X. Wang, S. Teng, and P. X. Liu, "Non-Interference Driving Fatigue Detection System Based on Intelligent Steering Wheel," IEEE Trans. Instrum. Meas., vol. 71, 2022, doi: 10.1109/TIM.2022.3214265.
[12] A. Eskandarian and A. M., "Evaluation of a smart algorithm for commercial vehicle driver drowsiness detection," 2007 IEEE Intelligent Vehicles Symposium, 2007. Accessed: Apr. 05, 2023. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/4290173/
[13] A. Picot, S. Charbonnier, and A. C., "On-line detection of drowsiness using brain and visual information," IEEE Transactions, 2011. Accessed: Apr. 05, 2023. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/6029307/
[14] M. S. Doudou, A. Bouabdallah, and V. Cherfaoui, "A light on physiological sensors for efficient driver drowsiness detection system," vol. 224, no. 8, 2018. Accessed: Apr. 05, 2023. [Online]. Available: https://hal.science/hal-02162758/
[15] N. Gupta, D. Najeeb, V. Gabrielian, and A. Nahapetian, "Mobile ECG-based drowsiness detection," 2017 14th IEEE Annual Consumer Communications and Networking Conference (CCNC), pp. 29-32, 2017, doi: 10.1109/CCNC.2017.7983076.