Abstract
Taking care of a child has become increasingly challenging, especially for working mothers, and it has become difficult for parents to monitor their baby's condition continuously. Thus, a smart baby monitoring system based on IoT and machine learning is implemented to overcome these monitoring issues and notify parents in real time. In the proposed system, the necessary monitoring features, namely room temperature and humidity, cry detection, and face detection, are provided by exploiting different sensors. The sensor data is transferred to the Blynk server via controllers with an Internet connection. The system is also capable of detecting the facial emotions of registered babies by using a machine learning model. Parents can monitor the live activities and emotions of their child through an external web camera and can swing the baby cradle remotely upon cry detection using the mobile application. They can also check the real-time room temperature and humidity level. When an abnormal condition is detected, a notification is sent to the parents' mobile application so that they can take action, making the baby monitoring system a relief for working parents who must manage their time efficiently while caring for their babies.
1. Introduction
Nowadays, childcare challenges have become a hurdle to work, especially for working mothers. Child care is necessary, but parents cannot always take care of their babies or spend time monitoring them because of their other responsibilities. Raising a child with care is a difficult task these days when both parents are working, and it has become a challenge for many families to keep an eye on their children alongside everyday routines. Many parents are not satisfied with the daycare facilities provided in many countries. Babies need care and monitoring 24 hours a day, which is difficult for working parents because they cannot always carry their babies with them. Hiring a caregiver to watch the infant is an option when parents are busy, and the nursery is an alternative solution; however, hiring a babysitter is not always affordable. Thus, an IoT-based smart baby monitoring system is the best solution to minimize the gap between babies and parents. The revolution in technology shows that the Internet of Things (IoT) is a mixture of multiple technologies. The IoT connects billions of smart objects and sensors to the Internet to collect data from the physical environment [1]. These smart objects and sensors aim to reduce human intervention and expand automation in daily life. Moreover, they enable various smart domains, such as smart homes, smart cities, intelligent transport systems, smart healthcare, and smart agriculture [2, 3]. Currently, machine learning also plays an important role in the field of IoT, especially in smart homes, bioinformatics, computer vision, and agriculture [4–7]. Considering the progress of IoT and machine learning (ML) in this context, an IoT-based smart real-time baby monitoring system using machine learning is proposed to overcome the issues faced by working parents.
Many research works have been carried out, but they mainly focus on adding sensors to existing designs [8–10] and [11–13]. However, there is considerable room for enhancement in those studies. We therefore propose a smart baby monitoring system, a notification system that detects the activities of babies in real time and sends messages about the babies' status only to the registered parents. In particular, we focus on the vital parameters that are important for maintaining the comfort of the baby. A camera is also attached for live streaming and live updates of the baby using real-time facial expressions. Using the mobile application, the registered parents can control the hardware remotely, view the live stream of the baby, detect the crying sound, and monitor the humidity and temperature of the surroundings. The control system of the proposed design is equipped with a NodeMCU, a Raspberry Pi with a camera, a DC motor, a mic, and a DHT11 sensor for reading the vital parameters needed to monitor the condition of the infant. The connected sensors send the values to the controllers and immediately notify the parents about abnormal conditions. The proposed system also identifies unknown faces and detects the real-time emotion of a baby using a support vector machine (SVM) classifier. The rest of this paper is organized as follows. Section 2 reviews the related work. Section 3 discusses our proposed methodology for implementing the system. Section 4 presents the experimental setup. Section 5 analyzes the results. Finally, we conclude our work in Section 6.
2. Related Work
IoT is considered a new paradigm, and a great deal of research on IoT-based baby monitoring systems is being carried out. Several studies have used IoT sensors to collect the movements and activities of babies and assist parents in monitoring their children remotely. Goyal and Kumar [14] designed a low-cost e-baby cradle that swings only when a baby's cry is detected; the user can change the speed of the swinging cradle. It also has a buzzer alarm that alerts the user when the baby's mattress is wet and when the baby's cry is detected for a certain time. A GSM network-based smart system was suggested in [15]. The proposed system measures health conditions such as the baby's body temperature, movement, pulse rate, and moisture condition, and the parents receive SMS alerts on their mobile numbers through the GSM interface. Palaskar [16] proposed an automatic low-budget baby monitoring system based on a microcontroller with multiple features. The proposed system swings the baby cradle automatically only when a cry sound is detected. A camera is also mounted above the cradle to observe the baby's condition, and an alarm notifies the parents when the mattress is wet or the baby's cry is detected. Notifications are sent to the parents via SMS; however, parents cannot control the system through any application. Another similar approach is used in [17]: the authors use a Raspberry Pi along with a Pi camera, a condenser mic, and a PIR motion sensor to detect the crying and motion of the infant. The Pi camera is turned on only when sound or motion is detected. The information is sent to the Pi, which processes the data, sends it to the LCD, and activates the buzzer; however, there is no application or other way to monitor the child's condition remotely. In [18], the authors designed a smart cradle that supports video monitoring. It also swings the cradle automatically on detecting cry sounds and rotates a toy mounted on the cradle to comfort the baby. The primary feature of this system is a mobile application that alerts parents when values exceed the threshold range. A new approach to automatic baby monitoring was presented in [19], in which the authors used a wireless video monitoring technique to detect sound, motion, and other factors; the information is displayed on a device that acts as an online server. Another similar GSM-based study was carried out by Levy et al. [20], which focuses on monitoring the cry and the respiratory and nonrespiratory movements of the baby using ultrasonic and accelerometer sensors. An FN-M16P module is used to record the mother's voice and play it whenever the baby's cry is detected; however, there is no live monitoring or face detection feature to report the baby's condition. In another research study [21], the author uses Arduino and PIR sensors to notify the parents about the child's condition through the GSM interface to Android-based handsets. In addition, the system provides a predefined nutrition food chart to help keep the baby healthy. The system proposed in [22] has all the features used in the previous systems; its key feature is support for the Arabic language, and it uses the Firebase cloud to send notifications to the parents' Android application. Several similar studies on infant monitoring systems have been conducted in [8, 23] and [24].
Most of these studies are based on the use of sensors to collect different information about the child using the aforementioned techniques. In [25], an additional methane sensor is used to detect methane content, and the system sends a live image of the baby along with various parameter readings to the Android application. The system developed in [26] incorporates IoT sensors for measuring vital parameters such as moisture, temperature, and pulse rate; a microphone and camera are also used for posture monitoring of the child, and the parents are updated about the infant's status through notifications in the Blynk application. In [27], Jabbar et al. designed a low-cost IoT-based system for real-time baby monitoring using a NodeMCU and the Adafruit MQTT server. The system detects the temperature, moisture, and crying of the baby and notifies the user through the IFTTT mobile application; the parents can monitor the baby's real-time status through the mobile application within the same network. However, such a solution is inappropriate if the parents are on another network. Some similar solutions with a few improvements were proposed in [9, 28]; these systems could be improved by implementing face and emotion detection features. Cheggou et al. [29] designed an intelligent system that assists parents in monitoring the vital parameters of the baby using a web application on either a local or a remote network. The authors use a convolutional neural network (CNN) to identify the baby's position, a feature that had not been implemented in previous systems. In addition to the above research, many studies have focused on the use of face and emotion detection in the context of baby monitoring systems. A software architecture for an intelligent baby cradle was proposed in [10]; the purpose of this study was to enhance the quality of the cry management module in existing IoT cradles. The authors proposed four submodules in the cry classification process, namely face image analysis, voice analysis, body gesture analysis, and decision fusion, to accurately detect the reason for a baby's cry. Dubey and Damke [30] designed a baby monitoring system to detect activities such as crying, motion, and the current position of the baby. The main feature of this system is the use of image processing to detect the baby's movement: if the baby is near the edge of the bed, the Pi camera activates, takes a snapshot, and sends a notification to the parents by mail. The body movement detection feature is unique to this system; however, there is no emotion detection module to determine why the baby is crying. In [31], a procedure based on a maximum likelihood approach using HMMs was proposed to recognize infants' emotions through their cries. One such study was carried out by Aiswarya et al. [32]: the authors proposed a monitoring system based on a Raspberry Pi 3 with features such as camera monitoring for emotion recognition, detection of the baby's crying voice, automatic swinging of the cradle, monitoring the presence of the baby in the cradle, and sensing the wetness of the baby's bed. It also incorporates a notification module that informs parents about the baby's condition using a buzzer and an LCD; this system could be improved by adding mobile application notifications. In [33], an automatic pain detection system based on image analysis was proposed, in which RBC is used to recognize the facial expressions of infants.
This system could be accessed through wearable or mobile devices. Another study, carried out by Salehin et al. [34], focuses on monitoring the baby's crying, temperature, and moisture level. They also use mobile calls and text message services to notify parents of the baby's condition, and a web page was designed for real-time monitoring purposes. The proposed system can not only stream real-time video of the baby but also uniquely identify multiple babies; its key feature is the face recognition technique. In another research study, the authors proposed an AI-based solution to monitor the baby's lying posture, crying, and emotion, such as happiness [35]. Al-ishamol et al. [36] designed a system for recognizing the emotion in a baby's cry using a DNN. The proposed baby care system records the infant's cry, detects the humidity of diapers, and monitors the body and outdoor temperature using appropriate sensors; it employs a GSM module to send the cry emotion values to the parents' mobile. Some authors [12, 13] and [37] incorporate different cloud platforms to design smart monitoring systems. In addition, many researchers focus on the use of machine learning and deep learning in the context of smart monitoring systems [12, 38]. From the literature, we conclude that many research studies have focused on designing smart baby monitoring systems; however, the approaches still need improvement in terms of face detection and emotion detection. Several types of baby cradles are available on the market, but they are very expensive, and not everyone can afford them. Moreover, the existing automatic cradles have many limitations in terms of cost, functionality, alert systems, and communication technology. We therefore propose a system that makes use of all the necessary sensors and emotion recognition techniques to build a smart baby monitoring system. A comparison between our proposed system and the earlier systems is given in Table 1; we compare the systems on six key features: room temperature and humidity, live video streaming, cry detection, cradle swinging, connection to a mobile application, and emotion detection.
3. Proposed Work
In the previous section, an extensive literature review was conducted to identify the limitations of the existing systems. Thus, a smart baby cradle system is proposed in this paper, merging the concepts of machine learning and IoT with a baby monitoring system to overcome those limitations. Our methodology is divided into four parts: the waterfall methodology, system component selection, system design, and implementation.
3.1. Waterfall Methodology
The task's structure and the variety of participants made the waterfall technique the obvious choice for this project. The waterfall technique follows a series of processes in a fixed sequence, starting with planning and analysis, moving on through implementation and testing, and ultimately assessing the finished product [7]. Before any development was started, qualitative research interviews with parents were conducted to better understand the needs and expectations of the project. This project was ideally suited to a combination of qualitative and quantitative methodologies since a thorough specification had to be prepared before any work could start. Overall, the waterfall technique provided an effective structure for this project because it was essential to have a thorough grasp of the issue before development could begin. All participants first agreed on a specification, which eventually evolved into a design concept. The system's implementation then got underway and did not stop until all of the agreed-upon features were finished to a high quality. Given the short time window, the waterfall technique was effective at this stage of development: it provided a systematic list of features that had to be finished before the application could be declared functional, and it did not permit the addition of new features or ongoing feature changes, which would have significantly lengthened the development period.
3.2. System Component Selection
In this section, the hardware and software components that we have used to design and implement the proposed system are discussed.
3.2.1. Hardware Components
In this work, a Raspberry Pi and a NodeMCU are used to process the data from the sensors. The Raspberry Pi OS serves as the main connecting link between the camera and the mic. The temperature sensor and DC motor are connected to the NodeMCU. The hardware components include the following:
(i) Raspberry Pi 4
(ii) NodeMCU ESP8266 (Wi-Fi)
(iii) Camera
(iv) Condenser mic
(v) DHT11 temperature sensor
(vi) 12 V DC motor
(vii) Baby cradle
3.2.2. Software Components
The software components are used to connect the sensors with the controllers. The sensor readings taken from the babies are sent to the controller for further action.
(i) Raspberry Pi OS
(ii) Arduino IDE
3.2.3. Communication Platforms
The communication platforms include the following:
(i) Blynk Server: Blynk Server [39] is an IoT server responsible for forwarding messages between the Blynk mobile application and numerous microcontroller boards and SBCs. The Blynk server connects the NodeMCU, the Raspberry Pi, and the application.
(ii) Blynk Application
3.3. System Design and Architecture
This section provides the details of our system design and architecture. Although countless manual cradle designs that require no power supply are available on the market, with such cradles the guardians have to rock their children themselves whenever needed. We therefore propose a system that is automatic and equipped with the necessary sensors for monitoring the baby remotely in real time.
3.3.1. Cradle Design
The baby cradle system is designed so that the baby feels comfortable and gets a good sleep. The cradle has a rectangular shape with boundaries that protect the baby from falling out. In addition, a DC motor is attached, which swings the cradle whenever a cry sound is detected and can also be started by the user. The cradle design is shown in Figure 1(a). The cradle was assembled using glue, screws, and nuts to hold each part firmly and to provide smooth swinging motion. The mounting of the DC motor on the cradle with nuts and screws is shown in Figure 1(b). The cradle designed in this work is sized for a normal baby, but the same sensors could be used to implement the monitoring system on a larger cradle.

Figure 1: (a) Cradle design; (b) DC motor mounted on the cradle; (c) complete hardware setup; (d) controllers attached to the cradle.
3.3.2. Architectural Details
The control system of the smart cradle is equipped with a NodeMCU, a Raspberry Pi with a camera, a DC motor, a condenser mic, and a DHT11 sensor for reading the vital parameters needed to control and monitor the current status of the baby. The complete hardware setup is shown in Figure 1(c). Because the memory of the Raspberry Pi is limited, the proposed system incorporates two controllers and divides the computational tasks between them to reduce processing time. The DC motor and DHT11 are attached to the NodeMCU, while the camera and microphone are attached to the Raspberry Pi, and the two controllers are linked together through the Blynk server. Figure 1(d) shows how the controllers are attached to the cradle. The camera with the Raspberry Pi is placed near the cradle for real-time emotion recognition and live streaming. The sensor values are uploaded to the Blynk server, which the concerned person accesses through the Blynk application. Using the application, parents can control the hardware remotely, view the live stream of the baby, detect the crying sound, and monitor the humidity and temperature of the surroundings. The foremost goal is to design a smart cradle with several monitoring sensors to detect the baby's condition live and notify the parents through the mobile application when attention is needed. Our smart system is also capable of uniquely identifying different babies' faces using facial recognition techniques, which makes it easy for parents to use the same monitoring system for multiple children, and it detects the emotions of the babies in real time. The overall architecture of the smart baby system is depicted in Figure 2. The NodeMCU, a microcontroller with an embedded Wi-Fi module, receives data from the temperature sensor and DC motor and uploads it to the Blynk server. Similarly, the Raspberry Pi, a low-cost, credit-card-sized single-board computer, receives the values from the camera and mic and forwards them to the parents' application via the Blynk server. The Blynk application can be accessed by users over Wi-Fi or mobile data. Furthermore, it allows registered users to monitor the baby's condition remotely; for this purpose, the data is retrieved from the sensors connected to the controllers.
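For illustration, the sketch below shows how the Pi-side script might register with the Blynk server and react to the application's swing button. It uses the legacy blynklib Python package; the virtual pin number (V0) and the motor helper functions are assumptions made for this sketch, not the actual pin map of our deployment.

```python
# Sketch: Pi-side attachment to the legacy Blynk server using blynklib.
# The V0 pin assignment and the motor helpers are illustrative assumptions.
import blynklib

BLYNK_AUTH = 'your-auth-token'       # token emailed by the Blynk app
blynk = blynklib.Blynk(BLYNK_AUTH)

def start_cradle_motor():
    pass                             # placeholder: energize the 12 V relay

def stop_cradle_motor():
    pass                             # placeholder: de-energize the relay

@blynk.handle_event('write V0')      # swing button widget in the app
def on_swing_button(pin, values):
    if values and values[0] == '1':
        start_cradle_motor()
    else:
        stop_cradle_motor()

while True:
    blynk.run()                      # keep the connection alive, dispatch events
```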

3.3.3. System Monitoring Features
Our work is aimed at developing a smart baby monitoring system that provides the following key features:
(i) Cry detection: the mic attached to the cradle continuously listens for the baby's crying sound and signals the Raspberry Pi whenever crying is detected. A notification is sent to the parents' Blynk application to alert them that the baby is crying.
(ii) Swing cradle: the NodeMCU switches the relay coupled to the DC motor that swings the cradle. Parents can start or stop the swinging from the application.
(iii) Live streaming: the user can remotely monitor the baby's real-time condition through the mobile application. Since the Raspberry Pi has no built-in camera, a web camera is plugged into it for live monitoring.
(iv) Room temperature and humidity: the DHT11 sensor measures the surrounding air; the NodeMCU records the sensor values and uploads them to the Blynk server at the same time.
(v) Face and emotion detection: a machine learning model is employed for infant face and emotion recognition. A notification is sent to the parents' Blynk application to report the current emotion.
(vi) Mobile application: the parents can observe the normal data collected from the sensors attached to the cradle, such as ambient temperature and humidity and live video of the baby, whereas abnormal conditions are conveyed through Blynk notifications so that the parents can take appropriate action.
4. System Implementation
The implementation details of our system are divided into three modules: the connection of the controller with the sensors, emotion recognition, and the mobile application module. The overall methodology adopted in this research is shown in Figure 3 in the form of a flowchart.

4.1. Connection of Controllers with the Sensors
The program for the sensors was developed in the Arduino IDE, which was also used to flash the program to the NodeMCU over a USB cable. The controllers must connect to the Wi-Fi network whenever they are powered on. A 5 V supply powers the Raspberry Pi and the NodeMCU. We use Python to implement and configure the Raspberry Pi after installing Raspberry Pi OS on an SD card. After the Pi powers up, the OS launches the Python script, the general-purpose input/output (GPIO) ports are activated, and the OS checks the GPIO states. The Pi then collects information about the baby's cry level from the mic and about the baby's emotion from the camera feed. Similarly, the NodeMCU receives information from the DC motor and the temperature sensor. All the data collected by the controllers is uploaded to the server, which forwards the real-time information about the child to the parents' application. The user can control the baby monitoring system through the application. Parents who are far from their children need a quick notification whenever the baby is crying.
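As a sketch of this boot sequence, assuming the standard RPi.GPIO library and an arbitrary status pin (both illustrative), the startup script might look as follows:

```python
# Sketch of the boot-time Python script: initialize the GPIO ports, then
# hand off to the monitoring loop. The pin number is an assumption.
import RPi.GPIO as GPIO

STATUS_PIN = 17                        # hypothetical GPIO pin

def run_monitoring_loop():
    pass                               # placeholder: mic + camera loop (Section 4.2)

def main():
    GPIO.setmode(GPIO.BCM)             # Broadcom pin numbering
    GPIO.setup(STATUS_PIN, GPIO.OUT)
    GPIO.output(STATUS_PIN, GPIO.HIGH) # signal that the script is running
    try:
        run_monitoring_loop()
    finally:
        GPIO.cleanup()                 # release the pins on exit

if __name__ == '__main__':
    main()
```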
Our proposed system takes this requirement into consideration. A mic detects the baby's crying and provides the signal to the Raspberry Pi, which continuously runs a Micmon script to check the baby's crying level. If a crying sound is detected, the Pi sends a notification to the user's Blynk application and automatically activates the DC motor to swing the cradle. The parents can also control the motor from the application: if the user turns the motor on, the DC motor starts swinging the cradle, with a 12 V relay controlling the current. Micmon [40] is an ML-powered library for detecting sounds in an audio stream, either from a file or from an audio input. For cry detection, we generated our own dataset by visiting children's hospitals and recording the crying sounds of different babies. After recording, we labeled each audio clip as a positive or negative sound; the system then uses these labels to learn to recognize the baby's sounds. For continuously measuring real-time changes in the surrounding humidity and temperature, the DHT11 sensor is attached to the NodeMCU, which sends the values to the Blynk server. If the temperature is higher than 23°C or lower than 16°C, the GPIO pins are activated and a notification about the irregular temperature level is sent to the parents. For live streaming, a camera connected to the Raspberry Pi is used. We created a server and assigned its IP address to the Blynk application, so the parents are not restricted to being on the same network as the cradle. Using the application, parents can see a live video of their baby at any time and from any place.
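A minimal sketch of this cry-detection loop is shown below. It follows the usage pattern in Micmon's documentation (loading a trained model and iterating over an audio device); the model path, the ALSA device name, and the Blynk pin used to trigger the swing are assumptions made for illustration.

```python
# Sketch of the Micmon cry-detection loop, following the library's
# documented usage; paths, device names, and pins are assumptions.
import blynklib
from micmon.audio import AudioDevice
from micmon.model import Model

blynk = blynklib.Blynk('your-auth-token')
model = Model.load('models/baby-cry')            # trained on our labeled recordings

with AudioDevice('alsa', device='plughw:1,0') as source:
    for sample in source:                        # fixed-length audio segments
        blynk.run()                              # keep the Blynk connection alive
        if model.predict(sample) == 'positive':  # 'positive' = cry label
            blynk.notify('Baby is crying!')      # push alert to the parents' app
            blynk.virtual_write(0, 1)            # assumed pin: start cradle swing
```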
4.2. Face Detection and Emotion Recognition
The most important part of our system is the use of a machine learning technique to automatically detect the baby's emotion from the live stream. The web camera is positioned so that it can capture the baby's face correctly. The user logs in and uploads a picture of the baby at the time of device registration; these pictures are stored in the system's user profile folder. The Viola-Jones algorithm is used to detect the baby's face in the live webcam feed. If the face captured during the live stream does not match the picture stored in the system, the system sends an "unknown face" notification to the user's application. In this research work, a novel face emotion recognition system that operates on video frames is proposed. A person's facial expressions convey information about their emotions, and these expressions are produced by changes in specific facial features such as the mouth, eyes, and eyebrows. Recognizing the expressions therefore allows the basic emotions, such as anger, disgust, fear, happiness, sadness, and surprise, to be detected. The most common way to identify emotions from facial expressions is to process the images and distinguish the changes in the facial features.
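The detection step can be sketched with OpenCV's bundled Haar cascade implementation of Viola-Jones, as shown below; the comparison against the registered photo and the notification call are stubbed with hypothetical helpers, since the paper does not specify the matching method's parameters.

```python
# Sketch of Viola-Jones face detection on a webcam frame using OpenCV's
# bundled Haar cascade; the matching and notification helpers are stubs.
import cv2

def matches_registered_baby(face_img):
    return True                        # placeholder: compare with stored photo

def send_notification(message):
    print('notify:', message)          # placeholder for a Blynk push

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)              # external web camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        if not matches_registered_baby(gray[y:y + h, x:x + w]):
            send_notification('unknown face')
cap.release()
```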
Algorithm 1: Overall working methodology of the proposed system.
Algorithm 1 depicts the overall working methodology of the proposed work. We used a dataset [11] from Kaggle to train our machine learning model to identify the baby's emotions, classifying the images into six predefined classes: anger, happiness, fear, cry, disgust, and surprise. We divided this module into three phases, as described in Figure 4. In the initial phase, the baby's image is detected by capturing the video stream using OpenCV; the captured frames are resized and converted to grayscale, and dlib is used to extract the face from the image. In the second phase, the captured face data is passed to our classifier: a support vector machine (SVM) is used for the recognition and classification of emotions from the captured face. Finally, the detected facial expressions are mapped to the appropriate emotions. Only emotions such as sadness, happiness, and surprise are notified in the parents' application.
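The feature representation fed to the SVM is not detailed above; the sketch below assumes dlib's 68 facial landmarks as the feature vector, one common choice, and uses scikit-learn's SVC as the classifier. The landmark model file must be downloaded separately from dlib's distribution.

```python
# Hedged sketch of the emotion pipeline: dlib face extraction plus an SVM.
# Landmark features are an assumption, not the paper's stated representation.
import dlib
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ['anger', 'happiness', 'fear', 'cry', 'disgust', 'surprise']

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def landmark_features(gray):
    """Flat (136,) landmark vector for the first detected face, else None."""
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)
    pts -= pts.mean(axis=0)            # remove translation
    pts /= pts.std() + 1e-6            # remove scale
    return pts.flatten()

# Training (X: feature vectors from the Kaggle images, y: class indices):
#   clf = SVC(kernel='linear').fit(X, y)
# Inference on a grayscale frame:
#   feats = landmark_features(frame_gray)
#   if feats is not None:
#       print(EMOTIONS[clf.predict([feats])[0]])
```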

4.3. Mobile Application
Controlling sensors from a mobile phone requires an application running under the Android or iOS operating system; an application named "Blynk" was used for this purpose. To connect the controllers with the Blynk application, an authentication key is needed: the Blynk application sends the authentication token to the registered email address, and this token is then synchronized with the Blynk server. We chose the Blynk application for building the dashboard because of its varied graphics interfaces and because it provides a user-friendly environment in terms of usage and widgets. Moreover, the data measured by the sensors is updated on the Internet, and the Blynk server and mobile application are used to retrieve it. The functions of the mobile application are depicted in Table 2.
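Besides the app widgets, sensor values can also be pulled programmatically; the sketch below uses the form of Blynk's legacy HTTP API (blynk-cloud.com/&lt;token&gt;/get/V&lt;pin&gt;) as an assumption, along with a hypothetical pin mapping.

```python
# Sketch: read a value over Blynk's legacy HTTP API. The endpoint form
# follows the 0.x REST docs; the V2 = temperature mapping is assumed.
import requests

TOKEN = 'your-auth-token'
resp = requests.get(f'http://blynk-cloud.com/{TOKEN}/get/V2')
resp.raise_for_status()
print('temperature:', resp.json()[0], 'C')   # legacy API returns a JSON array
```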
5. Results
In this section, the experimental results of our proposed system are presented. We performed various experiments to test the feasibility of our smart system design. We successfully connected all the sensors to the controllers and the server. The sensors' data is updated on the Internet and can be accessed via the Blynk server and the Blynk mobile application. Figure 5(a) shows the main screen of the user's Blynk application, with buttons to swing the cradle and to check the temperature and the live stream of the baby. The values uploaded by the microcontrollers of the baby monitoring system to the user's application via the Blynk server are shown in Figure 5(b). The real-time humidity and temperature of the baby's surroundings were measured, and a notification was sent to the user's application whenever the measured temperature was outside the specified threshold. The function of detecting a baby's cry and sending notifications was tested by playing a baby-crying ringtone. The connected sensor registered the sound level as soon as the audio started, and a notification was sent to the parent's mobile phone to report that the baby was crying. The smart cradle swings automatically when a crying sound is detected, and a notification, shown in Figure 5(c), is forwarded to the parents to inform them that their child is crying; the icon below the sensor values shows the notification alert. An external web camera is used for real-time baby monitoring. The face recognition algorithm checks whether the recognized baby is in the cradle; if not, it generates a notification and sends it to the user application, as shown in Figure 5(d). For the emotion detection experiments, we played a YouTube video of a baby, and different baby emotions were recorded. Each captured video frame is taken as an input image, and the baby's face region is detected from the input image every 10 seconds. Finally, the facial emotion of the baby in front of the web camera connected to the cradle is identified; the different emotions captured by the system are shown in Figures 5(e) and 5(f). These emotions were notified to the user application.

Figure 5: (a) Blynk mobile application interface; (b) Blynk mobile application interface with sensor values; (c) notification to the parent's mobile when the baby's crying is detected; (d) notification to the parent's mobile when an unknown face is detected; (e) "Surprise" emotion detection using the ML model on the input image; (f) "Sadness" emotion detection using the ML model on the input image.
6. Conclusion and Future Work
Monitoring a baby continuously is a challenging, almost impossible task for guardians. An IoT-based smart baby monitoring cradle system with emotion recognition has been designed as the best solution for working parents to monitor their babies anywhere and anytime. Because the memory of the Raspberry Pi is limited, the proposed system incorporates two controllers and divides the computational tasks between them to reduce processing time. A NodeMCU and a Raspberry Pi were used as the main controllers to connect the sensors that measure the crying condition, humidity, and ambient temperature. The system is designed to provide ease to working parents by sending instant notifications to their application when abnormal activity, such as the baby crying, is detected. The prototype was tested by playing a crying-baby ringtone on a phone placed near the cradle: the system detected the crying, started swinging the cradle, and instantly sent a notification to the user's mobile phone. The system is capable of detecting unknown faces and the emotions of registered baby faces only. This was tested using a baby video on YouTube: the camera detected the face and sent an "unknown face detected" notification to the parents. Moreover, we tested the viability of emotion detection by applying the machine learning model to a video of a registered baby. The model was trained to detect six types of emotions, but only the happy, sad, and surprise emotions are notified to the parents in their application. The proposed system helps parents monitor their baby through online streaming via the camera. Parents are not restricted to being on the same network as the external camera; they can monitor their baby in real time by simply entering the key provided by the Blynk application.
6.1. Future Work
In the future, the cost and complexity of the system can be reduced by connecting all the sensors to a single controller. The system's user interface also needs improvement; although Blynk provides an interactive GUI, there are some limitations in its built-in applications. Furthermore, we will develop our own set of dashboards for PCs, laptops, and smartphones to add more monitoring functionalities. In addition, emotion and cry detection could be improved by implementing other machine learning algorithms that detect the actual cause of the baby's crying; the present work only detects the baby's facial expressions. Moreover, different wearable sensors could be coupled with the system to provide accurate detection of various health conditions.
Data Availability
The data used to support the findings of this study are from previously reported studies and datasets, which have been cited in this article.
Conflicts of Interest
The authors declare that they have no conflict of interest regarding this publication.