Abstract

Computer vision is a significant component of human-computer interaction (HCI) in interactive control systems. In general, the interaction between humans and computers relies on the flexibility of the interactive visualization system. Electromyography (EMG) is a bioelectric signal used in HCI that can be captured noninvasively by placing electrodes on the human hand. Due to the impact of complex backgrounds, accurate recognition and analysis of human motion in real-time multitarget scenarios are considered challenging in HCI. Further, EMG signals of human hand motions are exceedingly nonlinear, so a dynamic approach is needed to address the noise problem in EMG signals. Hence, in this paper, the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model has been proposed for human motion recognition. The Average Intrinsic Mode Function (AIMF) has been used to reduce the noise factor in EMG signals. Furthermore, this paper introduces spatial thermographic imaging to overcome conventional sensor problems in gesture recognition and human target identification in multitarget scenarios. Human motion behavior in spatial thermographic images is examined by target trajectory analysis, and body movement kinematics is employed to classify human targets and objects. The experimental findings demonstrate that the proposed method reduces noise by 7.2% and achieves an accuracy of 97.2% in human motion recognition and human target identification.

1. Introduction

Nowadays, with the rapid development of information technology, human beings are trying to communicate with computers more naturally [1]. Conventional human-computer interaction input devices such as the mouse, keyboard, and remote devices lack flexibility and no longer offer a natural way of interacting [2]. In general, voice commands and body language are natural ways for people to communicate with computers, including many online commercial products [3]. The interaction between humans and computers is the most important application of computer vision for autonomous structures [4]. It is essential to acquire precise data like shape, behavior, and motion for efficient human-computer interaction [5]. An effective characteristic analysis of these human targets can accurately recognize and identify the targets [6]. Human target identification and the surrounding objects play a crucial role and pose many challenges for interaction between computers and humans [7]. We rapidly determine a number of important facts and qualities about each other during human-to-human interaction, including identification, age, facial expressions, and gestures. These visual cues/features have an impact on the content and flow of a conversation, and they provide contextual information such as situation and speech context. A gesture or a facial expression, for example, could be intended as a signal of understanding, or the gaze direction can be used to differentiate between an object referred to ("like this") and a direction ("over there") in speech. As a result, other communication channels such as speech and gestures are both coexpressive and complementary to the visual channel.

Conversely, conventional sensors do not deliver an acceptable field of view (FOV) to monitor various targets to examine human movement and body features [8]. The process of human movement identification needs sufficient space to map command gestures and different human targets [9]. It is significant to recognize the human targets and surrounding objects, and the computer aims to satisfy the requirements of the human interaction environment [10]. The user’s gesture would be consistent with the physical space of the virtual world; i.e., the user’s action should be matched with the gesture in the virtual field, and it is more appropriate to estimate the gesture for human-computer interaction [11]. A rich user experience and more effective and efficient interaction can be obtained by integrating visual information with other input modalities (such as keyboard and mouse). In addition to standard desktop computing, vision-based interaction could be beneficial in a variety of scenarios, including mobile, immersive, and ubiquitous computing.

Presently, sensor technology such as electromyography (EMG) and signal processing has been extensively utilized in the field of human-computer interaction and multifunctional prosthetic hand control [12, 13]. The electromyographic (EMG) signal captures the bioelectric activity of superficial muscles and nerve trunks through electrodes on the surface of the skin; muscle processes are evaluated and simulated by recording, filtering, amplifying, and transmitting the collected bioelectric signals [14, 15]. The EMG signals of human leg or hand movements during object use are easily contaminated by noise [16]. The main problems in achieving a detailed understanding of hand motion are successfully gathering signals, extracting features, and classifying diverse hand movements for human-computer interaction [17]. From an object's characteristics, such as weight, size, and shape, it is possible to identify human targets/emotions [18].

In this paper, the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model has been proposed to address gesture recognition and human target identification problems. The electromyographic (EMG) signal can represent the muscles' active conditions, from which data on neural activities can be determined [19]. The advantage of electromyography is that it is noninvasive; thus, it performs well in studies of neurological rehabilitation, motion detection, and artificial control. Besides, the AIMF algorithm has been employed to reduce noise in the EMG signal. Present models focus on human-computer interaction, emphasizing a specific target or the behavioral analysis of a set of targets [20]. The spatial thermographic images of human motions are analyzed to explain the trajectory actions and the kinematic motion of humans and objects. Our approach defines human targets precisely, allows them to improve their restricted vision, and overcomes traditional methodological problems related to gesture recognition and human target identification [21]. Target trajectory analysis examines human motion behavior in spatial thermographic images, and body movement kinematics is used to classify human targets and objects.

The rest of the paper is arranged as follows. Sections 1 and 2 discuss the overview of computer vision for human-computer interaction and related works. In Section 3, the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model is proposed. In Section 4, the experimental results are presented. Finally, Section 5 concludes the research paper.

2. Related Works and Features of This Research Article

Qi et al. [22] introduced the linear discriminant analysis and extreme learning machine (LDA-ELM) method for smart human-computer interaction based on surface EMG gesture recognition. The method can minimize the useless data in Surface Electromyography (SEMG) signals and enhance identification accuracy and efficiency. That work concentrates on optimizing time variances in surface EMG pattern recognition [23], and the numerical outcomes are advantageous for decreasing the time variances in gesture identification based on surface EMG. Chen et al. suggested the Motor Unit Spike Trains with Blind Source Separation Algorithm (MUST-BSSA), characterizing how precisely high-density EMG signals recognize motor unit actions during hand postures. The results demonstrate the possibility of recognizing motor units during the assigned motor operations and the high precision of hand gesture classification for human-computer interface perspectives [24].

Song et al. [25] discussed a guidance framework for tracking by detection (GFTD) for hand detection based on thermal images. They introduced an Adaptive Hand Detection (AHD) based automatic tracking-by-detection algorithm utilizing the kernelized correlation filters tracker to enhance the performance of the proposed model. The model detects hands in real time by decreasing computation, utilizing a single sensor instead of fusing multiple sensors, which enables precise tracking and enhances hand tracking precision. Xiao et al. [26] proposed the variational mode decomposition and composite permutation entropy index (VMD-CPEI) method to classify hand movements. Their approach uses the VMD procedure to decompose the initial SEMG signal into multiple variational mode functions and measures the related CPEI of each signal component. The proposed model can enhance the quality of life of amputees, disabled persons, and others.

Sekhavat et al. [19] introduced Affective User Interface Design and Evaluation (AFFUIDE) to evaluate the effect of using facial expression emotions as a user interface (UI) and system input in virtual scenarios. The data suggest that the traditional usable user interface is the most useful and that the full affective user interface provides the most fun and the best user experience. Choudhary et al. [20] recognized a person even when they were not in a neutral condition or had a facial expression. Using a hidden Markov model with singular value decomposition (HMM-SVD), they could identify persons whose top half of the face is visible. Singular value decomposition parameters were used in that article to build a series of blocks for each picture of a face [21]. To cover the entire face, a seven-state HMM was employed. In this paper, the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model has been proposed to address gesture recognition and human target identification [23]. This paper proposes a novel approach to determine the human targets in spatial thermographic images [24, 27] using vision perception. This system estimates the capacity of the human targets to view when they have restricted visualization and supports targets in perceiving their surroundings by informing them about their condition through the visualization system [28, 29]. Identifying human targets is the main aim, while at the same time recognizing the scenario for the human targets. The signals of human hand motions have been gathered utilizing an EMG acquisition device [30, 31]. A motor unit classification-based gesture recognition method was proposed in [32]. MUSTs were first classified into 11 categories, one per motion. The averaged discharge timings of the MUSTs in each group are then used to measure the activation level of the cerebral drive for each motion. By comparing the estimated activation levels of each motion, the output gesture class was determined.
AIMF has been used to denoise the acquired raw signals and to build feature sets for the hand movement classification process [22, 25, 33]. The proposed method is described in the following section.

3. Optimized Noninvasive Human-Computer Interaction Model (ONIHCI)

In this paper, the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model has been proposed to address gesture recognition and human target identification problems. The requirement for computer vision-based human-computer interaction and human movement recognition has increased in fields like intelligent monitoring, security, and surveillance systems. The most significant and common human movements are walking and running, and many studies have aimed to develop a computer model of motion. Human movement recognition remains a challenge in human-computer interaction. Motion analysis includes measuring, analyzing, and evaluating the motion functions associated with walking or running activity to determine the human target in an HCI system.

Case 1. Human kinematics analysis and trajectory analysis for human target identification

Solution 1. Analysis of human kinematics is the initial step to determine whether a human is moving or static. Human frames are extracted from the spatial thermographic image, and areas are calculated from dynamic targets to explore the trajectories and kinematics used to evaluate the proposed approach. The human targets know the instructions and are in an upright position. This paper utilizes three human targets for human target identification.
Figure 1 shows the human body kinematics analysis. For every human, the body orientation around the arm region, the legs, and the head is determined for the upper part of the target spot. For each of the three human targets, the body orientation is denoted around the arm region, the left leg, the right leg, and the head target spot. The human body provides a few signs of motion or a static disposition. The inclination of direction is extracted from the orientation between these kinematics: if the legs and head are open at a certain angle, the motion direction is provided by the angle between the legs and the slope of the head. In Figure 1, the human target is shown running, with the head and right leg sloped toward the same direction. In the case of a static target, the orientation of the body, legs, and head, including the arms, may be parallel across human targets. The kinematics transferred to the interactive system's point of view are obtained by multiplying the kinematics from the initial interactive system with the rotation and translation between the interactive systems. As inferred from equations (1a), (1b), and (1c), the rotation matrix, kinematics, and translation vector of the human body target transformation can be computed. The relationship between the two groups of kinematics can be used to match the classified targets from two interactive systems, connecting the same target across two different thermographic views.
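The kinematics transfer described above (equations (1a), (1b), and (1c)) amounts to applying a rotation and a translation to each body keypoint to move it into the other interactive system's coordinate frame. A minimal numpy sketch, where the rotation angle, translation, and keypoint coordinates are purely illustrative assumptions:

```python
import numpy as np

# Hypothetical rotation (about the vertical axis) and translation between
# the two interactive-system coordinate frames.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.5, 0.0])

def transfer_kinematics(points, R, t):
    """Transfer body keypoints (arms, legs, head) from one interactive
    system's frame into the other's: p' = R p + t."""
    return points @ R.T + t

# Illustrative keypoints (meters): head and left leg of one human target.
head = np.array([0.0, 0.0, 1.7])
left_leg = np.array([-0.2, 0.0, 0.0])
transferred = transfer_kinematics(np.stack([head, left_leg]), R, t)
```

Matching the same target across two thermographic views then reduces to comparing the transferred keypoints of one view with the keypoints measured in the other.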
Figure 2 shows how the position data of every computer or interactive system is utilized to convert the human body kinematics to the other view direction. In Figure 2, the spatial thermographic interactive system, the single-perspective interactive system, and the interactive stereo system each have their own coordinate system, with a rotation and a translation defined between the interactive systems. The human target trajectories are examined in the thermographic scene with respect to the trajectory trend of the respective target's feature points. The height and width of the target area are evaluated as a ratio to determine the first data about the target's trajectory trend. The ratio variations depend on the target alignment through the center rotation; the height and width vary with the target's backward and forward motion relative to the interactive system.
In Figure 3, the backward and forward motion calculations and the direction and magnitude vectors are provided for every interactive system, denoted as dotted lines. The variations of the ratios from every interactive system's perspective are presented in Figure 3. Stereo-vision-based identification combines features extracted from two-dimensional stereo images with reconstructed three-dimensional object features to sense humans in an interactive setting. The interactive stereo system is already monitoring the target; the computer-endowed perspective sensor (the thermographic camera) alters its direction toward the observed target as part of the human-computer interaction task. The human target coordinate system is represented by the direction vector, the width, and the height; the width-to-height ratio indicates whether the target is moving toward the single-perspective interactive system or toward the interactive stereo system, with a rotation and translation defined between the interactive systems. These variations are noted during the movement of the target and updated over time by equation (2), which involves the rotation width, the rotation height, and the rotation ratio; the height and width in the thermographic image are utilized to determine the direction vector. If there is no variance between successive ratios, the direction vector is acquired from the variance between the target's widths in successive pictures. If the ratio varies, the target is assumed to be rotating around itself.
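The ratio test built on equation (2) can be sketched as a small decision rule; the threshold `eps` and the return labels are illustrative assumptions rather than values from the paper:

```python
def trajectory_trend(prev_w, prev_h, cur_w, cur_h, eps=0.02):
    """Classify target motion from the width/height ratio of its bounding
    area in successive thermographic frames. A stable ratio with a changing
    width suggests approach/retreat; a changing ratio suggests rotation."""
    prev_ratio = prev_w / prev_h
    cur_ratio = cur_w / cur_h
    if abs(cur_ratio - prev_ratio) > eps:
        return "rotating"       # ratio varies: target turning around itself
    if cur_w > prev_w:
        return "approaching"    # same ratio, growing width
    if cur_w < prev_w:
        return "retreating"     # same ratio, shrinking width
    return "static"
```

For example, a bounding frame growing from 40 x 100 to 44 x 110 pixels keeps its ratio and widens, so it is classified as approaching the camera.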
The horizontal target motion is utilized for the right and left directions in its trajectory. For a particular target, the extracted feature points are tracked, and their mean coordinates are evaluated over the total number of feature points. The final horizontal position average at the previous time step is subtracted from the present mean; this variation supports determining the direction and magnitude of the horizontal vector, as expressed in equations (3a), (3b), and (3c), which involve the extracted feature points for a particular target, the overall number of extracted feature points, the previous and present horizontal position averages, the direction and magnitude of the horizontal vector, and the final trajectory vector. A trajectory trend is produced for every target in the spatial thermographic picture and compared with the trajectory assessment of the other interactive system. Every interactive system assessment utilizes the horizontal vector and the method vector to determine the final trajectory vector.
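The horizontal motion estimate of equations (3a), (3b), and (3c), subtracting the previous mean x-coordinate of the tracked feature points from the current mean, can be sketched as follows (function and label names are illustrative):

```python
import numpy as np

def horizontal_vector(prev_points, cur_points):
    """Direction and magnitude of horizontal target motion from tracked
    feature points, as the difference of mean x-coordinates between frames.
    Points are (N, 2) arrays of (x, y) image coordinates."""
    dx = np.mean(cur_points[:, 0]) - np.mean(prev_points[:, 0])
    if dx > 0:
        direction = "right"
    elif dx < 0:
        direction = "left"
    else:
        direction = "none"
    return direction, abs(dx)
```

The resulting direction and magnitude can then be combined with the ratio-based vector of equation (2) to form the final trajectory vector for each interactive system.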
The identified target areas are labeled with two criteria for their characteristics. The human target areas are identified and bounded by a rectangular frame. The frame is split into a number of cells to examine every segment distinctly, with given numbers of cells in the horizontal and vertical directions. The human target is chosen utilizing connected components and distinguished from added substances in the target frame. The pixels in each cell are summed by equation (4), and the overall human target field is determined for the respective cell from the overall cell pixel value and the height and width of the target frame. Each row and column of cells is added to form a bird's-eye perception target spectrum as a distinguishable target signature. As shown in equation (4), each pixel in a cell is addressed by its coordinates within that cell. The final pixel coordinates from the bottom and left of each cell can be evaluated from equations (5a) and (5b), using the numbers of cells in the horizontal and vertical directions and the target frame height and width. The beginning coordinates of every cell are determined from the present coordinates of the target frame and the frame height and width, as given in equations (6a), (6b), and (6c); every cell index is evaluated utilizing the floor function after division. After determining the kinematics for the arm, leg, and head areas, the relationship of these kinematics is chosen for each thermographic image of the same target.
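The cell-wise pixel summation of equations (4)-(6) can be sketched with numpy; the grid dimensions are assumptions, and the frame is assumed to divide evenly into cells:

```python
import numpy as np

def cell_signature(frame, n_cols, n_rows):
    """Sum the pixels inside each cell of the target bounding frame to form
    a per-cell grid signature for the target."""
    h, w = frame.shape
    ch, cw = h // n_rows, w // n_cols          # cell height and width
    sig = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            # beginning coordinates of the cell follow from its index,
            # the frame origin, and the cell height/width
            sig[r, c] = frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw].sum()
    return sig
```

Row and column sums of this signature then give the one-dimensional target spectrum described above.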
An additional correlation outcome generates trajectories utilizing the human target trajectory directions [X Y Z] to choose the final trajectory direction with the maximum association outcome. Human motion has been identified based on human body kinematics and target trajectory analysis, and human activity has been identified accurately for the human-computer interaction process. With the collective thermographic images of pixel and edge variances, human motion detection has been performed. The spatial thermographic images suffer from noise, low image resolution, and low contrast; to resolve these problems, the AIMF algorithm has been proposed.

Case 2. EMG-based human motion recognition.

Solution 2. Better human-computer interaction can be accomplished utilizing electromyography (EMG) based human motion recognition. The signals of common human gestures have been gathered using the EMG acquisition device. Electrode locations were chosen in line with the musculoskeletal anatomy of the five muscles and confirmed by muscle-specific contractions, which involve physically resisted finger abduction and extension. EMG signals are bioelectrical responses generated by muscle fiber activity during muscle contractions.
Figure 4 displays the human-computer interaction procedure based on the EMG signal. Data acquisition, preprocessing, feature extraction, and classification are the main stages in human gesture classification. Gesture classification can only identify discrete body gestures and cannot by itself be utilized for continuous control of the interactive system; continuous motion regression is therefore needed, as it provides estimates of more motion information. With an EMG signal-based musculoskeletal model for gesture recognition, the mapping between EMG and angular acceleration, joint moment, joint angular velocity, and angle can be recognized. The commonly utilized features can be primarily separated into time-frequency domain features and frequency domain features. The initially gathered EMG signals include a variety of noise signals, such as electromagnetic noise, electrode noise, and pervasive noise, caused by the external environment and the acquisition system. Therefore, reducing the noise of the real signal is necessary to generate accurate and efficient information for feature extraction.
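As a concrete illustration of the feature extraction stage, a small sketch of three standard time-domain EMG features follows; this particular feature set is an assumption for illustration, since the paper does not list its exact features:

```python
import numpy as np

def emg_features(x):
    """Common time-domain EMG features: mean absolute value (MAV),
    root mean square (RMS), and zero-crossing count (ZC)."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    # count sign changes between consecutive samples
    zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    return {"MAV": mav, "RMS": rms, "ZC": zc}
```

Features like these are typically computed per channel on short sliding windows of the denoised signal before classification.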
As shown in Algorithm 1, the proposed AIMF algorithm with adaptive time-frequency data analysis can decompose a time series into a finite number of components, known as Empirical Mode Decomposition (EMD). This self-adaptive decomposition is a data processing method for nonstationary and nonlinear signals, where the input is the EMG signal of each acquisition channel and the output is a set of intrinsic mode functions. To reduce the noise in the EMG signal, all the extreme points of the signal are first computed. The extreme points are fitted by an interpolation approach to obtain the upper and lower envelopes; subsequently, the average envelope value is computed and subtracted to obtain a stationary data sequence. The algorithm decomposes the actual signal and separates the noise from the effective signal with high time-frequency resolution and good adaptability across the various intrinsic mode functions. Therefore, before collecting the features of the EMG signal, the AIMF algorithm is a suitable method for decreasing the EMG signal noise.
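A minimal numpy sketch of the sifting loop that Algorithm 1 builds on; the linear-interpolation envelopes and fixed iteration counts are simplifications of the full AIMF procedure, and all names are illustrative:

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower
    envelopes (here linear interpolation through local extrema)."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] <= x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x, True                      # monotonic residue: stop sifting
    upper = np.interp(t, maxima, x[maxima])  # upper envelope
    lower = np.interp(t, minima, x[minima])  # lower envelope
    return x - (upper + lower) / 2.0, False

def emd(x, max_imfs=5, sift_iters=10):
    """Decompose signal x into intrinsic mode functions plus a residue."""
    x = np.asarray(x, dtype=float)
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        h, done = residue, False
        for _ in range(sift_iters):
            h, done = sift_once(h)
            if done:
                break
        if done:
            break
        imfs.append(h)
        residue = residue - h               # update the residual signal
    return imfs, residue

# Illustrative two-tone test signal: fast (30 Hz) plus slow (3 Hz) component.
tt = np.linspace(0.0, 1.0, 500)
sig = np.sin(2 * np.pi * 30 * tt) + np.sin(2 * np.pi * 3 * tt)
imfs, res = emd(sig)
```

By construction the IMFs plus the residue reconstruct the original signal, which is what lets the noisy high-frequency modes be dropped before feature extraction.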
As shown in Algorithm 1, Empirical Mode Decomposition is executed on every EMG signal channel, and a set of AIMFs can be determined as in equation (7), where the residual of the actual EMG signal remains after extracting the AIMFs. The data signal determined after noise reduction using Algorithm 1 guarantees the feature extraction of the original motion signals.
Figure 5 shows the communication between interactive systems. Without loss of generality, it is assumed that the HCI system records heterogeneous variables to be evaluated and heterogeneous variables for interaction, producing a computation vector and a communication vector that contain the calculated values of the computation and communication variables at the HCI system, respectively. For computation, the interactive system engages itself in computing the targeted function through equation (8), where the ideal computation output is produced by the preprocessing functions at the interaction system. For simplicity of exposition and without loss of generality, the communication and computation signals are assumed to be distributed with unit norm. Therefore, the interactive system builds the coded transmit EMG signal as in equation (9), applying the transmit beams for the communication and computation signals, respectively. The received signal at the interactive system is expressed in equation (11), with a noise vector of given variance. The processing of the computation signals is discussed first. Because of the one-to-one mapping in equation (8), a precise estimate of the targeted function signal is taken at the interactive system. A receive beam is applied at the interactive system to reduce the distortion of the targeted function signal caused by channel noise, fading, and interference. Therefore, the received signal for computation at the interactive system is expressed in equation (12), with a receive beam for the computation outcome at the interactive system.
As a rule, the distortion of the computation at the interactive system is calculated by the Root Mean Square Error (RMSE) between the targeted function signal and its estimate, as stated in equation (13). Substituting (12) into (13), the computation distortion can be calculated as the resulting RMSE function of the transmit and receive beams, given in equation (14). Subsequently, communication signal processing is discussed. The received EMG signal for communication at the interactive system can be evaluated as in equation (15), with a receive beam vector for the communication signal at the interactive system. As a result, the received EMG signal-to-interference-and-noise ratio at the communication receiver system can be calculated using equation (16). Besides, the transmit signal of the EMG relies on the energy beam sent by the human-computer interaction system.
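The transmit/receive beam model of equations (8)-(16) can be illustrated with a small numpy sketch; the SVD-based beams, the channel model, the noise level, and the dimensions are assumptions made for illustration, not the paper's exact beam design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx = 4, 4

# Hypothetical flat-fading channel between the two interactive systems.
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Unit-norm communication and computation symbols (per the unit-norm assumption).
s_comm, s_comp = 1.0 + 0j, 1.0 + 0j

# Transmit beams matched to the two strongest channel directions via SVD.
U, S, Vh = np.linalg.svd(H)
w_comm = Vh[0].conj()                  # strongest right singular vector
w_comp = Vh[1].conj()                  # second, orthogonal direction

x = w_comm * s_comm + w_comp * s_comp  # coded transmit signal (cf. eq. (9))
noise = 0.01 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise                      # received signal (cf. eq. (11))

# Receive beams matched to the corresponding left singular vectors.
est_comm = U[:, 0].conj() @ y / S[0]   # communication estimate (cf. eq. (15))
est_comp = U[:, 1].conj() @ y / S[1]   # computation estimate (cf. eq. (12))
```

Because the two beams use orthogonal singular directions, each receive beam recovers its own symbol with only the projected channel noise as residual distortion, which is what the RMSE of equations (13)-(14) quantifies.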
For EMG signal-based human motion recognition, the Lyapunov exponent is utilized to detect the numerical features of a complex system; it denotes the system's sensitivity to the initial value as the parameters progress over a period. An M-dimensional system has M Lyapunov exponents, creating an exponential spectrum. Thus, it is broadly utilized in system fault diagnosis along with muscle activity and muscle contraction identification, as stated in equation (17), which involves the distance between two adjacent zero points at a given time, the sampling time, and the overall step length N. The distance between end-to-end paths is generally reproduced by the forecast fault on the function to attain the Lyapunov exponent of the complete set of IMFs, as stated in equation (18), which involves the distance between phase points, the adjacent phase point, the distance between the points after a convolution step-length time, and the aggregate number of phase points M. The two typical EMG signal features, embedding dimension and delay time, are two essential variables for evaluating the Lyapunov exponent. The motion characteristics of the EMG signal have been chosen and represented using these two parameters. The delay time was calculated using the mutual information technique, as shown in Figure 6 and equation (19), where the terms of the mutual information are likelihood values.
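The mutual-information delay-time selection of equation (19) can be sketched with a histogram estimate; the bin count, lag range, and first-local-minimum rule are illustrative assumptions:

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """Histogram estimate of the mutual information (in nats) between
    x(t) and x(t + tau)."""
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def delay_time(x, max_tau=50):
    """Delay time chosen as the first local minimum of the mutual
    information over the lag (a common heuristic for phase-space embedding)."""
    mi = [mutual_information(x, tau) for tau in range(1, max_tau + 1)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] <= mi[i + 1]:
            return i + 1                 # lags start at tau = 1
    return int(np.argmin(mi)) + 1
```

For a periodic signal, the mutual information is high again at a full-period lag and lowest near a quarter period, which is where the delay is typically picked.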
By Algorithm 1, the embedding dimension has been evaluated, as stated in equation (20), using the mean thresholds of the distance between every two adjacent neighbors in the recreated higher-dimensional space and the n-dimensional space, respectively.
The proposed AIMF algorithm reduces the EMG signal noise and obtains active human motion features for the human-computer interaction. Finally, the proposed Optimized Noninvasive Human-Computer Interaction (ONIHCI) model addresses the problems such as accurate gesture recognition and human target identification and reduces the noise in EMG signal for an effective HCI process. The following section briefly describes the experimental results.

Input: the actual data sequence of the EMG signal
Output: the intrinsic mode functions
Repeat
  While the stopping criterion is not satisfied do
    for all acquisition channels j do
      Compute all extreme points of the current signal
      Obtain the upper envelope by interpolating the local maxima
      Obtain the lower envelope by interpolating the local minima
      Compute the average envelope value
      Obtain a stationary data sequence by subtracting the average envelope
    Update the residual signal
Until convergence

4. Experimental Results and Discussion

The experimental results of the proposed Optimized Noninvasive Human-Computer Interaction (ONIHCI) model have been obtained in a computer vision-based human-computer interaction environment. Different situations have been analyzed using the training and testing dataset [21]. From the large dataset, a 70 : 30 ratio of training and testing data has been formed. A different dataset was randomly chosen among the testing data, and various performance metric values were observed. Figure 7 and Table 1 give the results observed from this analysis. Human targets are running, walking, and slowly waving; the data is divided into multiperson and single-person behavior. The analysis of human trajectory and human kinematics is discussed in this section. The human targets are detected in multiple thermographic images.
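The 70 : 30 training/testing split described above can be sketched as a random index partition; the dataset size and random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000                       # illustrative dataset size
indices = rng.permutation(n_samples)   # shuffle before splitting
split = int(0.7 * n_samples)           # 70 : 30 train/test ratio
train_idx, test_idx = indices[:split], indices[split:]
```

Shuffling before splitting keeps the single-person and multiperson behaviors mixed across both partitions rather than clustered by recording order.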

Furthermore, EMG-driven musculoskeletal model-based motion recognition has been utilized to map between EMG and joint angular velocity, angle, joint moment, or angular acceleration. The experimental results show that the suggested HCI method achieves lower noise and enhances the accuracy in human motion recognition and human target identification with high performance. The simulation parameters used for human motion recognition based on EMG signal acquisition are a simulation time of 4 seconds, a sampling frequency of 25 kHz, a muscle length of 200 mm, 64 electrodes, a muscle radius of 20 mm, a muscle fiber length of 45 mm, a muscle fiber diameter of 35.77, and a fiber length of 150 mm.

4.1. Root Mean Square Error (RMSE) Ratio Analysis

The Root Mean Square Error is often used in continuous motion prediction. It compares the actual values to the predicted values; the difference is squared to prevent negative and positive values from canceling. The distortion of the computation at the interactive system is evaluated by the RMSE between the targeted function signal and its estimate, as stated in equation (13).

Substituting (12) into (13), the computation distortion can be calculated as the RMSE function of the receive and transmit beams given in equation (14).

Therefore, the computation outcome should have a low RMSE. Figure 7 demonstrates the RMSE using the suggested system.
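The RMSE measure used in this analysis can be sketched directly:

```python
import numpy as np

def rmse(target, estimate):
    """Root Mean Square Error between actual values and predicted values:
    the squared differences prevent positive and negative errors from
    canceling before the mean and square root are taken."""
    target = np.asarray(target, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.sqrt(np.mean((target - estimate) ** 2)))
```

A perfect prediction yields an RMSE of zero, and larger deviations are penalized quadratically, so the computation outcome should drive this value as low as possible.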

The performance ratio evaluation of the recommended ONIHCI system is shown in Table 1. To detect the movement of a human, it is necessary to track the body's motion throughout physical activity. Processing the data to better portray such movement aids in detecting the activity, which supports this study effectively. Compared to other technologies, the human-computer interface performance based on human motion recognition is quite successful. Table 1 shows that the proposed model achieves the highest performance ratio of 95.4%, outperforming the existing models in the literature survey. For all sets of data inputs, ONIHCI gives the highest performance ratio.

4.2. Recognition Accuracy Ratio Analysis

The human thermographic view is examined to comprehend the trajectory behaviors and the motion kinematics during the target's movement. Humans are initialized in the kinematics system by features like the number of limbs, degrees of freedom, and limb length. Humans are represented as images, and characteristic traits like form or region are extracted and stored in the image model. The recognition, as well as the complexity of the motion, can be determined effectively. The accuracy of every sensor is provided in Table 2 with respect to each sensor's kinematics and trajectory analysis utilizing thermographic images. The simulation outcomes demonstrated that the suggested approach can assess the human target angle with high accuracy compared to other existing approaches. Figure 8 demonstrates the recognition accuracy ratio using the proposed ONIHCI method.

4.3. Delay Time Determination and Noise Reduction Ratio Analysis

To reflect the nonlinear dynamics of the EMG signal, the actual EMG signal is decomposed during noise reduction into a set of IMFs, as demonstrated in Figure 9(b): the actual signal x(t) is decomposed into IMF components c_i(t) and a residual r(t). Each c_i(t) is an oscillation function with various frequencies and amplitudes. The residual r(t) is a monotonic signal denoting the drift element, obtained by deducting every c_i(t) from the actual signal, and it no longer meets the decomposition conditions. The embedding dimension and the delay time are two significant parameters for computing the Lyapunov exponent; if the chosen delay time is too short, it is not advantageous to EMG signal optimization.

Two typical EMG signal parameters, the embedding dimension and the delay time, are essential for evaluating the Lyapunov exponent. When these two parameters are appropriate for depicting the EMG gesture time sequence data, the motion features of the EMG signal can be selected and represented.

The mutual information approach has been utilized for calculating the delay time τ, which is stated as

I(τ) = Σ_i Σ_j P(x_i, y_j) log₂ [ P(x_i, y_j) / (P(x_i) P(y_j)) ],

where P(x_i), P(y_j), and P(x_i, y_j) are likelihood values of the original series x(t) and the delayed series y(t) = x(t + τ). The delay time is chosen at the first local minimum of I(τ).
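A histogram-based sketch of this mutual information estimate, assuming the delay time is taken at the first local minimum of I(τ); the 1.1 Hz test sine is an illustrative surrogate, not EMG data:

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    # Histogram estimate of I(x(t); x(t + tau)) in bits
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def delay_time(x, max_tau=50, bins=16):
    # Delay time = first local minimum of the mutual information curve
    mi = [mutual_information(x, t, bins) for t in range(1, max_tau + 1)]
    for t in range(1, len(mi) - 1):
        if mi[t] < mi[t - 1] and mi[t] <= mi[t + 1]:
            return t + 1  # taus are 1-based
    return int(np.argmin(mi)) + 1

# Illustrative surrogate signal: a 1.1 Hz sine sampled at 100 Hz
t = np.arange(0, 10, 0.01)
x = np.sin(2 * np.pi * 1.1 * t)
print(delay_time(x))
```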

Based on Algorithm 1, the embedding dimension has been evaluated, which is stated as

E1(n) = E(n + 1) / E(n),

where E(n + 1) and E(n) are the mean thresholds of the distance between every two adjacent neighbors in the reconstructed (n + 1)-dimensional space and n-dimensional space, respectively. Figure 9(a) shows the delay time using the proposed ONIHCI method.
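The false-nearest-neighbor idea behind this criterion can be sketched as follows; this is a generic illustration, not Algorithm 1 itself, and the 1.1 Hz test sine and thresholds are assumptions:

```python
import numpy as np

def embed(x, dim, tau):
    # Time-delay embedding of a 1-D series into dim-dimensional vectors
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def false_nn_fraction(x, dim, tau, rtol=15.0):
    # Fraction of nearest neighbors in dim-space that separate strongly
    # when the (dim + 1)-th delay coordinate is added
    y, y1 = embed(x, dim, tau), embed(x, dim + 1, tau)
    m = len(y1)
    false = 0
    for i in range(m):
        d = np.linalg.norm(y[:m] - y[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))
        extra = abs(y1[i, -1] - y1[j, -1])
        if d[j] > 0 and extra / d[j] > rtol:
            false += 1
    return false / m

def embedding_dimension(x, tau, max_dim=8, threshold=0.01):
    # Smallest dimension whose false-neighbor fraction falls below threshold
    for dim in range(1, max_dim + 1):
        if false_nn_fraction(x, dim, tau) < threshold:
            return dim
    return max_dim

# A sine becomes a 2-D (near-circular) trajectory once embedded
t = np.arange(0, 10, 0.01)
x = np.sin(2 * np.pi * 1.1 * t)
print(embedding_dimension(x, tau=23))
```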

The quality of the EMG signal measurement is expressed by the ratio of the measured EMG signal to unwanted environmental noise. A high-quality signal offers more information for predicting intention and thus increases prediction accuracy. Nevertheless, noise from various sources is possible, and EMG signals may be contaminated during analysis. To maximize the signal-to-noise ratio, amplifiers are designed and used to reject or eliminate noise. The accuracy of EMG signals is affected by noise and artifacts from various causes (including electrical devices, power lines, and physiological factors), which may contribute to inaccurate analysis of the data or a misunderstanding of motion parameters. In the proposed ONIHCI method, the AIMF algorithm reduces the noise level present in the raw EMG signal. Figure 9(b) shows the noise reduction ratio using the proposed method.
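A simple illustration of how the signal-to-noise ratio is measured and how smoothing improves it; a generic moving-average smoother stands in for the AIMF algorithm here, and the synthetic 5 Hz signal is an assumption:

```python
import numpy as np

def snr_db(clean, noisy):
    # SNR in dB, treating (noisy - clean) as the noise component
    clean, noisy = np.asarray(clean), np.asarray(noisy)
    noise = noisy - clean
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2)))

rng = np.random.default_rng(1)
t = np.arange(0, 1, 0.001)           # 1 s at 1 kHz
clean = np.sin(2 * np.pi * 5 * t)    # surrogate "signal" component
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Stand-in denoiser: 9-point moving average
denoised = np.convolve(noisy, np.ones(9) / 9, mode="same")

print(snr_db(clean, noisy), snr_db(clean, denoised))
```

The smoothed signal shows a higher SNR than the raw one, which is the effect the AIMF noise-reduction stage targets on the raw EMG signal.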

Table 2 shows the precision ratio using the proposed ONIHCI method. The system implemented in this study is user-friendly compared to a command-based system or standard device and is robust in detection and recognition. The proposed AIMF algorithm is widely applicable to classification and regression in HCI because of its simple execution, high precision, and antinoise capability.

4.4. Normalized Computation Error Evaluation

Normalized computation error is a statistical assessment utilized to compare proficiency testing outcomes in which the uncertainty of the measurement outcomes is included. Error with the spatial thermographic sensor stems from the minimal image size of the targets and from unexpected changes. Single-perspective and stereo sensors help decrease the computation error through additional views. The distance between the human target and the interactive system positions at every degree of target force direction, averaged over a tracking cycle, is known as the normalized tracking error. The proposed method has a lower computation error than other existing methods. Figure 10 shows the normalized computation error using the proposed ONIHCI method. The proposed Optimized Noninvasive Human-Computer Interaction (ONIHCI) model achieves higher recognition accuracy and lower delay time and noise than the existing linear discriminant analysis with extreme learning machine (LDA-ELM), Motor Unit Spike Trains with Blind Source Separation Algorithm (MUST-BSSA), guidance framework for tracking by detection (GFTD), and Hidden Markov Model with Singular Value Decomposition (HMM-SVD) methods.
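Under the definition above, the normalized tracking error can be sketched as the mean target-to-estimate distance over a tracking cycle; normalizing by the target's path length is an assumption of this sketch, as are the sample positions:

```python
import numpy as np

def normalized_tracking_error(target_xy, tracked_xy):
    # Mean Euclidean distance between true and tracked positions over a
    # tracking cycle, normalized by the length of the target's path
    target = np.asarray(target_xy, dtype=float)
    tracked = np.asarray(tracked_xy, dtype=float)
    dist = np.linalg.norm(target - tracked, axis=1)
    path = np.sum(np.linalg.norm(np.diff(target, axis=0), axis=1))
    return float(dist.mean() / path)

# Hypothetical 2-D positions (metres) over one cycle
target = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
tracked = [[0.0, 0.1], [1.0, -0.1], [2.0, 0.1]]
print(normalized_tracking_error(target, tracked))  # ≈ 0.05
```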

5. Conclusion

This paper presents the Optimized Noninvasive Human-Computer Interaction (ONIHCI) model to address gesture recognition and human target identification problems. Human trajectory analysis and human kinematics analysis have been introduced, and human targets are detected in multitarget scenarios based on spatial thermographic images acquired with different sensors. Furthermore, EMG-driven musculoskeletal-model-based human motion recognition has been utilized to map EMG to joint angular velocity, angle, joint moment, or angular acceleration for HCI. To reduce the noise in EMG signals, the AIMF algorithm has been introduced. The experimental findings demonstrate that the proposed system reduces noise by 7.2% and achieves an accuracy ratio of 97.2% in human motion recognition, identifying human targets with high performance.

Data Availability

The data that support the findings of this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they do not have any conflicts of interest.

Acknowledgments

The authors extend their gratitude to the Deanship of Scientific Research at King Khalid University for funding this work through the research group program under grant number R.G.P. 1/77/42.