Abstract
For the 6R robot, some configurations admit no analytical solution, so inverse kinematics (IK) must be analysed by general solution methods, which cannot match the precision and speed of analytical solutions. With the expansion of application fields and the growing complexity of application scenarios, robots with special configurations have become a research hotspot, and faster, more precise general algorithms are still being explored. The present paper optimizes two general solutions. The elimination method is a numerical solution with high accuracy, but its solving process is complex and time-consuming. This paper optimizes the elimination method by deriving the final matrix expressions directly through complex coefficient extraction and simplification, realizing a one-step solution; the solving time is reduced to 15% of the original, and the integrity of the method is supplemented. This paper also proposes a new optimization of the Gaussian damped least-squares method, in which a variable step-size coefficient is introduced and determined by machine learning. It is shown that, while guaranteeing the stability of motion, the average number of iterations is effectively reduced to only 4 to 5, markedly improving the solving speed.
1. Introduction
For 6R robots, most configurations satisfy the Pieper criterion, that is, three adjacent axes are parallel or intersect at one point; an analytical solution then exists and can be computed with high speed and high precision. Some configurations, dictated by special operating requirements, do not meet the Pieper criterion and have no analytical solution, so they must be solved by general methods. To meet the requirements of smoothness, higher speed, and higher precision, new general methods and optimizations are still being explored. Among the numerous methods, the elimination method is time-consuming and complex, but it has the highest precision and can obtain all solutions for the same terminal pose. This paper optimizes the elimination method to improve the solving speed and make the algorithm more complete. On the basis of the Gaussian damped least-squares method, this paper introduces a variable step-size coefficient, studies its influencing factors, and determines the coefficient through machine learning regression. The resulting speed is superior to other iterative optimization methods: the average number of iterations is only 4 to 5, which effectively improves the solving speed and approaches that of analytical solutions. For inverse kinematic problems, there are three main categories of algorithms: iterative methods, numerical and geometric methods, and soft computing methods.
The Jacobian matrix pseudoinverse method, proposed in [1], is the most commonly used method for robot inverse kinematics, but it is unstable when the robot approaches a singular configuration. Four Jacobian-based methods for solving the inverse kinematics are evaluated in [2]. A novel formula for calculating the pseudoinverse is introduced for a class of redundant robots in [3]. The optimal approximation of the Jacobian pseudoinverse algorithm by the extended Jacobian algorithm is described in [4]. On the basis of the pseudoinverse method, the damped least-squares (DLS) method [5, 6] introduces a damping factor to balance solution accuracy and joint velocity near the singular configuration, making the motion more stable. The DLS method is used to analyse the inverse kinematics of the 7R 6-DOF robots with the hollow nonspherical wrist in [7]. Based on different damping factor selection strategies, the selectively damped least-squares (SDLS) method is proposed in [8]. The Gaussian damped least-squares (GDLS) method is described in [9]: the Gaussian function is applied to determine the damping factor, which transitions continuously from undamped to damped, making the motion more stable and reducing errors. A deeply learnt damped least-squares method is proposed for solving the IK of a spatial snake-like robot in [10], where a deep network predicts the unique damping factor required for each target point. A hierarchical iterative inverse kinematic algorithm is proposed in [11].
An elimination method for the inverse kinematics of a 6R robot is proposed in [12, 13], which reduces the inverse kinematics to a 16th-degree polynomial problem; matrix eigenvalue decomposition is then used to replace the solving of the high-degree equation. Inverse iterations of the Jacobian matrix and the elimination method are combined in [14] to study 6-DOF manipulators. Literature [15] optimized the elimination method and verified it with an example. A new inverse kinematics method that considers jerk limits is proposed in [16] to efficiently handle acceleration and jerk constraints. A generalized solution for a subproblem of inverse kinematics based on the product of exponentials formula is also proposed in [16]. Conformal geometric algebra is used to develop analytical inverse kinematic solutions for the KUKA Agilus and UR5 robots in [17]. In [18], to avoid the problem of singularity, the product of exponentials method based on screw theory is employed for kinematic modeling, and the inverse kinematics is solved by analytical, geometric, and algebraic methods combined with the Paden–Kahan subproblems. These various optimized numerical and iterative methods have not achieved the same performance as analytical solutions.
Soft computing methods [19–26] based on artificial neural networks and genetic algorithms have also been research hotspots in recent years. A comparative study between different soft computing-based methods (artificial neural network, adaptive neuro-fuzzy inference system, and genetic algorithms) applied to the inverse kinematics problem is presented in [19]. A genetic algorithm based on extreme learning machine and sequential mutation is presented in [20]. A hybrid Taguchi deoxyribonucleic acid (DNA) swarm intelligence for solving the inverse kinematic redundancy problem is presented in [21]. In [22], an online adaptive strategy based on Lyapunov stability theory is presented to solve the inverse kinematics of redundant manipulators. Neural networks and genetic algorithms are used together in [23] to solve the inverse kinematics of a six-joint manipulator while minimizing the error at the end effector. A neural-network committee machine (NNCM) is designed in [24] to solve the inverse kinematics of a 6R redundant manipulator and improve the precision of the solution. An approach for solving the inverse kinematics of manipulator robots based on soft computing algorithms is introduced in [25]. In [26], including the current configuration of joint angles in the ANN significantly increased the accuracy of the estimated joint angle output. Various soft computing methods can achieve micron-level accuracy, but solving speed remains a problem.
Several machine learning regression models are used in this paper. The decision tree (DT) [27] is a prediction model establishing a mapping between object attributes and object values; it is fast, and the model is easy to visualize. The K-nearest neighbor (KNN) [28] algorithm is based on the limit theorem; decision-making depends on a small number of nearest samples. The random forest (RF) [29] integrates a set of decision trees with controlled variance by using the idea of bagging. The gradient boosting decision tree (GBDT) [30] generates a weak learner in each of multiple iterations, each trained on the residual of the previous one. The bagging method constructs a series of predictive functions and combines them into a single predictor. The ExtraTreeRegressor (ETR) is similar to the RF and consists of many decision trees; each tree uses all training samples, and the split values are chosen completely at random.
2. Optimized Gaussian Damped Least-Squares Method
2.1. Gaussian Damped Least-Squares Method
The Jacobian matrix pseudoinverse method, which was proposed by Whitney [1], is most frequently used to solve inverse kinematics. Based on the known speed of the end effector $\dot{x}$, the robot joint speed is determined as
$$\dot{\theta} = J^{+}\dot{x},$$
where $J^{+}$ denotes the pseudoinverse of the Jacobian matrix $J$. The Jacobian matrix pseudoinverse method minimizes the joint speed norm and the tracking error. However, this method is unstable near singular points. The Jacobian matrix is decomposed by singular value decomposition as
$$J = U\Sigma V^{T},$$
where $\Sigma$ is a diagonal matrix comprising the singular values of $J$, arranged in descending order as $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_6$, and then the joint speed is
$$\dot{\theta} = \sum_{i=1}^{6}\frac{u_i^{T}\dot{x}}{\sigma_i}\,v_i,$$
where $u_i$ and $v_i$ are the columns of $U$ and $V$.
When the robot is close to a singular configuration, the minimum singular value approaches 0 and the joint speed becomes infinitely great, which is the reason why the pseudoinverse method is unstable. The damped least-squares (DLS) method incorporates the damping factor $\lambda$ on this basis to minimize
$$\lVert J\dot{\theta}-\dot{x}\rVert^{2}+\lambda^{2}\lVert\dot{\theta}\rVert^{2},$$
so that, as the robot approaches the singular configuration, solving accuracy and joint speed are balanced; the joint speed is then
$$\dot{\theta} = \sum_{i=1}^{6}\frac{\sigma_i}{\sigma_i^{2}+\lambda^{2}}\,(u_i^{T}\dot{x})\,v_i.$$
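As an illustration of the difference between the pseudoinverse and the damped update, the following minimal NumPy sketch (not the paper's implementation; the matrix sizes and the constant damping factor are chosen only for demonstration) computes the joint speed through the SVD:

```python
import numpy as np

def dls_joint_speed(J, dx, lam=0.1):
    """Damped least-squares joint speed: minimizes ||J dq - dx||^2 + lam^2 ||dq||^2.
    Equivalent to dq = J^T (J J^T + lam^2 I)^{-1} dx, computed here via the SVD
    so the damping of each singular direction is explicit."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # Damped reciprocal singular values: sigma / (sigma^2 + lam^2).
    # With lam = 0 this reduces to the plain pseudoinverse 1/sigma.
    factors = s / (s**2 + lam**2)
    return Vt.T @ (factors * (U.T @ dx))

# Tiny illustration on a nearly singular 2x2 "Jacobian":
J = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
dx = np.array([1.0, 1.0])
dq_pinv = dls_joint_speed(J, dx, lam=0.0)  # pseudoinverse: huge joint speed
dq_dls = dls_joint_speed(J, dx, lam=0.1)   # damped: bounded joint speed
```

With `lam = 0.0` the second joint speed component blows up to about $10^{6}$, while a small damping factor keeps it bounded at the cost of some tracking error, which is exactly the trade-off described above.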
Based on the DLS method, many variants have been developed according to different strategies for damping factor selection. The Gaussian damped least-squares (GDLS) method exploits a Gaussian distribution characteristic of the damping factor; the damping factor for each singular value is determined through the Gaussian function
$$\lambda_i = \lambda_{\max}\exp\!\left(-\frac{\sigma_i^{2}}{2\nu^{2}}\right),$$
where $\lambda_{\max}$ is the maximum value of the damping factor and $\nu$ is a scalar quantity indicating the region of singularities. In comparison with methods that take a constant or a piecewise function as the damping factor, the GDLS method has a better selection strategy. On the one hand, when the robot approaches the singular configuration, the joint speed transitions continuously from the undamped into the damped state, so the motion is steadier. On the other hand, each singular value corresponds to its own damping factor, so the damping acts only on the singular directions, avoiding unnecessary damping and reducing the error. $\lambda_{\max}$ and $\nu$ can be determined through the genetic algorithm, the details of which are given in literature [9].
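The per-singular-value Gaussian damping can be sketched as follows; the function names and the parameter values `lam_max` and `nu` are illustrative assumptions, and the exact parameterization used in [9] may differ:

```python
import numpy as np

def gaussian_damping(sigma, lam_max=0.1, nu=0.05):
    """Gaussian damping factor for one singular value: largest (lam_max) at
    sigma = 0 and decaying smoothly to ~0 away from the singular region,
    following the Gaussian form lam = lam_max * exp(-sigma^2 / (2 nu^2))."""
    return lam_max * np.exp(-sigma**2 / (2.0 * nu**2))

def gdls_joint_speed(J, dx, lam_max=0.1, nu=0.05):
    """GDLS step: each singular direction gets its own Gaussian damping
    factor, so damping acts only near-singular directions and vanishes
    elsewhere (unlike a single constant damping factor)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    lam = gaussian_damping(s, lam_max, nu)
    factors = s / (s**2 + lam**2)
    return Vt.T @ (factors * (U.T @ dx))
```

For a well-conditioned Jacobian all damping factors are essentially zero and the step coincides with the pseudoinverse step, which is the continuous undamped-to-damped transition described above.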
2.2. Variable Step-Size Coefficient
2.2.1. Concept and Distribution of Variable Step-Size Coefficient
In order to raise the solving speed, the variable step-size coefficient $k$ and the optimal variable step-size coefficient $k^{*}$ are introduced on the basis of the GDLS method.

Definition. In the iteration process, the joint increment $\Delta\theta$ is changed into $k\,\Delta\theta$ so that the iteration proceeds rapidly towards convergence; $k$ is called the variable step-size coefficient. As shown in Figure 1, the number of iterations changes with $k$. For a certain value of $k$, the number of iterations is minimized, and this value is defined as the optimal variable step-size coefficient $k^{*}$.

It can be seen from Figure 1 that when $k$ is close to $k^{*}$, the number of iterations changes continuously with $k$ and gradually decreases. Hence, the number of iterations approaches its minimum as long as $k$ is close to $k^{*}$.
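A minimal sketch of the variable step-size idea on a toy one-dimensional problem rather than a real robot: the increment produced by a damped update is rescaled by a coefficient k, and a suitable k > 1 reaches convergence in far fewer iterations. All names and constants here are illustrative assumptions:

```python
import numpy as np

def damped_ik(f, J, theta0, x_target, k=1.0, lam=1.0, tol=1e-8, max_iter=500):
    """Damped iteration theta <- theta + k * J^T (J J^T + lam^2 I)^{-1} err.
    k = 1 is the plain damped update; the variable step-size coefficient k
    rescales the increment so convergence is reached in fewer iterations.
    Returns (theta, number_of_iterations)."""
    theta = np.asarray(theta0, dtype=float)
    for i in range(max_iter):
        err = x_target - f(theta)
        if np.linalg.norm(err) < tol:
            return theta, i
        Jt = J(theta)
        n = Jt.shape[0]
        theta = theta + k * Jt.T @ np.linalg.solve(Jt @ Jt.T + lam**2 * np.eye(n), err)
    return theta, max_iter

# Toy 1-D "robot" f(theta) = theta: the damped update shrinks the error by
# (1 - k / (1 + lam^2)) per iteration, so k = 2 with lam = 1 converges at once.
f = lambda th: th
J = lambda th: np.eye(1)
_, n_plain = damped_ik(f, J, [0.0], np.array([1.0]), k=1.0)
_, n_fast = damped_ik(f, J, [0.0], np.array([1.0]), k=2.0)
```

On this toy problem the plain update (k = 1) needs dozens of iterations, while k = 2 converges in a single step, which mirrors the iteration-count-versus-k behaviour shown in Figure 1.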
In order to investigate the distribution of the optimal variable step-size coefficient $k^{*}$ and the range of the minimum number of iterations, poses are randomly selected that satisfy the following: the D-H parameters are arbitrary, the iterative convergence error is fixed, and the initial and target joint angles are controlled within a set range. The GDLS method is used to solve the inverse solution, and $k^{*}$ is obtained each time. Based on the statistics of the data, the probability distribution of $k^{*}$ and of the minimum number of iterations is shown in Figure 2.

As shown in Figure 2, $k^{*}$ is mainly distributed within [20, 120]. Statistical data show that, for 95.4% of the poses, the minimum number of iterations corresponding to $k^{*}$ can be controlled under 20.
2.2.2. Influence Factors of the Optimal Coefficient $k^{*}$
Through the analysis of the iterative process, the possible influence factors of $k^{*}$ are estimated to include the iterative convergence error, the difference between the current pose and the target pose, the initial iterative value, and the singularity.
The difference between the current pose and the target pose is $\Delta e = [\Delta x,\ \Delta y,\ \Delta z,\ \Delta\alpha,\ \Delta\beta,\ \Delta\gamma]^{T}$, where $\Delta x$, $\Delta y$, $\Delta z$ are the position errors, $\Delta\alpha$, $\Delta\beta$, $\Delta\gamma$ are the Euler angular direction errors, and the magnitude of the difference is measured by $\lVert\Delta e\rVert$.
Generally, the minimum singular value $\sigma_{\min}$ of the Jacobian matrix or the manipulability $w=\sqrt{\det(JJ^{T})}$ is adopted to characterize the singularity.
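For concreteness, the quantities named above can be computed as in the following sketch (the helper names are hypothetical; the pose difference is the stacked position and Euler-angle error, and the two singularity measures are the smallest singular value and the Yoshikawa manipulability):

```python
import numpy as np

def pose_difference(p_cur, e_cur, p_tgt, e_tgt):
    """Stacked 6-vector [dx, dy, dz, d_alpha, d_beta, d_gamma] of position
    and Euler-angle errors between the current and target poses."""
    return np.concatenate([np.asarray(p_tgt) - np.asarray(p_cur),
                           np.asarray(e_tgt) - np.asarray(e_cur)])

def min_singular_value(J):
    """Smallest singular value of the Jacobian: tends to 0 near a singularity."""
    return np.linalg.svd(J, compute_uv=False).min()

def manipulability(J):
    """Yoshikawa manipulability w = sqrt(det(J J^T)): also tends to 0 near
    a singularity, but aggregates all singular values at once."""
    return np.sqrt(np.linalg.det(J @ J.T))
```

Either measure can flag a near-singular configuration; the analysis below finds that neither has a direct influence on the optimal coefficient.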
With the other conditions unchanged, the changes of $k^{*}$ and of the minimum number of iterations with each factor are worked out, as shown in Figure 3.
It can be seen from Figure 3 that the iterative convergence error does not affect $k^{*}$. When the pose difference is small, its effect on $k^{*}$ is minor; within a certain range, the effect can be great; and when the difference exceeds a certain value, that is, when the current pose and the target pose differ greatly, the iteration does not converge. The initial iteration value has a great influence on $k^{*}$, and different joint angles exert different effects. The minimum singular value and the manipulability have no direct influence on $k^{*}$. On the whole, the main factors influencing $k^{*}$ are the pose difference and the initial iterative value.
2.3. Determination of $k^{*}$ Based on Machine Learning
The analysis of the influence factors shows that the goal is to determine $k^{*}$ through the pose difference and the initial iterative value. Although various methods were tried, it is very difficult to obtain a direct function expression. Therefore, with the pose difference and the initial iterative value as the input and $k^{*}$ as the output, a machine learning regression method is selected.
Poses are randomly selected, and, with the other conditions unchanged, the pose difference and the initial iterative value are varied to obtain the corresponding $k^{*}$, which serves as the original data for training and testing. Three frequently used statistical indicators, the root mean square error (RMSE), the coefficient of determination ($R^{2}$), and the mean absolute error (MAE), are used to evaluate the model performance.
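In their standard definitions (assumed here, since the paper's formulas are not reproduced), with $y_i$ the true values, $\hat{y}_i$ the predictions, $\bar{y}$ the mean of the true values, and $n$ the sample count:

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}},\qquad
R^{2}=1-\frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^{2}},\qquad
\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|.$$

Lower RMSE and MAE and an $R^{2}$ closer to 1 indicate a better model.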
Six effective regression models frequently adopted in machine learning are selected and compared; the performance analysis is shown in Table 1. A total of 1,000 groups of data are selected, and four of the methods are adopted to generate the regressive fitting curves in Figure 4.
As shown in Table 1, the training effects of the decision tree and ExtraTree on the training samples are good, while the random forest and bagging perform better on the test samples. It can be intuitively seen from Figure 4 that the random forest and bagging generate better fitting results. Through a comprehensive comparison, the random forest gives the best result. Meanwhile, the trained random forest model is used to predict $k^{*}$, and the average prediction time is negligible in comparison with the solution time.
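The training-and-evaluation pipeline described above can be sketched with scikit-learn as follows. The data here are a synthetic stand-in (the real inputs are the 6-D pose difference and the initial joint angles, and the target is the optimal coefficient), so only the workflow, not the numbers, is meaningful:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real training set: 12 features (pose difference
# plus initial joint angles); the target is an arbitrary smooth function of
# them, used here only to exercise the pipeline end to end.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 12))
y = 70.0 + 30.0 * np.tanh(X[:, :6].sum(axis=1)) + 5.0 * X[:, 6]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

# The three indicators used in the paper:
rmse = mean_squared_error(y_te, y_hat) ** 0.5
r2 = r2_score(y_te, y_hat)
mae = mean_absolute_error(y_te, y_hat)
```

Once trained, `model.predict` on a single pose is a sub-millisecond call, consistent with the observation that prediction time is negligible relative to solution time.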
2.4. Flowchart of Optimized Method
The flowchart of the optimized GDLS method is shown in Figure 5.

2.5. Case of Optimized GDLS Method
A randomly selected 6R robot is taken as an example for the test, and its D-H parameters are listed in Table 2. The optimized method is named "E-GDLS," and the inverse solution is solved according to the flow in Figure 5 with a fixed iterative convergence error.
Table 3 summarizes the IK results of 4 random data points of this method in the robot working space, and Figure 6 compares the number of iterations of the method before and after optimization.

The test results of the four arbitrary data points show that the E-GDLS method can effectively predict $k^{*}$, the solving accuracy meets the requirements, and the number of iterations is within 10 in every case, whereas the GDLS method needs about 200 iterations to converge to the error threshold in the same situation. The method can thus greatly reduce the number of iterations and improve the solving speed.
Three trajectories are randomly selected in this study. Trajectory 1 contains only a position increment; trajectory 2 contains increments of both the position and the Euler angular direction; trajectory 3 starts from an initial pose close to a singular configuration and is mainly adopted to analyse how well the optimized method controls the joint speed relative to the Jacobian matrix pseudoinverse method. First, the number of iterations before and after the optimization is compared; then the accuracy and stability of the optimized method are analysed through the changes of the joint angles in joint space and the trajectory tracking error.
It can be seen from Figures 7 and 8 that the number of iterations of the optimized method is obviously reduced compared with that of the GDLS method, so it effectively improves the solving speed. The joint angles change steadily in joint space with a minor trajectory tracking error, so the stability and accuracy of the motion are guaranteed. As shown in Figure 9, when the robot approaches the singular configuration, the optimized method can still effectively control the joint speed, just as other damping methods do, in contrast to the Jacobian matrix pseudoinverse method.
In order to further analyse the solving performances of different methods, poses are randomly selected, the iterative convergence error and the maximum number of iterations are fixed, and the difference between the initial pose and the target pose is controlled within a certain range. The test environment is a 3.2 GHz CPU, 8 GB RAM, and Python. The Jacobian matrix pseudoinverse, DLS, and GDLS methods and the optimized E-GDLS method in this paper are adopted to solve the inverse solution, respectively. The average number of iterations, the average solving time, and the proportion of runs that fail to converge within the maximum number of iterations are calculated for each method. The results are listed in Table 4.
From Table 4, it can be found that the optimized method in this paper effectively reduces the number of iterations and the average solving time and makes the iteration converge faster, improving the method's convergence performance.
3. Optimized Elimination Method
3.1. Algorithm Description
The process of the elimination method is as follows. The kinematic equation $T = A_1A_2A_3A_4A_5A_6$ is rearranged by invertible transformations of the link matrices so that the two sides contain separated groups of joint variables; the elements in columns 3 and 4 of the right and left matrices are equal, and equating them yields 6 equations unrelated to $\theta_6$.
Through the vector calculations $p\cdot p$, $p\cdot l$, $p\times l$, and $(p\cdot p)\,l-2(p\cdot l)\,p$, where $p$ and $l$ denote the vectors formed by columns 4 and 3, 8 further equations are derived, giving 14 equations in total.
Eliminating $\theta_1$ and $\theta_2$ from the 14 equations and simplifying, 6 equations irrelevant to $\theta_1$ and $\theta_2$ are obtained. The 6 equations are written in the form of a matrix equation
$$\Sigma_1\,[s_4s_5,\; s_4c_5,\; c_4s_5,\; c_4c_5,\; s_4,\; c_4,\; s_5,\; c_5,\; 1]^{T}=0,$$
where $s_i=\sin\theta_i$, $c_i=\cos\theta_i$, and $\Sigma_1$ is a 6 ∗ 9 matrix containing only $\theta_3$. We convert $\theta_4$ and $\theta_5$ into $x_4=\tan(\theta_4/2)$ and $x_5=\tan(\theta_5/2)$ and obtain the matrix equation
$$\Sigma_2\,[x_4^{2}x_5^{2},\; x_4^{2}x_5,\; x_4^{2},\; x_4x_5^{2},\; x_4x_5,\; x_4,\; x_5^{2},\; x_5,\; 1]^{T}=0,$$
where $\Sigma_2$ is a 6 ∗ 9 matrix containing only $\theta_3$. We multiply each row of this matrix equation by $x_4$ and merge the equations obtained with the original ones, to get the matrix equation
$$\Sigma_3\,\mathbf{v}=0,$$
where $\mathbf{v}$ is the 12-dimensional vector of monomials in $x_4$ and $x_5$ and $\Sigma_3$ is a 12 ∗ 12 matrix containing only $x_3=\tan(\theta_3/2)$.
This is an overconstrained linear system. For it to have a nontrivial solution, the coefficient matrix must be singular. The determinant of the coefficient matrix is a 16th-degree polynomial in $x_3$, and the roots of this polynomial give the values of $\theta_3$ corresponding to the solutions of the inverse kinematic problem. The problem is then simplified to finding matrix eigenvalues and eigenvectors: the 12 ∗ 12 coefficient matrix is written as $Ax_3^{2}+Bx_3+C$, and the matrix equation becomes
$$(Ax_3^{2}+Bx_3+C)\,\mathbf{v}=0,$$
where $\mathbf{v}$ is the vector of monomials in $x_4$ and $x_5$.
A, B, and C are known 12 ∗ 12 matrices. The quadratic eigenvalue problem is linearized into the standard eigenvalue problem
$$M\mathbf{u}=x_3\mathbf{u},\qquad M=\begin{bmatrix}0 & I\\ -A^{-1}C & -A^{-1}B\end{bmatrix},\qquad \mathbf{u}=\begin{bmatrix}\mathbf{v}\\ x_3\mathbf{v}\end{bmatrix}.$$
The eigenvalues of $M$ are the values of $x_3$, from which $\theta_3$ is obtained through $x_3=\tan(\theta_3/2)$, and the eigenvector contains $\mathbf{v}$, from which $\theta_4$ and $\theta_5$ are recovered.
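The linearization step can be sketched in NumPy as follows; the helper name is hypothetical, and a scalar quadratic with known roots stands in for the real 12 ∗ 12 matrices:

```python
import numpy as np

def quadratic_eigenvalues(A, B, C):
    """Solve (A x^2 + B x + C) v = 0 by companion linearization: with
    u = [v, x v], the standard eigenproblem M u = x u uses
    M = [[0, I], [-A^{-1} C, -A^{-1} B]] (A assumed invertible)."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)
    M = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Ainv @ C, -Ainv @ B]])
    vals, vecs = np.linalg.eig(M)
    return vals, vecs[:n, :]  # top half of each eigenvector is v

# Sanity check on a scalar case: x^2 - 5x + 6 = 0 has roots 2 and 3.
vals, _ = quadratic_eigenvalues(np.array([[1.0]]),
                                np.array([[-5.0]]),
                                np.array([[6.0]]))
```

For singular or ill-conditioned A, a generalized eigensolver on the pencil form avoids the explicit inverse, which connects to the singular-matrix handling discussed in Section 3.4.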
3.2. Optimization and Supplement of Elimination Method
The simplification process is given in this paper for the first time: $\theta_1$ and $\theta_2$ are eliminated and 6 equations are obtained from the 14 equations.
Intermediate expressions are derived through part of the 14 equations and then substituted into the rest to obtain 4 equations. The equations are simplified through the following two formulas:
$$\sin\theta_i=\frac{2x_i}{1+x_i^{2}},\qquad \cos\theta_i=\frac{1-x_i^{2}}{1+x_i^{2}},$$
where $x_i=\tan(\theta_i/2)$.
The final expressions of the matrices A, B, and C are derived for the first time, so that one-step solving of the matrix can be realized in this paper.
The complicated coefficient extraction of the elimination method entails a large workload, so in most other papers the known quantities are substituted into the coefficient matrix before the variables are extracted, and the whole solving process must be implemented in multiple steps. In this paper, through complete coefficient extraction and simplification, the intermediate solving steps are streamlined and the final expressions of the matrices A, B, and C are derived directly, so the whole solving process is simplified as much as possible. One-step solving of the matrix is realized, and only about 1 ms is needed. This is the greatest contribution this paper makes to the simplification of the elimination method. The expressions are not presented here because they are too long; for the specific expressions, refer to the uploaded MATLAB code at https://github.com/Wangxiaoqi1031/robot1.
Solving formulas for the other joint angles $\theta_1$, $\theta_2$, and $\theta_6$ in their simplest forms are given for the first time in this paper, improving the integrity of the whole method.
Through a large number of experiments, it is found that $x_4$ and $x_5$ are obtained more accurately by taking the ratio of the components with large absolute values in the eigenvectors of the matrix. Through analysis, the components of maximum magnitude can be identified, and the calculation formulas follow.
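The ratio-based recovery can be sketched as follows, under an assumed ordering of the 12 monomials (the paper's ordering may differ): components three positions apart differ by a factor $x_4$, and adjacent components within a triple differ by $x_5$:

```python
import numpy as np

# Assumed monomial ordering of the 12-vector (illustrative, not the paper's):
# [x4^3*x5^2, x4^3*x5, x4^3, x4^2*x5^2, x4^2*x5, x4^2,
#  x4*x5^2,   x4*x5,   x4,   x5^2,      x5,      1]
def recover_x4_x5(v):
    """Recover x4 = tan(theta4/2) and x5 = tan(theta5/2) from an eigenvector
    proportional to the monomial vector above. The ratio is taken where both
    entries are largest in magnitude, which is numerically most reliable."""
    v = np.asarray(v, dtype=float)
    # x4: components i and i+3 differ by one power of x4.
    i = max(range(9), key=lambda i: min(abs(v[i]), abs(v[i + 3])))
    x4 = v[i] / v[i + 3]
    # x5: adjacent components inside a triple (skip positions 2, 5, 8, 11).
    js = [j for j in range(11) if j % 3 != 2]
    j = max(js, key=lambda j: min(abs(v[j]), abs(v[j + 1])))
    x5 = v[j] / v[j + 1]
    return x4, x5

# Check on a synthetic eigenvector built from known x4 and x5:
x4, x5 = 0.7, -1.3
m = np.array([x4**a * x5**b for a in (3, 2, 1, 0) for b in (2, 1, 0)])
v_scaled = 5.0 * m               # eigenvectors are only defined up to scale
rx4, rx5 = recover_x4_x5(v_scaled)
```

Choosing the largest-magnitude pair avoids dividing by near-zero entries, which is the accuracy improvement the experiments above point to.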
3.3. Flowchart of Optimized Method
The flowchart of the optimized elimination method is shown in Figure 10.

3.4. Case of Optimized Elimination Method
A randomly selected 6R robot is taken as an example for the test, and D-H parameters are listed in Table 5.
A group of joint angles is set, and the matrix of the target pose is obtained from the forward kinematics.
The inverse solution is solved according to Figure 10, and the results are shown in Table 6.
We evaluate the performance of the methods with two indicators, accuracy and speed. Data points are randomly selected in the working space of the robot and tested in an environment with a 3.2 GHz CPU, 8 GB RAM, and MATLAB, using the elimination method before and after optimization to solve the IK, respectively. The worst, best, and average cases are calculated separately, and the statistical results are shown in Table 7.
The results show that there is little difference in accuracy before and after optimization, while the solution time decreases from 11–13 ms to 1.4–1.8 ms. Therefore, the optimization of the elimination method in this paper effectively improves the solution speed while maintaining high accuracy.
When the elimination method is adopted, the proportion of poses which cannot be accurately solved is 0.086%, occurring when the matrix A is singular or the robot is close to the singular configuration. This can be handled through the following schemes: if A is singular, matrix inversion is replaced by pseudoinverse solving; the singular region is evaded in trajectory planning; and poses which cannot be accurately solved are processed through the iteration method.
4. Experiment
An ordinary 6R robot experimental platform in the laboratory is used, as shown in Figure 11. The processor of the controller is an OMAPL-138 DSP + ARM dual-core processor, and the demonstrator is based on WinCE 6.0.
In order to compare ordinary analytical methods with the two optimized methods in this paper in terms of solving performance, 1000 tracks are randomly selected with a fixed iterative convergence error and maximum number of iterations; data calculated on the DSP are shown in Table 8.
It can be seen from Table 8 that the accuracy of the optimized elimination method is close to that of the analytical solution, and the solving time of the optimized Gaussian damping method is also close to that of the analytical solution. For a robot configuration which does not conform to the Pieper criterion, there is no analytical solution, and the problem can be solved through the two optimized general methods in this paper; the proper method can be selected, or the two methods can be combined, according to the accuracy and speed requirements.
5. Conclusion
This paper optimized the elimination method and the Gaussian damped least-squares method. After optimization, the elimination method achieves high average accuracy with a solution time of 1 to 1.5 ms, and the Gaussian damped least-squares method needs an average of only 4 to 5 iterations, with a solving speed close to that of the analytical solution. The precision and speed of these two optimized algorithms are not achieved by other optimized algorithms. The elimination method has been simplified as far as possible; its time is mainly spent computing the eigenvalues and eigenvectors of the matrix, so improving this step would further improve the speed. The Gaussian damped least-squares method itself has the advantage of smooth motion; by introducing the variable step-size coefficient, the number of iterations is greatly reduced and the solving speed is effectively improved, and applying machine learning to determine the coefficient achieves better results than general fitting methods. In practical application, if high speed is required, the iteration method can be applied; if high precision is required, the elimination method can be applied; or the two methods can be combined.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61573272) and Shaanxi Science and Technology Project (no. 2019ZDLGY01-01-02).