Abstract

Scale and standardization are essential to the prosperity of the breeding industry. During large-scale, standardized breeding, the selective breeding of good livestock breeds hinges on the accurate measurement of body parameters of live animals. However, the complex shooting environment brings several urgent problems, such as missing local data in the point cloud and the difficulty of acquiring body data automatically. To solve these problems, this paper proposes a method for parameter measurement of live animals based on the mirror of the multiview point cloud. First, the acquisition and stitching principles are given for the multiview point cloud data on the body parameters of live animals. Next, the authors present a way to make up for the missing-data areas in the point cloud. Finally, this paper acquires the body mirror data of live animals and scientifically calculates the body parameters. The proposed measurement method was proved effective through experiments.

1. Introduction

Precision animal husbandry refers to the scientific breeding and management of live animals by arranging regular daily rations based on information technology. As an important aspect of intelligent agriculture, precision animal husbandry can improve the output benefit of animal husbandry products and ensure product quality and safety [1–4]. Large-scale, standardized breeding can effectively improve the output and profit of pigs, cattle, and sheep. During large-scale, standardized breeding, the selective breeding of good livestock breeds hinges on the accurate measurement of body parameters of live animals [5–12]. Manual measurements with tools like calipers and tape measures are greatly affected by subjective human factors. By contrast, the measurement of three-dimensional (3D) body parameters, which cover the geometry of live animals, is relatively accurate. The measured data help to assess the health state of livestock, evaluate their body shapes, and identify their behavioral features [13–16].

Focusing on parameter measurement based on 3D point cloud data, Jo et al. [17] relied on the point cloud data of a 3D human body model to construct an objective interpolation function, which can describe the morphological changes of the human body (e.g., gender, age, weight, height, and body proportion). Then, the independent elements were weighted reasonably according to the linkage between element changes. In this way, the independent elements were adjusted and updated. After that, the needed human body model was derived from the intermediate human body. Sato [18] proposed a hardware and software system capable of synchronous, precise acquisition of point cloud data on live animals. The system consists of an FM810-GI depth camera and its fixation structure, a point cloud data processing module, and a repeater. Rao et al. [19] improved the stereo calibration method of point cloud data on live animals based on the location relationship between the multiple depth cameras used to collect the data. Then, the three-view point cloud data on live animals underwent stitching and duplicate removal by the iterative closest point (ICP) algorithm and k-means clustering (KMC). Finally, a precise 3D point cloud was established for live animals.

To evaluate the health state of pandas, Turner et al. [20] introduced the skinned multi-animal linear (SMAL) model to the 3D model reconstruction of these first-class protected animals in China and obtained the base shape and base pose of the 3D panda model based on principal component analysis (PCA) and bone movements. Further, they derived the parameterized description of the shape and pose of the model. Zhang et al. [21] manually extracted animal contours from two-dimensional (2D) images, set up an objective function of the Euclidean distance between the SMAL model and contour segmentation maps, and estimated SMAL parameters by minimizing the objective function. Ahsan et al. [22] provided an effective and accurate way to measure the length, width, and depth of pavement cracks. Specifically, watershed segmentation was adopted to segment and mask the background of damaged pavement images, the coordinate system of pavement cracks was converted point by point, a 3D visual model was established in MATLAB for pavement cracks, and the computed results were compared with the measured data. To solve the precision product quality problems induced by manufacturing errors, Chen and Wang [23] proposed a 3D point cloud feature calculation method to compute the geometric and physical parameters of workpieces and combined area changes and centroid deviation into a dense layered part evaluation and adaptive stratification algorithm, which can reconstruct workpiece surfaces and adaptively stratify workpieces.

Some results have been achieved on 3D point clouds and body parameter extraction, as well as weight prediction [24–27]. However, there are often holes in the point cloud, owing to complex environmental factors, e.g., environmental interference (especially the fences of the breeding base) and low equipment precision. These holes severely impede the postprocessing of the point cloud. In addition, it is very difficult to automatically acquire the body data of live animals [28–31]. To solve these problems, this paper proposes a method for parameter measurement of live animals based on the mirror of the multiview point cloud. Section 2 introduces the acquisition and stitching principles of the multiview point cloud data on the body parameters of live animals. Section 3 presents a way to make up for the missing-data areas in the point cloud. Section 4 acquires the body mirror data of live animals and scientifically calculates the body parameters. The proposed measurement method was proved effective through experiments.

2. Data Acquisition and Stitching

During the 3D reconstruction of a specific object, the object must be extracted from the background to ensure recognition and analysis accuracy. Owing to the complex environment of the breeding base, the point cloud data extracted from live animals contain the background, noise, and outliers. The data need to be preprocessed to remove the background and noise, facilitating further analysis. By visualizing the point cloud data of live animals, it is possible to obtain the left and right point cloud data of the background, including the ground, cameras, and noise. Considering the complex living environment of the animals, the point cloud data were extracted from live animals in the following steps: (1) crop the point cloud data within the specified coordinate range, using a passthrough filter; (2) remove ground point cloud data through planar template matching; and (3) eliminate outliers carrying redundant information with a statistical filter.
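The three preprocessing steps can be sketched with plain NumPy as follows. This is a minimal illustration, not the implementation used in the experiments: the distance threshold, neighbor count, and the assumption that the ground plane is already known are all placeholders.

```python
import numpy as np

def passthrough(points, axis, lo, hi):
    """Step 1: keep only points whose coordinate on `axis` lies in [lo, hi]."""
    m = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[m]

def remove_ground(points, plane, dist=0.02):
    """Step 2: drop points within `dist` of a known plane (a, b, c, d),
    i.e., the plane a*x + b*y + c*z + d = 0 matched by the planar template."""
    n = np.asarray(plane[:3], dtype=float)
    d = np.abs(points @ n + plane[3]) / np.linalg.norm(n)
    return points[d > dist]

def remove_outliers(points, k=8, std_ratio=2.0):
    """Step 3: statistical filter; drop points whose mean k-NN distance
    exceeds the global mean by std_ratio standard deviations."""
    diff = points[:, None, :] - points[None, :, :]
    dmat = np.linalg.norm(diff, axis=2)
    dmat.sort(axis=1)
    mean_knn = dmat[:, 1:k + 1].mean(axis=1)   # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

In a production pipeline, a point cloud library with spatial indexing would replace the O(n^2) distance matrix, but the filtering logic is the same.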

Figure 1 shows the mirroring principle of the multiview point cloud. It can be inferred that the multiview point cloud data on live animals involve multiple coordinate systems related by rotation and translation. Point cloud registration is necessary to unify the coordinates of the multiple point clouds under different coordinate systems.

Let G, p, and U be the rotation matrix, translation matrix, and perspective transform vector between the two depth cameras, respectively, with U being a zero vector, and let A = 1 be the proportional factor of the multiview point cloud on live animals. Then, the mapping F of point cloud registration can be expressed as follows:

To directly stitch point clouds on live animals, it is necessary to determine the location relationship between depth cameras. The parameters can be obtained through the stereo calibration in the binocular visual system. Let OSJ be the coordinate of any point O in the world coordinate system; G1 and G2 be the rotation matrices of cameras 1 and 2 relative to the calibration object, respectively; and p1 and p2 be the translation matrices of cameras 1 and 2 relative to the calibration object, respectively. Under the world coordinate system, the coordinates of the two cameras can be described by the following:

The relationship between O1 and O2 can be established as follows:

Combining formulas (2) and (3),
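Since the display equations are typeset elsewhere, a closed form consistent with this derivation — assuming formula (2) maps the world point OSJ into each camera frame and formula (3) is O1 = G·O2 + p — is G = G1·G2⁻¹ and p = p1 − G1·G2⁻¹·p2. The sketch below checks this numerically with illustrative extrinsics:

```python
import numpy as np

def rot_z(t):
    """Rotation matrix about the z-axis (enough for a numerical check)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# camera extrinsics relative to the calibration object (illustrative values)
G1, p1 = rot_z(0.3), np.array([0.1, 0.0, 0.5])
G2, p2 = rot_z(-0.7), np.array([-0.2, 0.4, 0.5])

O_world = np.array([1.0, 2.0, 3.0])   # any world point OSJ
O1 = G1 @ O_world + p1                # point expressed in camera 1's frame
O2 = G2 @ O_world + p2                # point expressed in camera 2's frame

# eliminating O_world between the two equations gives O1 = G @ O2 + p with:
G = G1 @ np.linalg.inv(G2)
p = p1 - G @ p2
assert np.allclose(O1, G @ O2 + p)
```

The same composition holds for any rotation, not just rotations about z; rot_z is used only to keep the check short.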

According to the affine invariance of four point pairs in 4-point congruent sets (4PCS), the distance ratio can be fixed with three known collinear points U, V, and W:

Suppose U and W fall on the same straight line and V and Q fall on the same straight line. In addition, the two lines intersect at point H. Then, the distance ratios r1 and r2 can be calculated by the following:

During an affine transform, the distance ratios r1 and r2 determined by the four coplanar points of the source point cloud and those determined by the corresponding four points in the target point cloud are constant, i.e., completely the same. If there exists any point pair s1 and s2 in S whose lines intersect at points h1 and h2 that are the same within a certain error range, then s1 and s2 are the coplanar points corresponding to the given base in the world coordinate system. The intersections h1 and h2 can be calculated by the following:
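The invariance that 4PCS exploits can be verified directly: the ratio at which the intersection point divides each segment is unchanged by any affine map. The points and the transform below are arbitrary illustrative values.

```python
import numpy as np

def intersect(u, w, v, q):
    """Intersection of lines U-W and V-Q in 2D (assumed non-parallel).
    Returns the point H and the ratio r = |UH| / |UW|."""
    A = np.column_stack([w - u, -(q - v)])
    t, s = np.linalg.solve(A, v - u)
    return u + t * (w - u), t

u, w = np.array([0.0, 0.0]), np.array([4.0, 0.0])
v, q = np.array([1.0, -1.0]), np.array([1.0, 3.0])
h, r1 = intersect(u, w, v, q)

# apply an arbitrary affine transform x -> Ax + b to all four points
A = np.array([[2.0, 1.0], [0.5, 3.0]])
b = np.array([5.0, -2.0])
h2, r1_t = intersect(A @ u + b, A @ w + b, A @ v + b, A @ q + b)

assert np.isclose(r1, r1_t)          # the distance ratio is affine-invariant
assert np.allclose(A @ h + b, h2)    # the intersection maps consistently
```

This is why candidate coplanar bases in the target cloud can be filtered by comparing ratios alone, before any pose is estimated.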

If the point cloud data on live animals are stitched directly using the results of stereo calibration, the registration accuracy needs to be guaranteed through iterations of the precision matching algorithm ICP. Suppose the point set under the world coordinate system and the target point set are denoted as O = {oi | oi ∈ ℝ3, i = 1, 2, …, m} and S = {sj | sj ∈ ℝ3, j = 1, 2, …, n}, respectively. Under the premise of minimizing the error function error(G, p) between the two point sets in formula (8), the least squares method can be adopted to iteratively perform the optimal coordinate transform and calculate the rotation and translation matrices until the preset error threshold or maximum number of iterations is reached:
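The inner step of each ICP iteration — finding the (G, p) that minimizes the sum of squared residuals over matched pairs — has the classical closed-form SVD solution. The sketch below assumes correspondences are already known (in full ICP they are re-estimated each iteration by nearest-neighbor search); the pose values are illustrative.

```python
import numpy as np

def best_rigid_transform(O, S):
    """Least-squares (G, p) minimizing sum_j ||s_j - (G o_j + p)||^2
    over matched point sets O, S of shape (n, 3)."""
    co, cs = O.mean(axis=0), S.mean(axis=0)
    H = (O - co).T @ (S - cs)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    G = Vt.T @ D @ U.T
    p = cs - G @ co
    return G, p

rng = np.random.default_rng(1)
O = rng.standard_normal((20, 3))
t = 0.4                                # illustrative ground-truth pose
G_true = np.array([[np.cos(t), -np.sin(t), 0],
                   [np.sin(t),  np.cos(t), 0],
                   [0,          0,         1]])
p_true = np.array([0.2, -0.1, 0.7])
S = O @ G_true.T + p_true

G, p = best_rigid_transform(O, S)
assert np.allclose(G, G_true) and np.allclose(p, p_true)
```

With noiseless, perfectly matched data the pose is recovered in one step; ICP's iterations exist only because real correspondences start out wrong.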

3. Repairing Missing Areas

To make up for the large nonclosed missing areas in the point cloud of live animals, this paper proposes the cubic B-spline curve fitting method based on the projections of point cloud slices.

The slicing of the point cloud on live animals was carried out along the a-axis. The first step is to determine the minimum distance εmin between the point cloud center and the other points, as well as the maximum amax and minimum amin of the centers along the a-axis. Next, point cloud slices were sampled from amin in the positive direction of the a-axis, with an interval of εmin. The sampling number MS can be calculated by MS = [(amax − amin)/εmin], where the square brackets stand for the rounding operation. The sampling interval of the i-th point cloud slice Oi can be described as [amin + (i − 1)εmin, amin + i·εmin]. Then, the maximum bi−max and minimum bi−min of Oi along the b-axis were determined, and the point cloud was sliced into Mi parts with an interval of εmin:

The curve fitting effect is greatly affected by the number of new points introduced by expanding the interval of point cloud slices. Therefore, this paper selects the center Oil of the l-th interval [bi−min + (l − 1)εmin, bi−min + l·εmin] of Oi along the b-axis as the representative point of that interval. Suppose the interval contains Nl points, with Oilk being the k-th point. Then, the representative point is the mean Oil = (1/Nl) Σk=1..Nl Oilk.
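The two-level slicing above — bins of width ε along the a-axis, then bins of width ε along the b-axis within each slice, keeping one centroid per bin — can be sketched as follows. The interval width and axis roles are as described in the text; everything else is an illustrative simplification.

```python
import numpy as np

def slice_representatives(points, eps):
    """Slice along axis 0 (the a-axis) with width eps; within each slice,
    bin along axis 1 (the b-axis) and keep one centroid per bin."""
    a_min = points[:, 0].min()
    n_slices = int((points[:, 0].max() - a_min) / eps) + 1   # sampling number MS
    reps = []
    for i in range(n_slices):
        sl = points[(points[:, 0] >= a_min + i * eps) &
                    (points[:, 0] <  a_min + (i + 1) * eps)]
        if sl.size == 0:
            continue
        b_min = sl[:, 1].min()
        bins = ((sl[:, 1] - b_min) / eps).astype(int)
        for l in np.unique(bins):
            reps.append(sl[bins == l].mean(axis=0))   # centroid O_il of interval l
    return np.array(reps)
```

Replacing each bin by its centroid keeps the slice's shape while bounding the number of points handed to the curve fitter.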

The processed Oi was projected onto the plane boc. The projection points were then fitted. When restoring the fitted point cloud to the space, the points in the interval along the a-axis should be distributed uniformly:

Suppose the slice plane or space of the point cloud on live animals contains v + 1 vertices. Then, Oi has a v-dimensional parametric curve segment Ojv(p):

The v-dimensional B-spline curve can be derived from the v-dimensional B-spline curve segment Ojv(p) above. The basis function Riv(p) of the curve can be calculated by the following:

The v-dimensional B-spline curve segment can be defined by v + 1 adjacent vertices. Then, the cubic B-spline curve can be expressed as follows:

The corresponding basis functions can be expressed as follows:

The j-th segment of the cubic B-spline curve can be described by the following:
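Assuming a uniform knot vector (the usual choice for slice fitting), the j-th cubic segment over control points Pj..Pj+3 has the standard matrix form; a minimal sketch:

```python
import numpy as np

# uniform cubic B-spline basis matrix (the 1/6 factor folded in)
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], dtype=float) / 6.0

def bspline_segment(ctrl, t):
    """Point on the cubic segment defined by 4 control points, t in [0, 1]."""
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ ctrl

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
# the segment starts at (P0 + 4*P1 + P2)/6, a hallmark of cubic B-splines
assert np.allclose(bspline_segment(ctrl, 0.0),
                   (ctrl[0] + 4 * ctrl[1] + ctrl[2]) / 6.0)
# the four basis functions sum to 1 for every t (partition of unity)
for t in np.linspace(0, 1, 11):
    T = np.array([t**3, t**2, t, 1.0])
    assert np.isclose((T @ M).sum(), 1.0)
```

Because the curve approximates rather than interpolates the control polygon, noise in the slice projections is smoothed automatically — the property the repair method relies on.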

Figure 2 shows the projection and fitting of point cloud slices on the forelimbs of a cow. After the projection and fitting, the distance between adjacent points averaged 6.842 mm, the standard deviation was 1.514 mm, and the approximation error was 0.426 mm. The number of points increased by 67.2% to 270. The fitted range of points was close to the original range of points.

4. Mirror Data Acquisition and Parameter Calculation

4.1. Key Point Positioning and Mirror Data Acquisition

Figure 3 presents the side view of a live animal. To obtain the exact values of the body parameters, it is necessary to locate the key points of the body of the live animal. Cattle, pigs, and sheep share the same key points, including the point of maximum abdominal width, the shoulder point and its transition points, the point of ischial tuberosity, and the point of withers.

During the preprocessing, the ground plane equation was defined for each frame of the point cloud. The side view and top view correspond to planes aob and aoc, respectively. Specifically, the point of maximum abdominal width P1 is the point furthest from the line connecting the left and right end points of the fitted point range. The transition point P2 of the shoulder point P4 is the point at which the slope of the cloud segment P1P4 turns from positive to negative. Along the positive direction of axis a, the number MEK of centers in the cloud segment P1P4 was counted with P1 (a1, c1) as the starting point. Then, the angle ωi between axis a and the line connecting P1 with each center Oi (aoi, coi) (i = 1, 2, …, MEK) can be calculated by the following:

The nearby transition point P3, at which the slope also turns from positive to negative, was identified in a similar manner to point P2. Along the positive direction of axis a, the number MFK of centers in the cloud segment P1P4 was counted with P2 (a2, c2) as the starting point. Then, the angle ωj between axis a and the line connecting P1 (a1, c1) with each center Oj (aoj, coj) (j = 1, 2, …, MFK) satisfying coj ≥ c1 can be obtained by the following:

After obtaining P2 and P3, the shoulder point P4 of the live animal was determined as the point in the cloud segment P2P3 farthest from the line connecting P2 and P3. Then, the point of the ischial tuberosity P5 was obtained as the center of the K points nearest to the point of minimum a. After that, the point of the withers P6 was solved by computing the center coordinates of all the tallest points in the two point cloud slices extended to the left and right of the a-axis coordinate of the midpoint of P2 and P4. Finally, the upper point PU and lower point PD of the body depth were solved by computing the center coordinates of all the tallest and all the lowest points, respectively, in the two point cloud slices extended to the left and right of the a-axis coordinate of point P1.
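The transition-point search amounts to scanning the slice centers along the a-axis and reporting where the local slope (equivalently, the sign of the angle ω with axis a) flips from positive to negative. The sketch below uses a synthetic back-line profile in place of real slice centers.

```python
import numpy as np

def transition_points(centers):
    """Indices where the slope of the profile (c over a) turns from
    positive to negative, as for transition points P2 and P3."""
    a, c = centers[:, 0], centers[:, 1]
    slope = np.diff(c) / np.diff(a)
    return [i for i in range(1, len(slope)) if slope[i - 1] > 0 > slope[i]]

# synthetic profile with two humps, standing in for the animal's back line
a = np.linspace(0.0, 1.0, 101)
c = 0.10 * np.sin(2 * np.pi * a) + 0.05 * np.sin(6 * np.pi * a)
centers = np.column_stack([a, c])
idx = transition_points(centers)
assert len(idx) >= 1   # at least one positive-to-negative slope change
```

On real data the profile would first be smoothed (e.g., by the slice-centroid step of Section 3) so that sensor noise does not create spurious sign changes.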

To get an accurate plane of symmetry for the body of the live animal, the normal vector γop of the ground supporting the animal and the horizontal direction vector ξp of the animal were aligned with the positive directions of axes a and b, respectively, to normalize the pose. Then, the normal vector ϕp of the plane of symmetry is the cross product of γop and ξp:

The above analysis shows that the tail point of the live animal is the extreme point in the negative direction of axis a, whose coordinates are (a0, b0, c0). From (a0, b0, c0) and ϕp, the equation of the plane of symmetry of the live animal can be determined as c = c0. The mirror data on one side of the plane of symmetry can be obtained by setting up the homogeneous coordinates of the points Ot1 = {(a, b, c) | c > c0} on one side of the plane:

After obtaining the symmetric data Ot2 = {(a′, b′, c′)} of the live animal, let Ot = Ot1 ∪ Ot2; then Ot is the mirror of the point cloud on the complete animal in 3D space.
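Reflection across the plane c = c0 sends each point (a, b, c) to (a, b, 2c0 − c), which in homogeneous coordinates is a single 4×4 matrix. A minimal sketch, with c0 and the half-cloud chosen as illustrative values:

```python
import numpy as np

def mirror_across_plane(points, c0):
    """Reflect points across the symmetry plane c = c0 via a homogeneous
    transform, then merge with the originals (Ot = Ot1 union Ot2)."""
    M = np.array([[1, 0,  0, 0],
                  [0, 1,  0, 0],
                  [0, 0, -1, 2 * c0],   # c -> 2*c0 - c
                  [0, 0,  0, 1]], dtype=float)
    hom = np.hstack([points, np.ones((len(points), 1))])
    mirrored = (hom @ M.T)[:, :3]
    return np.vstack([points, mirrored])

half = np.array([[0.0, 0.0, 0.3], [0.5, 0.2, 0.4]])   # one-side data Ot1
full = mirror_across_plane(half, c0=0.1)
assert np.allclose(full[2], [0.0, 0.0, -0.1])   # 2*0.1 - 0.3
assert np.allclose(full[3], [0.5, 0.2, -0.2])   # 2*0.1 - 0.4
```

In practice points lying exactly on the plane would be merged rather than duplicated, but that bookkeeping does not change the transform.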

4.2. Calculation of Body Parameters

The Euclidean distance from P4 (a4, b4, c4) to P5 (a5, b5, c5) was defined as the diagonal body length:

The horizontal distance from P4 (a4, b4, c4) to the vertical line through P5 (a5, b5, c5) was defined as the horizontal body length:

The shoulder width was defined as twice the distance from P4 (a4, b4, c4) to the plane of symmetry c = c0:

The abdominal width was defined as twice the distance from P1 (a1, b1, c1) to the plane of symmetry:

The height was defined as the distance from P6 (a6, b6, c6) to the ground plane τa·a + τb·b + τc·c + υ = 0:

The body depth was defined as the vertical distance between the upper point PU (aPU, bPU, cPU) and the lower point PD (aPD, bPD, cPD):
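Under the definitions above, each parameter reduces to a short distance computation. In the sketch below the axis convention (b vertical, c the symmetry-plane normal) follows the text; the key-point coordinates are illustrative placeholders, not measured values.

```python
import numpy as np

def diagonal_length(P4, P5):
    """Euclidean distance between shoulder point and ischial tuberosity."""
    return np.linalg.norm(np.asarray(P4) - np.asarray(P5))

def horizontal_length(P4, P5):
    """Horizontal distance from P4 to the vertical line through P5:
    the height coordinate (axis b) is ignored."""
    d = np.asarray(P4) - np.asarray(P5)
    return np.hypot(d[0], d[2])

def width_from_symmetry(P, c0):
    """Shoulder / abdominal width: twice the distance to the plane c = c0."""
    return 2.0 * abs(P[2] - c0)

def height_above_ground(P6, plane):
    """Distance from P6 to the ground tau_a*a + tau_b*b + tau_c*c + v = 0."""
    n = np.asarray(plane[:3], dtype=float)
    return abs(np.dot(n, P6) + plane[3]) / np.linalg.norm(n)

P4, P5 = (1.2, 0.9, 0.35), (0.2, 0.8, 0.30)
assert np.isclose(diagonal_length(P4, P5), np.sqrt(1.0 + 0.01 + 0.0025))
assert np.isclose(horizontal_length(P4, P5), np.hypot(1.0, 0.05))
assert np.isclose(width_from_symmetry(P4, c0=0.1), 0.5)
assert np.isclose(height_above_ground((0.5, 0.9, 0.2), (0, 1, 0, 0)), 0.9)
```

The body depth is then simply the difference of the b-coordinates of PU and PD.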

5. Experiments and Result Analysis

During the acquisition of point cloud data from live animals, it is difficult to capture images and complete 3D calibration using ordinary calibration targets. Thus, this paper performs 3D calibration with large and small infrared calibration targets. Depending on the deployment of the depth cameras, the overhead camera was calibrated separately with the left infrared lens of the right camera and the right infrared lens of the left camera. The calibration errors are recorded in Figures 4(a) and 4(b). It can be inferred that the mean reprojection error between the overhead camera and the left infrared lens of the right camera was 1.31 pixels, and that between the overhead camera and the right infrared lens of the left camera was 0.87 pixels. Both results meet the precision requirements.

The a-axis coordinates of the chest circumference measuring points on a live pig were recorded with the interactive point cloud measuring software. The fitting parameters were set as follows: order of the curve, 4; number of iterations, 50; and number of control points, 100. Figure 5(a) shows the point cloud within 0.005 on either side of the recorded coordinates, obtained by the passthrough filter. Figure 5(b) shows the curve obtained by our cubic B-spline curve fitting method, marked in red. The chest circumference could be estimated from the length of the approximate polygon composed of the curve control points.

Our point cloud repairing method was applied to the missing areas in the 240 frames of point clouds on 50 pigs. These areas were missing due to occlusion by railings. The fitting errors of the traditional method and our method are displayed in Figure 6. The mean, maximum, and minimum fitting errors of the traditional cubic B-spline curve were 2.524 mm, 4.452 mm, and 2.346 mm, respectively; those of our method, i.e., cubic B-spline curve fitting based on the projection of point cloud slices, were 1.924 mm, 3.754 mm, and 1.859 mm, respectively. This further confirms that the curve fitted by our method is closer to the original point cloud.

To verify its effectiveness, the proposed algorithm was compared with two other models through experiments. The processing results of the different models are listed in Table 1. Our algorithm achieved relatively good results on segmenting live animals in point cloud data: the recall was as high as 82.7% and the accuracy 88.9%. The recall and accuracy of region growth + threshold judgement were 80.4% and 82.7%, respectively. The recall and accuracy of watershed segmentation were merely 55.1% and 80.4%, respectively. The comparison shows that our algorithm achieves good precision and high recall and accuracy. As for the two contrastive models, region growth + threshold judgement outperformed watershed segmentation in both recall and accuracy.

The body appearance of live animals is well displayed by the side view captured by the depth cameras and characterized by the projection of the point range onto plane aoc. In general, the local curves of the projected point range are relatively simple but not smooth enough. The local point range could be fitted well using a cubic polynomial in one variable with two extrema. The fitting effect is shown in Figure 7(a). The key points of the animal body are presented in Figure 7(b), including the point of maximum abdominal width P1, the shoulder point P4 and its transition points P2 and P3, and the point of the ischial tuberosity P5. The positions of these points in the 3D point cloud of the live animal could be derived from their projected positions.

The point cloud data obtained from one side only cover half of the body of the live animal. This calls for restoring the mirror of the point cloud. After determining the symmetric longitudinal section along the line connecting the center of the head and the center of the tail of the live animal, the shoulder width is equivalent to twice the distance from the point of maximum shoulder width to the plane of symmetry. Similarly, the abdominal width is equivalent to twice the distance from the point of maximum abdominal width to the plane of symmetry. The tallest points on the point cloud slices between the following points were merged into a point range: the point of maximum abdominal width P1, the shoulder point P4 and its transition points P2 and P3, and the point of the ischial tuberosity P5. Then, the outliers were removed from the point range, and the center coordinates were solved. The resulting new point range was fitted by a linear equation. The straight line was then translated by the mean distance from every point in the range to the center coordinates, producing the line of symmetry of the live animal. From this, it is possible to derive the mirror data of the point cloud of that animal. Figure 8 shows the fitted line of symmetry.

The body parameters were extracted and measured from the 240 frames of point clouds on 50 pigs. Table 2 presents the measured results, and Table 3 compares the point cloud measurements with the manual measurements.

As shown in Table 3, the MAE of the height measurement was the smallest at 0.0032. The MAEs of the other parameters were within 0.0270. Specifically, the diagonal length and horizontal length had relatively large MAEs (0.0262 and 0.0232), 35% greater than those of the other parameters. The MRE of the height was also the smallest at 0.9127%. The MRE of the horizontal length was 5.0327%, above the MREs of all the other five parameters. In terms of both MAE and MRE, the measurement errors of the horizontal and diagonal lengths were relatively large, while those of the height and depth were small. The main reason is that slight changes in the body position of the live animals during measurement affect the accuracy of key point identification. Besides, the subjectivity of manual measurement also influences the determination of key points. Even so, the errors were relatively small. The above results show that our measurement method for body parameters is accurate enough for application.

6. Conclusions

This paper develops a parameter measurement method for live animals based on the mirror of the multiview point cloud. After being acquired from the target animal, the point cloud data from multiple views were preprocessed and stitched, followed by the elimination of redundant background points. Next, the features of the point cloud data were analyzed, and a 3D point cloud data model was established for live animals. After that, the authors explained how to repair the missing parts of the point cloud data, acquired the mirror data on the animal body, and computed the body parameters. Experimental results confirm the validity of calibrating the overhead camera separately with the left infrared lens of the right camera and the right infrared lens of the left camera. In addition, the chest circumference measuring points were fitted into a curve, and the errors of different methods for repairing missing areas in the point cloud were compared. The relevant results demonstrate the effectiveness of our fitting algorithm. Further, the line of symmetry of a live animal was fitted, which proves the feasibility and effectiveness of our point cloud acquisition method. Finally, the measurement errors of the body parameters were presented, suggesting the high accuracy of our body parameter measuring method for live animals.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the Basic Research Operating Expenses for Provincial Institutions of Higher Learning, Heilongjiang Province, China (Grant No. 135309457).