[Retracted] Sports Auxiliary Training Based on Computer Digital 3D Video Image Processing
Algorithm 1 Feature detection and recognition algorithm.
Input: Image data, network weights, etc.
Output: Feature position array F.
Step 1: Texture conversion. For the wind field data, the three direction components along the X, Y, and Z axes are mapped to the corresponding texture channels, and the vector field velocity values are converted into a texture according to the mapping function.
Among them, α ∈ (0, 255) is the segmentation parameter.
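A minimal GLSL sketch of such a conversion pass, under the assumption that each velocity component is normalized linearly into the texture value range and that α acts as a lower cut-off; the uniform names (rawField, vMin, vMax) and this reading of α are illustrative assumptions, not the paper's actual mapping function:
uniform sampler2D rawField;   // raw wind-field slice: X/Y/Z velocity in RGB (assumed)
uniform vec3  vMin;           // per-component minimum of the field (assumed)
uniform vec3  vMax;           // per-component maximum of the field (assumed)
uniform float alpha;          // segmentation parameter, alpha in (0, 255)

void main()
{
    vec3 v = texture2D(rawField, gl_TexCoord[0].st).xyz;
    // Map each velocity component linearly into [0, 1], i.e. 0..255 per channel.
    vec3 t = (v - vMin) / (vMax - vMin);
    // Components whose 0..255 equivalent falls below alpha are zeroed out
    // (assumed interpretation of the segmentation parameter).
    t *= step(alpha / 255.0, t);
    gl_FragColor = vec4(t, 1.0);
}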
Step 2: Fragment processing.
(1) Recover the velocity value of the texture fragment from the sample texture and use it as the input-layer data of the BP neural network; then compute the hidden-layer and output-layer values according to y_i = f(Σ_j w_ij·x_j), where x_j is the output value of the layer above the i-th layer and w_ij are the trained network weights.
(2) Compute the error between the characteristic texture output values and each specified class according to E_k = Σ_i (d_ki − y_i)²; among them, d_ki represents the i-th ideal output of the k-th standard class, and y_i represents the actual i-th output.
(3) Choose the smallest E_k. If E_k is less than the specified error threshold, the fragment is considered to belong to the k-th flow field feature and the fragment color is set to the specified color C_k; otherwise, the fragment color is set to the background color B. (A per-fragment sketch of (1)–(3) is given below.)
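A compact GLSL sketch of this per-fragment classification, written in the same style as the fragment program below; the layer sizes, the sigmoid activation, and the uniform names (sampleTex, hiddenWeight, outputWeight, idealOutput, classColor, bgColor, errThreshold) are illustrative assumptions, not values taken from the paper:
const int N_IN    = 3;   // velocity components fed to the input layer
const int N_HID   = 4;   // hidden-layer neurons (assumed size)
const int N_OUT   = 2;   // output neurons (assumed size)
const int N_CLASS = 2;   // number of standard flow-field feature classes (assumed)

uniform sampler2D sampleTex;                 // velocity texture from Step 1
uniform float hiddenWeight[N_HID * N_IN];    // trained BP weights, row-major (assumed layout)
uniform float outputWeight[N_OUT * N_HID];
uniform float idealOutput[N_CLASS * N_OUT];  // d_ki: ideal outputs of each standard class
uniform vec4  classColor[N_CLASS];           // specified colors C_k
uniform vec4  bgColor;                       // background color B
uniform float errThreshold;                  // specified error threshold

float sigmoid(float x) { return 1.0 / (1.0 + exp(-x)); }

void main()
{
    // (1) Input layer: the velocity value recovered from the sample texture.
    vec3 v = texture2D(sampleTex, gl_TexCoord[0].st).xyz;
    float x[N_IN];
    x[0] = v.x; x[1] = v.y; x[2] = v.z;

    // Hidden layer: y_i = f(sum_j w_ij * x_j), x_j = output of the layer above.
    float h[N_HID];
    for (int i = 0; i < N_HID; ++i) {
        float s = 0.0;
        for (int j = 0; j < N_IN; ++j) s += hiddenWeight[i * N_IN + j] * x[j];
        h[i] = sigmoid(s);
    }

    // Output layer, same weighted-sum form.
    float y[N_OUT];
    for (int i = 0; i < N_OUT; ++i) {
        float s = 0.0;
        for (int j = 0; j < N_HID; ++j) s += outputWeight[i * N_HID + j] * h[j];
        y[i] = sigmoid(s);
    }

    // (2)-(3) Squared error E_k against each standard class; keep the smallest
    // one below the threshold, otherwise fall back to the background color.
    vec4 color = bgColor;
    float bestErr = 1e20;
    for (int k = 0; k < N_CLASS; ++k) {
        float e = 0.0;
        for (int i = 0; i < N_OUT; ++i) {
            float d = idealOutput[k * N_OUT + i] - y[i];
            e += d * d;
        }
        if (e < errThreshold && e < bestErr) { bestErr = e; color = classColor[k]; }
    }
    gl_FragColor = color;
}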
Step 3: Save the result, i.e., write the positions of the recognized feature fragments into the feature position array F.
After the vector field has been converted, via the texture processing above, into a texture that the GPU can process efficiently, the core code of the corresponding feature recognition fragment program is as follows:
uniform vec2 tc_offset[243]; // Adjacent grid-point texture coordinates.
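// Sketch only: an assumed continuation of this fragment program, not the
// original listing. The sampler name, the treatment of tc_offset as offsets
// relative to the fragment's texture coordinate, and the simple neighborhood
// average are illustrative assumptions.
uniform sampler2D sampleTex;   // velocity texture produced in Step 1 (assumed name)

void main()
{
    // Gather the velocity values of the adjacent grid points addressed by
    // tc_offset and average them into the fragment's input sample.
    vec3 v = vec3(0.0);
    for (int i = 0; i < 243; ++i) {
        v += texture2D(sampleTex, gl_TexCoord[0].st + tc_offset[i]).xyz;
    }
    v /= 243.0;
    // v would then be fed through the BP network layers as in Step 2 (see the
    // classification sketch above); here it is simply written out.
    gl_FragColor = vec4(v, 1.0);
}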