Abstract

In this paper, we present an innovative approach for the discovery of involutory maximum distance separable (MDS) matrices over finite fields, derived from MDS self-dual codes, by employing a technique based on genetic algorithms. The significance of involutory MDS matrices lies in their unique properties, making them valuable in various applications, particularly in coding theory and cryptography. We propose a genetic algorithm-based method that efficiently searches for involutory MDS matrices derived from self-dual codes, which by construction attain the maximum possible minimum distance between codewords. By leveraging the genetic algorithm's ability to evolve solutions over generations, our approach automates the process of identifying involutory MDS matrices. Through comprehensive experiments, we demonstrate the effectiveness of our method and also unveil essential insights into automorphism groups of MDS self-dual codes. These findings hold promise for practical applications and extend the horizons of knowledge in both coding theory and cryptographic systems.

1. Introduction and Preliminaries

Error correction is a critical aspect of various fields, including telecommunications, data storage, and digital communication. In these domains, ensuring the integrity and accuracy of transmitted or stored information is of paramount importance. Errors can occur due to noise, interference, or other factors, potentially leading to data corruption or loss. To address this challenge, error correction techniques are employed to detect and correct errors, enhancing the reliability and performance of systems.

One widely used approach for error correction is based on encoding data using polynomials and matrices over finite fields. Finite fields provide a mathematical framework for representing and manipulating data elements in a structured manner. The encoding process involves mapping the original data into a set of symbols, which are then transformed into polynomials or matrices. These encoded representations incorporate redundancy, enabling the detection and correction of errors during the decoding phase.

Finding suitable matrices is essential for error correction codes. Specifically, maximum distance separable (MDS) matrices play a crucial role in achieving maximum error correction capability [13]. MDS codes can correct the maximum number of errors possible for a given code length, making them highly desirable for error-prone environments [1]. Additionally, MDS matrices facilitate efficient error correction algorithms and reduce complexity in error correction procedures, thus enhancing overall system performance.

Moreover, special matrices, such as involutory matrices, have gained attention for their unique properties. Involutory matrices are their own inverses, simplifying the decoding process and reducing computational overhead. The search for involutory MDS matrices is particularly significant, as it allows for efficient error correction with fewer computational resources, making them valuable in resource-constrained applications and enhancing the overall reliability and security of the system [4]. By leveraging special matrices, researchers can achieve a delicate balance between error correction capability, reliability, and performance, enabling the development of robust and efficient error correction techniques for various real-world applications.

Maximum distance separable (MDS) matrices not only find applications in coding theory but also play a crucial role in the design of block ciphers and hash functions [5, 6]. Their unique properties, including full rank and nonsingularity, make them essential for achieving error correction and data integrity. However, finding MDS matrices is a highly nontrivial task due to the stringent conditions they must satisfy. In recent years, various techniques have been explored to efficiently discover MDS matrices with desired properties, paving the way for enhanced reliability, security, and robustness of both communication and cryptographic systems [7–9]. The construction of MDS matrices, including involutory MDS matrices over finite fields, uses self-dual codes [10] or often involves utilizing specific matrices with desirable properties. Companion matrices, Hadamard matrices, Cauchy matrices, and Vandermonde matrices, along with the inverse of another Vandermonde matrix, are among the key matrices used for this purpose [5, 11–15]. Hadamard matrices possess orthogonal properties, making them valuable for constructing MDS matrices that aid in error correction and data integrity. The Cauchy matrices, on the other hand, are essential in constructing MDS matrices with a high degree of redundancy, contributing to enhanced fault tolerance. Additionally, the Vandermonde matrices and their inverses play a pivotal role in generating involutory MDS matrices, that is, MDS matrices that square to the identity matrix. By leveraging these particular matrices, researchers have been able to develop efficient and reliable methods for constructing MDS and involutory MDS matrices.

Let $\mathbb{F}_q$ be the field of $q$ elements and $\mathbb{F}_q[x]$ be the polynomial ring with coefficients in $\mathbb{F}_q$. We denote by $M_n(\mathbb{F}_q)$ the set of $n \times n$ square matrices with coefficients in $\mathbb{F}_q$.

A code $C$ of dimension $k$ and length $n$ over $\mathbb{F}_q$ is a nonempty subset of $\mathbb{F}_q^n$ with $q^k$ elements. Moreover, if $C$ is a subspace of $\mathbb{F}_q^n$, then $C$ is said to be a linear code; in this case, the Hamming distance between two vectors $x$ and $y$ in $\mathbb{F}_q^n$ is defined as follows:
$$d_H(x, y) = |\{\, i : x_i \neq y_i \,\}|.$$

The minimum distance of a code $C$ is defined as the smallest Hamming distance between any two distinct codewords of $C$, denoted as
$$d(C) = \min\{\, d_H(x, y) : x, y \in C,\ x \neq y \,\}.$$

Let $C$ be a linear code over $\mathbb{F}_q$ of length $n$, dimension $k$, and minimum distance $d$. Then, $C$ is MDS if it meets the Singleton bound:
$$d = n - k + 1.$$
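For instance, the $[6, 3]$ MDS codes used in Examples 1 and 3 of Section 4 attain the minimum distance
$$d = n - k + 1 = 6 - 3 + 1 = 4.$$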

The dual code of $C$ is defined by
$$C^{\perp} = \{\, x \in \mathbb{F}_q^n : \langle x, y \rangle = 0 \ \text{for all}\ y \in C \,\}, \quad \text{where } \langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.$$

Definition 1. $C$ is a self-dual code if $C = C^{\perp}$.

Definition 2. A matrix $M \in M_n(\mathbb{F}_q)$ is MDS if and only if all its minors are nonzero.

Definition 3. Equivalently, using the correspondence between MDS matrices and MDS codes, a matrix $M \in M_n(\mathbb{F}_q)$ is MDS if and only if the code whose generator matrix is $[I_n \mid M]$ is an MDS code.

Definition 4. A matrix $M$ is involutory MDS if $M$ is MDS and $M^2 = I$.

Definition 5. A matrix $M$ is orthogonal if $M M^{T} = I$.

Proposition 6. Let $C$ be an MDS self-dual code over $\mathbb{F}_q$ of dimension $k$ and length $2k$ with generator matrix $G = [I_k \mid M]$; then, $M$ is an orthogonal MDS matrix.

Proof. Since $G = [I_k \mid M]$ is a generator matrix of the MDS self-dual code $C$, we have
$$G G^{T} = I_k + M M^{T} = 0,$$
so $M M^{T} = -I_k = I_k$ (the codes considered in this work are over fields of characteristic 2). So, $M$ is an orthogonal matrix. Moreover, since $C$ is MDS, $M$ is MDS; hence $M$ is orthogonal MDS.

For our purpose, we only consider square MDS matrices. These matrices arise as the redundant part $M$ of generator matrices $[I_k \mid M]$ of MDS self-dual codes of dimension $k$ and length $n$ over $\mathbb{F}_q$, where $n = 2k$.

Let $\sigma$ be a permutation of $\{1, \ldots, n\}$; $P_\sigma$ is the permutation matrix related to $\sigma$, defined as follows:
$$(P_\sigma)_{i,j} = \begin{cases} 1 & \text{if } j = \sigma(i), \\ 0 & \text{otherwise.} \end{cases}$$

Also, a monomial matrix is a matrix of the form $D P_\sigma$, where $P_\sigma$ is a permutation matrix and $D$ is a nonsingular diagonal matrix. As shown in [5], multiplying by a monomial matrix preserves the MDS property. This means that if $M$ is an MDS matrix and $M_1$, $M_2$ are monomial matrices, then $M_1 M M_2$ is an MDS matrix.
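As a small illustration of the permutation case used throughout this paper, the following Python sketch builds a permutation matrix and applies it on the left (rows) and on the right (columns) of a matrix. The integer entries are placeholders for elements of $\mathbb{F}_q$: a permutation matrix only rearranges entries, hence only rearranges the minors (up to sign), so the MDS property is unaffected.

```python
import numpy as np

def permutation_matrix(sigma):
    """Return the n x n matrix P_sigma with (P_sigma)[i, sigma[i]] = 1.

    sigma is given as a 0-indexed list: sigma[i] is the image of position i.
    """
    n = len(sigma)
    P = np.zeros((n, n), dtype=int)
    for i, image in enumerate(sigma):
        P[i, image] = 1
    return P

# Placeholder integer entries stand in for elements of F_q.
M = np.arange(1, 10).reshape(3, 3)
P_sigma = permutation_matrix([1, 0, 2])   # transposition of the first two positions
P_tau = permutation_matrix([2, 0, 1])     # a 3-cycle

print(P_sigma @ M)   # left action: the rows of M are permuted
print(M @ P_tau)     # right action: the columns of M are permuted
```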

2. Genetic Algorithm

The genetic algorithm is a computational search heuristic, drawing inspiration from Charles Darwin’s theory of natural evolution [16]. This algorithm simulates the process of natural selection, wherein individuals exhibiting higher fitness levels are chosen for reproduction, thereby generating offspring for the subsequent generation. By mimicking the principles of natural selection, the genetic algorithm seeks to efficiently optimize solutions to complex problems through iterative improvement and selection mechanisms [17–19].

The genetic algorithm addresses the issue of permutation in combinatorial optimization problems by efficiently exploring the search space. It achieves this by employing selection, crossover, and mutation operators, which contribute to the generation of better chromosomes at minimal cost [11]. Empirical studies have demonstrated the efficacy of evolutionary algorithms, including GA, in tackling various combinatorial optimization challenges [17].

GA offers several advantages over traditional algorithms:

It relies solely on the objective function’s evaluation, irrespective of its characteristics (e.g., continuity and differentiability), providing greater flexibility and applicability across diverse problem domains [20, 21].

The generation process in GA operates in a parallel manner, allowing for simultaneous exploration of multiple points, in contrast to standard algorithms that typically involve single iterations.

Probabilistic transition rules, involving selection, crossover, and mutation probabilities, are employed in GA, offering stochastic and dynamic decision-making capabilities rather than deterministic approaches.

Overall, the utilization of GA and similar nature-inspired algorithms presents a promising avenue for efficiently addressing complex optimization problems with diverse applications. However, GA may suffer from slow convergence and may not always find the global optimum because of its stochastic nature.

The algorithm begins with an initial population of potential solutions, where fitter individuals are selected based on a fitness function. These selected individuals then undergo crossover and mutation operations, which mimic the inheritance and variation mechanisms in natural evolution. The process iterates, generating new generations with increasingly fit individuals until a satisfactory solution is obtained. The five key phases of the GA are as follows: (1) setting up the initial population, (2) defining the fitness function, (3) selecting the fitter individuals, (4) applying crossover to create offspring, and (5) introducing mutation for genetic diversity.
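A generic sketch of this loop in Python is given below. The parameter names, default rates, and the single-parent crossover signature (chosen to match the operator described later in Section 3.2) are illustrative assumptions rather than the paper's exact settings; the fitness is minimized, and a value of 0 is treated as an exact solution.

```python
import random

def genetic_algorithm(init_population, fitness, select, crossover, mutate,
                      generations=100, crossover_rate=0.9, mutation_rate=0.1):
    """Generic GA loop covering the five phases listed above."""
    population = list(init_population)                 # phase (1): initial population
    best = min(population, key=fitness)                # phase (2): fitness evaluation
    for _ in range(generations):
        parents = select(population, fitness)          # phase (3): selection (e.g., elitism)
        offspring = []
        for parent in parents:
            child = crossover(parent) if random.random() < crossover_rate else parent  # phase (4)
            if random.random() < mutation_rate:
                child = mutate(child)                  # phase (5): mutation
            offspring.append(child)
        population = list(parents) + offspring
        candidate = min(population, key=fitness)
        if fitness(candidate) < fitness(best):
            best = candidate
        if fitness(best) == 0:                         # exact (involutory) solution reached
            break
    return best
```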

By following these phases, the genetic algorithm effectively explores the search space to find optimal or near-optimal solutions to the given problem. In our specific case, only exact solutions are of interest: the objective is to identify precisely the involutory MDS matrices, so near-optimal candidates are not acceptable as final outputs.

3. Proposed Method

In this section, we present a comprehensive explanation of the method employed in this paper, which is based on the genetic algorithm. Our aim is to provide a detailed account of the GA’s underlying mechanisms and operations, thereby offering a clear understanding of how it functions to attain optimal solutions. Specifically, we delve into the intricacies of its key operators, namely, the selection, crossover, and mutation operations. Through this detailed exposition, we seek to demonstrate the efficiency and efficacy of our GA-based approach in addressing the specific optimization problem under investigation, while emphasizing our focus on achieving exact optimal solutions, namely, the involutory MDS matrices.

3.1. The Search Space and the Fitness Function

The problem-solving process commences with the establishment of a population, consisting of a collection of individuals. Each individual represents a potential solution to the problem at hand and is defined by a unique set of parameters, referred to as genes (see Figure 1). To form a complete solution, these genes are combined into a string structure known as a chromosome. In the genetic algorithm framework, the genes of an individual are typically represented as a list, often using binary values. However, in our case, they are represented as a list of integers ranging from 1 to the dimension of the code (the number of rows in a matrix). This encoding of the genes within a chromosome allows for efficient handling and manipulation of the solutions during the evolutionary search, enabling the algorithm to explore and refine a diverse set of potential solutions over successive generations.

In our study, the search space is tied to the size $n$, representing the number of matrix columns (or rows). The search space consists of all possible pairs of permutations of the rows and columns of the matrix $M$, amounting to $(n!)^2$ pairs of permutations. The ultimate solutions are derived from the action of these permutations on the arrangement of $M$'s rows or columns. To navigate this extensive search space effectively, we employ a fitness function that serves to evaluate candidate pairs of permutations against the optimization criterion. During the selection process, the fittest candidates, that is, those with the lowest values of the fitness function (see Equation (6)), are favored to advance to the next generation of the genetic algorithm. In our approach, pairs of permutations (chromosomes) are represented as two lists of integers from 1 to $n$, one list per permutation. These permutations correspond to permutation matrices, which are applied to the rows or columns of the matrix $M$; the initial pairs are generated randomly, which ensures exploration of the search space in search of potentially optimal solutions (pairs of permutations).

Let $M$ be an MDS matrix and let $(\sigma_1, \sigma_2)$ be a pair of permutations with associated permutation matrices $P_{\sigma_1}$ and $P_{\sigma_2}$. The fitness of the pair is defined as
$$f(\sigma_1, \sigma_2) = d\big((P_{\sigma_1} M P_{\sigma_2})^2,\, I_n\big), \qquad (6)$$
where $d$ measures the dissimilarity between two matrices, so that $f(\sigma_1, \sigma_2) = 0$ exactly when $P_{\sigma_1} M P_{\sigma_2}$ is involutory.

We are adopting the Hamming distance, that is, the number of entries at which two matrices differ, as the measure of dissimilarity between any two elements.
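The sketch below illustrates one way to implement such a fitness evaluation in Python. The reduction polynomial $x^4 + x + 1$ for $\mathbb{F}_{2^4}$ is chosen purely for illustration (the fields used in the examples of Section 4 are not reproduced here), and the indexing convention for applying the two permutations is a detail, since the search ranges over all pairs anyway.

```python
# GF(2^4) arithmetic; the reduction polynomial x^4 + x + 1 is illustrative only.
MOD_POLY, DEGREE = 0b10011, 4

def gf_mul(a, b):
    """Multiply two field elements: carry-less product followed by reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << DEGREE):
            a ^= MOD_POLY
    return result

def gf_matmul(A, B):
    """Square-matrix product over GF(2^m); addition of field elements is XOR."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0
            for k in range(n):
                acc ^= gf_mul(A[i][k], B[k][j])
            C[i][j] = acc
    return C

def permute_rows_cols(M, sigma, tau):
    """Reorder the rows of M according to sigma and its columns according to tau
    (0-indexed lists); this realizes the left/right permutation actions."""
    n = len(M)
    return [[M[sigma[i]][tau[j]] for j in range(n)] for i in range(n)]

def fitness(M, sigma, tau):
    """Number of entries at which the square of the permuted matrix differs
    from the identity; 0 means the permuted matrix is involutory."""
    A = permute_rows_cols(M, sigma, tau)
    A2 = gf_matmul(A, A)
    n = len(M)
    return sum(1 for i in range(n) for j in range(n)
               if A2[i][j] != (1 if i == j else 0))
```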

3.2. The Selection, the Crossover, and the Mutation

The proposed genetic algorithm-based method is depicted in the diagram (Figure 2), taking several inputs such as the number of generations, initial population size, crossover rate, mutation rate, and the fitness function. The selection process employs elitism, where individuals are chosen based on their fitness values as determined by Equation (6). Crossover and mutation operations are illustrated in Figure 3, detailing their implementation steps within the algorithm.

Figure 3 presents a comprehensive schema illustrating the crossover and mutation operators employed in the genetic algorithm. For the crossover operation, a single parent is selected, and three positions are then randomly chosen within the permutation acting on the left of the matrix; the same process is applied to the permutation acting on the right of the matrix. This procedure facilitates the exploration of pairs of permutations that minimize the fitness function, thereby ensuring a thorough examination of potential solutions. Following the crossover, the mutation operator is applied, which swaps two genes within each permutation. This step introduces further diversity and exploration in the search process, enabling the algorithm to converge towards more optimal solutions. The inherent property of MDS matrices, whereby permuting rows or columns does not compromise their MDS characteristics, extends to our algorithm. Consequently, operations such as crossover and mutation (which only rearrange the genes within each permutation) also uphold the MDS property. The combination of these operators contributes to the efficacy and robustness of the genetic algorithm in efficiently navigating the search space and identifying involutory MDS matrices with improved characteristics.
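A possible Python rendering of these two operators is sketched below. The two-gene swap mutation follows the description above; the way the three randomly chosen positions are recombined during crossover (here, a 3-cycle of the selected genes) is an assumption of this sketch, since Figure 3 is not reproduced. Note that both operators map permutations to permutations, which is why the MDS property of the permuted matrix is never endangered.

```python
import random

def swap_two(perm):
    """Swap two randomly chosen genes of a permutation (result is still a permutation)."""
    perm = list(perm)
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    return perm

def cycle_three(perm):
    """Cyclically rearrange the genes at three randomly chosen positions (assumed rule)."""
    perm = list(perm)
    i, j, k = random.sample(range(len(perm)), 3)
    perm[i], perm[j], perm[k] = perm[j], perm[k], perm[i]
    return perm

def crossover(parent):
    """Single-parent crossover: act on three positions of the left permutation
    and, independently, on three positions of the right permutation."""
    left, right = parent
    return (cycle_three(left), cycle_three(right))

def mutate(parent):
    """Mutation: swap two genes within each of the two permutations."""
    left, right = parent
    return (swap_two(left), swap_two(right))
```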

4. Results and Discussion

The pursuit of self-dual MDS codes presents a challenging task, given the inherent complexities associated with their construction. Furthermore, the creation of MDS matrices over finite fields of characteristic 2 is no straightforward endeavor. Therefore, our approach primarily focuses on smaller code sets, where we endeavor to extract specific properties. These properties serve as foundational elements for the utilization of our algorithm in the identification of involutory MDS matrices. The application of our method holds significant value, as it enables the derivation of crucial matrices. These matrices, in turn, contribute to the establishment of an essential automorphism group. This development opens doors to important applications that extend beyond the realm of error correction coding, encompassing broader domains where automorphisms play a pivotal role. To execute the method, the default parameters mentioned in Table 1 are used.

Our initial population is composed of pairs of permutations whose length equals the dimension of the MDS matrix (the number of its rows or columns). These permutations act upon the rows (left action) and columns (right action) of the MDS matrix iteratively, with the objective of identifying a pair of permutations satisfying the condition that transforms the MDS matrix into an involutory MDS matrix.

The size of our initial population is contingent upon the size of the search space, which is equal to $(n!)^2$. In our particular case, given the relatively small size of the matrix, we have opted for an initial population size of 12 individuals. Also, a high crossover rate promotes diversity and accelerates convergence in our algorithm, while a low mutation rate maintains diversity and prevents trapping in local solutions. As a selection method, we employ elitism by selecting the top 6 chromosomes. This balance optimizes solution exploration. The solution involves finding pairs of permutations that render the MDS matrix involutory. By employing these predefined parameter values, the genetic algorithm-based method can efficiently explore the search space, find potential solutions, and converge towards an optimal solution to the given problem.
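For concreteness, such a configuration could be collected as follows; only the population size of 12 and the elitist selection of the top 6 chromosomes are stated explicitly above, while the remaining values are placeholders standing in for Table 1.

```python
# Illustrative parameter set; only the population size of 12 and the elitist
# selection of the top 6 chromosomes are stated in the text, the remaining
# values are placeholders standing in for Table 1.
GA_PARAMS = {
    "population_size": 12,   # small search space for the examples considered
    "elite_size": 6,         # elitism: keep the 6 fittest chromosomes
    "crossover_rate": 0.9,   # "high" crossover rate (placeholder value)
    "mutation_rate": 0.1,    # "low" mutation rate (placeholder value)
    "generations": 100,      # placeholder stopping bound
}
```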

In the context of MDS codes, the generator matrix may not always be in systematic form initially. However, through the application of the Gauss-Jordan elimination process, it can be transformed into systematic form. In our specific case, we focus on working with generator matrices that are already in systematic form. This form facilitates the representation of the code with easily identifiable systematic components.

Example 1. Let $\alpha$ be a root of the primitive polynomial defining the field, so that $\alpha$ is a primitive element, and let $G = [I_3 \mid M]$ be the generator matrix of an MDS code of dimension 3 and length 6 over this field [8]. Our MDS matrix $M$ is the redundant part of $G$. We can easily check that $M$ is an orthogonal MDS matrix; also, $M$ is not an involutory MDS matrix.
After using the algorithm with the pair of permutations ((),(1,2)), i.e., leaving the rows of $M$ fixed and permuting its columns by the transposition $(1\,2)$, we obtain an involutory MDS matrix.

Example 2. Let $\alpha$ be a root of the primitive polynomial defining the field, so that $\alpha$ is a primitive element, and let $G = [I_4 \mid M]$ be the generator matrix of an MDS code of dimension 4 and length 8 over this field. For this example, our MDS matrix $M$ is the redundant part of $G$. After using the same algorithm with the pair of permutations ((24), (34)), i.e., applying the transposition $(2\,4)$ to the rows of $M$ and $(3\,4)$ to its columns, we obtain an involutory MDS matrix.

Example 3. Let $\alpha$ be a root of the primitive polynomial defining the field, so that $\alpha$ is a primitive element, and let $G = [I_3 \mid M]$ be the generator matrix of an MDS code of dimension 3 and length 6 over this field [14]. For this example, our MDS matrix $M$ is the redundant part of $G$. After using the same algorithm with the pair of permutations ((23), (23)), i.e., applying the transposition $(2\,3)$ to both the rows and the columns of $M$, we obtain an involutory MDS matrix.

It is important to note that the results obtained from the search for MDS involutory matrices using generator matrices of self-dual MDS codes and genetic algorithm-based methods are not unique. The nature of the genetic algorithm introduces an element of randomness in the selection, crossover, and mutation processes, leading to different solutions in each run. As a result, multiple valid involutory MDS matrices may be discovered, all satisfying the desired criteria. The nonuniqueness of solutions highlights the diversity and flexibility of the genetic algorithm in exploring the search space and presenting a variety of feasible solutions for the given problem, which may be advantageous in practical applications.

5. Automorphism Group of Some MDS Codes

By employing the Gauss-Jordan elimination technique, we transform the generator matrix into the systematic form $G = [I_k \mid M]$, with $M$ an MDS matrix. Subsequently, our genetic algorithm facilitates the derivation of an involutory MDS matrix from this existing MDS matrix. Let $C$ be an MDS code with generator matrix $G$ in systematic form, let $S_n$ be the symmetric group on $n$ letters, and let $GL_n(\mathbb{F}_q)$ be the general linear group of size $n$ over $\mathbb{F}_q$; we consider the mapping $\varphi \colon S_n \rightarrow GL_n(\mathbb{F}_q)$, which is also a group homomorphism, defined by $\varphi(\sigma) = P_\sigma$, where $P_\sigma$ is the permutation matrix associated with $\sigma$.

Theorem 7. The automorphism group of $C$ is not trivial.

Proof. Let $\sigma$ be a suitable nonidentity permutation with corresponding permutation matrix $P_\sigma$; one checks that $\sigma$ preserves the code $C$. Hence, the automorphism group always contains at least one nontrivial automorphism, distinct from the identity automorphism.

Corollary 8. Let .

Proof. This follows directly from the preceding proof.

Proposition 9. Let such that , where and are two permutation matrices of permutations in :

Proof.

Proposition 10. Let with M as an involutory matrix and a group of permutation matrices; then, .

Proof.

Theorem 11. Let be an involutory matrix; then, (1) is an automorphism group of (2) is an automorphism set of

Proof. Let and be two permutation matrices.

Corollary 12. Let be an involutory matrix; then, is an automorphism group of .

Proof. Since , from Theorem 11, and are automorphisms of the code .
The identification of the automorphism group is straightforward when the permutation on the left side is the identity permutation. This is deduced through the application of the equivalence property, as exemplified in Example 1.

If this group coincides with the entire automorphism group of $C$, we say that it is the full automorphism group of the code $C$. Also, the number of distinct codes that are permutation equivalent to $C$ is $n!/|\mathrm{Aut}(C)|$.
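For the small lengths considered here, membership in the permutation automorphism group can be tested directly. The sketch below assumes, as in this paper, that the code is self-dual, so that a permutation $\sigma$ preserves $C$ exactly when $(G P_\sigma) G^{T} = 0$; it reuses the illustrative $\mathbb{F}_{2^4}$ arithmetic from the sketch in Section 3.1.

```python
from itertools import permutations

# Same illustrative GF(2^4) arithmetic as in the Section 3.1 sketch.
MOD_POLY, DEGREE = 0b10011, 4

def gf_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << DEGREE):
            a ^= MOD_POLY
    return result

def is_permutation_automorphism(G, sigma):
    """For a self-dual code with generator matrix G (k x n over GF(2^m)),
    sigma preserves the code iff every row of G with its coordinates permuted
    by sigma is orthogonal to every row of G, i.e. (G P_sigma) G^T = 0; since
    permuting coordinates preserves the dimension, inclusion in C_perp = C
    already forces equality."""
    n = len(G[0])
    permuted = [[row[sigma[c]] for c in range(n)] for row in G]
    for u in permuted:
        for v in G:
            acc = 0
            for c in range(n):
                acc ^= gf_mul(u[c], v[c])
            if acc != 0:
                return False
    return True

def permutation_automorphism_group(G):
    """Brute-force enumeration, feasible for the small lengths (6 or 8) used here."""
    n = len(G[0])
    return [sigma for sigma in permutations(range(n))
            if is_permutation_automorphism(G, sigma)]
```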

6. Conclusion

In this paper, we present an innovative methodology for the identification of involutory MDS matrices over finite fields. This approach leverages genetic algorithms and draws from MDS self-dual codes, demonstrating its effectiveness through practical examples. The discovered matrices offer valuable applications, notably in establishing essential automorphism groups that play a pivotal role in decoding algorithms, further enhancing our understanding of code structures. Looking forward, our future endeavors will emphasize the exploration of larger matrices, expanding the scope of practical applications. Additionally, our focus will extend to the deliberate construction of matrices with specific and noteworthy properties, contributing to the advancement of coding theory and cryptographic systems. These efforts collectively deepen our understanding and open new avenues for research in this critical domain.

Data Availability

All data that were analyzed or generated are encompassed within this published article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.