Abstract

This paper considers a system of fuzzy relational equations with the max-Archimedean t-norm composition. The relevant literature indicates that this problem can be reduced to the problem of finding all the irredundant coverings of a binary matrix. A divide-and-conquer approach is proposed to solve this problem and, subsequently, to solve the original problem. The approach analyzes the binary matrix and decomposes it into several submatrices such that the irredundant coverings of the original matrix can be constructed from the irredundant coverings of these submatrices. This step is performed recursively for each of the submatrices to obtain their irredundant coverings. Finally, once all the irredundant coverings of the original matrix are found, they are easily converted into the minimal solutions of the fuzzy relational equations. Experiments were performed on binary matrices with the number of irredundant coverings ranging from 24 to 9680. The results indicate that, for test matrices that can initially be partitioned into more than one submatrix, this approach reduces the execution time by more than three orders of magnitude. For the other test matrices, the approach is still useful because certain of their submatrices can be partitioned into more than one submatrix.

1. Introduction

Solving a system of fuzzy relational equations is a subject of great scientific interest [1, 2]. This work considers a system of fuzzy relational equations of the form

max_{1≤j≤n} T(a_{ij}, x_j) = b_i,  i = 1, 2, …, m,  (1)

where a_{ij}, b_i, x_j ∈ [0, 1] for each i and each j, and T represents a continuous Archimedean t-norm function. System (1) can be succinctly written in the following equivalent matrix form:

A ∘ x = b,  (2)

where x is the vector of unknowns, A is the matrix of coefficients, b is the right-hand side of the system, and the symbol “∘” represents a max-Archimedean t-norm composition.

Di Nola et al. [3] indicated that, given a continuous t-norm for T in system (1) and assuming the existence of solutions, the solution set of system (1) can be fully determined by the greatest solution and a finite number of minimal solutions. It is well known that the greatest solution can be easily computed, but finding all minimal solutions is difficult. Li and Fang [4] demonstrated that systems of max-t-norm equations can be divided into two categories, depending on the function T in the system. When T is continuous and Archimedean, the minimal solutions correspond one-to-one to the irredundant coverings of a set covering problem. When T is continuous and non-Archimedean, the minimal solutions correspond to a subset of constrained irredundant coverings of a set covering problem. Li and Fang [5] discussed the necessary and sufficient conditions for solving max-t-norm equations. A survey of similar and other related works is given in [5].

This work focuses on system (1), with T representing a continuous Archimedean t-norm function. Although solving such a system is equivalent to solving a set covering problem, set covering problems are classified as NP-hard [5, 6]. Therefore, solving a system of max-Archimedean t-norm equations is NP-hard. Wu and Guu [7] demonstrated that the number of minimal solutions of such a system can grow exponentially as the numbers of variables and equations (i.e., n and m in system (1)) increase. Therefore, an instance of system (1) with hundreds or thousands of minimal solutions is not uncommon, and solving it can be a challenge.

The concept of partitioning involves grouping related variables and equations and separating unrelated ones. A variable x_j and the i-th equation in system (1) are related if the value of x_j can affect whether the i-th equation holds. Furthermore, two variables (or two equations, or one variable and one equation) in system (1) are related if they are related to a common variable or equation. In an instance of system (1) with numerous variables and equations (i.e., m and n are large), it is likely that not all variables and equations are related to one another. Consequently, system (1) may be partitioned into several subsystems, each containing only related variables and equations. Thus, the original problem is decomposed into several subproblems. Because solving system (1) is NP-hard [5, 6], solving several smaller subproblems is considerably faster than solving the original problem directly. Therefore, partitioning can expedite the process of solving system (1). Notably, even if all variables and equations of system (1) are related, partitioning can still be applied to subsets of the variables and equations. For example, many approaches reduce system (1) by fixing the value of a certain variable (Rule 5 in Section 5 provides an example). Subsequently, the remaining variables and equations may be partitioned into more than one group such that each group contains only related variables and equations. The concept of partitioning is discussed further in Section 4.

Based on the concept of partitioning, the first objective of this study was to develop a divide-and-conquer approach for finding all of the minimal solutions of system (1). In this approach, system (1) is first transformed into a binary binding matrix. We propose an algorithm, called PA, in which the concept of partitioning is applied to decompose the binary binding matrix into several submatrices, the irredundant coverings of each submatrix are constructed recursively, and, finally, they are used to form the irredundant coverings of the binary binding matrix. Once all of the irredundant coverings of the binary binding matrix are found, they can be easily converted into the minimal solutions of system (1).

Numerous studies on solving system (1) have been conducted [7–9], but few have provided a performance study of methods for solving instances of system (1) that involve hundreds or thousands of minimal solutions. Wu and Guu [7] used their method to solve test problems for which the number of minimal solutions ranged from 6 to 100, and the results indicated (Figure 1) that all test problems could be solved in less than 300 ms on an ordinary PC. However, test problems at such a scale do not fully reflect the difficulty of solving system (1). One obstacle to performing tests on a large scale is that generating test cases in which system (1) has a high number of minimal solutions is complex.

To bypass the need to generate complex system (1) test cases, large binary matrices can be generated because any system (1) case can be reduced to a binary matrix, called a binary binding matrix, and the problem of solving system (1) can be reduced to the problem of finding all of the irredundant coverings of the binary binding matrix. According to Lin [8], both the reduction process and the conversion of an irredundant covering to a minimal solution can be conducted in polynomial time and, therefore, finding all irredundant coverings is the core process executed in solving system (1). In other words, the time required to find all of the irredundant coverings of the binary binding matrix accounts for most of the time required to solve system (1). This is especially true for complex system (1) cases because the time used for both the reduction of system (1) and the conversion of irredundant coverings to minimal solutions is insubstantial compared with the time used for finding all irredundant coverings. Therefore, the time required to find all irredundant coverings when adopting the proposed approach can be used to represent the performance of this approach. The second objective of this study was to develop an algorithm for generating binary matrices with various characteristics on a considerably large scale to enable the evaluation of the performance of various approaches in finding all irredundant coverings on the test matrices that are difficult to solve.

To evaluate the impact of partitioning on the execution time, the proposed algorithm (PA) was compared with another approach (denoted as the non-PA) proposed by Markovskii [6] for finding all irredundant coverings. The only difference between the PA and non-PA is that the PA contains an additional step for incorporating the concept of partitioning. Several test matrices were generated for this performance study. For test matrices that can be initially partitioned into more than one submatrix, the PA reduces the execution time required by the non-PA by more than three orders of magnitude. Even for test matrices that cannot be initially partitioned into more than one submatrix, the PA still offers more favorable performance than does the non-PA. This is attributed to the partitioning of the submatrices of the binary binding matrix.

The rest of this paper is organized as follows. In Section 2, the preliminaries of the Archimedean t-norm are given. In Section 3, the procedure for constructing the binary binding matrix and minimal solutions is presented. In Section 4, the concept of partitioning is discussed, and in Section 5, an algorithm for finding all irredundant coverings is proposed. In Section 6, the procedure for generating test matrices is described and the performance results are presented. Finally, the conclusion is given in Section 7.

2. Preliminaries

This section describes the basic concepts of the t-norm, the Archimedean t-norm, and the greatest solution and minimal solutions of fuzzy relational equations. Please also refer to [3–5, 9, 10].

A triangular norm (t-norm for short) is a binary function T mapping from [0, 1] × [0, 1] to [0, 1] that satisfies the following conditions: (1) T(x, y) = T(y, x) (commutativity); (2) T(x, T(y, z)) = T(T(x, y), z) (associativity); (3) T(x, y) ≤ T(x, z) if y ≤ z (monotonicity); (4) T(x, 1) = x and T(x, 0) = 0 for any x ∈ [0, 1] (boundary condition).

A well-known fact is that T(x, y) ≤ min(x, y) for any t-norm. The commonly seen “min” and “product” are both t-norm functions.

Let S(A, b) denote the set of all solution vectors of (2). For any two vectors x = (x_1, …, x_n) and y = (y_1, …, y_n) in [0, 1]^n, the relation x ≤ y holds if and only if x_j ≤ y_j for all j. A solution x̂ ∈ S(A, b) is called the greatest solution if x ≤ x̂ for all x ∈ S(A, b). Conversely, a solution x̌ ∈ S(A, b) is a minimal solution if, for any x ∈ S(A, b), x ≤ x̌ implies that x = x̌. As described in Section 1, with a continuous t-norm for T in system (1), the solution set can be completely determined by the greatest solution and a finite number of minimal solutions. This study assumes that t-norms are continuous.

The greatest solution of system (1) with a t-norm for T can be computed explicitly using the “⊘” operator defined as

a ⊘ b = sup{x ∈ [0, 1] : T(a, x) ≤ b},  (3)

where T is a t-norm and a, b ∈ [0, 1]. If the solution set of system (1) is not empty, then the greatest solution x̂ = (x̂_1, …, x̂_n) can be calculated as follows:

x̂_j = min_{1≤i≤m} (a_{ij} ⊘ b_i),  j = 1, 2, …, n.  (4)

Mostert and Shields [11] subdivided continuous t-norms into three categories, namely, the “min” operation, Archimedean t-norms, and ordinal sums of a family of properly defined Archimedean t-norms. An Archimedean t-norm is a t-norm with T(x, x) < x for all x ∈ (0, 1) [3]. Notably, the well-known “min” operation is not an Archimedean t-norm. Wu and Guu [7] collected six Archimedean t-norm functions as follows:
(1) Algebraic product: T(x, y) = xy;
(2) Łukasiewicz t-norm: T(x, y) = max(0, x + y − 1);
(3) Einstein product: T(x, y) = xy / (2 − (x + y − xy));
(4) Hamacher product: T(x, y) = xy / (γ + (1 − γ)(x + y − xy)), where γ > 0;
(5) Yu operation: T(x, y) = max(0, (1 + λ)(x + y − 1) − λxy), where λ ≥ −1;
(6) Weber operation: T(x, y) = max(0, (x + y − 1 + λxy) / (1 + λ)), where λ > −1.
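As an illustration, the six functions above can be written directly in code. The following is a minimal sketch (the function and parameter names are our own; the default parameter values are arbitrary choices within the stated ranges):

```python
def product(x, y):                # algebraic product
    return x * y

def lukasiewicz(x, y):            # Łukasiewicz t-norm
    return max(0.0, x + y - 1.0)

def einstein(x, y):               # Einstein product
    return x * y / (2.0 - (x + y - x * y))

def hamacher(x, y, g=0.5):        # Hamacher product; assumes g > 0
    return x * y / (g + (1.0 - g) * (x + y - x * y))

def yu(x, y, lam=1.0):            # Yu operation; assumes lam >= -1
    return max(0.0, (1.0 + lam) * (x + y - 1.0) - lam * x * y)

def weber(x, y, lam=1.0):         # Weber operation; assumes lam > -1
    return max(0.0, (x + y - 1.0 + lam * x * y) / (1.0 + lam))
```

Each of these satisfies the boundary condition T(x, 1) = x and the Archimedean property T(x, x) < x for x in (0, 1), which can be checked numerically.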

A simple formula for calculating the greatest solution of system (1) that uses any of these six Archimedean t-norm functions for T is also available in Wu and Guu [7].

3. Reduction of Fuzzy Relational Equations to Covering Problem

This section describes the procedure to reduce the problem of finding all minimal solutions of system (1) to the problem of finding all irredundant coverings of the binary binding matrix of system (1). The description follows the results of Lin [8]. Subsequently, in Section 4, we apply the concept of partitioning to the binary binding matrix to expedite finding all irredundant coverings.

Let M = [m_{ij}] denote a binary matrix with i ∈ I and j ∈ J, where I and J denote the index sets for the rows and the columns of M, respectively. Notably, both I and J are sets of positive integers. Let I_j = {i ∈ I : m_{ij} = 1} denote an index set for each j ∈ J and let J_i = {j ∈ J : m_{ij} = 1} denote an index set for each i ∈ I. A set C ⊆ I is a covering of M if C ∩ I_j ≠ ∅ for every j ∈ J. A covering C is irredundant if each proper subset of C is not a covering of M. The term IC(M) denotes the set of all irredundant coverings of M.
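The definitions above can be made concrete with a small brute-force sketch. The encoding (each row index mapped to the set of columns in which it holds a 1) and the example matrix in the test are our own illustrative assumptions, not the paper's:

```python
from itertools import combinations

def is_covering(M, J, C):
    """C (a set of row indices) covers M if every column in J
    carries a 1 in at least one row of C."""
    return J <= set().union(set(), *(M[i] for i in C))

def is_irredundant(M, J, C):
    """A covering is irredundant if no proper subset still covers."""
    return is_covering(M, J, C) and all(
        not is_covering(M, J, C - {i}) for i in C)

def all_irredundant_coverings(M, J):
    """Brute-force enumeration of IC(M); practical only for small M."""
    return {frozenset(C)
            for r in range(1, len(M) + 1)
            for C in combinations(M, r)
            if is_irredundant(M, J, set(C))}
```

For example, for the 3 × 3 matrix in which each row covers two of the three columns, any two rows form an irredundant covering, while all three rows together form a redundant one.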

Example 1. Consider the matrix below, where the index sets and are indicated on the left and on the top of the matrix, respectively. The set of all irredundant coverings of is

The binding matrix of system (1) is denoted by and is given by for and .

According to Expression (6), if , then for each ; that is, all elements in the th column of are 2. Such columns are referred to as all-2-columns in a binding matrix. Let denote a binding matrix with all of its all-2-columns removed. It is obvious that if is not a zero vector, then is a binary matrix containing one or more columns and the set of irredundant coverings of equals that of . The matrix is referred to as the binary binding matrix of system (1).

Example 2. Consider the following fuzzy relational equations with max-Łukasiewicz t-norm composition:

The binding matrix is the same as the matrix in Example 1, while the binary binding matrix is formed by the first three columns of .

The mapping vector of an irredundant covering is denoted by and is given by

Lin [8] proved that if b is not a zero vector, then the set of all minimal solutions of system (1) equals the set of mapping vectors of all irredundant coverings of the binary binding matrix. If b is a zero vector, namely, b_i = 0 for every i, then it is obvious that the zero vector is the only minimal solution. Therefore, the set of all minimal solutions can be determined with the following procedure.
(1) If b is a zero vector, then the zero vector is the only minimal solution; stop.
(2) Calculate the greatest solution x̂ using Expression (4).
(3) If x̂ does not satisfy (2), then the system has no solution; stop.
(4) Calculate the binding matrix using Expression (6).
(5) Obtain the binary binding matrix from the binding matrix by removing all of its all-2-columns.
(6) Construct the set of all irredundant coverings of the binary binding matrix by search.
(7) Convert each irredundant covering into a minimal solution, calculating its mapping vector using Expression (8).
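Steps (2) and (3) of the procedure can be sketched for the Łukasiewicz t-norm, for which the residual operator a ⊘ b = sup{x : T(a, x) ≤ b} used to compute the greatest solution has the closed form min(1, 1 − a + b). The small system in the test is our own illustrative example:

```python
def luk_residual(a, b):
    """a ⊘ b = sup{ x in [0,1] : max(0, a + x - 1) <= b },
    which for the Łukasiewicz t-norm equals min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def greatest_candidate(A, b):
    """Candidate greatest solution: x_j = min over i of (a_ij ⊘ b_i).
    The system is solvable iff this candidate satisfies it."""
    m, n = len(A), len(A[0])
    return [min(luk_residual(A[i][j], b[i]) for i in range(m))
            for j in range(n)]

def compose(A, x):
    """Max-Łukasiewicz composition: (A ∘ x)_i = max_j max(0, a_ij + x_j - 1)."""
    return [max(max(0.0, row[j] + x[j] - 1.0) for j in range(len(x)))
            for row in A]
```

If compose(A, greatest_candidate(A, b)) differs from b, the procedure stops with an empty solution set; otherwise the candidate is the greatest solution.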

Example 3. Consider the fuzzy relational equations in Example 2. Since is not a zero vector, Step (1) is skipped. Step (2) obtains the greatest solution , and Step (3) then finds that holds (i.e., ). Step (4) obtains the binding matrix , which is the same as the matrix in Example 1. Step (5) obtains the binary binding matrix , which is the same as but without its fourth column. Step (6) yields . Finally, Step (7) yields .

It is obvious that Steps (1)–(5) can be done in time. Li and Fang [4] and Lin [8] proved the bijective (i.e., both one-to-one and onto) mapping between a minimal solution and an irredundant covering, and thus Step (7) can be done in time, where is the number of minimal solutions of (2). Step (6), finding all irredundant coverings, is the most time-consuming step. Therefore, the task of finding all minimal solutions of (2) is reduced to the task of finding all irredundant coverings of a binary matrix, which is the focus of the next two sections.

4. Partitioning

This section describes the concept of partitioning a binary matrix into one or more submatrices such that the irredundant coverings of the binary matrix can be derived from the irredundant coverings of these submatrices. First, the notation used is defined as follows to facilitate the discussion.

Let denote a binary matrix with and , where and denote the index sets for the rows and the columns of , respectively. Here, both and are a set of integers (not necessarily contiguous or starting from 1). Let denote a submatrix of , where and ; that is, and are the index sets for the rows and the columns of , respectively.

Example 4. Consider the matrices , , and below, where the index sets for the rows and the columns are, respectively, indicated on the left and on the top of each matrix. The index sets for the rows of , , and are , , and , respectively. The index sets for the columns of , , and are , , and , respectively. Here, is a submatrix of formed by rows 2 and 3 and column 3. Similarly, is a submatrix of formed by rows 2 and 3 and columns 1, 2, and 3. Notably, also equals and thus is a submatrix of

Definition 5. Let and be two binary matrices. The operator is defined by .

Given a binary matrix , then both and hold.
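Reading Definition 5's operator as the pairwise-union combination of two sets of coverings (the form in which Theorem 8 below applies it), a minimal one-line sketch is:

```python
def otimes(S1, S2):
    """Pairwise-union combination of two sets of coverings:
    S1 (x) S2 = { C1 | C2 : C1 in S1, C2 in S2 }."""
    return {c1 | c2 for c1 in S1 for c2 in S2}
```

Note that {∅} acts as the identity element of this operator, while the empty set annihilates it, which matches the two properties stated above.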

In Section 1, we described the concept of partitioning, which involves grouping the related variables and equations of system (1) to reduce the problem size. Because the variables and equations of system (1), respectively, correspond to the rows and columns of its binding matrix, we applied the same concept to the binary binding matrix by grouping the rows and columns that are related. Given a binary matrix, a row and a column are related if the entry at their intersection is 1. Furthermore, two rows (or two columns, or one row and one column) are related if they are related to a common row or column.

The matrix can be considered a bipartite graph , where the index sets and represent the two sets of vertices of and the matrix represents the edges that connect the vertices in and . If , then an edge connects the vertex and the vertex . Notably, no edge connects two vertices both in or both in . Therefore, if two rows (or two columns or one row and one column) are related, then a path connects the corresponding vertices in . Because a graph is connected if every pair of vertices is connected by a path, a connected subgraph of represents a set of related rows and columns in . Partitioning is used to find all components (i.e., the maximal connected subgraph) of . A connected subgraph of is maximal if vertices and edges that could be added to the subgraph and still leave it connected do not exist in . Finding all of the components in a bipartite graph is faster than finding all of the components in a general graph because the former can stop as soon as one of the two sets of vertices is fully explored. A formal definition of partitioning is given as follows.
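The component search described above can be sketched as a breadth-first traversal of the bipartite graph. The row-to-column-set encoding of the matrix is our own assumption:

```python
from collections import deque

def partition(M):
    """Split binary matrix M (row index -> set of columns holding a 1)
    into the connected components of its bipartite row/column graph."""
    col_rows = {}                              # column -> incident rows
    for i, cols in M.items():
        for j in cols:
            col_rows.setdefault(j, set()).add(i)
    unseen = set(M)
    parts = []
    while unseen:
        i0 = unseen.pop()
        rows, cols = {i0}, set()
        queue = deque([("row", i0)])
        while queue:                           # breadth-first search
            kind, v = queue.popleft()
            if kind == "row":
                for j in M[v] - cols:          # new columns reached by row v
                    cols.add(j)
                    queue.append(("col", j))
            else:
                for i in col_rows[v] & unseen: # new rows reached by column v
                    unseen.discard(i)
                    rows.add(i)
                    queue.append(("row", i))
        parts.append((rows, cols))
    return parts
```

Each returned pair (rows, cols) indexes one submatrix of the partitioning; a matrix whose graph is connected yields a single part.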

Definition 6. Given a binary matrix , the partitioning of , denoted as , is formed by a set of submatrices of , where and is the number of matrices in , such that the following conditions are satisfied:(1) and ,(2) for any but ,(3) for any and , where but .

Example 7. Consider the binary matrix below. For ease of exposition, the index sets for the rows and the columns of are, respectively, indicated on the left and on the top of the matrix:

By Definition 6, we have , where and ,  are shown as

Theorem 8 states that if a binary matrix can be partitioned into several submatrices, as described in Definition 6, then the set of irredundant coverings of the binary matrix can be constructed by performing the operation on the sets of irredundant coverings of these submatrices.

Theorem 8. Given a binary matrix , if , then .

Proof. Let and for each . By Definition 6, holds. Assume to the contrary that there exists but . The set can be decomposed into disjoint subsets of such that , where each . Since , by Condition of Definition 6, each must be a covering of . If for some , then there exists a covering of . Then, is a covering of , and consequently , which contradicts . Therefore, .

The advantage of using partitioning is twofold. First, partitioning decomposes a matrix into several submatrices with fewer rows and columns, whose irredundant coverings can be found more efficiently than those of the original matrix. Second, if a covering of a submatrix is redundant, then it is immediately discarded and is not combined with the coverings of the other submatrices to form new coverings. This substantially reduces the number of redundant coverings generated. Furthermore, once the set of irredundant coverings of each submatrix is found, Theorem 8 can be applied to obtain the set of irredundant coverings of the original matrix without generating any redundant coverings of the original matrix. Because an irredundant covering of the binary binding matrix of system (1) corresponds to a minimal solution, only minimal solutions are generated. The disadvantage of using partitioning is that if the partitioning of the binary binding matrix contains only one component (i.e., the matrix itself), then no benefit is obtained from partitioning, and the time used to perform partitioning is wasted. The impact of partitioning on performance is discussed further in Section 6.

5. The Divide-and-Conquer Algorithm

We propose a divide-and-conquer algorithm for constructing the set of irredundant coverings of a binary matrix. The algorithm follows a set of rules specified by Lin [8] either to obtain the irredundant coverings directly or to decompose a binary matrix into one or more submatrices. Rules 1–3 consider some trivial cases whose irredundant coverings can be derived directly and are adopted as the termination conditions in the proposed algorithm.

Rule 1. If is a singleton , then .

Rule 2. If , then .

Rule 3. If there exists a column such that , then .

Rule 4 reduces a binary matrix by identifying the indexes of the rows that are required in all irredundant coverings of the matrix.

Rule 4 (singleton removal). If there exists a singleton for some , then .

If none of the above rules are applicable, then the algorithm calculates the partitioning . Subsequently, if , then the irredundant coverings of can be derived from the irredundant coverings of those submatrices in , according to Theorem 8. However, if , then Rule 5 [6] is applied to decompose into three submatrices.

Rule 5 (forced binding). Let , and for some . Then, .

Algorithm 1 shows the proposed algorithm (denoted as PA). Notably, this algorithm does not find any covering that is redundant. If a matrix can be partitioned into more than one submatrix, then lines 10 and 11 of the algorithm are applied to expedite the process of executing the algorithm by using the concept of partitioning. When lines 10 and 11 are excluded, the resulting algorithm, denoted as non-PA, is identical to the algorithm proposed by Markovskii [6] or Lin [8]. The PA is demonstrated in the following examples.

Algorithm 1: PA
Input: a binary matrix with and .
Output:
 (1)If   is a singleton   then
 (2)return   ;
 (3)If     then
 (4)return   ;
 (5)If   such that ( ) =    then
 (6)return   ;
 (7)If  there exists a singleton ( ) =  for some   then
 (8)let   ( )
 (9)return     PA( ); // singleton removal
 (10)If   ( ) =  and   then
 (11)return  PA( )     PA( )     PA( ); // partitioning
 (12)Let  row be the row with the most 1s in ;
 (13)Let   M[ ; ] and M[ ; ( )];
 (14)Let     PA and   PA ;
 (15)Let   ;
 (16)Let   ;
 (17)Return   ;// forced binding
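Under the same row-to-column-set encoding as before, the listing above can be paraphrased as the following sketch. It mirrors the singleton-removal, partitioning, and forced-binding steps of PA, but it is our own simplified reconstruction, not the authors' code:

```python
def covers(M, J, C):
    """True if the rows in C together cover every column in J."""
    return J <= set().union(set(), *(M[i] for i in C))

def components(M):
    """Connected components (rows, cols) of M's bipartite graph."""
    rows_left, parts = set(M), []
    while rows_left:
        rows, cols = {rows_left.pop()}, set()
        grew = True
        while grew:
            cols = set().union(*(M[i] for i in rows))
            more = {i for i in rows_left if M[i] & cols}
            rows |= more
            rows_left -= more
            grew = bool(more)
        parts.append((rows, cols))
    return parts

def pa(M, J):
    """All irredundant coverings of M restricted to columns J."""
    if not J:
        return {frozenset()}                  # nothing left to cover
    M = {i: c & J for i, c in M.items() if c & J}
    if set().union(set(), *M.values()) != J:
        return set()                          # some column is uncoverable
    for j in J:                               # singleton removal
        owners = [i for i, c in M.items() if j in c]
        if len(owners) == 1:
            i = owners[0]
            rest = {k: c - M[i] for k, c in M.items() if k != i}
            return {s | {i} for s in pa(rest, J - M[i])}
    parts = components(M)                     # partitioning (Theorem 8)
    if len(parts) > 1:
        result = {frozenset()}
        for rows, cols in parts:
            sub = pa({i: M[i] for i in rows}, cols)
            result = {a | b for a in result for b in sub}
        return result
    i = max(M, key=lambda r: len(M[r]))       # forced binding on busiest row
    others = {k: c for k, c in M.items() if k != i}
    without_i = pa(others, J)
    with_i = {s | {i}
              for s in pa({k: c - M[i] for k, c in others.items()},
                          J - M[i])}
    with_i = {c for c in with_i               # keep only irredundant ones
              if all(not covers(M, J, c - {r}) for r in c)}
    return without_i | with_i
```

As in the paper's algorithm, coverings produced by the singleton-removal and partitioning branches are irredundant by construction; only the forced-binding branch that includes the selected row needs an explicit irredundancy filter.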

Example 9. Consider the matrix in Example 4. Select row 1 to apply forced binding (lines 12–17 of PA) and obtain three matrices , , and , as shown in

Lines 1-2 of PA yield , and subsequently lines 15-16 yield . By lines 7–9 and then lines 1-2 of PA, we have . Finally, = = .

Example 10. Consider the matrix in Example 7. Since , by Theorem 8, . To compute , we select row 4 of to apply forced binding (lines 12–17 of PA) and obtain three matrices , , and (shown below) such that :

By lines 1-2 of PA, . By lines 7–9 and then lines 1-2 of PA, . Consequently, .

To compute , we select row 1 of to apply forced binding and obtain three matrices , , and (shown below) such that . Thus,

Then, can be further partitioned into and (shown below), and consequently :

By lines 1-2 of PA, . Calculating is similar to calculating , and thus we have . Consequently, = = .

can be further partitioned into and (shown below), and consequently . By lines 7–9 and then lines 1-2 of PA, . Similar to , . Thus, :

Therefore, , , , , , , . Since both and are found, can be derived from .

6. Performance Study

This performance study focused on evaluating the impact of applying the concept of partitioning. This was achieved by comparing the performance of the PA with that of the non-PA, because the algorithms differed only in whether the concept of partitioning was used. As discussed in Section 3, finding all of the irredundant coverings of the binary binding matrix is the most time-consuming step in solving system (1). Therefore, we measured only the time required to derive from to evaluate the speed at which system (1) can be solved using a given approach.

6.1. Test Matrices

We used binary binding matrices rather than instances of system (1) for this performance study. This offered three advantages. First, a binary binding matrix is simply a binary matrix and is independent of any specific Archimedean t-norm function. Second, using binary binding matrices avoids the imprecision caused by floating-point truncation when solving system (1). Third, generating a large binary matrix is easier than generating an instance of system (1) with a large number of equations and variables.

Algorithm 2 shows the procedure for generating the test matrices. This procedure involves four parameters, , , , and , and generates an binary matrix with density such that contains submatrices of with approximately the same number of rows and columns. The density of is defined as the number of nonzero elements in divided by . In this procedure, an irredundant covering is first injected into the matrix to avoid (lines 5–9). Then, the elements in the regions of that correspond to these submatrices are repeatedly and randomly chosen to assume the value of 1 until the number of elements with a value of 1 reaches (lines 11–15).

Input: and . Note: and are required.
Output: where and
 (1)Let   ;
 (2)Randomly divide into disjoint subsets such that , and for any ;
 (3)Randomly divide into disjoint subsets such that , and for any ;
 (4)Initialize to 0 for all and ;
 (5)For   to   do  // inject an irredundant covering to avoid .
 (6)For  each     do
 (7)  Randomly select from , and set ;
 (8)End  of  for  each
 (9)End  of  for
 (10)Let   ;
 (11)While     do
 (12)Randomly select and from and , respectively;
 (13)If   , for some , and   then
 (14)  let   and ;
 (15)End  of  while
 (16)Return   ;
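The generation procedure above can be sketched as follows. The parameter names (m rows, n columns, density d, block count p) are our own labels for the procedure's four parameters, and the sketch assumes that d·m·n does not exceed the total capacity of the p diagonal blocks, so that the fill loop terminates:

```python
import random

def gen_matrix(m, n, d, p, seed=None):
    """Sketch of the test-matrix generator: an m x n binary matrix with
    about d*m*n ones, whose partitioning has p blocks of similar size."""
    rng = random.Random(seed)
    rows, cols = list(range(m)), list(range(n))
    rng.shuffle(rows)
    rng.shuffle(cols)
    # split the rows and columns into p roughly equal groups
    rgroups = [rows[k::p] for k in range(p)]
    cgroups = [cols[k::p] for k in range(p)]
    M = [[0] * n for _ in range(m)]
    ones = 0
    # inject a covering into each block so that IC(M) is nonempty
    for rg, cg in zip(rgroups, cgroups):
        for j in cg:
            i = rng.choice(rg)
            if not M[i][j]:
                M[i][j] = 1
                ones += 1
    # fill randomly inside the blocks until the target density is reached
    target = int(d * m * n)
    while ones < target:
        k = rng.randrange(p)
        i, j = rng.choice(rgroups[k]), rng.choice(cgroups[k])
        if not M[i][j]:
            M[i][j] = 1
            ones += 1
    return M
```

Because ones are placed only inside the p diagonal blocks, the resulting matrix partitions into at least p components, matching the intent of the paper's procedure.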

In this study, test matrices were generated with , ranging from 1 to 8 and ranging from to with an increment of . Notably, if , then the matrix had at most one irredundant covering. Therefore, only test matrices with a density of no less than were generated in this study. In addition, when the partitioning of a matrix contains submatrices of equal size, the density of the matrix could not be greater than and, thus, the greatest density was set to .

Because this procedure is random, it does not always generate a binary matrix with numerous irredundant coverings. This is especially true when and are low or is near zero or one. Therefore, for each setting of and , this procedure was repeated five times to generate five test matrices. For example, when , we varied from to with an increment of and consequently generated 115 ( ) test matrices. In general, given a fixed value, test matrices were generated in this study. That is, 55, 35, 25, 15, and 10 test matrices were generated for , 3, 4, 6, and 8, respectively. For clarity, Figure 2 shows only the number of irredundant coverings of the test matrices with the lowest or the highest number of irredundant coverings among the respective five test matrices with the same and values. Among all the test matrices generated, the matrix with the highest number of irredundant coverings ( ) was generated using and .

To understand how affects the number of irredundant coverings, we performed a preliminary check on the number of irredundant coverings for the test matrices with . Specifically, we first grouped the generated matrices with by their densities. Then, for every two groups with their difference in density being , a Mann-Whitney test was performed to compare the difference of the number of irredundant coverings between both groups. The results show that when both groups’ densities are less than or equal to , the number of irredundant coverings is smaller in the group with smaller density; when both groups’ densities are greater than or equal to , the number of irredundant coverings is larger in the group with smaller density; when both groups’ densities are between and , there is no significant difference between the two groups in the number of irredundant coverings. Thus, although we did not include test matrices with density less than in this study, the generated test matrices have covered a wide range of density to provide meaningful analysis.

6.2. Performance Results

The performance study was conducted on a desktop PC with a Pentium D (3.0 GHz) processor and 1 gigabyte of main memory, running Windows XP. To evaluate the impact of using partitioning, each test matrix was subjected to two tests, one using the PA and one using the non-PA. The results are shown in Figures 3–5.

First, consider the group of test matrices with . Because , the 115 test matrices in this group were generated without intentionally making them capable of being partitioned into more than one submatrix. Prior to comparing the execution times of the PA and non-PA on this group of test matrices, we used the Kolmogorov-Smirnov test to check the normality of the execution time, and the results show that the assumption of normality failed for both the PA and non-PA. Because the assumption of normality was questionable, the Wilcoxon signed-rank test was used as a substitute for a paired t-test to compare the difference between the execution times of the PA and non-PA. The results were in the expected direction and were significant ( and ). Thus, the execution time of the PA is statistically significantly smaller than that of the non-PA for this group of test matrices. This may be because, although the test matrices in this group could not be partitioned into more than one submatrix (i.e., ), several of their submatrices could, and, therefore, the concept of partitioning was still helpful. Figure 3 shows that the execution times of the PA and non-PA exhibited a similar pattern: both were linearly proportional to the square of the number of irredundant coverings of the test matrix. The correlation coefficient between the execution time and the square of the number of irredundant coverings was 0.99502 for the PA and 0.990939 for the non-PA.

For test matrices in which , Figures 4 and 5 reveal that the PA outperformed the non-PA by more than three orders of magnitude. Figure 4 shows that the execution time of the non-PA was still linearly proportional to the square of the number of irredundant coverings and was not affected by the value of . Figure 5 shows that, for any test matrix in which , the PA required less than 120 ms to determine . If a test matrix can be partitioned into more than one submatrix (i.e., ), then the PA first identifies the irredundant coverings of these submatrices. Because the number of irredundant coverings of the test matrix is the product of the number of irredundant coverings of these submatrices, finding the irredundant coverings of these submatrices is much faster than finding those of the test matrix. Consequently, the time required by the PA to find all the irredundant coverings can be reduced substantially. We also used the Wilcoxon signed-rank test to compare the execution times of the PA and non-PA on each group of test matrices with the same number of partitions. The results were all in the expected direction and were significant ( , −5.16, −4.372, −3.408, and −2.803 for the groups of test matrices with 2, 3, 4, 6, and 8 partitions, respectively, and all ).

7. Conclusion

In a system of fuzzy relational equations with numerous variables and equations, several variables and equations are likely to be unrelated, and when such a situation occurs, partitioning can be used to facilitate substantial reduction of the time required to determine all the minimal solutions. Therefore, considering partitioning when solving fuzzy relational equations is crucial.

Partitioning is not useful when all of the rows and columns of a binary binding matrix are related, and after applying forced binding (Rule 5 in Section 5) to derive two submatrices and , all of the rows and columns of both submatrices are still related. Intuitively, this situation occurs as the density of approaches one (i.e., is near 1 and , as shown in Figure 2). In this situation, the problem contains considerably fewer irredundant coverings, as shown in Figure 2, and, thus, can be solved efficiently with or without considering partitioning.

In addition to max-Archimedean t-norm fuzzy relational equations, numerous types of fuzzy relational equations have been demonstrated to be equivalent to the set covering problem [12]. The partitioning concept and the divide-and-conquer approach discussed in this paper can also be applied to these fuzzy relational equations with little modification.

Instead of generating test matrices (as in Algorithm 2), Hu and Fang [13] proposed procedures for generating test problems with various characteristics for use in max-t-norm fuzzy relational equations, with T being the min, product, or Łukasiewicz t-norm. Because both the product and the Łukasiewicz t-norm are Archimedean t-norms, test problems generated by these procedures can be used in future studies to evaluate the performance of the proposed method further.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was supported by the National Science Council under Grants 99-2221-E-155-048-MY3 and 102-2221-E-155-034-MY3.