Abstract

The Open Shop Scheduling Problem (OSSP) is one of the most important scheduling problems in engineering and industry. This kind of problem includes m machines and n jobs; each job contains a certain number of operations, and each operation has a predetermined processing time on its corresponding machine. The order in which these operations are processed affects the completion times of all jobs. Therefore, the purpose of the OSSP is to find an order of processing of jobs on the specified machines such that the completion time of all jobs is minimized. In this paper, the strengths and limitations of three methods are evaluated by comparing their results in solving the OSSP on large-scale and small-scale benchmarks. The minimized completion time and total tardiness are considered the objective functions of the adapted methods. To solve small-scale problems, we adapt a mathematical model called Multiobjective Mixed-Integer Linear Programming (MOMILP). To solve large-scale problems, two metaheuristic algorithms, the Multiobjective Parallel Genetic Algorithm (MOPGA) and Multiobjective Parallel Simulated Annealing (MOPSA), are adapted. In the experimental results, we randomly generate small-scale problems to compare MOMILP with the Genetic Algorithm (GA) and Simulated Annealing (SA). To compare MOPSA and MOPGA with the state of the art and show their strengths and limitations, we use a standard large-scale benchmark. The simulation results show that although the MOPSA algorithm is faster, the MOPGA algorithm is more efficient in achieving optimal solutions for large-scale problems compared with other methods.

1. Introduction

Optimal job scheduling problems are classified into single machine scheduling, job shop scheduling, open shop scheduling, flow shop scheduling, and hybrid scheduling problems, where single machine scheduling is the simplest: the problem is to schedule the processing of n jobs with varying processing times on a single machine [1]. The Job Shop Scheduling Problem (JSSP) is more complex. Here, each job consists of a set of operations (O1, O2, …, On) that need to be processed in a specific order. Each operation has a unique machine on which it must be performed, and only one operation per job may be performed at a time [2]. A more complex problem than the JSSP is the Open Shop Scheduling Problem (OSSP). The OSSP involves m machines (M1, M2, …, Mm) to perform n jobs (J1, J2, …, Jn), each job involving m operations. The jth operation of job Ji is indicated by Oji, which should be processed on machine Mj for pji units of time. Therefore, we will have a total of n × m operations [3]. Like the OSSP, each job in the Flow Shop Scheduling Problem (FSSP) consists of precisely m operations. However, the ith operation of each job must be executed on the ith machine. No machine can perform several operations simultaneously, and the execution time of each operation of each job is given in advance [4]. The most complex problem in this class is the Hybrid Flow Shop scheduling Problem (HFSP), in which n jobs must be processed in a multistage manufacturing facility where each stage has several concurrent and identical machines [5].

This paper addresses the Biobjective Open Shop Scheduling Problem. In terms of complexity, the OSSP lies between the job shop and flow shop scheduling problems; however, all the problems in this class are NP-hard. The OSSP has a broad range of applications across different sectors, including timetabling, satellite communication, health care management, transportation, tourism, computer vision, and many others. In the following subsection, we introduce some related works that have tried to solve the open shop scheduling problem.

1.1. Related Works

Production scheduling problems have been widely studied in the past years; a detailed review of this type of research can be found in [6, 7]. Breit et al. [8] solved the two-machine OSSP with a predetermined stop time for one of the machines and proposed a heuristic algorithm; they then set predetermined stop times for both machines and again used heuristic algorithms to approximate the problem. Hsu et al. [9] examined two different periodic maintenance approaches simultaneously in the single-machine scheduling problem: the machine stops for maintenance either after a certain period of time (T) or after processing a certain number of jobs (K), whichever occurs first.

To quantitatively characterize a production and distribution optimization problem, Fu et al. [10] formulated a mixed-integer programming model to minimize the maximum completion time. To handle the problem, they created an improved black widow optimization method in which the solution representation, population initialization, procreation, cannibalism, and mutation, as well as a simulated annealing step, are specifically designed.

Another condition that arises in the presence of machine unavailability constraints is that preemption may be forbidden: if there is not enough time to complete an operation before a machine stop, the operation is not loaded on the machine and its processing is postponed until the stop is over. Transportation and movement times between machines are further criteria that have been considered in scheduling problems. Strusevich [11] considered, for the two-machine OSSP, a time interval between the completion of one operation of a job on one machine and the beginning of the next operation of the same job on another machine; since such intervals arise from transportation in real conditions, he named them transportation times. He also points out that in chemical and metallurgical applications, these transportation times correspond to the times of heating or cooling processes.

Recently, Fu et al. [12] offered a stochastic biobjective two-stage open shop scheduling problem that mimics a car repair process in which tasks are delegated to different third-party organizations with specialized equipment. The optimization problem was formulated by minimizing the overall lateness and processing costs, subject to various resource restrictions. To tackle the open shop problem, a hybrid multiobjective migrating birds optimization, combined with a genetic operation and a discrete event system, was developed by considering the problem features.

The OSSP is known to be NP-hard [13–15], and it is not possible to solve such problems in polynomial time except in small dimensions. Therefore, approximate approaches, including heuristic and metaheuristic methods, can be more efficient than exact methods. Panahi et al. [16] used multiobjective simulated annealing and the Ant Colony Optimization (ACO) algorithm to solve Multiobjective Open Shop Scheduling (MOOSS) with the objectives of total delay and longest completion time. Panahi and Tavakkoli-Moghaddam [17] combined general and multiobjective simulated annealing algorithms with ACO and used the result for the simultaneous optimization of two objective functions, the longest completion time and total delay, in the OSSP. Drezner [18] used the Genetic Algorithm (GA) and Simulated Annealing (SA) algorithm for batch scheduling in the case of multifunction parallel machines.

Task scheduling has been studied in many different research works with an array of applications, from task scheduling in cloud computing [19] to flow shops in factories [20, 21]. Most studies on scheduling problems have been conducted under the basic assumption that machines are available over the whole planning horizon, while this is not the case in the real world. Machines may be unavailable for reasons such as preventive maintenance, failure, rest periods, and residual work from the previous planning period that must be processed at the beginning of the new period [22]. In many cases, unavailability times are known in advance, so the decision-maker can make a more efficient decision by considering them. Karimi et al. [23] considered a predetermined time for machine unavailability in the single machine problem and proposed two heuristic methods to solve it.

In this paper, an OSSP is addressed in which stops occur at different times on the machines, and the stop durations differ between machines but are specific and known for each machine.

The intended objective functions are minimizing the longest completion time (Cmax) and the sum of delays, which are combined with weights into a single function. Because different devices may be used to transport different items in a shop, the movement time along a fixed route differs for each job. Moreover, the path from one machine to another is considered different from its return path, in order to minimize material flow interference; therefore, the job transfer matrix between workstations is asymmetric. To solve small problems when the weight of each objective is known, the exact approach of a biobjective mixed linear mathematical model is adapted, and its results are compared with those of GA and SA on small-scale OSSPs. Conversely, when the weight of each objective is unknown, Pareto optimal solutions are sought with metaheuristic algorithms, namely the Multiobjective Parallel Genetic Algorithm (MOPGA) and Multiobjective Parallel Simulated Annealing (MOPSA), which are adapted to solve large-scale problems. The parameters of the metaheuristic algorithms are adjusted using the Taguchi experimental design method, as a result of which the algorithms remain stable across various problems.

1.2. The Contributions of the Study to the Literature

This study makes several contributions to the literature:
(i) The exact approach of the mixed multiobjective linear mathematical model is adapted as a problem-solving method, and its results are compared with those of GA and SA in solving small-scale OSSPs.
(ii) The Multiobjective Parallel Genetic Algorithm (MOPGA) and Multiobjective Parallel Simulated Annealing (MOPSA) methods are adapted to solve large-scale OSSPs.
(iii) To improve the exploitation and exploration capabilities of MOPGA, its candidate generation stage was developed by changing the crossover and mutation operators.
(iv) Variations of MOPGA and MOPSA were compared in detail with various heuristic algorithms selected from the literature according to the average RPD, execution time, and best solution criteria.

In the following, Section 2 presents the mixed-integer linear mathematical programming model. The adapted metaheuristic algorithms and their details are described in Section 3. Section 4 describes the design of experiments and the computational evaluation. Section 5 provides conclusions and future research directions.

2. Mathematical Model

In this section, the problem is introduced using simplifying assumptions and mathematical symbols, and then a multiobjective mixed-integer linear programming model is presented and analyzed to solve small-scale problems exactly.

To build the mathematical model, the available time of each machine is considered as a packet of a certain length Ti, and jobs should be placed in such a way that their total processing time does not exceed the length of the packet. For clarity, see Figure 1, in which Bil indicates the lth packet of machine i and jobs are shown in their processing sequence.

2.1. Problem Assumptions

Simplifying assumptions are considered as follows:
(i) Each machine may process at most one job at a time.
(ii) Each job may be processed on at most one machine at a time.
(iii) Jobs can be processed in any order by all or some machines.
(iv) All jobs are available in the shop from the beginning of the planning horizon.
(v) Job cutting (preemption) is not allowed during processing.
(vi) There is only one machine of each type in the shop (each operation of a job is performed by only one specific machine).
(vii) Unavailable times occur at specific times on each machine.

The length of the available (and unavailable) times of each machine is specific, depends on the machine, and is fixed during the planning period.

2.2. Indexes and Parameters

j, k: indexes of jobs (j, k = 1, 2, …, n)
i, h: indexes of machines (i, h = 1, 2, …, m)
l, l′: indexes of packets (l, l′ = 1, 2, …, b)
pij: processing time of job j on machine i
Trihj: transfer time of job j from machine i to machine h
Ti: length of the available time of machine i
ti: length of the unavailable time of machine i
dj: delivery date of job j

2.3. Decision Variables

stij: start time of preparing job j on machine i
Cij: completion time of job j on machine i
Cj: completion time of job j on the last machine according to the sequence
Tardj: tardiness of the completion time of job j with respect to the delivery date

2.4. Multiobjective Mixed-Integer Linear Programming Model

In this section, using the above assumptions and symbols, a Multiobjective Mixed-Integer Linear Programming (MOMILP) model is adapted to solve the open shop scheduling problem, taking into account the unavailability of machines, the preparation and separation times of jobs and machines, and the transfer times between machines, to simultaneously minimize the longest completion time and the sum of delays as a single function. The MOMILP model is as follows:

Equation (1) is the objective function of the problem: a convex linear combination of the two criteria, the longest completion time and the sum of the delays.
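From this description and the notation of Sections 2.2 and 2.3, equation (1) can be read as the following weighted sum (the weight symbols α and β are an assumption consistent with Section 3.3, where the same combination is used as the GA fitness):

min Z = α · Cmax + β · Σj Tardj,   with α + β = 1 and α, β > 0.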

Here, α and β are positive coefficients that represent the weight of each objective, with α + β = 1. The completion time of each operation (Cij) is obtained by equation (2). If job j is processed on machine i before machine h, then the start time of preparing job j on machine h (sthj) must be at least the completion time of that job on machine i plus the transfer time Trihj; this is enforced by equation (3). If job j is processed on machine i before job k and both jobs are located in the same packet l, then the start time of preparing job k on machine i (stik) must be at least the completion time of job j on that machine; this criterion is expressed in equation (4). Equation (5) ensures that no preparation takes place while machines are unavailable. Equations (6) and (7) together prevent the completion of operations while machines are unavailable. Equations (8) and (9) simultaneously specify the sequence of jobs on each machine and in the packets. Equation (10) determines the order of the machines for the operations of each job. Equation (11) ensures that each operation takes place in exactly one packet of the corresponding machine. Equation (12) bounds the sum of the processing times of the operations contained in a packet by the maximum time of that packet, which is the access time of the relevant machine (Ti). The completion time of each job is calculated through equation (13) and enters the objective function. Equations (14) and (15) calculate the longest completion time and the delay of each job, respectively. Finally, equations (16) and (17) define the nonnegative variables and the binary (0-1) variables.

2.5. An Example

Here, to clarify the concept of the problem, we provide an example and solve it using the above MOMILP model. This example includes five jobs (n = 5), two machines (m = 2), and five packets for each machine (b = 5). The rest of the parameters are as follows:

Considering the given weights of the objectives, the global optimal solution Z = 90.5 is obtained by solving the MOMILP model in Lingo 8 software; the corresponding Gantt chart is depicted in Figure 2. As can be seen, no operation is performed during the unavailable times of the machines, shown in gray; thus, the main goal of the model is met. In addition, the sum of the processing times of the operations contained in each packet does not exceed the length of that packet. The obtained sequence also respects the delivery dates: jobs with earlier delivery dates appear at the beginning of the sequence and jobs with later delivery dates at the end. Another noteworthy point is the observance of the transfer times; for example, note the gap between the completion of job J1 on machine M2 and the start of this job on the next machine (Tr211 = 2).

2.6. Model Analysis

In this section, the sensitivity of the MOMILP model to the number of jobs (n), machines (m), and packets (b) is investigated. To do so, all indexes related to variables and constraints are shown in Tables 1 and 2, respectively. The effect of increasing the number of jobs, machines, and packets on the number of variables and constraints can also be seen in Table 3.

3. The Genetic Algorithm-Based Method

As mentioned in the introduction, the OSSP is NP-hard, and only small-scale problems can be solved exactly in a reasonable amount of time [24]. Although the proposed mathematical model achieves an exact solution, its efficiency decreases as the dimensions of the problem increase. Hence, heuristic methods or approximate algorithms are more effective for solving the medium- and large-scale problems that commonly occur in the real world.

Genetic algorithms (GAs) are a class of general-purpose search strategies based on natural selection and genetics. GAs have been applied effectively to a wide array of optimization challenges [19, 25, 26]. In contrast to local search algorithms such as SA, which typically manipulate a single feasible solution and are extremely quick, GAs retain and modify a population of candidate solutions. Although GAs have been shown to be a versatile and successful search tool for addressing optimization problems, there are still scenarios in which the basic GA performs poorly; therefore, several hybridization methods have been reported in the literature. In this section, we propose to use the Multiobjective Parallel Genetic Algorithm to solve the OSSP.

3.1. Chromosome Representation

Chromosomes are encoded solutions, i.e., points in the solution space. In this paper, each operation is considered as a gene and each chromosome (solution) is represented as a permutation of genes (operations). For example, consider a problem with five jobs and two machines, and hence 10 operations. A chromosome (an encoded solution) is shown in Table 4, where Oij is the operation of job j on machine i. According to this chromosome, operation O21 will be processed before operation O11 (note that these two operations belong to the same job) and operation O14 will be processed before operation O11 (note that these two operations belong to the same machine).
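For illustration, a row-major numbering of the operations makes the gene ↔ operation mapping explicit; this layout is an assumption for the sketch, not prescribed by the paper:

using System;

static class GeneCoding
{
    // Hypothetical row-major layout: gene g in {0, ..., m*n - 1} maps to
    // operation Oij with machine i = g / n + 1 and job j = g % n + 1 (1-based).
    public static (int Machine, int Job) Decode(int g, int n) => (g / n + 1, g % n + 1);
}

For the example above (n = 5 jobs), gene 6 would decode to machine 2, job 2, i.e., operation O22.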

3.2. Initial Population

Diversity in the generations prevents premature convergence to local optima. Therefore, we randomly generate the initial population (initial generation) to maintain the scattering of solutions in the solution space as much as possible. That is, a random permutation of the set {1, 2, …, m·n} is considered as an individual, where m·n is the number of operations.
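A minimal C# sketch of this initialization, using a Fisher–Yates shuffle (0-based gene numbering is an implementation choice):

using System;

static class InitialPopulation
{
    // Generate 'popsize' random permutations of {0, ..., m*n - 1};
    // each permutation is one chromosome (an order of operations).
    public static int[][] Generate(int popsize, int m, int n, Random rng)
    {
        var population = new int[popsize][];
        for (int p = 0; p < popsize; p++)
        {
            var chrom = new int[m * n];
            for (int g = 0; g < chrom.Length; g++) chrom[g] = g;
            for (int g = chrom.Length - 1; g > 0; g--) // Fisher-Yates shuffle
            {
                int r = rng.Next(g + 1);
                (chrom[g], chrom[r]) = (chrom[r], chrom[g]);
            }
            population[p] = chrom;
        }
        return population;
    }
}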

3.3. Objective Function

To calculate the objective function of a chromosome (a point in the solution space), the coded solution must be decoded. For this purpose, we place the operations on the machines in ascending order from the first operation to the last, taking their unavailable times into account, and calculate the completion times. Then, from the completion times of all operations, the longest completion time Cmax = maxj {Cj} is obtained.

Also, to calculate the second term of the objective function, the sum of delays, the completion time Cj of each job is calculated first, and then the delay of that job is obtained as Tardj = max{0, Cj − dj}. Finally, the sum of delays is calculated as Σj Tardj. How to determine the values of the coefficients α and β is explained in the next sections. The objective function of each chromosome is therefore calculated as Z = α·Cmax + β·Σj Tardj.
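The decoding step can be sketched as a greedy list scheduler in C#. Note the loud simplification: machine unavailability packets and transfer times are omitted here for brevity, so this is an illustrative sketch of the evaluation idea, not the paper's full decoder:

using System;

static class Fitness
{
    // Decode a chromosome and compute Z = alpha*Cmax + beta*SumTardiness.
    // p[i, j] is the processing time of job j on machine i; d[j] is the
    // delivery date of job j. Unavailability and transfer times are omitted.
    public static double Evaluate(int[] chrom, double[,] p, double[] d,
                                  double alpha, double beta, int n)
    {
        int m = p.GetLength(0);
        var machineFree = new double[m]; // earliest free time of each machine
        var jobFree = new double[n];     // earliest time each job is available
        var completion = new double[n];  // completion time of each job

        foreach (int g in chrom)
        {
            int i = g / n, j = g % n;    // assumed row-major gene layout
            double start = Math.Max(machineFree[i], jobFree[j]);
            double finish = start + p[i, j];
            machineFree[i] = jobFree[j] = finish;
            completion[j] = Math.Max(completion[j], finish);
        }

        double cmax = 0, sumTard = 0;
        for (int j = 0; j < n; j++)
        {
            cmax = Math.Max(cmax, completion[j]);
            sumTard += Math.Max(0, completion[j] - d[j]); // Tard_j = max{0, C_j - d_j}
        }
        return alpha * cmax + beta * sumTard;
    }
}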

3.4. Selection

Selection is the process by which individuals of a generation are selected in pairs for mating. There are various selection approaches in the literature [27]. In this paper, we use the roulette wheel selection approach, in which individuals with higher fitness have a better chance of mating; this is, in fact, the basis of Darwin's theory. To apply roulette wheel selection, two quantities are required: the selection probability and the cumulative probability of each individual, obtained from equations (18) and (19), respectively:

According to the roulette wheel selection approach, a number r is randomly generated in the range [0, 1]; then the first individual i whose cumulative probability satisfies qi−1 < r ≤ qi is selected as one of the pair. The process is repeated for the second parent.
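A C# sketch of this selection, assuming the usual definitions pi = fiti / Σ fit and qi = Σk≤i pk; since the paper's equations (18) and (19) are not reproduced here, the inversion 1/(1 + Z) used to turn the minimization objective into a fitness is an assumption:

using System;

static class Selection
{
    // Roulette wheel selection: smaller objective values get larger selection
    // probabilities; cumulative probabilities are scanned against r in [0, 1].
    public static int Select(double[] objective, Random rng)
    {
        int n = objective.Length;
        var fit = new double[n];
        double total = 0;
        for (int i = 0; i < n; i++) { fit[i] = 1.0 / (1.0 + objective[i]); total += fit[i]; }

        double r = rng.NextDouble(), cumulative = 0;
        for (int i = 0; i < n; i++)
        {
            cumulative += fit[i] / total;  // q_i = q_{i-1} + P_i
            if (r <= cumulative) return i; // first i with q_{i-1} < r <= q_i
        }
        return n - 1; // guard against floating-point rounding
    }
}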

3.5. Crossover Operation

After selecting a pair of parents using one of the selection methods, a genetic crossover operator is applied with probability pc to combine the two parents and generate two children. There are several crossover techniques for combining parents: single-point crossover, two-point crossover, multipoint crossover, uniform crossover, and so on [28]. However, the main problem with the crossover operation is that the feasibility of the new solutions (newborns) may not be guaranteed. When new chromosomes are infeasible, corrective action is usually taken to turn them into feasible solutions, which lengthens the solving time; the crossover used here avoids this by preserving the feasibility of new chromosomes in each generation without any modification process [29]. The crossover operation proceeds in four steps:
Step 1: Select two operations randomly from parent 1.
Step 2: Transfer the genes of the two selected operations, along with all the genes between them, from parent 1 to child 1 while maintaining their locations.
Step 3: Take the remaining genes needed by child 1 from parent 2, from left to right, and place them in the same order in the empty positions of child 1 until a complete chromosome is created.
Step 4: To create the second child, exchange the roles of parents 1 and 2 and repeat steps 2 and 3 with the same operations selected in step 1.

For example, consider the two parents shown in Tables 5–8.

Step 1. Suppose the two operations selected from parent 1 are O14 and O22, in other words, genes 2 and 9, respectively.
Step 2. Transfer genes 2, 5, 1, and 9 from parent 1 to child 1 while maintaining their locations.
Step 3. Take the remaining genes needed by child 1 (from left to right) from parent 2 and place them (from left to right) in the empty positions of child 1.
Step 4. Exchange the two parents 1 and 2 and repeat steps 2 and 3 to create child 2.
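A compact C# reading of steps 1–4 (an illustrative sketch: genes are 0-based, and a randomly chosen segment stands in for the span between the two selected operations):

using System;
using System.Linq;

static class Crossover
{
    // Keep a randomly chosen segment of parent 1 in place, then fill the empty
    // positions of the child with the remaining genes in the order they appear
    // in parent 2; the result is always a valid permutation, so feasibility is
    // preserved by construction.
    public static int[] MakeChild(int[] parent1, int[] parent2, Random rng)
    {
        int len = parent1.Length;
        int a = rng.Next(len), b = rng.Next(len);
        if (a > b) (a, b) = (b, a);

        var child = Enumerable.Repeat(-1, len).ToArray();
        var used = new bool[len];
        for (int k = a; k <= b; k++) { child[k] = parent1[k]; used[parent1[k]] = true; }

        int pos = 0;
        foreach (int gene in parent2)
        {
            if (used[gene]) continue;
            while (child[pos] != -1) pos++; // next empty slot, left to right
            child[pos] = gene;
        }
        return child;
    }
}

Exchanging the roles of the two parents (with the same segment) produces the second child, as in step 4.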

3.6. Mutation

To prevent the generated populations from being driven toward local optima, a mutation operator is applied with probability pm to each produced child. This operator exchanges two randomly selected operations, i.e., their genes [29]. For example, suppose that operations O12 and O24 are selected from child 1. Then, genes 6 and 8 are exchanged, and the mutated child will be similar to the result given in Table 9.
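This swap mutation is a one-liner in C# (sketch; gene positions are chosen uniformly at random):

using System;

static class Mutation
{
    // Swap mutation: with probability pm, exchange two randomly chosen genes
    // (operations); the chromosome remains a valid permutation.
    public static void Apply(int[] chrom, double pm, Random rng)
    {
        if (rng.NextDouble() >= pm) return;
        int a = rng.Next(chrom.Length), b = rng.Next(chrom.Length);
        (chrom[a], chrom[b]) = (chrom[b], chrom[a]);
    }
}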

3.7. Termination Condition

The GA continues producing new generations until a termination criterion is met: a running time (in seconds) proportional to the problem size, controlled by a coefficient γ whose value is adjusted through experiments [30]. In the following sections, we explain how to set it. This termination condition is chosen because it gives the algorithm more time to solve larger problems.

3.8. Primary Genetic Algorithm

Before presenting the GA for solving multiobjective problems, we describe the structure and performance of the basic GA using the above components. The basic genetic algorithm works in 12 steps:
Step 1. Generate popsize individuals (chromosomes) randomly as the initial population.
Step 2. Calculate the fitness of each individual.
Step 3. Select a pair of parents from the individuals based on the selection strategy.
Step 4. Apply the crossover operator with probability pc to the parents and generate two new children.
Step 5. Apply the mutation operator with probability pm to each child.
Step 6. Calculate the fitness of the children.
Step 7. Repeat steps 3–6 until popsize new children are generated.
Step 8. Sort all existing individuals, both old (the initial population) and new (the generated children), according to their fitness.
Step 9. Transfer the popsize individuals with the best fitness to the new generation.
Step 10. Check the termination condition; if it is not met, go to step 3, otherwise go to step 11.
Step 11. Select the best (fittest) solution of the last generation as the answer of the algorithm.
Step 12. End.

The values of the GA parameters, including pc, pm, popsize, and γ, will be determined by designing statistical experiments in the following sections.

3.9. Multiobjective Parallel Genetic Algorithm

There is a great deal of research in the literature in which GA has been used to solve multiobjective problems, owing to its high ability to deal with multiobjective optimization. Since GAs work with a population of points (solutions), multiple Pareto optimal solutions can be found within a single GA population [27]. In the following, we explain how the multiobjective problem is handled.

In the mathematical model MOMILP, α and β are introduced as the weights of the objectives, with α + β = 1, and the objective function is written as the single function (1); our biobjective problem thus becomes a single-objective problem. However, determining the weights is a delicate and sensitive task. To tackle this, we can divide the population into several groups, with the individuals in each group searching under one of the possible combinations of objective weights, while all groups search in parallel for Pareto optimal solutions. In this way, a multiobjective problem is turned into a set of single-objective problems with different weights. Therefore, we consider a set of different weight combinations with a small difference between two consecutive weights; equation (20) shows the set of possible combinations.

Accordingly, we divide the population into subpopulations and assign the weights of the above set to them. Each subpopulation thus searches for Pareto optimal solutions along a separate path. The subpopulations act independently, as parallel GAs, but their other parameters and operators, including crossover and mutation, the size of the subpopulation, and the termination condition, are all the same.

In this way, the scattering of solutions is effectively strengthened. In addition, Pareto solutions are stored so that dominant solutions are not lost during the optimization process; this set is updated frequently to get closer to the Pareto optimal front. The process of MOPGA is as follows (a code sketch of the weight assignment is given after these steps):
Step 1. Generate the initial population randomly.
Step 2. Divide the initial population into subpopulations.
Step 3. Create the weights of the single objective function and assign them to the subpopulations. For instance, for particular choices of the weight step and the number of subpopulations, we obtain the corresponding set of weight pairs.
Step 4. Perform the basic genetic algorithm process from step 2 for all subpopulations and save the dominant solutions.
Step 5. Find the best solution among the dominant ones.
Step 6. End.
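A sketch of step 3 in C#; the uniform spacing of the weights over [0, 1] is an assumption, since the paper only requires a small difference between consecutive weights:

using System;

static class MopgaWeights
{
    // Assign each of the 'groups' subpopulations a weight pair (alpha, beta)
    // with beta = 1 - alpha, alpha stepping uniformly over [0, 1].
    public static (double alpha, double beta)[] Assign(int groups)
    {
        var weights = new (double alpha, double beta)[groups];
        for (int g = 0; g < groups; g++)
        {
            double alpha = groups == 1 ? 0.5 : (double)g / (groups - 1);
            weights[g] = (alpha, 1.0 - alpha);
        }
        return weights; // each subpopulation then runs the basic GA with its own weights
    }
}

For example, Assign(11) yields α = 0.0, 0.1, …, 1.0, so each subpopulation optimizes a different trade-off between Cmax and total tardiness.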

3.10. The Simulated Annealing (SA)-Based Method

Another common metaheuristic algorithm for solving NP-hard problems is Simulated Annealing (SA). This algorithm is inspired by physical systems in which the energy of atoms is reduced to a minimum level through a cooling process [30–32]. SA explores the whole surface of the objective function and attempts to optimize it while moving both uphill and downhill; consequently, it is largely independent of the initial values, which are often a crucial input in traditional algorithms. It can escape from local optima and locate the global optimum via these uphill and downhill moves. In addition, SA makes fewer rigorous assumptions about the objective function than traditional algorithms; because of these lenient assumptions, it copes more readily with objective functions that have ridges and plateaus. Finally, it can optimize objective functions that are not even defined for some parameter values [33].

In this section, we first provide an overview of the SA and then describe the MOPSA.

3.10.1. The Primary Simulated Annealing (SA) Algorithm

SA starts from a base solution (energy level) at an initial temperature (temp0) and searches the solution space in the neighborhood of that solution. The search at each temperature continues for a fixed number of iterations, set through experiments. In this algorithm, we use the GA encoding of the problem solutions (chromosomes), and a random permutation of the operations is taken as the initial base solution. To move from one solution to another, the neighborhood search method described in Algorithm 1 is used. After completing the neighborhood search process at a temperature, the current temperature is reduced by multiplying it by a reduction coefficient in the range (0, 1), and the neighborhood search process is repeated.

(1) Choose a gene at random.
(2) If the selected gene is located at the first position of the chromosome, exchange it with the next gene.
(3) Otherwise, if the selected gene is located at the last position of the chromosome, exchange it with the previous gene.
(4) Otherwise, evaluate exchanging it with each of the two lateral genes (previous and next).
(5) Exchange the selected gene with the lateral gene that yields the greater improvement.
(6) End of condition.
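Read as code, Algorithm 1 could look like the following C# sketch; 'evaluate' stands for a decoding function such as the one in Section 3.3, and chromosomes of length at least two are assumed:

using System;

static class NeighborhoodSearch
{
    // Pick a random gene; at the chromosome boundaries swap with the only
    // neighbor, otherwise try both neighbors and keep the swap whose
    // objective value (smaller is better) improves the most.
    public static void Move(int[] chrom, Func<int[], double> evaluate, Random rng)
    {
        int i = rng.Next(chrom.Length);
        if (i == 0) { Swap(chrom, 0, 1); return; }
        if (i == chrom.Length - 1) { Swap(chrom, i, i - 1); return; }

        Swap(chrom, i, i - 1);           // try the previous neighbor
        double withPrev = evaluate(chrom);
        Swap(chrom, i, i - 1);           // undo
        Swap(chrom, i, i + 1);           // try the next neighbor
        double withNext = evaluate(chrom);
        if (withPrev < withNext)         // previous swap was better
        {
            Swap(chrom, i, i + 1);       // undo the next swap
            Swap(chrom, i, i - 1);       // apply the previous swap
        }
    }

    static void Swap(int[] a, int x, int y) => (a[x], a[y]) = (a[y], a[x]);
}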

The algorithm continues until the final temperature (tempf) is reached. During the iterations, a new solution is accepted if it improves the value of the objective function; otherwise, the new solution is accepted with probability exp(−ΔE/temp), in which temp is the current temperature and ΔE is the increase in the objective value. SA is summarized in Algorithm 2. We set the final temperature to 1; the values of the other parameters of the algorithm, set by the statistical experiment design method, are discussed in Section 4.

(1) Generate a random permutation of operations as the current solution.
(2) Calculate the value of the objective function.
(3) Set the parameters, including the current temperature temp = temp0.
(4) Check the termination condition (temp ≤ tempf); go to step 5 if it is not met and to step 15 if it is met.
(5) Set the iteration counter k = 1.
(6) If k does not exceed the number of iterations per temperature, go to step 7; otherwise, go to step 14.
(7) Call the neighborhood search process.
(8) If the value of the objective function is improved, go to step 9; otherwise go to step 10.
(9) Accept the new solution, update the value of the objective function, and go to step 13.
(10) Select a random number r in (0, 1).
(11) If r < exp(−ΔE/temp), go to step 12; otherwise, go to step 13.
(12) Accept the new solution and update the value of the objective function.
(13) Set k = k + 1 and go to step 6.
(14) Reduce the temperature by the cooling coefficient and go to step 4.
(15) End.
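The acceptance rule in steps 8–12 is the standard Metropolis criterion; a minimal C# sketch:

using System;

static class Annealing
{
    // Improvements are always accepted; a worse neighbor is accepted with
    // probability exp(-delta / temp), where delta is the increase in the
    // objective value and temp is the current temperature.
    public static bool Accept(double current, double candidate, double temp, Random rng)
    {
        if (candidate <= current) return true;             // improvement
        double delta = candidate - current;
        return rng.NextDouble() < Math.Exp(-delta / temp); // uphill move
    }
}

After each batch of iterations, the temperature is cooled multiplicatively (temp *= c with 0 < c < 1) until it drops to the final temperature of 1.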
3.10.2. Multiobjective Parallel Simulated Annealing (MOPSA)

The MOPSA process is very similar to that of MOPGA, and both use the same approach to deal with multiobjective optimization. Here, the objective function is considered as the single function (1), and several SA instances examine the different weights of the objectives in parallel and independently. The difference between starting MOPSA and MOPGA is that, in MOPGA, the initial population is divided into several subpopulations, whereas SA starts with only one chromosome, so such a division is not possible. The process of MOPSA is as follows:
Step 1. Produce the chromosomes randomly.
Step 2. Produce the weights of the single objective function as in equation (22) and assign them to the chromosomes.
Step 3. Perform the primary SA, as shown in Algorithm 2, for all chromosomes from step 2 and store the dominant solutions.
Step 4. Find the best solution among the dominant ones.
Step 5. End.

4. Experimental Design and Computational Evaluation

In this section, we seek to calibrate the adapted algorithms through experiments and statistical analysis. For this purpose, different levels of the relevant factors are considered and their different combinations are investigated. The Taguchi approach is therefore used as the experimental design method in this paper.

In addition, after adjusting the parameters of the proposed algorithms using the Taguchi method, two groups of problems, one small-scale and one large-scale, are randomly generated and the results of the solution methods are compared.

We also compare the results of our experiments on a large-scale benchmark with other methods that solved the OSSP.

4.1. Taguchi Experiment Design

The Taguchi method is a statistical analysis method in which, using orthogonal arrays, the number of experiments is significantly reduced compared to full factorial designs [30]. For example, to design an experiment with four factors, each with three levels, a full factorial design performs 3^4 = 81 experiments, whereas Taguchi reduces them to nine.

Taguchi believed that the key to improving quality is to reduce deviations, not just to keep the output within a specified range. Therefore, he tried to reduce deviations to zero and paid less attention to merely staying near a fixed target. This is the main reason why the Taguchi method is described as a robust method [34]. In the Taguchi method, factors are divided into two categories: controllable factors, or signals, and uncontrollable factors, or noise. To reduce deviations, a ratio called signal-to-noise (S/N) is defined, which indicates the sensitivity of the response variable to uncontrollable factors and expresses the deviation around a certain signal value. High S/N values indicate low deviation, meaning that the controllable factors dominate the uncontrollable ones. Using the optimal factor levels obtained by this method, responses close to the mean value with the least deviation are expected [30].

To convert the objective function into the S/N ratio, three types of criteria exist in the Taguchi method: “the smaller the better,” “the larger the better,” and “the closer to the nominal value the better” [30]. Since the measure of the problem discussed in this paper is expressed as a minimization, the “smaller is better” mode is appropriate. The corresponding S/N formula is S/N = −10 log10((1/n) Σi yi²), where yi is the response of the ith replication and n is the number of replications.

Also, the well-known relative percentage deviation (RPD) performance criterion is used: RPD = (Algsol − Minsol)/Minsol × 100, where Algsol is the value of the objective function of an instance obtained by one algorithm of the experiment design and Minsol is the minimum value of the objective function of that instance obtained over all algorithms of the experiments. Thus, lower RPD values are clearly better.
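A small C# sketch of both measures, assuming the standard "smaller is better" S/N and RPD formulas given above:

using System;
using System.Linq;

static class Metrics
{
    // "Smaller is better" signal-to-noise ratio over the replications of one
    // experiment: S/N = -10 * log10( (1/n) * sum(y_i^2) ).
    public static double SignalToNoise(double[] y) =>
        -10.0 * Math.Log10(y.Sum(v => v * v) / y.Length);

    // Relative percentage deviation of one algorithm's result from the best
    // result found by any algorithm on the same instance.
    public static double Rpd(double algSol, double minSol) =>
        (algSol - minSol) / minSol * 100.0;
}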

4.1.1. Data Generation

To perform the Taguchi experiments, a problem of given dimensions is considered, and five instances of it are generated randomly. The other, literature-inspired [9, 35–37] parameters of the problem are as follows:
(i) Processing times have a continuous uniform distribution on (1, 99).
(ii) Transfer times between machines have a continuous uniform distribution on (1, 20).
(iii) Consecutive unavailable durations of each machine have a uniform distribution on (1, 50).
(iv) Consecutive machine access times (the packet size of each machine) are calculated as follows, where a is selected randomly:

Delivery time is calculated as follows:

in which the first term is the total time that a job spends on all machines, the second is the average transfer time between all machines, and the third is the average unavailable time of all machines.

4.1.2. Adjusting the Parameters of the MOPGA

The factors to be set in the MOPGA algorithm are the crossover probability (pc), mutation probability (pm), population size (popsize), and termination criterion coefficient (γ). According to Table 10, three levels are considered for each factor [29, 30], so the orthogonal array design L9 is the most appropriate choice, since it has exactly the same number of factors and levels as the experiment and meets all objectives. The combinations of factor levels in each experiment of the L9 design for the MOPGA algorithm are shown in Table 11. MOPGA and all experiment designs are programmed in C#.Net and run on a computer with a 3 GHz Intel® Core™ CPU and 4 GB of RAM. According to the results, the optimal levels of the factors are determined.

To find the effect of each factor on the response variable, we perform an analysis of variance (ANOVA). Note that in the designed experiment, the error degrees of freedom are 0; to deal with this, the factors with the lowest mean squares are treated as error. In the above experiment, the two factors with mean squares of 0.92 and 0.70 have the smallest mean squares and hence the least effect on the response variable, so these two factors are considered as error. The analysis of variance of the S/N ratio is given in Table 12; the two most effective factors account for 86.92% and 7.81% of the effect, respectively.

4.1.3. Adjusting the Parameters of the MOPSA

According to the description of the SA, three factors should be adjusted: the initial temperature, the temperature reduction coefficient, and the coefficient of the number of iterations per temperature. Considering two levels for each factor, the orthogonal array L4 is suitable; its factors and levels are given in Table 13, and the combinations of factor levels in each experiment of the L4 design of the MOPSA are shown in Table 14. MOPSA and all experiment designs are programmed in C#.Net and run on a computer with a 3 GHz Intel® Core™ CPU and 4 GB of RAM. The optimal levels of the factors are determined accordingly.

An analysis of variance is performed to find the effect of each factor. Again, to prevent the error degrees of freedom from being zero, we treat the factor with the lowest mean square as error; the results indicate that the factor with mean square 0.03 has the lowest effect on the response variable. The analysis of variance of S/N for the factors of the MOPSA algorithm is shown in Table 15, in which the most effective factor accounts for 66.62% of the effect and the second most effective factor for 21.89%.

4.2. Computational Evaluation

To evaluate the performance of the MOMILP model and the proposed algorithms, two sets of problems, small-scale (n = 4, 5, 6; m = 2, 3) and large-scale (n = 10, 20; m = 5, 10), were generated [26]. Considering one instance for each small-scale problem and five instances for each large-scale problem, we obtain a total of 26 problems. The remaining parameters are exactly as in the Data Generation section. With fixed objective weights, the small-scale problems are solved using the primary GA and SA, and the results are compared with the optimal solutions obtained by the MOMILP model with the same weights. The MOMILP model is coded in Lingo 8, and C#.Net is used to code the primary GA and SA algorithms.

The performance of the MOMILP model and the primary SA and GA is shown in Table 16. As seen in the table, although the MOMILP model is suitable for small-scale problems, its computational time grows rapidly as the problem size increases, and the GA method becomes more efficient, with a deviation of 4.85% from the optimum. Likewise, although the SA algorithm shows a relatively large deviation from the optimal solution, its time saving compared to GA is clear.

To solve the large-scale problems, the MOPGA and MOPSA algorithms are used, and each problem is solved five times with each algorithm. To compare the algorithms, the results are converted to the relative percentage deviation (RPD) criterion and shown in Table 17. MOPGA, with an average RPD of 0.05%, is more efficient and stable than MOPSA in solving large-scale problems.

In addition, MOPGA and MOPSA are programmed in the C#.Net programming language, and all algorithms, including the MOMILP model, GA, SA, MOPGA, and MOPSA, are run on a computer with a 3 GHz Intel® Core™ CPU and 4 GB of RAM.

To show the efficiency of the proposed algorithm (MOPGA), we tested it on the benchmarks of Taillard [38] and compared it with other metaheuristic algorithms, including SA [39], GA [40], Hybrid GA [40], GA [41], Hybrid GA [42], the Ant Colony System (ACS) [43], Cuckoo Search (CS) [43], the Cat Swarm Optimization (CSO) algorithm [44], the Extended Genetic Algorithm for Open Shop (EGA_OS) [14], and the Bat Algorithm for Open Shop (BA_OS) [22]. Table 18 shows the simulation results of the proposed algorithm and the comparable algorithms for the different benchmarks. As shown in this table, the proposed algorithms achieved the optimal solution in most benchmarks and solved the problem well.

As can be seen in Table 19, the MOPSA algorithm has a shorter execution time than the other two algorithms and responds in less time. The MOPGA algorithm achieved the optimal solution in less time than the CSO algorithm.

The accuracy of the proposed algorithm in scheduling the jobs well on the machines is shown in Figure 3, which depicts the Gantt chart for one of the benchmarks; the small amount of idle time left on the machines demonstrates the accuracy of the proposed algorithm.

Figure 4 shows the fitness diagram of the MOPGA for one of the benchmarks. In this diagram, the best fitness approaches 900 as the number of iterations (walks) passes 900.

Figure 5 shows the dispersion diagram for one of the benchmarks. Population dispersion is always maintained in the MOPGA; as can be seen, premature convergence of the chromosomes is prevented.

5. Conclusion

In this paper, a multiobjective mixed-integer linear programming model is adapted to solve a biobjective OSSP that considers unavailable times for machines. The “maximum completion time” and “total delays” criteria are optimized simultaneously. The efficiency of the MOMILP model is investigated on a number of small-scale problems and compared with the efficiency and execution time of GA and SA. Although the results indicate the high ability of the MOMILP model to solve small-scale problems, the analyses performed on the model highlight the need for other methods, such as heuristic and metaheuristic methods, for large-scale problems. Therefore, the MOPGA and MOPSA algorithms were introduced for multiobjective problems with unknown objective weights, and the reliability of the multiobjective algorithms was evaluated by designing Taguchi experiments. The results show that the MOPGA algorithm is more efficient than the MOPSA algorithm on large-scale problems. Finally, variations of MOPGA and MOPSA were compared in detail with various heuristic algorithms selected from the literature according to the average RPD, execution time, and best solution criteria. Our experiments demonstrated the success of the proposed solutions by running MOPGA and MOPSA not only on small- and medium-scale open shop scheduling instances but also on large-scale ones.

In the following, some directions for developing and expanding the present study are mentioned:
(i) Developing other solution methods, including new heuristic and metaheuristic methods, for large-scale problems and comparing them with the algorithms proposed in this paper could make the model more practical.
(ii) Considering more diverse optimization criteria, according to the needs of today's industry, can also provide a basis for future studies.
(iii) In this paper, the factors of the algorithms are optimized in a discrete space; another direction is to optimize the factors in a continuous space.
(iv) Sequence-dependent preparation and separation times can be considered and a corresponding mathematical model developed.
(v) The number of vehicles can be added as a constraint of the problem.
(vi) Considering variable speeds for the machines seems practical; that is, the processing speed of a machine decreases after performing a certain number of jobs or after a certain period of time and increases again after maintenance.

Data Availability

We used data from ref [26] and discussed the same in Section 4.2 of the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.