A Novel Multiobjective Fireworks Algorithm and Its Applications to Imbalanced Distance Minimization Problems

IEEE/CAA Journal of Automatica Sinica, 2022, Issue 8

Shoufei Han, Kun Zhu, MengChu Zhou, Xiaojing Liu, Haoyue Liu, Yusuf Al-Turki, Senior, and Abdullah Abusorrah, Senior

Abstract—Recently, multimodal multiobjective optimization problems (MMOPs) have received increasing attention. Their goal is to find a Pareto front and as many equivalent Pareto optimal solutions as possible. Although some evolutionary algorithms for them have been proposed, they mainly focus on the convergence rate in the decision space while ignoring solution diversity. In this paper, we propose a new multiobjective fireworks algorithm for them, which is able to balance exploitation and exploration in the decision space. We first extend the latest single-objective fireworks algorithm to handle MMOPs. Then we make improvements by incorporating an adaptive strategy and special archive guidance into it, where special archives are established for each firework, and two strategies (i.e., explosion and random strategies) are adaptively selected to update the positions of sparks generated by fireworks with the guidance of special archives. Finally, we compare the proposed algorithm with eight state-of-the-art multimodal multiobjective algorithms on all 22 MMOPs from CEC2019 and several imbalanced distance minimization problems. Experimental results show that the proposed algorithm is superior to the compared algorithms in solving them. Also, its runtime is less than its peers'.

NOMENCLATURE

I. INTRODUCTION

In order to solve MOPs, many multiobjective evolutionary algorithms (MOEAs) have been proposed, e.g., the fast and elitist multiobjective genetic algorithm (NSGA-II) [4], indicator-based evolutionary algorithm [5], strength Pareto evolutionary algorithm 2 [6] and multiobjective evolutionary algorithm based on decomposition [1], [7]. Different from traditional deterministic solving methods, these MOEAs can find a set of Pareto-optimal solutions in a single run, and thus have been applied to many practical problems [8]–[10].

The MOEAs mentioned above aim to find solutions that are not only close to the true PF, but also well-distributed in the objective space. Yet the decision space is rarely considered. Accordingly, these methods are not the best options for solving multimodal multiobjective optimization problems (MMOPs), which are an emerging type of multiobjective optimization problems [11]. In MMOPs, we attempt to obtain as many equivalent Pareto optimal solutions as possible. For multimodal problems, niching has been considered an effective method, and some niche technologies have been proposed [12]–[15]. These niche technologies can effectively solve single-objective multimodal optimization problems. However, they are rarely used to solve MMOPs [16]. In order to solve them, scholars have developed some multimodal multiobjective evolutionary algorithms (MMOEAs) [17]–[20]. They mainly use a convergence-first selection criterion, which tends to lose some solution diversity in the decision space, thereby leaving much room for performance improvement.

Based on the above analysis, we face the following challenges when solving MMOPs: 1) The niche technologies are not the best choice for solving MMOPs since they can cause high time consumption; 2) The existing MOEAs [4]–[7] are not suitable for solving MMOPs since they pay more attention to the objective space, and the equivalent Pareto optimal solutions are abandoned when an environment selection strategy is used; and 3) The existing MMOEAs [17]–[20] can locate the easy-to-find equivalent Pareto optimal solutions, but not the difficult-to-find ones, due to their convergence-first selection criterion. Therefore, in this work, we not only pursue fast convergence and high diversity in the decision space without using any niche technologies, but also aim to obtain an excellent PF approximation in the objective space.

We note that the fireworks algorithm (FWA) [21] and its variants [22]–[25] have shown strong power for solving single-objective optimization problems. In FWA, a firework and its offspring can be seen as a niche, and thus it can be used to solve MMOPs without any niche technologies. Furthermore, FWA has shown its ability to handle multimodal issues with the focus on single-objective problems [26], but not MMOPs.

To fill this gap, in this paper we first design a new multiobjective fireworks algorithm (MFWA) to enhance the ability of FWA for MMOPs. Considering the fact that in MMOPs both convergence and diversity should be considered in the decision space, we propose an improved MFWA (IMFWA) using an adaptive strategy and special archive guidance. In IMFWA, we first establish a special archive for each firework, which is composed of some non-dominated solutions closest to this firework. Then the positions of sparks generated by fireworks are updated by an adaptive strategy in which two methods, namely the explosion and random methods, are adaptively selected based on some probability. The probability changes dynamically and can be approximately calculated by using the numbers of non-dominated solutions. Finally, a non-dominated solution is randomly taken from a special archive and used to guide firework explosion.

As mentioned above, many researchers have developed MMOEAs to solve MMOPs, such as [17]–[20]. The advantages of our proposed method over them include:

1) It uses an adaptive strategy and a special archive guidance method to balance exploitation and exploration in the decision space, i.e., it not only converges to the easy-to-find equivalent Pareto optimal solutions quickly with the help of special archive guidance, but also locates the difficult-to-find ones by applying an adaptive strategy, in which a random option is designed to preserve high solution diversity in the decision space. However, existing work [17]–[20] uses a convergence-first selection criterion, which makes it difficult to balance exploitation and exploration in the decision space.

2) It does not require the use of any niche technology since a firework and its sparks can form a niche, and thus its runtime is greatly reduced. The existing work applies a ring or self-organizing network structure to ensure that multiple equivalent solutions are located, but such structures need a large space and significant computational time.

In summary, we intend to make the following contributions:

1) We extend the latest FWA from a single-objective version to a multiobjective one. It is determined that the calculation method of the explosion amplitude and the selection operator in single-objective FWA are not suitable for MFWA. To this end, we re-design them so that they can be used in MFWA. Our newly designed selection operator can reduce runtime greatly compared with other algorithms [16]–[19], [27]–[29].

2) To enhance the exploitation ability of MFWA in the decision space, we establish a special archive for each firework. Specifically, a certain number of non-dominated solutions closest to the firework are first stored to form the special archive, and then a special archive guidance method is designed, in which a solution randomly taken from the special archive is applied to guide the firework explosion.

3) In order to ensure fast convergence and high solution diversity in the decision space, we propose an adaptive strategy. Specifically, an explosion strategy and a random strategy are designed, and they are adaptively selected via a dynamic probability to generate the sparks. The explosion strategy generates the sparks based on the explosion operator in FWA with special archive guidance, which can speed up convergence. The random strategy generates the sparks randomly in the search space, and it can help the algorithm jump out of a local optimum and locate more equivalent Pareto optimal solutions.

4) By combining the special archive guidance and adaptive strategy, we propose an IMFWA. In order to show the superiority of IMFWA, we compare: i) the performance of IMFWA and MFWA with and without the adaptive strategy and special archive; ii) the performance of IMFWA and five state-of-the-art MMOEAs on all 22 test functions from CEC2019; iii) the performance of IMFWA and five optimizers participating in the CEC2019 competition; iv) the runtime of IMFWA and other algorithms; and v) the performance of IMFWA and its peers on four scalable imbalanced distance minimization problems.

The remainder of this paper is organized as follows. Some related work is introduced in Section II. Our proposed algorithm is presented in Section III. Experimental results and their analysis are given in Section IV. The discussion is provided in Section V. Section VI concludes this paper.

II. RELATED WORK

A. Multimodal Multiobjective Optimization Problems (MMOPs)

In order to better understand MMOPs, the following definitions are given:

1) If different Pareto optimal solutions in the decision space have the same image in the objective space, they are called equivalent Pareto optimal solutions.

2) A local Pareto optimal solution is one that is not dominated by its neighborhood solutions.

The definition of MMOPs can be found in [30]. In them, a point (at least one) on the PF in the objective space has multiple equivalent Pareto optimal solutions in the decision space, or they have at least one local Pareto optimal solution [31], as shown in Fig. 1. In this work, we focus on the former. Thus, we aim to locate more equivalent Pareto optimal solutions in the decision space and a good PF approximation in the objective space. When solving them, we should give more attention to the decision space than in the case of MOPs. If high solution diversity is achieved in the decision space, we have more chances to obtain multiple equivalent Pareto optimal solutions. Thus, preserving high solution diversity in the decision space is of key importance for solving them. This work designs an adaptive strategy to do so.

Fig. 1. Illustration of MMOPs.

B. Multimodal Multiobjective Evolutionary Algorithms (MMOEAs)

Compared with multiobjective evolutionary algorithms (MOEAs), MMOEAs need to pay more attention to the decision space. Before MMOPs were proposed, some scholars had tried to improve the diversity in the decision space. For example, an omni-optimizer algorithm [32] is proposed, wherein the crowding distance is applied to the decision space. The Lebesgue contribution and neighbor count methods are adopted into that space in [33]. A probabilistic model-based multiobjective evolutionary algorithm, called MMEA, is proposed to approximate PS and PF [34], where the principal component analysis technique is used to estimate the dimension of the PS manifold in each sub-population, and then a probability model is built to model the distribution of the Pareto optimal solutions in the decision space.

Recently, an algorithm called decision space-based niching NSGA-II (DN-NSGAII) has been proposed to handle MMOPs and locate more equivalent Pareto optimal solutions [11]. Its solutions are ranked according to crowding distance. However, it fails to consider crowding distance in the objective space when sorting. Yue et al. [17] propose an improved multiobjective particle swarm optimizer to solve MMOPs, where a ring topology is adopted to form stable niches and a special crowding distance is used to sort the solutions in the same front.

Liang et al. [19] propose a self-organizing multiobjective particle swarm optimizer for MMOPs, denoted as SMPSO_MM. In it, self-organizing networks are adopted into the multiobjective PSO to simulate the distribution of the population, which is similar to the niche technique. They also design an elite learning strategy for the best individual in the population.

Based on [17], an improved particle swarm optimizer is proposed to solve MMOPs [20]. It seamlessly integrates a clustering method about decision variables, a leader updating mechanism, and a local-best PSO with a ring topology.

Liu et al. [35] propose a novel evolutionary algorithm to handle MMOPs with imbalanced equivalent Pareto optimal subsets. In it, a convergence-penalized density method is designed. They also propose a novel multimodal multiobjective evolutionary algorithm using two-archive and recombination strategies for solving MMOPs [36].

Liang et al. [28] propose a clustering-based differential evolution algorithm (MMODE_CSCD) for MMOPs. In it, they design a special crowding distance, namely a clustering-based special crowding distance, to measure the crowding degree. Then, an elite mechanism is proposed to improve the performance of the algorithm.

Qu et al. [29] propose a self-organized speciation particle swarm optimizer (SSMOPSO) to solve MMOPs. In it, they first use a speciation strategy to construct the niches, and then a self-organized mechanism is designed to enhance the performance of the speciation.

Lin et al. [37] design a dual clustering-based multimodal multiobjective evolutionary algorithm (MMOEAC) to solve MMOPs, in which they first use a neighborhood-based clustering method to classify solutions into multiple clusters in order to enhance diversity in the decision space, and the non-dominated solutions in each cluster are selected. Then, a second clustering method is applied to the population in the objective space, and the resulting clusters are used to decide which solutions to remove in the decision space.

C. Related Work and Framework of FWA

1) FWA: Since FWA was proposed in 2010 [21], scholars have proposed a number of its variants. In this section, the recent studies of FWA are briefly reviewed.

Zheng et al. propose an enhanced FWA [38] by analyzing and improving each operator of the basic FWA. On this basis, an adaptive FWA [39] and a dynamic search FWA [40] are proposed to improve the calculation method of the explosion amplitude. In them, the aim is to control the explosion amplitude adaptively without additional parameters. Li et al. [41] propose a guided FWA to replace the Gaussian mutation operator in FWA. Therein, each firework generates a guided spark as a mutation firework by using the information of its sparks. In order to solve multimodal function optimization, Li and Tan [26] propose a loser-out tournament-based FWA by replacing the losers with random fireworks.

So far, FWA has been applied to many real-world problems, including clustering [42], regional seismic waveform inversion [43], constrained portfolio optimization [44], web information retrieval [45], and privacy preservation [46].

For multiobjective optimization problems, two multiobjective FWAs have been proposed [47], [48]. In [47], solutions are compared via the S-metric, and the fireworks with the largest S-metrics are selected to enter the next generation. In [48], a multiobjective optimization problem is transformed into a single-objective one, and non-dominated solutions are chosen randomly as the fireworks. In addition, the above two are extended from the basic FWA.

2) FWA Framework: FWA [21] is a population-based algorithm inspired by the phenomenon of firework explosions. It has been used in many practical applications [42], [43] and has strong global optimization capability [38], [40], [41]. It performs heuristic search mainly through an iterative explosion strategy and a selection operation. In its explosion strategy, a large number of sparks can be generated around the fireworks within certain explosion amplitudes. After that, some fireworks are selected from the current population, including fireworks and sparks, via a selection operation to enter the next generation. These two operations are briefed as follows:
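A minimal Python sketch of these two operations in their generic single-objective form [21] is given below, assuming uniform sampling within the explosion amplitude and an elitist, fitness-sorted selection; the function names and the sampling scheme are illustrative assumptions, not the exact operators used later in this paper.

```python
import numpy as np

def explode(firework, amplitude, n_sparks, lower, upper):
    # Generate sparks uniformly within the explosion amplitude around a firework.
    dim = firework.size
    offsets = np.random.uniform(-amplitude, amplitude, size=(n_sparks, dim))
    return np.clip(firework + offsets, lower, upper)

def select(candidates, fitness, n_fireworks):
    # Keep the n_fireworks best candidates (single-objective minimization).
    order = np.argsort(fitness)
    return candidates[order[:n_fireworks]]
```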

Next, why is FWA chosen as the base algorithm for solving MMOPs?

1) Based on [26] and [41], FWA has shown powerful function optimization ability. Also, it has been applied to many engineering optimization problems successfully.

2) Unlike those intelligent optimization algorithms [49]–[51] whose individuals generate only one offspring, each firework of FWA can generate multiple sparks. Thus, FWA’s search is diffuse, which can fully search the decision space.

3) In FWA, each firework and its sparks naturally form a niche, and it does not introduce any new parameters to divide different individuals into different niches, in contrast to the existing niche methods. Such a mechanism of FWA is helpful in solving MMOPs.

4) FWA has shown strong ability in solving single-objective multimodal problems [26], [52], which also provides a basis to solve MMOPs.

Our algorithm differs from the above-mentioned multiobjective FWAs in that: a) it is extended from the latest FWA [26]; b) its solutions are compared via the dominance relationship; and c) it is not based on decomposition.

III. IMPROVED MFWA

In this section, we first give the motivation, then present the general framework of IMFWA and introduce its two important improvements in detail. Finally, the effectiveness of IMFWA is analyzed.

A. Motivation

For solving MMOPs, there are two challenges, i.e., diversity and convergence in the decision space. That is, we need to balance diversity and convergence in the decision space. However, current convergence-first MMOEAs can quickly converge to the easy-to-find equivalent Pareto optimal solutions, which means that they treat convergence in the decision space as more important than diversity, resulting in an outcome where the obtained equivalent Pareto optimal solutions are crowded together on each PS and the distribution of solutions on the located equivalent PSs is poor. To address this issue, this work proposes an improved multiobjective FWA with an adaptive strategy and special archive guidance, called IMFWA for short. Specifically, we first extend the latest version of FWA to make it applicable to MOPs. Then we improve it by incorporating an adaptive strategy and special archive guidance. In the adaptive strategy, two options, i.e., the explosion and random ones, are adaptively selected to generate the sparks, where the random option can help the algorithm jump out of regions in which equivalent Pareto optimal solutions are located easily, thereby helping locate more difficult-to-find equivalent Pareto optimal solutions. In the special archive guidance, a Pareto optimal solution randomly taken from the special archive is used to guide firework explosions, which can help the algorithm converge quickly. Thus, the adaptive strategy and special archive guidance together can balance diversity and convergence in the decision space effectively.

B. IMFWA Framework

The general framework of IMFWA is shown in Algorithm 1. It is divided into two parts: initialization and iteration.

Algorithm 1 General framework of IMFWA
1: Set the number of fireworks: NF
2: Set the number of sparks generated by each firework: NS
3: Set the first-generation explosion amplitude: A(1)
4: Initialize population P
5: Set archive A = P
6: while termination condition is not satisfied do
7:  for i = 1 to NF do
8:   PS = Special_archive(Xi, A, NF)
9:   PA = Adaptive_strategy(Xi, PS, NS, Ai(t))
10:   P̂A = NSS(PA)
11:   X̂i = the first firework in P̂A
12:   if X̂i dominates Xi then
13:    Xi(t) = X̂i(t)
14:    Ai = Ai × CA
15:   else
16:    Ai = Ai × CR
17:   end if
18:   AB{i} = fireworks in the first front in P̂A
19:  end for
20:  AB = AB{1} ∪ AB{2} ∪ ··· ∪ AB{NF}
21:  Â = NSS(A ∪ AB)
22:  A = fireworks in top NF in Â
23: end while
24: return the non-dominated fireworks in A
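For lines 12-17 of Algorithm 1, a minimal sketch of the per-firework update is shown below, assuming the dynamic-amplitude rule of the base FWA [26] (amplify by CA when the best spark dominates the firework, shrink by CR otherwise); the objective vectors f_i and f_spark and the helper dominates are introduced here only for illustration.

```python
import numpy as np

def dominates(f_a, f_b):
    # Pareto dominance for minimization: f_a dominates f_b.
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def update_firework(x_i, f_i, best_spark, f_spark, amp_i, CA=1.2, CR=0.9):
    # Replace the firework and amplify its amplitude if the best spark
    # dominates it; otherwise keep the firework and shrink the amplitude.
    if dominates(f_spark, f_i):
        return best_spark, f_spark, amp_i * CA
    return x_i, f_i, amp_i * CR
```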

C. Establishing Special Archive

Algorithm 2 Special_archive(Xi, A, NF)
Require: NA (special archive size)
1: DD(j) = √((XDi − AD(j))²), j = 1, 2, ..., NF
2: Normalize DD
3: DO(j) = √((XOi − AO(j))²), j = 1, 2, ..., NF
4: Normalize DO
5: for j = 1 to NF do
6:  D(j) = min(DD(j), DO(j))
7: end for
8: D̂ = sort D in ascending order
9: Get the corresponding indexes in A from D̂
10: PS = fireworks with the top NA indexes in A
11: return PS

An example is given to illustrate this process. As shown in Fig. 2(a), in the decision space, there are five fireworks in A, and the distances from X to them are 4, 5, 4, 3 and 2, respectively, i.e., DD = (4, 5, 4, 3, 2). DD is then normalized to (0.67+ε, 1+ε, 0.67+ε, 0.33+ε, ε).

Fig. 2. Illustration of the proposed distance calculating method in the decision and objective space.

From Fig. 2(b), we can observe that in the objective space, the distances from X to the five fireworks are 50, 20, 40, 60 and 30, respectively, i.e., DO = (50, 20, 40, 60, 30). Then, DO is normalized to (0.75+ε, ε, 0.5+ε, 1+ε, 0.25+ε).

It is easy to find that the combined distance is D = min(DD, DO) = (0.67+ε, ε, 0.5+ε, 0.33+ε, ε), so the fireworks with the smallest combined distances are chosen to form the special archive.
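The combined distance of Algorithm 2 on this example can be reproduced with the short Python sketch below; the normalization to [ε, 1+ε], the element-wise minimum, and the archive size NA = 3 used here are illustrative choices consistent with the steps above.

```python
import numpy as np

def combined_distance(dd, do, eps=1e-6):
    # Normalize decision- and objective-space distances to [eps, 1+eps]
    # and take the element-wise minimum (Algorithm 2, lines 1-7).
    dd_n = (dd - dd.min()) / (dd.max() - dd.min()) + eps
    do_n = (do - do.min()) / (do.max() - do.min()) + eps
    return np.minimum(dd_n, do_n)

dd = np.array([4.0, 5.0, 4.0, 3.0, 2.0])       # distances from X in the decision space
do = np.array([50.0, 20.0, 40.0, 60.0, 30.0])  # distances from X in the objective space
d = combined_distance(dd, do)                  # (0.67+eps, eps, 0.5+eps, 0.33+eps, eps)
special_archive = np.argsort(d)[:3]            # indexes of the NA = 3 closest fireworks
```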

The effectiveness of considering both decision and objective space is to be shown later in Section V-C.

D. Establishing Adaptive Strategy

Algorithm 3 presents the process of generating sparks by an adaptive strategy, in which we consider two options: explosion and random. Firstly, the probability of an option being selected needs to be set, e.g., p = {0.5, 0.5}, where the first component represents the probability of the explosion option being selected, and the second is the probability of the random one being selected. Then, a single flag F is needed: F = 1 for the explosion option and 0 for the random option. Algorithm 3 is divided into two important processes:

Algorithm 3 Adaptive_strategy(Xi, PS, NS, Ai)
Require: p (probability of each strategy being selected)
1: for j = 1 to NS do
2:  r = rand(0, 1)
...

1) Generate a Spark by an Adaptive Strategy: If a random number r is less than p(1), then the explosion option is selected, and F is marked as 1 (lines 3−6). Otherwise the random option is selected, and F is marked as 0 (lines 8 and 9). Note that our explosion option is different from (3): a firework Xe randomly taken from special archive PS is used to replace the current firework. That is, Xe is applied to guide FWA's search.

2) Update p: This process involves a reward phase and a punishment phase. If a spark Sj dominates firework Xi, the reward phase is triggered; otherwise the punishment phase applies. In the reward phase, the probability of the positive option being selected is increased by a non-zero reward value R (lines 12−16), which indicates that the positive option is more likely to be selected next time. R is problem-dependent, reflecting the degree of reward and punishment. In the punishment phase, the probability of the negative option being selected is reduced (lines 18−22), which means that the negative option is selected with a smaller probability next time. In the following, the effectiveness of the adaptive strategy is illustrated by comparing the algorithms with and without it.
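The following Python sketch illustrates the option selection and the probability update described above, under the assumption that the probabilities are simply clipped and renormalized after each reward or punishment; the explosion step around the guiding solution x_e and the callback is_improvement are hypothetical stand-ins, since the exact update equations of Algorithm 3 are not reproduced here.

```python
import numpy as np

def adaptive_spark(x_e, amp, p, R, lower, upper, is_improvement):
    # p[0]: probability of the explosion option, p[1]: probability of the random option.
    dim = lower.size
    if np.random.rand() < p[0]:
        choice = 0  # explosion option (F = 1): explode around the guiding solution x_e
        spark = np.clip(x_e + np.random.uniform(-amp, amp, dim), lower, upper)
    else:
        choice = 1  # random option (F = 0): sample uniformly in the search space
        spark = np.random.uniform(lower, upper, dim)
    if is_improvement(spark):
        p[choice] += R          # reward: the chosen option becomes more likely
    else:
        p[choice] -= R          # punishment: the chosen option becomes less likely
    p = np.clip(p, 1e-3, None)
    return spark, p / p.sum()   # keep p a valid probability distribution
```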

For example, we assume that each firework produces NS sparks, among which there are two equivalent Pareto optimal sparks that we need to locate. When generating a spark by (3), the probability po of finding the two Pareto optimal solutions within NS trials is calculated as follows:

3) Discussion: The goal of the adaptive strategy is to strike a balance between diversity and convergence in the decision space. Specifically, the explosion strategy is used to help our algorithm converge to the true PS quickly, while the random one can help locate more equivalent PSs. In this work, we use the strategy selection probability to switch between the explosion and random strategies. This switching mode can ensure the diversity and convergence of the algorithm, and it is adaptive without human intervention. In order to visualize the process of switching between the explosion and random strategies, we select an MMOP, namely MMF4 from CEC2019 [53], as an example, as shown in the Supplementary File.

It can be seen that our algorithm tends to select the explosion strategy to converge to the true PS quickly in the early stage, while the random one is selected more frequently in the later stage in order to locate more equivalent PSs.

E. Novelties of IMFWA

In this work, we use FWA as a base algorithm, and extend it to multiobjective cases, which is different from previous MMOEAs. In order to solve MMOPs, we present an improved multiobjective FWA. Its detailed differences from the existing methods are: 1) Its base algorithm is different from them; 2) It does not require any other technologies to assist in obtaining the equivalent solution since its firework and sparks can be seen as a niche; and 3) It uses an adaptive strategy to ensure high solution diversity and a special archive guidance method to speed up convergence in the decision space.

F. Effectiveness Analysis of IMFWA

In this paper, a special archive and an adaptive strategy are applied to MFWA to help solve MMOPs. In IMFWA, the special archive preserves the fireworks that not only are selected from archive A, but also are closest to the current firework. In it, the distances in both the decision and objective spaces are considered, which not only ensures that the sparks generated by a firework are around these points in the decision and objective spaces, but also improves the convergence speed. Meanwhile, in order to enhance solution diversity, a firework is randomly selected from the special archive to guide the update of spark positions.

In addition, an adaptive strategy is applied to the firework explosion, which can preserve high solution diversity in the decision space. In it, a random strategy is designed to avoid searching the same area, thereby increasing the possibility to fully search the decision space.

Next, we give an example to illustrate the effectiveness of IMFWA. DN-NSGAII [11], Omni-optimizer [32] and MO_Ring_PSO_SCD [17] are selected for comparison with it, and they are tested on MMF4 from CEC2019 [53], which has four PSs. The population size is set to 200 and the maximum number of function evaluations is set to 10 000. Other parameters are set according to [11], [17], [32]. Each algorithm is run 21 times. The obtained PSs are shown in Fig. 3.

Fig. 3. PSs obtained by (a) DN-NSGAII; (b) Omni-optimizer; (c) MO_Ring_PSO_SCD; and (d) IMFWA.

From Figs. 3(a) and 3(d), DN-NSGAII has poor solution diversity on each PS, since its points on each PS are crowded together, whereas IMFWA has a uniform distribution on the four PSs.

As shown in Figs. 3(b) and 3(d), it is easy to find that the Omni-optimizer has lower diversity than IMFWA, since its points are even more crowded than DN-NSGAII's.

Finally, we can find that the PSs achieved by MO_Ring_PSO_SCD (Fig. 3(c)) have better diversity than those of DN-NSGAII and the Omni-optimizer, but worse than IMFWA's. Some points found by MO_Ring_PSO_SCD are far away from the true PSs. It has good diversity and a uniform distribution on some PSs, but not on others, as seen in the upper-right and lower half of Fig. 3(c). IMFWA has good diversity and a uniform distribution on all four PSs. This example shows that IMFWA has better performance than DN-NSGAII, the Omni-optimizer and MO_Ring_PSO_SCD. More experimental results on other functions are given next.

IV. ExPERIMENTS AND RESULT ANALYSES

A. Benchmark Functions, Metrics and Parameters Settings

Twenty-two multimodal multiobjective benchmark functions are taken from CEC2019 [53], as shown in the Supplementary File. Readers can refer to [53], [54] for more details. We abbreviate them as MMF1-22 for convenience in the following experiments.

In the experiments, the population size and the maximum number of function evaluations are respectively set to 100ND and 5000ND unless stated otherwise. For IMFWA, the amplification coefficient CA and reduction coefficient CR are set to 1.2 and 0.9 according to the latest FWA [26]. The initial probability is p = {0.5, 0.5}. The special archive size NA is set to 5. The effect of the number of sparks and the reward value on the performance of IMFWA is studied next. The parameter settings of the compared algorithms are the same as in the respective references [11], [17]–[19], [28], [29], [32], [37].

B. Sensitivity to Spark Count and Reward Value

In this section, we study the effect of the spark count NS and reward value R on IMFWA's performance. NS is set to 5, 10, 15, 20, 25 and 30, and R is set to 0.05, 0.1, 0.2 and 0.4. Thus, there are 24 combinations of (NS, R). Meanwhile, we define a new performance metric, namely the score, to show the superiority of different combinations. In it, a pair-wise Wilcoxon rank-sum test is conducted to determine whether a combination is better than another. If so, it earns one point. Note that the maximum score for a combination is 22 × (24 − 1) = 506. The results are presented in Fig. 4 and Table I.
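The score can be computed roughly as in the sketch below, assuming results[c][f] holds the P′SP values of combination c on function f over the independent runs and that a larger P′SP is better; SciPy's rank-sum test stands in for the pair-wise Wilcoxon test, and the helper names are illustrative.

```python
import numpy as np
from scipy.stats import ranksums

def total_scores(results, alpha=0.05):
    # One point per (function, opponent) pair on which a combination is
    # significantly better; the maximum score is n_funcs * (n_combos - 1).
    combos = list(results)
    funcs = list(results[combos[0]])
    scores = {c: 0 for c in combos}
    for f in funcs:
        for c in combos:
            for o in combos:
                if o == c:
                    continue
                _, pval = ranksums(results[c][f], results[o][f])
                better = np.median(results[c][f]) > np.median(results[o][f])
                if pval < alpha and better:
                    scores[c] += 1
    return scores
```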

Fig. 4. Total scores of IMFWA with different (NS, R) in terms of P′SP.

From the results, we conclude that when (NS, R) = (20, 0.1), IMFWA is the best among the considered combinations. Thus, (NS, R) is set to (20, 0.1) in the following experiments.

C. Establishing Special Archive With Different Distance

Next, we explain why we consider distance in both the decision and objective spaces rather than only in the decision or the objective space. The latter two cases are denoted as IMFWAD and IMFWAO, respectively.

Wilcoxon rank-sum tests between them are conducted to show the significance of the results, which are presented in Table II (the details are shown in the Supplementary File), where “+” means that IMFWA is significantly better than its peer, “−” means its peer is significantly better than IMFWA, and “≈” means that IMFWA and its peer have similar performance.

TABLE I AVERAGE RANKINGS OF IMFWA WITH DIFFERENT (NS, R) IN TERMS OF P′SP

TABLE II MEAN P′SP VALUES FROM IMFWA, IMFWAD AND IMFWAO

D. Efficiency of IMFWA

In IMFWA, there are two important improvements: special archive guidance and an adaptive strategy. To demonstrate their respective effectiveness, multiobjective FWA with and without them is tested on the 22 MMFs. In this section, we denote the basic multiobjective FWA without both as MFWA; MFWAa represents the one with only the adaptive strategy, and MFWAs is the one with only special archive guidance. The results are shown in Table III and the Supplementary File.

Judging from the average rankings, IMFWA is ranked first, followed by MFWAa and MFWAs, and MFWA is the worst among them. Meanwhile, we can find that MFWAs is a little better than MFWA, but MFWAa is much better than MFWA. This proves that the two improved strategies can locate more Pareto optimal solutions than MFWA.

TABLE III COMPARISON OF IMFWA, MFWA, MFWAa AND MFWAs IN TERMS OF P′SP

We also study whether the proposed improvements can enhance other algorithms. MO_Ring_PSO_SCD [17] and MODE [56] are selected as examples, and the improved versions are denoted as IMO_Ring_PSO_SCD and IMODE. The results in the Supplementary File indicate that IMFWA benefits more.

E. Comparison of IMFWA With Eight State-of-the-Art MMOEAs

F. Visual Comparison

In this section, MMF5 and MMF22 are selected for visual comparison, and the final PSs obtained on them by the above nine algorithms are respectively plotted in Figs. S1 and S2 of the Supplementary File. From the visual results on MMF5, it can be observed intuitively that IMFWA can locate more uniform solutions than its peers. The solutions obtained by Omni_optimizer, DN-NSGAII and MO_Ring_PSO_SCD are crowded together and are not well distributed on each PS. SMMOPIO, SMPSO_MM, MMODE_CSCD, SSMOPSO and MMOEAC are comparable, but they have worse distributions than IMFWA on each PS.

TABLE IV MEAN P′SP VALUES OF NINE ALGORITHMS

TABLE V MEAN IGDX VALUES OF NINE ALGORITHMS

TABLE VI MEAN HV′ VALUES OF NINE ALGORITHMS

Fig. 5. Reciprocals of Pareto sets proximity values of six algorithms with different population sizes on MMF5 and MMF15. Numbers on the horizontal axis: 1 = 100ND, 2 = 200ND, 3 = 300ND, 4 = 400ND and 5 = 500ND.

For the visual results on MMF22 (which has 27 equivalent PSs), IMFWA and MMOEAC can locate all equivalent PSs, while the other algorithms miss some. Among them, Omni_optimizer, DN-NSGAII and MO_Ring_PSO_SCD are the worst in terms of the number of equivalent PSs. MMOEAC can generate more solutions than IMFWA, but some of them are redundant and not on the PSs. Judging from the distribution of the obtained solutions on each PS, IMFWA is better than MMOEAC.

Based on this visual comparison, we conclude that IMFWA can locate more solutions with better-distributed PSs than its peers.

G. Comparison With Competitors in Decision Space

From Table VII, judging from its average ranking (AR), IMFWA is more stable than its peers and is the best among them. Moreover, the pair-wise comparisons, excluding COEA and MMO-Clustering-PSO due to their incomplete data, show that IMFWA provides better results in all 10 cases, and equivalent results in 6, 7 and 3 cases in comparison with NMOHSA, PEN_MOBA and MM-NAEMO, respectively.

From Table VIII, it can be observed that IMFWA is the best among them with an AR of 3.1818. Moreover, compared with NMOHSA, PEN_MOBA and MM-NAEMO, the results of the pair-wise tests show that IMFWA provides better results in 9, 10 and 10 cases, and comparable results in 7, 8 and 3 cases, respectively. Hence, the proposed algorithm has better performance than the top five algorithms participating in the CEC2019 competition on these benchmark functions in terms of AR in the decision space.

TABLE VII MEAN P′SP VALUES OF IMFWA AND THE TOP FIVE COMPETITORS

TABLE VIII MEAN IGDX VALUES OF IMFWA AND THE TOP FIVE COMPETITORS

H. Comparison of Runtime

In this section, the runtime of IMFWA is illustrated. MO_Ring_PSO_SCD [17], SMMOPIO [18], SMPSO_MM [19], NMOHSA [16], PEN_MOBA [27], MMODE_CSCD [28] and SSMOPSO [29], all of which are extended from single-objective algorithms to multiobjective ones, are selected for comparison with IMFWA. All compared algorithms are run on a PC with the Windows 7 operating system (64 bit), a quad-core CPU (i5-4200U @ 1.60 GHz) and 4 GB RAM. The population size and the maximum number of function evaluations of all algorithms are set to 100ND and 5000ND, respectively. The runtime results are shown in Fig. 6.

We can observe that the runtimes of MO_Ring_PSO_SCD, SMMOPIO and SMPSO_MM are comparable. SSMOPSO is the worst among them. NMOHSA is ranked the second best. IMFWA is slightly better than NMOHSA, and much better than the others. The reason is that only the sparks generated by fireworks in the first front are selected in IMFWA, and the other sparks are abandoned, which reduces runtime effectively. In closing, IMFWA is the fastest on these MMFs among all the compared algorithms.

I. Performance Comparison on Four Scalable Imbalanced Distance Minimization Problems (IDMPs)

To further study its performance, IMFWA is compared with Omni_optimizer [32], DN-NSGAII [11], MO_Ring_PSO_SCD [17], SMMOPIO [18], SMPSO_MM [19], MMODE_CSCD [28], SSMOPSO [29], MMOEAC [37], TriMOEA-TA&R [36] and CPDEA [35] on four IDMPs, which are selected from [35] (the definition of IDMP is given in the Supplementary File). The NO and ND in these four IDMPs are larger than 3. The population size and the maximum number of function evaluations of all algorithms are set to 240 and 72 000, respectively. All algorithms are run 40 times. Similar to [35], two performance metrics, namely IGDX [34] and IGDM [36], are selected to measure their performance. The results are shown in Tables IX and X.

Fig. 6. Mean runtime of eight algorithms (s). Numbers on the horizontal axis: 1 = IMFWA, 2 = MO_Ring_PSO_SCD, 3 = SMMOPIO, 4 = SMPSO_MM, 5 = NMOHSA, 6 = PEN_MOBA, 7 = MMODE_CSCD and 8 = SSMOPSO.

From Tables IX and X, we conclude that IMFWA significantly outperforms the compared methods, except CPDEA, on the four IDMPs in terms of IGDX and IGDM. Compared with CPDEA, it provides better IGDX and IGDM results in two cases. In addition, its mean runtime is significantly less than that of its peers, and SSMOPSO is the worst among them. IMFWA and TriMOEA-TA&R are comparable in terms of runtime. Hence, IMFWA performs the best among the compared algorithms, while its mean runtime is much less than its peers'.

V. DISCUSSION

Based on the experimental results on the 22 MMFs, we conclude that our proposed IMFWA is significantly better than its peers in terms of P′SP, IGDX and HV′ values, which indicates that it has better performance in both the decision and objective spaces. The main reason is that the proposed adaptive strategy helps it strike a balance between diversity and convergence in the decision space, thereby locating more and better-distributed equivalent Pareto optimal solutions in the decision space and obtaining an excellent PF approximation in the objective space. It is found that the random strategy is performed more frequently than the explosion strategy in the proposed adaptive strategy, as shown in Fig. S3 of the Supplementary File, which provides a guarantee for locating more and well-distributed equivalent Pareto optimal solutions, but it also indirectly leads to a slight performance drop on a few MMFs, e.g., MMF20.

TABLE IX MEAN IGDX VALUES ON FOUR IDMPS, WHERE MR IS THE MEAN RUNTIME (S)

TABLE X MEAN IGDM VALUES ON FOUR IDMPS

Additionally, IMFWA has the least runtime among its peers. The basic reason is that only the fireworks in the first front are kept in IMFWA, and the fireworks in other fronts are abandoned, thus making the size of the archive smaller than that of other MMOEAs, which greatly reduces the time consumption of performing NSS.
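A straightforward non-dominated filter such as the one below is enough to realize this first-front step; it is an illustrative O(n²) implementation rather than the paper's exact NSS routine.

```python
import numpy as np

def first_front(objectives):
    # Return the indexes of non-dominated rows of an (n, m) objective matrix
    # (minimization); only these fireworks are kept, so the archive stays small.
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```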

IMFWA has also been applied to solve four imbalanced distance minimization problems. The results demonstrate that it has better performance on these four IDMPs in terms of IGDX and IGDM. Although it is an effective method for solving MMOPs, it faces the following limitations: 1) it is based on FWA, and the parameters in FWA, i.e., CA and CR, have a great impact on the performance of IMFWA, so the sensitivity to them remains an open issue; and 2) how to design an optimal switching strategy between the explosion and random options is a thorny question to be answered.

VI. CONCLUSION

As future work, designing an optimal switching strategy between the explosion and random strategies or developing other mechanisms for the proposed algorithm is important to achieve better results in solving multimodal multiobjective problems. Moreover, reducing the number of parameters in the fireworks algorithm is worth studying. Finally, applying it to other real-world problems [59]–[65] should be pursued.