Multi-Objective Optimization in Theory and Practice II: Metaheuristic Algorithms
Ebook · 524 pages · 6 hours


About this ebook

Multi-Objective Optimization in Theory and Practice is a simplified two-part approach to multi-objective optimization (MOO) problems.

This second part focuses on the use of metaheuristic algorithms in more challenging practical cases. The book includes ten chapters covering several advanced MOO techniques: the determination of Pareto-optimal sets of solutions, metaheuristic algorithms, genetic search algorithms and evolution strategies, decomposition algorithms, hybridization of different metaheuristics, many-objective (more than three objectives) optimization, and parallel computation. The final section presents the design and types of fifty test problems whose Pareto-optimal fronts are approximated using the NSGA-II package.

It is an essential handbook for students and teachers involved in advanced optimization courses in engineering, information science and mathematics degree programs.

Language: English
Release date: Mar 28, 2019
ISBN: 9781681087054


    Book preview

    Multi-Objective Optimization in Theory and Practice II - André A. Keller


    Pareto-Optimal Front Determination

    André A. Keller

    Abstract

    Multi-objective optimization (MOO) problems belong to programming approaches in which the decision-maker faces a multiplicity of conflicting objectives. This situation occurs in real-world problems involving engineering design, chemical processes, financial management, etc. In such problems, we obtain a set of equally good solutions rather than a single best solution. Classical methods for solving MOO problems have shown their limits in finding Pareto-optimal fronts: they require repetitive applications and some knowledge about the problem, the spread of non-dominated solutions may be insufficient, and the results are sensitive to the shape of the Pareto-optimal front. Metaheuristics applied to practical MOO problems therefore developed rapidly in the 1980s and 1990s, and a variety of nature-inspired algorithms with extensive applications have been proposed in recent years. The concept of dominance is the basis of Pareto optimality. The dominance binary relation is a strict partial order, which allows a comparison between feasible solutions in the fitness space and the decision space. The non-dominated solution sets yield Pareto fronts. Different techniques have been proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Our method uses graph-theoretic analysis: it provides a non-dominated set by using the Hasse diagram of an acyclic graph. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts. In particular, Pareto-optimal fronts change if we decide to maximize instead of minimize the objectives. Algorithms use different selection procedures; the selection criteria can be an elitist Pareto ordering, non-Pareto criteria such as indicators, a bi-criteria evolution, and extended concepts of dominance such as ε-dominance and grid dominance.

    Keywords: Attraction-based algorithm, Conflicting/nonconflicting objectives, Darwinian evolution, Engineering design problem, Globally/locally Pareto-optimality, Hasse diagram, Master strategy, Metaheuristics, Multi-objective optimization, Nature-inspired algorithm, Near/exact Pareto-optimal front, Non-dominated solution, Pareto-optimal solution, Pareto ranking, Population-based algorithm, Strongly/weakly Pareto-optimality, Subordinate heuristics, Swarm intelligence, Trajectory-based algorithm.

    1.1. Introduction

    The first use of heuristic algorithms goes back to Turing, who applied heuristic search while breaking the German Enigma code during World War II and later described such methods (Turing, 1948 [1]; see also Yang, 2014 [2], and Angelov, 2016 [3]). After that, heuristic and metaheuristic algorithms for solving programming problems emerged from the difficulties encountered with classical optimization methods. Abido (2010) [4] identified four drawbacks of conventional algorithms. Firstly, an algorithm must be applied repetitively to find the Pareto-optimal solutions. Secondly, some knowledge about the problem is required. Thirdly, an algorithm is sensitive to the shape of the Pareto-optimal front. Finally, the spread of the Pareto-optimal solutions depends on the chosen algorithm. Heuristic algorithms are suitable solvers for hard, high-dimensional real-life problems (see, for instance, Tong et al., 2014 [5]).
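    The Pareto-optimal solutions mentioned above rest on the dominance relation between feasible solutions. As a minimal illustration (a Python sketch, not the book's code; the function names are ours), dominance and a naive non-dominated filter for a minimization problem can be written as:

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse than b
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Naive O(n^2) filter: keep the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Two conflicting objectives to minimize: (3.0, 3.0) is dominated by (2.0, 2.0)
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
front = non_dominated(pts)  # approximation of the Pareto front
```

Maximizing instead of minimizing reverses the inequalities and generally changes the resulting front.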

    1.1.1. Heuristic and Metaheuristic Algorithms

    Heuristics and metaheuristics are approximate resolution methods. A heuristic denotes a technique that seeks near-optimal solutions at low cost and is often problem-specific; more generally, heuristics denote strategies that guide the search process in the feasible space. A metaheuristic consists of a master strategy that guides and corrects the operations of subordinate heuristics (see, for instance, Reeves, 1995 [6]): metaheuristics, such as evolutionary algorithms (EAs), refer to higher-level procedures that combine different subordinate heuristics to explore the search space. Metaheuristics notably include genetic algorithms (GAs), evolution strategies (ES), and genetic programming (GP). They also include, but are not limited to, nature-inspired algorithms such as neural methods, simulated annealing, tabu search, ant colony systems, and other particle swarm intelligence techniques. The capacity of such methods to solve NP-hard ¹ combinatorial problems is well known (e.g., the traveling salesperson, scheduling, graph, and transportation problems). The book by Michalewicz (1999) [7] introduced metaheuristics for solving numerical optimization problems, and many authors have proposed overviews of evolutionary techniques with applications, including Srinivas and Deb's non-dominated sorting GA (1994) [8], Fonseca and Fleming's multi-objective GA (1993, 1995) [9, 10], Hajela and Lee's weight-based GA (1996) [11], and Zitzler (1999) [12], which also covers Schaffer's vector-evaluated GA [13].

    Genetic algorithms use principles of Darwinian evolution, characterized as follows. Individuals within populations (or species) differ. Traits are passed on to offspring. Only a few offspring survive in each generation, and the members that survive have the most favorable performances. This natural selection on individuals has consequences for the corresponding population. The evolution process is backward-looking and mostly deterministic (i.e., only partially random). It is not perfect and can produce new traits besides existing traits. Such algorithms are population-based stochastic algorithms whose elements include a population of individuals, fitness evaluation, genetic operators guiding evolution, and selection.

    Fig. (1.1) shows the basic cycle of the process. The initial step consists of a population of individuals chosen at random. In the evaluation phase of the primary cycle, all the individuals are evaluated by using the objective functions of the programming problem. Next, fitness values are assigned to the individuals on this basis. Then, the fittest individuals are selected for reproduction. After that, new individuals emanate from the application of genetic operators, such as crossover and mutation. Closing the basic cycle, the new population, including the selected individuals and the offspring, is transferred to the first step for evaluation, and a new cycle begins.

    Fig. (1.1)

    Basic cycle of an evolutionary algorithm.
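    The basic cycle of Fig. (1.1) can be sketched in Python as follows. This is a minimal single-objective illustration, not the book's implementation; the real-valued encoding, truncation selection, one-point crossover, and Gaussian mutation with step 0.1 are all assumptions made for the sketch:

```python
import random

def evolve(fitness, dim=2, pop_size=20, generations=50):
    """One run of the basic evolutionary cycle: evaluate, select,
    recombine (crossover), mutate, and form the new population."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluation: rank individuals by the objective (lower is fitter here)
        ranked = sorted(pop, key=fitness)
        # Selection: the fitter half survives as parents
        parents = ranked[: pop_size // 2]
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)            # one-point crossover
            child = [g + random.gauss(0, 0.1)         # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            offspring.append(child)
        # The new population is transferred back to the evaluation step
        pop = parents + offspring
    return min(pop, key=fitness)

random.seed(42)  # reproducible run of the sketch
best = evolve(lambda x: sum(g * g for g in x))  # minimize the sphere function
```

Keeping the parents in the new population makes the best fitness non-increasing across cycles, an elitist variant of the selection step.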

    1.1.2. History of Metaheuristics

    One should note the fast development of the MOO approach from the mid-1980s with the help of evolutionary algorithms². An early attempt to use genetic algorithms to solve MOO problems was made by Ito et al. (1983) [15]. Goldberg (1989) [16] proposed a Pareto-set fitness assignment to solve Schaffer's multi-objective problems. In the same period, two books dealt with the theory and techniques of MOO: Chankong and Haimes (1983) [17], and Sawaragi et al. (1985) [18]. The fast expansion of this approach was stimulated by numerous real-world applications from science, technology, management, and finance. Rangaiah's book (2009) [19] was the first publication on MOO problems with a focus on chemical engineering; the applications in this area were notably in the chemical, mineral processing, oil and gas, petroleum, and pharmaceutical industries. Lai and Huang (1994) [20] extended the MOO approach to fuzzy decision-making problems ³.

    The first use of genetic-based search algorithms for multi-objective optimization problems goes back to the pioneering work of Rosenberg (1967) [24] (see Coello, 1999 [25]). In his brief history of metaheuristics, Yang (2014) [2] identified the relevant decades in the development of evolutionary algorithms. The 1960s and 1970s saw the development of genetic algorithms at the University of Michigan: Holland (1975) [26] proposed a search method based on the concepts of Darwinian evolution and the natural principles of biological systems, in which crossover, mutation, and selection operators were used to solve difficult combinatorial problems. In the same period, researchers at the Technical University of Berlin proposed evolution strategies: Rechenberg (1971) [27] and Schwefel (1975) [28] proposed a search method for solving optimization problems. Fogel (1994) [29] introduced evolutionary programming, which uses simulated evolution as a learning process ⁴.

    Following Yang, the 1980s and 1990s were fruitful decades for metaheuristic algorithms. Kirkpatrick, Gelatt Jr., and Vecchi pioneered the simulated annealing (SA) algorithm in 1983 [30]; this method has its origin in the annealing process of metals. The use of memory was introduced by Glover's tabu search in 1986 [31]. In 1992, the search technique of Dorigo [32] was inspired by the swarm intelligence of ant colonies, which use pheromones to communicate. Later, in 1995, Kennedy and Eberhart [33] developed particle swarm optimization, inspired by the swarm intelligence of fish schools and bird flocks. In 1997, Storn and Price [34] proposed the differential evolution (DE) algorithm, a vector-based evolutionary algorithm that proved more efficient than a genetic algorithm on many problems. In the same year, the No Free Lunch (NFL) theorems of Wolpert and Macready (1997) [35] marked a significant step in the development of better algorithms: the theorems prove that no algorithm is universally better for all applications, so the most efficient algorithm is valid only for a given class of problems.

    In recent years, other nature-inspired algorithms have been proposed. We can mention the harmony search (HS) algorithm for distribution, transport, and scheduling (2001), honeybee algorithms (2004), the firefly algorithm (FA, 2007), the cuckoo search algorithm (CSA, 2009), the bat algorithm (BA, 2010), based on echolocation behavior, and the flower pollination algorithm (FPA, 2012).

    1.1.3. Metaheuristic Algorithms and Applications

    Some survey articles on metaheuristic algorithms have suggested classifications. Besides conventional aggregating approaches, Fonseca and Fleming (1995) [10] considered population-based non-Pareto methods (e.g., VEGA), Pareto-based approaches, and niche induction techniques. Fister et al. (2013) [36] divided algorithms into four groups by source of inspiration: 1) swarm intelligence (SI)-based algorithms, 2) bio-inspired but not SI-based algorithms, 3) physics/chemistry-based algorithms, and 4) other algorithms. Fig. (1.2) and Table 1.1 show the classification selected for this study.

    Fig. (1.2)

    Metaheuristic algorithms for optimization problems [classification inspired by I. Fister et al. (2013) [36]].

    However, Fister et al. (2013) [36] noted that there is no single classification; taxonomies depend on focus and perspective. Thus, algorithms can be population-based (e.g., the GA, FA, and PSO algorithms) or trajectory-based (e.g., the SA algorithm). We can also divide algorithms into attraction-based procedures (e.g., the FA, which uses the attraction of light) and non-attraction-based procedures (e.g., GAs), or classify them as rule-based (e.g., the crossover and mutation principles in GAs) versus updating-equation-based (e.g., PSO, CS).

    Fig. (1.2) shows most metaheuristic algorithms. They consist of bio-inspired algorithms and physics- and chemistry-based algorithms, and they belong to one or more of the following groups: swarm intelligence, genetic algorithms, genetic programming (GP), and evolutionary algorithms (see Tong et al., 2014 [5]).

    Table 1.1 defines the algorithms of Fig. (1.2), with more information about the acronym, the authors, and the year of publication. Additional information from the literature is the number of references provided by the IEEE Xplore Digital Library ⁵ for each algorithm, which indicates how widely each algorithm is used. The first ten algorithms, with more than 1,000 publications each, are: 1) HEMH, 2) GA, 3) ECBO, 4) PSO, 5) GP, 6) SA, 7) EP, 8) ACO, 9) DE, and 10) MOEA, where the first three algorithms account for 174,091 items. Twenty-five algorithms have between 100 and 1,000 publications, fifteen others have between 10 and 100 items, and eight algorithms have fewer than ten items.

    Table 1.1 Description of 65 metaheuristic algorithms for optimization ⁶.

    The literature review by Lalwani et al. (2013) [93] showed the applicability of particle swarm optimization (PSO) to multi-objective problems ⁷ (MOPSO). In this algorithm⁸, the swarm is the set of potential solutions, and its individuals are the particles. The heuristic is inspired by the choreography of bird flocks. It was validated using standard test functions and compared against EMO algorithms⁹. Each particle moves using the information it holds on its position, velocity, and performance; the movement of a particle is governed by updating its position and velocity. Fig. (1.3) shows applications in a broad range of activities in engineering, industry, biology, management, and the environment. The application areas of the studies ¹⁰ were presented by Lalwani et al. (2013) [93]. They included aerospace engineering (abbreviated Aero Eng in Fig. (1.3)), biological sciences (Biological), chemical engineering (Chem Eng), data mining (Data Min), electrical engineering (Elec Eng), electromagnetic engineering (Electromag), electronics engineering (Electronics), environmental sciences (Environ), flow-shop and job-shop scheduling problems (Flowshop), image processing (Image Proc), industrial engineering (Ind Eng), mechanical engineering (Mecha Eng), neural networks (Neural Netw), robotics, and software engineering (Software E.).

    Fig. (1.3)

    Application areas of the MOPSO algorithm in papers published in the period 2006-2012 [data and figure adapted from Lalwani et al., 2013 [93], Figure 3, p. 45]. Note: the rounded percentages do not sum to exactly one hundred percent.
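    The position and velocity update governing a particle's movement can be sketched as follows, for a single objective. This is the canonical PSO update, not the MOPSO variants surveyed by Lalwani et al.; the inertia weight w and the acceleration coefficients c1 (cognitive) and c2 (social) are typical values assumed for the sketch:

```python
import random

def pso(f, dim=2, swarm_size=15, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical particle swarm optimization (minimization): each particle
    moves using its velocity, its personal best, and the swarm best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]      # best position each particle has held
    gbest = min(pbest, key=f)[:]     # best position found by the whole swarm
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                          # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]                              # position update
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

random.seed(7)  # reproducible run of the sketch
best = pso(lambda x: sum(g * g for g in x))  # minimize the sphere function
```

A multi-objective variant typically replaces the single swarm best with an external archive of non-dominated positions from which leaders are drawn.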

    Problems studied in these applications are listed in Table 1.2 (see Lalwani et al., 2013 [93], Table 2 for more details).

    Table 1.2 Types of problems solved using MOPSO, by area [adapted from Lalwani et al. (2013) [93], Table 2, pp. 60-89].
