
Handbook of Metaheuristic Algorithms: From Fundamental Theories to Advanced Applications
Ebook · 991 pages · 9 hours


About this ebook

Handbook of Metaheuristic Algorithms: From Fundamental Theories to Advanced Applications provides an introduction to metaheuristic algorithms from the ground up, covering basic ideas as well as advanced solutions. Although readers may be able to find source code for some metaheuristic algorithms on the Internet, the coding styles and explanations generally differ widely, requiring readers to bridge the gap between theory and implementation on their own. This book can also help students and researchers construct an integrated perspective of metaheuristic and unsupervised algorithms for artificial intelligence research in computer science and applied engineering domains.

Metaheuristic algorithms can be considered the epitome of unsupervised learning algorithms for the optimization of engineering and artificial intelligence problems; they include simulated annealing (SA), tabu search (TS), the genetic algorithm (GA), ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution (DE), and others. Unlike most supervised learning algorithms, which need labeled data to learn and construct decision models, metaheuristic algorithms inherit the characteristics of unsupervised learning algorithms: they solve complex engineering optimization problems without labeled data, finding solutions in a self-learning manner.

  • Presents a unified framework for metaheuristics and describes well-known algorithms and their variants
  • Introduces fundamentals and advanced topics for solving engineering optimization problems, e.g., scheduling problems, sensors deployment problems, and clustering problems
  • Includes source code, based on the unified framework for metaheuristics, as examples showing how TS, SA, GA, ACO, PSO, DE, parallel metaheuristic algorithms, hybrid metaheuristics, local search, and other advanced technologies are implemented in C++ and Python
LanguageEnglish
Release dateMay 30, 2023
ISBN9780443191091
Author

Chun-Wei Tsai

Chun-Wei Tsai received his Ph.D. degree from the Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan, in 2009, where he is currently an assistant professor. He has more than 20 years of experience with metaheuristic algorithms and their applications. He served as the secretary general of the Taiwan Association of Cloud Computing from 2018 to 2021 and has served as an associate editor for the Journal of Internet Technology, IEEE Access, IET Networks, and the IEEE Internet of Things Journal since 2014, 2017, 2018, and 2020, respectively. He has also been a member of the editorial boards of the Elsevier Journal of Network and Computer Applications (JNCA) and Elsevier ICT Express since 2017 and 2021, respectively. His research interests include computational intelligence, data mining, cloud computing, and the Internet of Things.


    Book preview


    Preface

    Chun-Wei Tsai; Ming-Chao Chiang     

    Our very first memory of a metaheuristic algorithm, the genetic algorithm (GA), in the early 2000s, is quite vivid. Many other metaheuristic algorithms, including simulated annealing (SA), tabu search (TS), ant colony optimization (ACO), particle swarm optimization (PSO), and differential evolution (DE), mushroomed at the end of the 1980s and continued to thrive, becoming a big family nowadays. This led us to embark on a challenging but rewarding adventure into a new world that has its own system and rules, yet in which almost everything can be broken and innovation is the norm. About 10 years ago, we came across the idea of writing a book to pass our knowledge of metaheuristic algorithms on to whoever is interested in this research domain, for one reason: we found that many students have a hard time realizing some of the metaheuristic algorithms based on the descriptions given in reference books or research papers. Although they may be able to find source code for some of the metaheuristic algorithms on the Internet, the coding styles and explanations are generally quite different. It would normally take students a lot of time and effort to adapt themselves to these different coding styles to figure out how to realize them. Apparently, there exists a gap between theory and implementation in this research domain. To narrow this gap, we first present a unified framework for metaheuristics (UFM) and then use it to describe well-known metaheuristic algorithms and their variants.
To make it easier for the audience of this book to understand how the metaheuristic algorithms and advanced technologies discussed herein are realized, source code developed by the authors from scratch based on the UFM is included to show how SA, TS, GA, ACO, PSO, DE, parallel metaheuristic algorithms, hybrid metaheuristics, local search, and some other relevant advanced technologies are implemented in C++ and Python. C++ is the programming language of our choice for the examples described in this book, while Python is the programming language suggested by one of the reviewers of the proposal of this book. We are not particularly good at Python, and it took us about a month to finish the implementation.

    In the very beginning, the aim was to reduce the learning curve of the audience by developing an integrated library. Different versions of this library even have nicknames such as the USS Enterprise (NCC-1701) and Space Battleship Yamato in our groups. With the first version of the library, only about 10 to 20 instructions are needed to realize a new metaheuristic algorithm for solving a new optimization problem. This all benefits from the characteristics of an object-oriented programming language (namely, encapsulation, inheritance, and polymorphism) in general and the templates of C++ in particular. Although this library is very charming in the sense that it makes it very easy to realize most metaheuristic algorithms for solving various optimization problems—be they discrete or continuous—students have difficulty understanding how metaheuristic algorithms work. From the perspective of teaching, this library is not useful for students who do not know what a metaheuristic algorithm is. So, although we still have this library today, the students in our groups are not encouraged to use it. Our teaching and learning experience shows that the best way to learn a new metaheuristic algorithm is to first understand the theory (algorithm) and then implement it. The realization process helps us fully understand the pros and cons of metaheuristic algorithms. However, the learning performance of some students was still not as good as expected, even though some of the implementation was simplified in the second version. For example, in the second version, a function was used to simplify the parsing of the command line options, but students who did not have enough experience with C++ or other relevant programming languages were still struggling. We soon realized that theory and programming should not be an obstacle for students and researchers who are interested in entering this research domain.
The price that must be paid for a more flexible program is that it will certainly be more complicated as well. Instead of making the implementation more flexible, and thus more complicated, in this third version the aim is to make the implementation as simple, and thus as easy to understand, as possible, so that the audience of this book will have no difficulty understanding not only the theory but also the implementation of a set of representative metaheuristic algorithms. This should make it easier for them to enter this research domain.

    The ultimate goal of this book is to share with the audience our experience and know-how on metaheuristic algorithms from the ground up, that is, from the basic ideas to advanced technologies, even for readers who have no background knowledge in artificial intelligence or machine learning. This book is roughly divided into three parts: fundamentals, advanced technologies, and the appendix.

    •  For readers who are new to this area, we suggest reading the book from the very beginning to the very end chapter by chapter. In addition to the implementation in C++ described in the main text, the implementation in Python can be found in Appendix B: Implementation in Python.

    •  For readers who already have some background and experience in metaheuristic algorithms, we suggest reading Chapters 3 and 4 first to get familiar with the writing and programming styles of this book and then selecting an algorithm in Chapters 5–10 that is of interest as the second starting point before jumping to chapters in the second part of the book (advanced technologies) to find the knowledge that is of interest.

    •  For readers who already are quite familiar with most metaheuristic algorithms, we suggest starting with Chapter 4 and then jumping to the chapter that is of interest to find the needed information. Moreover, brief reviews of SA, TS, GA, ACO, PSO, and DE are provided in the end of Chapters 5–10, and discussions on advanced technologies in the beginning of Chapters 11–20 may provide some useful information.

    After reading this book, it is expected that the audience can not only realize most of the metaheuristic algorithms, but also use them to solve real-world problems. It is also hoped that this book can be used by students and researchers as a reference for self-study to enter this research domain or by teachers as a reference or textbook for a course. This book does not cover all metaheuristic algorithms; however, it provides insight into metaheuristic algorithms, and it is expected that readers of the whole book will have no difficulty understanding metaheuristic algorithms not covered here. The strategy on which this book is based is like the famous proverb: "Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime." Finally, it is hoped that this will lead the audience of this book to come up with valuable ideas and solutions when using metaheuristic algorithms. Last but not least, we would like to take this opportunity to thank Mara E. Conner, John Leonard, and Sajana Devasi P K at Elsevier for their support in the publication of this book. Special thanks go to the anonymous reviewers who suggested that this book should also provide (1) the implementation in Python so that readers who are not familiar with C++ can also benefit from this book, (2) brief reviews of SA, TS, GA, ACO, PSO, and DE, and (3) discussions on advanced applications of metaheuristic algorithms. The days we spent writing this book were the days we enjoyed the most, because we forgot about almost everything else, like children playing a video game all night. We may write another book in the near future to cover more metaheuristic algorithms presented after 2000. It sounds very exciting and is worth looking forward to, doesn't it?

    Sizihwan

    October 2022

    Part One: Fundamentals

    Outline

    Chapter One. Introduction

    Chapter Two. Optimization problems

    Chapter Three. Traditional methods

    Chapter Four. Metaheuristic algorithms

    Chapter Five. Simulated annealing

    Chapter Six. Tabu search

    Chapter Seven. Genetic algorithm

    Chapter Eight. Ant colony optimization

    Chapter Nine. Particle swarm optimization

    Chapter Ten. Differential evolution

    Chapter One: Introduction

    Abstract

    Since we encounter many complex optimization problems in our daily lives and scientific studies, the development of high-performance search algorithms to solve these complex optimization problems, especially those that are either NP-hard or NP-complete, has been an interesting research topic for years. That is why this chapter begins with an optimization problem, named the traveling salesman problem, as an example to explain how hard such problems can be. This is followed by a discussion of why metaheuristic algorithms are used to solve optimization problems and the main research issues in the implementation of metaheuristic algorithms. The organization of the whole book and of each chapter is explained to show how to use this book to maximize learning.

    Keywords

    optimization problem; metaheuristic algorithm; traveling salesman problem; NP-hard; NP-complete

    Optimization problems are ubiquitous. In general, in optimization problems the goal is to find the best solution from a large number of possible solutions¹ in a given solution space. One way to solve an optimization problem is to first find good solutions, then evaluate these solutions, and finally use the information thus obtained to decide between looking for other possible solutions² or simply announcing that the solution for the problem in question has been found. This technique for solving a problem can be traced back to the crude life of our savage ancestors in the remote past. For instance, using rocks or animal bones as weapons for hunting large beasts seems to be a better solution than fighting with savage beasts bare-handed. The lesson we learn from the above is that problems have been solved as follows: (1) find possible solutions, (2) evaluate the possible solutions to obtain their qualities, and (3) compare the qualities of the solutions we have and determine whether or not to repeat the process until an appropriate solution is found. Although optimization problems may be different, a similar process can be used to find an applicable solution, thus making our life much easier.
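    The three-step process just described can be sketched as a generic search loop. The following Python sketch is illustrative only; the `evaluate`, `random_solution`, and `neighbors` functions are hypothetical placeholders to be supplied by the user, not code from this book:

```python
import random

def search(evaluate, random_solution, neighbors, max_iterations=1000):
    """Generic solve loop: (1) find candidate solutions, (2) evaluate
    their qualities, (3) keep the best and decide whether to repeat."""
    best = random_solution()
    best_quality = evaluate(best)
    for _ in range(max_iterations):
        candidate = random.choice(neighbors(best))  # (1) find a possible solution
        quality = evaluate(candidate)               # (2) evaluate its quality
        if quality > best_quality:                  # (3) keep the better one
            best, best_quality = candidate, quality
    return best, best_quality

# Toy usage: maximize f(x) = -(x - 3)^2 over the integers in [-10, 10].
random.seed(1)  # for reproducibility
result, quality = search(
    evaluate=lambda x: -(x - 3) ** 2,
    random_solution=lambda: random.randint(-10, 10),
    neighbors=lambda x: [x - 1, x + 1],
)
```

    Metaheuristic algorithms discussed later in the book refine exactly this loop, differing mainly in how candidates are generated and in whether non-improving candidates may sometimes be accepted.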

    The two types of problems that we often confront in our daily lives are (1) those that can be solved intuitively and (2) those that cannot. The first type is, of course, very simple and can often be solved intuitively or reflexively. In our daily lives, we pull our hands back immediately when our fingers carelessly touch fire. Since this example has only two possible solutions—either pull our hands back or keep touching the fire—we can quickly make a good decision without thinking for a long time, thus avoiding injury. In contrast, many complex and/or large-scale optimization problems we face in our daily lives cannot be solved intuitively because all these problems share a common characteristic: a very large solution space composed of a huge number of candidate solutions for the optimization problem in question. There is simply no way to know intuitively which of these solutions is the best, because it is like looking for a needle in a haystack. Although the best solution can be pinpointed from a large number of possible solutions by checking each solution one by one, this may take an unreasonable amount of time for a complex optimization problem.

    A good example is the so-called traveling salesman problem (TSP) (Applegate et al., 2007; Lin, 1965). This well-known complex optimization problem is defined as follows: Given a set of cities and the distance between each pair of cities, find the shortest path that visits each city exactly once and then returns to the city of departure. This kind of problem can often be found in our daily lives in different forms. For instance, a truck driver of an express home delivery service typically has to drive to the place of dispatch in the departure city c1 to pick up the goods before they are delivered to different customers in cities c2, c3, and c4. Finally, the truck driver has to drive back to city c1. Table 1.1 shows the three possible distinct paths r1, r2, and r3.

    Table 1.1 The three distinct routes for four cities, departing from and returning to city c1: r1 = c1 → c2 → c3 → c4 → c1, r2 = c1 → c2 → c4 → c3 → c1, and r3 = c1 → c3 → c2 → c4 → c1.

    Fig. 1.1 shows graphically the delivery order of cities c2, c3, and c4, which may affect the efficiency of a truck driver in terms of the total delivery time or cost. Finding the shortest or lowest-cost route is a critical problem for such a company. It seems that this problem can be easily handled. Nevertheless, we will soon realize that there are in fact a total of (n − 1)!/2 possible routes if we always depart from and return to city c1, where n is the number of cities.

    Figure 1.1 Different paths for a truck driver in the case of four cities. (a) Path 1 (r1). (b) Path 2 (r2). (c) Path 3 (r3).

    Table 1.2 shows that as the number of cities increases, the number of possible routes will grow explosively. There are 12 possible routes for five cities, 60 possible routes for six cities, and so forth. For such a complex problem, despite the fact that intuition cannot guarantee that the best solution will be found, it is reasonable for human beings to give it a try, especially when in a hopeless tangle.

    Table 1.2 The number of distinct routes, (n − 1)!/2, as the number of cities n increases: n = 4 gives 3 routes, n = 5 gives 12, n = 6 gives 60, and n = 7 already gives 360.
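    The explosive growth described above can be checked by brute-force enumeration. The sketch below (an illustrative check, not code from this book) fixes the departure city and counts a route and its reversal as one route, reproducing the counts of 12 routes for five cities and 60 for six:

```python
from itertools import permutations

def count_distinct_routes(n):
    """Count distinct TSP routes for n cities, fixing the departure city
    and treating a route and its reversal as the same route."""
    cities = range(1, n)  # city 0 is the fixed departure city
    seen = set()
    for perm in permutations(cities):
        # A route and its reversal have the same length, so keep one of them.
        seen.add(min(perm, perm[::-1]))
    return len(seen)

# Five cities give 12 distinct routes and six give 60, in agreement
# with the counts above and with (n - 1)!/2.
counts = {n: count_distinct_routes(n) for n in range(4, 8)}
```

    Since enumeration itself takes (n − 1)!/2 steps, this check is only practical for small n, which is precisely the point of the discussion that follows.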

    Now the following question arises: Can modern computers find the best solution of a TSP in a reasonable time? After all, the advance of computer technology has made many things that once seemed impossible possible in many different fields. The answer is yes and no! Computers are indeed powerful and useful for computations that have to be repeated again and again, such as checking a large number of possible routes in a short time. However, letting the computer list all possible routes of a TSP before picking the best route may not be a feasible strategy, even with the fastest computer in the world today, when the number of cities is high. The reason is simple. With a large problem, the number of possible routes may become an astronomical number. In this case, even modern computers may not be able to list all of them in a reasonable time, let alone check all of them for the best solution, again, in a reasonable time. This TSP example demonstrates that the computational ability of a computer alone does not guarantee that the complex problems we confront can be solved efficiently. Accordingly, modern computer systems have typically been designed to integrate smart or intelligent methods to make it possible for them to solve complex problems in a reasonable time. Deep Blue, developed by IBM, which beat Kasparov in a chess match (Silver, 2012) by using minimax search with alpha-beta pruning to cut off computations that are essentially irrelevant, and AlphaGo, developed by Google (Silver et al., 2016), which beat Lee Sedol in a Go game by combining deep learning algorithms with Monte Carlo tree search, are two representative examples showing that computers with artificial intelligence can find a good decision in a reasonable time. This illustrates the importance of artificial intelligence methods in modern computer systems.
