
Nature-Inspired Optimization Algorithms
Ebook, 527 pages, 5 hours


About this ebook

Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning and control, as well as multi-objective optimization.

This book can serve as an introductory book for graduates, doctoral students, and lecturers in computer science, engineering, and natural sciences. It can also serve as a source of inspiration for new applications. Researchers and engineers, as well as experienced experts, will also find it a handy reference.

  • Discusses and summarizes the latest developments in nature-inspired algorithms with comprehensive, timely literature
  • Provides a theoretical understanding as well as practical implementation hints
  • Provides a step-by-step introduction to each algorithm
Language: English
Release date: Feb 17, 2014
ISBN: 9780124167452
Author

Xin-She Yang

Xin-She Yang obtained his DPhil in Applied Mathematics from the University of Oxford. He then worked at Cambridge University and the National Physical Laboratory (UK) as a Senior Research Scientist. He is currently a Reader in Modelling and Simulation at Middlesex University London, a Fellow of the Institute of Mathematics and its Applications (IMA), and a Book Series Co-Editor of the Springer Tracts in Nature-Inspired Computing. He has published more than 25 books and more than 400 peer-reviewed research publications with over 82,000 citations, and he has been on the prestigious list of highly cited researchers (Web of Science) for seven consecutive years (2016-2022).


Reviews for Nature-Inspired Optimization Algorithms

Rating: 4 out of 5 stars

1 rating, 1 review

  • Rating: 4 out of 5 stars
    An interesting read; the algorithms are well explained, but some of the formula images were deleted from the book.

Book preview

Nature-Inspired Optimization Algorithms - Xin-She Yang

Nature-Inspired Optimization Algorithms

First Edition

Xin-She Yang

School of Science and Technology, Middlesex University London, London

Table of Contents

Cover image

Title page

Copyright

Preface

1: Introduction to Algorithms

1.1 What is an Algorithm?

1.2 Newton’s Method

1.3 Optimization

1.4 Search for Optimality

1.5 No-Free-Lunch Theorems

1.6 Nature-Inspired Metaheuristics

1.7 A Brief History of Metaheuristics

2: Analysis of Algorithms

2.1 Introduction

2.2 Analysis of Optimization Algorithms

2.3 Nature-Inspired Algorithms

2.4 Parameter Tuning and Parameter Control

2.5 Discussions

2.6 Summary

3: Random Walks and Optimization

3.1 Random Variables

3.2 Isotropic Random Walks

3.3 Lévy Distribution and Lévy Flights

3.4 Optimization as Markov Chains

3.5 Step Sizes and Search Efficiency

3.6 Modality and Intermittent Search Strategy

3.7 Importance of Randomization

3.8 Eagle Strategy

4: Simulated Annealing

4.1 Annealing and Boltzmann Distribution

4.2 Parameters

4.3 SA Algorithm

4.4 Unconstrained Optimization

4.5 Basic Convergence Properties

4.6 SA Behavior in Practice

4.7 Stochastic Tunneling

5: Genetic Algorithms

5.1 Introduction

5.2 Genetic Algorithms

5.3 Role of Genetic Operators

5.4 Choice of Parameters

5.5 GA Variants

5.6 Schema Theorem

5.7 Convergence Analysis

6: Differential Evolution

6.1 Introduction

6.2 Differential Evolution

6.3 Variants

6.4 Choice of Parameters

6.5 Convergence Analysis

6.6 Implementation

7: Particle Swarm Optimization

7.1 Swarm Intelligence

7.2 PSO Algorithm

7.3 Accelerated PSO

7.4 Implementation

7.5 Convergence Analysis

7.6 Binary PSO

8: Firefly Algorithms

8.1 The Firefly Algorithm

8.2 Algorithm Analysis

8.3 Implementation

8.4 Variants of the Firefly Algorithm

8.5 Firefly Algorithms in Applications

8.6 Why the Firefly Algorithm is Efficient

9: Cuckoo Search

9.1 Cuckoo Breeding Behavior

9.2 Lévy Flights

9.3 Cuckoo Search

9.4 Why Cuckoo Search is so Efficient

9.5 Global Convergence: Brief Mathematical Analysis

9.6 Applications

10: Bat Algorithms

10.1 Echolocation of Bats

10.2 Bat Algorithms

10.3 Implementation

10.4 Binary Bat Algorithms

10.5 Variants of the Bat Algorithm

10.6 Convergence Analysis

10.7 Why the Bat Algorithm is Efficient

10.8 Applications

11: Flower Pollination Algorithms

11.1 Introduction

11.2 Flower Pollination Algorithm

11.3 Multi-Objective Flower Pollination Algorithms

11.4 Validation and Numerical Experiments

11.5 Applications

11.6 Further Research Topics

12: A Framework for Self-Tuning Algorithms

12.1 Introduction

12.2 Algorithm Analysis and Parameter Tuning

12.3 Framework for Self-Tuning Algorithms

12.4 A Self-Tuning Firefly Algorithm

12.5 Some Remarks

13: How to Deal with Constraints

13.1 Introduction and Overview

13.2 Method of Lagrange Multipliers

13.3 KKT Conditions

13.4 Penalty Method

13.5 Equality with Tolerance

13.6 Feasibility Rules and Stochastic Ranking

13.7 Multi-objective Approach to Constraints

13.8 Spring Design

13.9 Cuckoo Search Implementation

14: Multi-Objective Optimization

14.1 Multi-Objective Optimization

14.2 Pareto Optimality

14.3 Weighted Sum Method

14.4 Utility Method

14.5 The ε-Constraint Method

14.6 Metaheuristic Approaches

14.7 NSGA-II

15: Other Algorithms and Hybrid Algorithms

15.1 Ant Algorithms

15.2 Bee-Inspired Algorithms

15.3 Harmony Search

15.4 Hybrid Algorithms

15.5 Final Remarks

Appendix A: Test Function Benchmarks for Global Optimization

Appendix B: Matlab Programs

B.1 Simulated Annealing

B.2 Particle Swarm Optimization

B.3 Differential Evolution

B.4 Firefly Algorithm

B.5 Cuckoo Search

B.6 Bat Algorithm

B.7 Flower Pollination Algorithm

Copyright

Elsevier

32 Jamestown Road, London NW1 7BY

225 Wyman Street, Waltham, MA 02451, USA

First edition 2014

Copyright © 2014 Elsevier Inc. All rights reserved

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangement with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein)

Notice

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data

A catalog record for this book is available from the Library of Congress

For information on all Elsevier publications visit our website at

ISBN 978-0-12-416743-8

This book has been manufactured using Print On Demand technology. Each copy is produced to order and is limited to black ink. The online version of this book will show color figures where appropriate.

Preface

Nature-inspired optimization algorithms have become increasingly popular in recent years. Most of these metaheuristic algorithms, such as particle swarm optimization and firefly algorithms, are based on swarm intelligence, and swarm-intelligence-based algorithms such as cuckoo search and firefly algorithms have been found to be very efficient.

The literature has expanded significantly in the last 10 years, intensifying the need to review and summarize these optimization algorithms. Therefore, this book strives to introduce the latest developments regarding all major nature-inspired algorithms, including ant and bee algorithms, bat algorithm, cuckoo search, firefly algorithms, flower algorithms, genetic algorithms, differential evolution, harmony search, simulated annealing, particle swarm optimization, and others. We also discuss hybrid methods, multiobjective optimization, and the ways of dealing with constraints.

Organization of the book’s contents follows a logical order so that we can introduce these algorithms for optimization in a natural way. As a result, we do not follow the order of historical developments. We group algorithms and analyze them in terms of common criteria and similarities to help readers gain better insight into these algorithms.

This book’s emphasis is on the introduction of basic algorithms, analysis of key components of these algorithms, and some key steps in implementation. However, we do not focus too much on the exact implementation using programming languages, though we do provide some demo codes in the Appendices.

The diversity and popularity of nature-inspired algorithms do not mean there is no problem that needs urgent attention. In fact, there are many important questions that remain open problems. For example, there are some significant gaps between theory and practice. On one hand, nature-inspired algorithms for optimization are very successful and can obtain optimal solutions in a reasonably practical time. On the other hand, mathematical analysis of key aspects of these algorithms, such as convergence, balance of solution accuracy and computational efforts, is lacking, as is the tuning and control of parameters.

Nature has evolved over billions of years, providing a rich source of inspiration. Researchers have drawn various inspirations to develop a diverse range of algorithms with different degrees of success. Such diversity and success do not mean that we should focus on developing more algorithms for the sake of algorithm development, or even worse, for the sake of publication. We do not encourage readers to develop new algorithms such as grass, tree, tiger, penguin, snow, sky, ocean, or Hobbit algorithms. These new algorithms may only provide distractions from the solution of really challenging and truly important problems in optimization. New algorithms should be developed only if they provide truly novel ideas and really efficient techniques to solve challenging problems that cannot be solved by existing algorithms and methods.

It is highly desirable that readers gain some insight into the nature of different nature-inspired algorithms and can thus take on the key open problems in the field. These challenges include the mathematical proof of convergence of some bio-inspired algorithms; the theoretical framework of parameter tuning and control; statistical measures for performance comparison; the solution of large-scale, real-world applications; and real progress on tackling nondeterministic polynomial-time (NP)-hard problems. Solving these challenging problems is becoming more important than ever before.

It can be expected that highly efficient, truly intelligent, self-adaptive, and self-evolving algorithms may emerge in the not-so-distant future so that challenging problems of crucial importance (e.g., the traveling salesman problem and protein structure prediction) can be solved more efficiently.

Any insight gained or any efficient tools developed will no doubt have a huge impact on the ways that we solve tough problems in optimization, computational intelligence, and engineering design applications.

Xin-She Yang

London, 2013

1

Introduction to Algorithms

Abstract

Algorithms are important tools for solving problems computationally. All computation involves algorithms, and the efficiency of an algorithm largely determines its usefulness. This chapter provides an overview of the fundamentals of algorithms and their links to self-organization, exploration, and exploitation. A brief history of recent nature-inspired algorithms for optimization is outlined in this chapter.

Keywords

Algorithm

Optimization

Nature-inspired

Metaheuristic

Self-organization

Optimization is paramount in many applications, such as engineering, business activities, and industrial designs. The aims of optimization can be almost anything: to minimize energy consumption and costs, or to maximize profit, output, performance, and efficiency. It is no exaggeration to say that optimization is needed everywhere, from engineering design to business planning and from Internet routing to holiday planning. Because resources, time, and money are always limited in real-world applications, we have to find solutions to optimally use these valuable resources under various constraints. Mathematical optimization or programming is the study of such planning and design problems using mathematical tools. Since most real-world applications are often highly nonlinear, they require sophisticated optimization tools to tackle. Nowadays, computer simulations have become an indispensable tool for solving such optimization problems with various efficient search algorithms.

Behind any computer simulation and computational methods, there are always some algorithms at work. The basic components and the ways they interact determine how an algorithm works and the efficiency and performance of the algorithm.

This chapter introduces algorithms and analyzes the essence of the algorithm. Then we discuss the general formulation of an optimization problem and describe modern approaches in terms of swarm intelligence and bio-inspired computation. A brief history of nature-inspired algorithms is reviewed.

1.1 What is an Algorithm?

In essence, an algorithm is a step-by-step procedure for performing calculations or carrying out instructions. Many algorithms are iterative. The actual steps and procedures depend on the algorithm used and the context of interest. However, in this book we mainly concern ourselves with algorithms for optimization, and thus we place more emphasis on iterative procedures for constructing algorithms.

As an example, the algorithm for computing the square root of a positive number a, that is, x = sqrt(a), can be written as

x_{t+1} = (1/2)(x_t + a/x_t),    (1.1)

starting from a guess x_0 > 0, where t is the iteration counter or index, also called the pseudo-time or generation counter.

This iterative equation comes from rearranging the equation x^2 = a in the following form:

x = (1/2)(x + a/x).    (1.2)

For example, for a = 4 and starting from x_0 = 1, we have

x_1 = (1/2)(1 + 4/1) = 2.5,    (1.3)

x_2 = (1/2)(2.5 + 4/2.5) = 2.05,    (1.4)

x_3 ≈ 2.000609…,  x_4 ≈ 2.0000000929…,    (1.5)

which shows that this iteration method is very efficient.
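The square-root iteration above can be sketched in a few lines of Python. This is an illustration only (the book's own demo codes are in Matlab, in Appendix B); the tolerance and iteration cap are my own choices.

```python
def sqrt_iter(a, x0=1.0, tol=1e-12, max_iter=50):
    """Approximate sqrt(a) via the iteration x <- (x + a/x) / 2."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) < tol:  # stop when successive iterates agree
            return x_new
        x = x_new
    return x

# With a = 4 and x0 = 1, the iterates are 2.5, 2.05, 2.000609..., ...
print(sqrt_iter(4.0))
```

Only a handful of iterations are needed, reflecting the quadratic convergence noted in the text.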

The iteration converges to the true square root due to the fact that any fixed point of Eq. (1.2) must satisfy

x = (1/2)(x + a/x)  ⟺  x^2 = a,    (1.6)

so the only positive fixed point is x = sqrt(a) (reached from any positive starting point x_0 > 0). This method is similar to the well-known bisection method.

It is worth pointing out that the final result, though converged beautifully here, may depend on the starting (initial) guess. This is a very common feature and disadvantage of deterministic procedures or algorithms. We will come back to this point many times in different contexts in this book.

One may wonder why x^2 = a was converted to Eq. (1.1)? Why not write the iterative formula as simply the following:

x_{t+1} = a/x_t?    (1.7)

Starting from x_0 = 1 with a = 4 again, we have

x_1 = 4,  x_2 = 1,  x_3 = 4,  x_4 = 1, …,    (1.8)

and the iteration simply oscillates between 1 and 4 without ever approaching the true value sqrt(4) = 2. This clearly demonstrates that the way to design a good iterative formula is very important.
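The failure of the naive alternative (iterating x_{t+1} = a/x_t directly) is easy to demonstrate numerically. The sketch below is illustrative, not from the book:

```python
def naive_iter(a, x0=1.0, steps=6):
    """Iterate x <- a / x and record the sequence of iterates."""
    xs, x = [], x0
    for _ in range(steps):
        x = a / x
        xs.append(x)
    return xs

print(naive_iter(4.0))  # [4.0, 1.0, 4.0, 1.0, 4.0, 1.0] -- pure oscillation
```

Applying the map twice returns to the starting point (a/(a/x) = x), so the sequence can never converge unless it starts exactly at sqrt(a).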

In general, an algorithm A generates a new, usually better, solution x_{t+1} from the current solution x_t. That is,

x_{t+1} = A(x_t),    (1.9)

where A may also depend on a set of algorithm-dependent parameters. Such notations will become useful, and no confusion will occur when used appropriately. We use such notations when appropriate in this book.

1.2 Newton’s Method

Newton’s method is a classic root-finding algorithm for nonlinear equations of the form f(x) = 0. It was formulated by Newton in 1669, and later Raphson applied this idea to polynomials in 1690. This method is also referred to as the Newton-Raphson method.

Expanding f(x) in a Taylor series about a known point x = x_t, keeping only the linear term, and setting the result to zero,

f(x) ≈ f(x_t) + f′(x_t)(x − x_t) = 0,    (1.10)

which leads to

x − x_t ≈ −f(x_t)/f′(x_t),    (1.11)

or

x ≈ x_t − f(x_t)/f′(x_t).    (1.12)

Taking this new estimate as the next iterate x_{t+1} in the preceding expression. Thus we have the standard Newton iterative formula

x_{t+1} = x_t − f(x_t)/f′(x_t).    (1.13)

The iteration procedure starts from an initial guess x_0 and continues until a certain criterion is met.

A good initial guess that is reasonably close to the true root can be used as the starting point. But if the initial value is too far away from the true zero, the iteration process may fail. So it is a good idea to limit the number of iterations.

Newton’s method is very efficient and is thus widely used. For nonlinear equations, there are often multiple roots, and the choice of initial guess may affect the root into which the iterative procedure could converge. For some initial guess, the iteration simply does not work. This is better demonstrated by an example.
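The Newton iteration (1.13) is straightforward to sketch in code. The fragment below is illustrative (the book's own programs are Matlab ones in Appendix B), and the quadratic test function f(x) = x^2 − 4 is my own choice, picked because its two roots ±2 show how the initial guess selects the root:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x <- x - f(x)/f'(x), as in Eq. (1.13)."""
    x = x0
    for _ in range(max_iter):  # cap iterations, as advised in the text
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

# Root of f(x) = x^2 - 4: starting near 3 finds +2; starting near -3 finds -2
root = newton(lambda x: x * x - 4.0, lambda x: 2.0 * x, x0=3.0)
print(root)
```

Note that x0 = 0 would make f′(x0) = 0 and the step undefined, a concrete instance of an initial guess for which "the iteration simply does not work."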

We know the roots of a given nonlinear equation in advance. Let us now try to solve it using Newton’s method. First, we rewrite it in the standard form f(x) = 0 and apply formula (1.13); with this choice of f(x), the convergence is very slow. If we instead use an equivalent rewriting of the equation and the iterative formula

(1.14)

we get much faster convergence, though the expression may become singular if we continue the iterations, and for some starting points the iteration does not work at all. This highlights the importance of choosing the right initial starting point.

Newton’s method can also be extended to find the roots of a system of nonlinear equations F(x) = 0 in a d-dimensional space. That is,

x_{t+1} = x_t − J(x_t)^{-1} F(x_t),    (1.15)

where J is the Jacobian matrix of F and we write x in bold font as our vector notation.

In general, this Newton-Raphson method has a quadratic convergence rate. Sometimes the true convergence rate may not be as quick as it should be; it may have a nonquadratic convergence property.

A simple remedy is to introduce a parameter 0 < p ≤ 1 so that

x_{t+1} = x_t − p f(x_t)/f′(x_t).    (1.16)

Newton’s method can also be used for optimization, since a minimum or maximum of a differentiable function f(x) must be a stationary point. We can therefore write the optimality condition as

f′(x) = 0.    (1.17)

The previous iterative equation can be written as

x_{t+1} = x_t − p f′(x_t)/f″(x_t),    (1.18)

for the function f(x) to be optimized.

1.3 Optimization

Mathematically speaking, it is possible to write most optimization problems in the generic form

minimize f_i(x),   (i = 1, 2, …, M),   x ∈ R^d,    (1.19)

subject to the equality constraints

h_j(x) = 0,   (j = 1, 2, …, J),    (1.20)

and the inequality constraints

g_k(x) ≤ 0,   (k = 1, 2, …, K).    (1.21)

Here f_i, h_j, and g_k are functions of the design vector

x = (x_1, x_2, …, x_d)^T.    (1.22)

The components x_i of x are called design or decision variables, and they can be real continuous, discrete, or a mix of these two.

The functions f_i are called the objective functions or simply cost functions; in the case of M = 1, there is only a single objective. The space spanned by the decision variables is called the design space or search space, whereas the space formed by the objective function values is called the solution space or response space. The functions h_j and g_k are called constraints, and we can also formulate the objectives as a maximization problem.

In a rare but extreme case where there is no objective at all, there are only constraints. Such a problem is called a feasibility problem because any feasible solution is an optimal solution.

When M ≥ 2, the problem becomes a multiobjective optimization problem. Multiobjective optimization is also referred to as multicriteria or even multiattribute optimization in the literature. In real-world problems, most optimization tasks are multiobjective. Though the algorithms we discuss in this book are equally applicable to multiobjective optimization with some modifications, we mainly place the emphasis on single-objective optimization problems.

If J = K = 0 (that is, there are no constraints at all), then it is called an unconstrained optimization problem. If K = 0 and J ≥ 1, it is called an equality-constrained problem, whereas if J = 0 and K ≥ 1, it becomes an inequality-constrained problem.

It is worth pointing out that an equality constraint can, in principle, be rewritten as a pair of inequalities. However, equality constraints have special properties and require special care. One drawback is that the volume of the set satisfying an equality is essentially zero in the search space; thus it is very difficult to get sampling points that satisfy the equality exactly. Some tolerance or allowance is used in practice.
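The tolerance idea can be coded directly. The sketch below is illustrative (the helper name, test functions, and the tolerance value eps are my own choices, not from the book): a point is accepted when |h(x)| ≤ eps for every equality and g(x) ≤ 0 for every inequality.

```python
def is_feasible(x, equalities=(), inequalities=(), eps=1e-6):
    """Feasibility check: |h(x)| <= eps for equalities, g(x) <= 0 for inequalities."""
    ok_eq = all(abs(h(x)) <= eps for h in equalities)
    ok_ineq = all(g(x) <= 0.0 for g in inequalities)
    return ok_eq and ok_ineq

# Example: h(x) = x1 + x2 - 1 = 0 and g(x) = x1^2 - 0.5 <= 0
h = lambda x: x[0] + x[1] - 1.0
g = lambda x: x[0] ** 2 - 0.5
print(is_feasible([0.5, 0.5], [h], [g]))  # equality holds within eps -> feasible
print(is_feasible([0.9, 0.1], [h], [g]))  # g = 0.31 > 0 -> infeasible
```

Without the tolerance eps, a randomly sampled point would satisfy h(x) = 0 with probability essentially zero, which is exactly the drawback described above.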

If any of the functions f_i, h_j, or g_k are nonlinear, we have to deal with a nonlinear optimization problem.

1.3.1 Gradient-Based Algorithms

Newton’s method introduced earlier is for single-variable functions. Now let us extend it to multivariate functions.

For a twice continuously differentiable function f(x), a Taylor expansion about the current point x_t shows that the local quadratic model of f is minimized when the increment Δx = x − x_t is the solution to the following linear equation:

∇f(x_t) + H(x_t) Δx = 0.    (1.23)

This leads to

x = x_t − H^{-1}(x_t) ∇f(x_t),    (1.24)

where H is the Hessian matrix, which is defined as

H(x) ≡ ∇²f(x) ≡ [∂²f/∂x_i ∂x_j],   (i, j = 1, …, d).    (1.25)

This matrix is symmetric due to the fact that

∂²f/∂x_i ∂x_j = ∂²f/∂x_j ∂x_i.    (1.26)

Starting from an initial guess x^(0), Newton’s iteration can be written as

x^(t+1) = x^(t) − H^{-1}(x^(t)) ∇f(x^(t)).    (1.27)

If the function f(x) is quadratic, the solution can be found exactly in a single step. However, this method is not efficient for nonquadratic functions.
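The single-step property on quadratic functions can be checked numerically. The Python sketch below is my own illustration (hand-picked 2-D quadratic, explicit 2x2 inverse to stay dependency-free): one full Newton step x − H^{-1}∇f lands exactly on the minimizer.

```python
def newton_step_2d(grad, hess, x):
    """One Newton step x - H^{-1} grad f(x) in 2-D, using the closed-form 2x2 inverse."""
    g1, g2 = grad(x)
    (a, b), (c, d) = hess(x)
    det = a * d - b * c
    # Solve H * delta = -g with the adjugate formula for a 2x2 matrix
    dx1 = -(d * g1 - b * g2) / det
    dx2 = -(-c * g1 + a * g2) / det
    return (x[0] + dx1, x[1] + dx2)

# Quadratic f(x) = (x1 - 1)^2 + 2*(x2 + 3)^2 with minimizer (1, -3)
grad = lambda x: (2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0))
hess = lambda x: ((2.0, 0.0), (0.0, 4.0))
print(newton_step_2d(grad, hess, (10.0, 10.0)))  # (1.0, -3.0) in one step
```

For a nonquadratic f, the Hessian varies with x and the same step only approximates the minimizer, which is why the modified and quasi-Newton variants below are used in practice.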

To improve the convergence for nonquadratic functions, we can introduce a smaller step size 0 < p ≤ 1, and we have the modified Newton’s method

x^(t+1) = x^(t) − p H^{-1} ∇f(x^(t)).    (1.28)

Since calculating the Hessian matrix and its inverse can be time-consuming, a simple approximation is to replace H by the identity matrix I, and we have the quasi-Newton method

x^(t+1) = x^(t) − p I ∇f(x^(t)),    (1.29)

which is essentially the steepest descent method.

The essence of the steepest descent method is to find the largest possible decrease of the objective function f(x) from the current point x^(t). From the truncated Taylor expansion of f about x^(t), we have

f(x^(t+1)) = f(x^(t) + Δs) ≈ f(x^(t)) + (∇f(x^(t)))^T Δs,    (1.30)

where Δs = x^(t+1) − x^(t) is the increment vector. Since we are trying to find a better approximation to the objective function, it requires that the second term on the right side be negative. So,

f(x^(t) + Δs) − f(x^(t)) = (∇f)^T Δs < 0.    (1.31)

From vector analysis, we know that the inner product of two vectors is the largest in magnitude when they are parallel but in opposite directions, so as to make their dot product negative. Therefore, we have

Δs = −α ∇f(x^(t)),    (1.32)

where α > 0 is the step size. This is the case when the direction Δs is along the steepest descent in the negative gradient direction. In the case of finding maxima, this method is often referred to as hill climbing.

The step size α can be fixed or can vary during the iterations. Therefore, the steepest descent method can be written as

x^(t+1) = x^(t) − α ∇f(x^(t)).    (1.33)

In each iteration, the gradient and step size will be calculated. Again, a good initial guess of both the starting point and the step size is useful.
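Putting formula (1.33) into code is straightforward. The Python sketch below is illustrative (the fixed step size, tolerance, and test function are my own choices; practical codes adapt the step size per iteration):

```python
def steepest_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """Iterate x <- x - alpha * grad f(x) until the gradient is nearly zero."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:  # gradient ~ 0 at a stationary point
            break
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x) = (x1 - 1)^2 + (x2 + 2)^2; its gradient is (2(x1-1), 2(x2+2))
grad = lambda x: (2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0))
print(steepest_descent(grad, [5.0, 5.0]))  # approaches the minimizer (1, -2)
```

With alpha = 0.1 the error shrinks by a constant factor per iteration on this quadratic; too large a step size would make the iterates diverge, which is why the choice of both the starting point and the step size matters.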
