GECCO 2023 will feature 32 tutorials.
- A Gentle Introduction to Theory (for Non-Theoreticians)
- An Introduction to Scientific Experimentation and Benchmarking
- Automated Algorithm Configuration and Design
- Bayesian Optimization
- Benchmarking and analyzing iterative optimization heuristics with IOHprofiler
- Benchmarking Multiobjective Optimizers 2.0
- CMA-ES and Advanced Adaptation Mechanisms
- Coevolutionary Computation for Adversarial Deep Learning
- Constraint-Handling Techniques used with Evolutionary Algorithms
- Evolution of Neural Networks
- Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
- Evolutionary Computation and Tunneling at the Edge of Quantum Computing
- Evolutionary Computation for Feature Selection and Feature Construction
- Evolutionary computation for stochastic problems
- Exploratory Landscape Analysis
- Generative Hyper-heuristics
- Genetic Improvement: Taking real-world source code and improving it using computational search methods
- Genetic Programming: A Tutorial Introduction
- Introduction to Quantum Optimization
- Landscape Analysis of Optimization Problems and Algorithms
- Large-Scale Optimization and Learning
- Lexicase Selection
- Model-Based Evolutionary Algorithms
- Modern Applications of Evolutionary Rule-based Machine Learning
- Optimization Challenges at the European Space Agency
- Quality-Diversity Optimization
- Representations for Evolutionary Algorithms
- Runtime Analysis of Population-based Evolutionary Algorithms - Part I: Steady State EAs
- Single and Multi-Objective Bilevel Optimization
- Theory and Practice of Population Diversity in Evolutionary Computation
- Transfer Learning in Evolutionary Spaces
- What You Always Wanted to Know About Evolution Strategies, But Never Dared to Ask
Tutorials
A Gentle Introduction to Theory (for Non-Theoreticians)
This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For them, we give a smooth introduction to the theory of evolutionary computation.
Complementing other introductory theory tutorials, we do not discuss
mathematical methods or particular results, but explain
- what the theory of evolutionary algorithms aims at,
- how theoretical research in evolutionary computation is conducted,
- how to interpret statements from the theory literature,
- what the most important theory contributions are, and
- what the theory community is trying to understand most at the moment.
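A flavor of the kind of statement such theory makes: the classic result that the (1+1) EA optimizes OneMax in expected time roughly e·n·ln n can be checked empirically with a few lines of Python (an illustrative sketch, not part of the tutorial material):

```python
import random

def one_plus_one_ea_onemax(n, seed=0):
    # (1+1) EA on OneMax: flip each bit independently with probability 1/n
    # and accept the offspring if it is at least as good as the parent.
    # Runtime analysis predicts about e * n * ln(n) evaluations on average.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evaluations = 0
    while sum(x) < n:                      # until the all-ones string is found
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        evaluations += 1
        if sum(y) >= sum(x):               # elitist acceptance
            x = y
    return evaluations
```

For n = 50, e·n·ln n predicts roughly 530 evaluations; averaging the function over many seeds reproduces that order of magnitude.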
Benjamin Doerr
Benjamin Doerr is a full professor at the French Ecole Polytechnique. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for existing evolutionary algorithms, the determination of optimal parameter values, and complexity theoretic results. Benjamin's recent focus is the theory-guided design of novel operators, on-the-fly parameter choices, and whole new evolutionary algorithms, hoping that theory not only explains, but also develops evolutionary computation.
Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair in 2007-2009 and 2014. He is a member of the editorial boards of "Artificial Intelligence", "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science", and three journals on classic algorithms theory. Together with Anne Auger, he edited the first book focused on theoretical aspects of evolutionary computation ("Theory of Randomized Search Heuristics", World Scientific 2011). Together with Frank Neumann, he is an editor of the recent book "Theory of Evolutionary Computation - Recent Developments in Discrete Optimization" (Springer 2020).
An Introduction to Scientific Experimentation and Benchmarking
Scope and Content: In the field of Evolutionary Computation, scientific experimentation is needed to test scientific hypotheses, new ideas, and to design new algorithms. Benchmarking typically comes into play in a second phase of algorithm design to evaluate the performance of new algorithms and compare them to others.
Yet, how to conduct scientific experimentation in a sound way and avoid typical mistakes is rarely, if ever, taught. Young researchers mainly learn (or do not learn) from their own mistakes.
In this tutorial, we will discuss how to conduct basic scientific experimentation in an efficient manner, starting from posing good questions, using adequate experimental design, and visualizing data in the right way. One main goal is not to fool ourselves about how much a new idea actually helps, independently of “how beautiful the idea is” or how desperately we want to publish it.
We will cover the question of statistical tests and of how many experiments should be done to present new results.
We will also present and discuss key methodological ideas of benchmarking that have been invented and established over the past 15 years.
This tutorial is intended for young researchers starting in the field who wish to learn how to conduct scientific experimentation and benchmarking as well as for researchers that wish to challenge how they do experimentation and benchmarking.
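As an illustration of the statistical-testing theme (a sketch, not the tutorial's material): a nonparametric permutation test is one simple way to compare two samples of algorithm results without making distributional assumptions:

```python
import random
import statistics

def permutation_test(a, b, n_perm=5000, seed=0):
    # Two-sample permutation test on the difference of means: how often does
    # randomly reshuffling the pooled results produce a difference at least
    # as large as the one we observed? A small p-value suggests the two
    # algorithms' result distributions genuinely differ.
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # p-value with add-one smoothing
```

With, say, 10 runs per algorithm, a clearly separated pair of samples yields a tiny p-value, while comparing a sample against itself yields p = 1.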
Anne Auger
Anne Auger is a research director at the French National Institute for Research in Computer Science and Control (Inria), heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining Inria, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects, algorithm design, and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO 2019. She co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and all seven previous BBOB workshops at GECCO since 2009. She is co-organizing the forthcoming Dagstuhl seminar on benchmarking.
Nikolaus Hansen
Nikolaus Hansen is a research director at Inria and the Institut Polytechnique de Paris, France. After studying medicine and mathematics, he received a PhD in civil engineering from the Technical University Berlin and the Habilitation in computer science from the University Paris-Sud. His main research interests are stochastic search algorithms in continuous, high-dimensional search spaces, learning and adaptation in evolutionary computation, and meaningful assessment and comparison methodologies. His research is driven by the goal to develop algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).
Automated Algorithm Configuration and Design
Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before it is actually deployed. Moreover, automatic offline algorithm design methods may lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures allow. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let the final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts.
This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. We will demonstrate how to tackle algorithm design tasks using our irace or crace software, which implements the racing procedure for automatic algorithm design, and provide a practical step-by-step guide for the typical algorithm design scenario.
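To give a feel for the racing idea behind F-Race and irace (this is a toy illustration, not the actual irace API; the 20% elimination threshold and the simulated solver below are invented for the example), configurations are evaluated instance by instance and clearly inferior ones are dropped early:

```python
import random
import statistics

def race(configs, instances, run, survivors=1):
    # Toy racing loop: after each instance, eliminate configurations whose
    # mean cost is clearly worse than the incumbent's. The fixed 20%
    # threshold is a crude stand-in for the statistical tests that the real
    # racing methods use.
    costs = {c: [] for c in configs}
    alive = list(configs)
    for inst in instances:
        for c in alive:
            costs[c].append(run(c, inst))
        means = {c: statistics.mean(costs[c]) for c in alive}
        best = min(means.values())
        if len(alive) > survivors:
            alive = [c for c in alive if means[c] <= 1.2 * best + 1e-12]
        if len(alive) <= survivors:
            break
    return alive

# Hypothetical target algorithm: the cost of parameter c on an instance is
# its squared distance to the instance's best setting, plus evaluation noise.
_NOISE = random.Random(42)

def run_solver(c, instance):
    return (c - instance) ** 2 + _NOISE.uniform(0, 0.01)
```

Racing five candidate parameter values on a handful of instances quickly narrows the set down to the best-performing configuration.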
Leslie Pérez Cáceres
Leslie Pérez Cáceres has been a professor at Pontificia Universidad Católica de Valparaíso, Chile, since 2018. She is the Director of the Artificial Intelligence Diploma of the PUCV's Escuela de Ingeniería Informática. She received the M.S. degree in Engineering Sciences in 2011 from the Universidad Técnica Federico Santa María and the Ph.D. in Engineering and Technology Sciences from the Université Libre de Bruxelles in 2017. Her research interests are the automatic configuration of optimization algorithms and the design of optimization algorithms for solving combinatorial optimization problems. She is also one of the developers of the irace configuration tool.
Manuel López-Ibáñez
Dr. López-Ibáñez is a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. Between 2020 and 2022, he was also a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace) and the EAF package for the analysis of multi-objective optimizers (https://mlopez-ibanez.github.io/eaf/).
Thomas Stützle
Thomas Stützle is a research director of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received his PhD and his habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 22 edited proceedings or books, 11 journal special issues, and more than 250 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Evolutionary Computation, and Applied Mathematics and Computation, and is on the editorial board of seven other journals. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. For more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to effective algorithm configuration techniques such as F-race, iterated F-race, and ParamILS.
Bayesian Optimization
One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimization (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimization in engineering to stochastic optimization in operational research.
Bayesian optimization builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. Gaussian processes are not only able to predict the quality of new solutions, but also approximate the uncertainty around the prediction. This information is then used to decide what solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). This trade-off is modeled by the acquisition function which quantifies how ‘interesting’ a solution is to evaluate.
Besides both being successful black-box and derivative-free optimizers, EAs and BO have more similarities. They can both handle multiple objectives and noise. EAs have been enhanced with surrogate models (including Gaussian processes) to better handle expensive function evaluations, and EAs are often used within BO to optimize the acquisition function.
The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. The most commonly used acquisition functions will be explained, and how they are optimised using, e.g., evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.
Jürgen Branke
Jürgen Branke is Professor of Operational Research and Systems at Warwick Business School (UK). His main research interests are the adaptation of metaheuristics to problems under uncertainty (including optimization in stochastic and dynamic environments) as well as evolutionary multiobjective optimization. Prof. Branke has been an active researcher in evolutionary computation for 25 years, and has published more than 200 papers in international peer-reviewed journals and conferences. He is editor of the ACM Transactions on Evolutionary Learning and Optimization, area editor of the Journal of Heuristics and associate editor of the IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal.
Sebastian Rojas Gonzalez
Sebastian Rojas Gonzalez is currently a post-doctoral research fellow at the Surrogate Modeling Lab at Ghent University, Belgium. Until 2020 he worked at the Department of Decision Sciences of KU Leuven, Belgium, on a PhD in multiobjective simulation-optimisation. He previously obtained an M.Sc. in Applied Mathematics from the Harbin Institute of Technology in Harbin, China, and a master's degree in Industrial Systems Engineering from the Costa Rica Institute of Technology in 2012. Since 2016 his research has centered on developing stochastic optimization algorithms assisted by machine learning techniques for multi-criteria decision making.
Ivo Couckuyt
Ivo Couckuyt received a M.Sc. degree in Computer Science from the University of Antwerp in 2007. In 2013 he finished a PhD degree at the Internet Technology and Data Science Lab (IDLab) at Ghent University, Belgium. He was attached to Ghent University - imec, where he worked as a post-doctoral research fellow until 2021. Since September 2021, he has been an associate professor in the IDLab. His main research interests are automation in machine learning, data analytics, data-efficient machine learning, and surrogate modeling algorithms.
Bernd Bischl
Bernd Bischl holds the chair of "Statistical Learning and Data Science" at the Department of Statistics at the Ludwig-Maximilians-Universität in Munich and he is a co-director of the Munich Center for Machine Learning (MCML), one of Germany’s national competence centers for ML. His research interests include AutoML, model selection, interpretable ML, as well as the development of statistical software. He is an active developer of several R-packages, leads the “mlr” (Machine Learning in R) engineering group and is co-founder of the science platform “OpenML” for open and reproducible ML.
Benchmarking and analyzing iterative optimization heuristics with IOHprofiler
Comparing and evaluating optimization algorithms is an important part of evolutionary computation, and requires a robust benchmarking setup to be done well. IOHprofiler supports researchers in this task by providing an easy-to-use, interactive, and highly customizable environment for benchmarking iterative optimizers.
IOHprofiler is designed as a modular benchmarking tool. The experimenter module provides easy access to common problem sets (e.g. BBOB functions) and modular logging functionality that can be easily combined with other optimization functions. The resulting logs (and logs from other platforms, e.g. COCO and Nevergrad) are fully interoperable with the IOHanalyzer, which provides access to highly interactive performance analysis, in the form of a wide array of visualizations and statistical analyses. A GUI, hosted at https://iohanalyzer.liacs.nl/, makes these analysis tools easy to access. Data from many repositories (e.g. COCO, Nevergrad) are pre-processed, such that the effort required to compare performance to existing algorithms is greatly reduced.
This tutorial will introduce the key features of IOHprofiler by providing background information on benchmarking in EC and showing how this can be done using the modules of IOHprofiler. The key components will be highlighted and demonstrated by the organizers, with a focus on the functionality introduced in the year since the last tutorial, such as constrained optimization. Guided examples will be provided to highlight the many aspects of algorithm performance which can be explored using the interactive GUI.
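For example, one of the fixed-target statistics this style of analysis rests on, the expected running time (ERT), can be computed from best-so-far traces as follows (a plain-Python sketch of the statistic, not the IOHprofiler API):

```python
def expected_running_time(runs, target):
    # ERT to reach `target` (minimization): total evaluations spent across
    # all runs divided by the number of successful runs. `runs` is a list of
    # traces, each trace being the best-so-far value after each evaluation.
    total_evals, successes = 0, 0
    for trace in runs:
        hit = next((i + 1 for i, v in enumerate(trace) if v <= target), None)
        if hit is not None:
            total_evals += hit          # evaluations until the target was hit
            successes += 1
        else:
            total_evals += len(trace)   # unsuccessful run spends its full budget
    return total_evals / successes if successes else float("inf")
```

With three runs of budget 3 where two reach the target after 3 evaluations each and one fails, the ERT is (3+3+3)/2 = 4.5 evaluations.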
Carola Doerr
Carola Doerr, formerly Winzen, is a permanent CNRS research director at Sorbonne Université in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, both by mathematical and by empirical means. Carola is associate editor of IEEE Transactions on Evolutionary Computation and ACM Transactions on Evolutionary Learning and Optimization (TELO), and a board member of the Evolutionary Computation journal. She is program chair for the GECH track at GECCO 2023 and was program chair for PPSN 2020, FOGA 2019, and the theory tracks of GECCO 2015 and 2017. She has organized Dagstuhl seminars and Lorentz Center workshops. Her works have received several awards, among them the CNRS bronze medal, the Otto Hahn Medal of the Max Planck Society, and best paper awards at EvoApplications, CEC, and GECCO.
Hao Wang
Hao Wang obtained his PhD (cum laude) from Leiden University in 2018. He is currently employed as an assistant professor of computer science at Leiden University. Previously, he had a research stay at Sorbonne University, France. He received the Best Paper Award at the PPSN 2016 conference and was a best paper award finalist at the IEEE SMC 2017 conference. His research interests are in the analysis and improvement of efficient global optimization for mixed-continuous search spaces, evolution strategies, Bayesian optimization, and benchmarking.
Diederick Vermetten
Diederick Vermetten is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHanalyzer. His research interests include benchmarking of optimization heuristics, dynamic algorithm selection and configuration as well as hyperparameter optimization.
Thomas Bäck
Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.
Jacob de Nobel
Jacob de Nobel is a PhD student at LIACS, and is currently one of the core developers of the IOHexperimenter. His research concerns the real-world application of optimization algorithms for finding better speech encoding strategies for cochlear implants, which are neuroprostheses for people with profound hearing loss.
Furong Ye
Furong Ye is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHexperimenter. His PhD topic is benchmarking discrete optimization heuristics, from building a sound experimental environment to algorithm configuration. His research interests are empirical analysis of algorithm performance and (dynamic) algorithm configuration.
Benchmarking Multiobjective Optimizers 2.0
Benchmarking is an important part of algorithm design, selection and recommendation---both in single- and multiobjective optimization. Benchmarking multiobjective solvers seems at first sight more complicated than benchmarking single-objective ones as there exists no natural total order on the objective space. In the past, comparisons of multiobjective solvers have therefore been done either entirely visually (at first) or via quality indicators and the attainment function. Only very recently did we realize that the quality indicator approach transforms a multiobjective problem into a single-objective (set-based) problem and thus all recent progress from the rapidly evolving single-objective benchmarking field can be transferred to the multiobjective case as well. Moreover, many multiobjective test functions have been proposed in the past but not much has changed in the last 15 or so years in terms of addressing the disadvantages of those problems (like Pareto sets on constraint boundaries, usage of distance variables, etc.).
In this tutorial, we will discuss the past and future of benchmarking multiobjective optimizers. In particular, we will discuss the new view on benchmarking multiobjective algorithms by falling back on single-objective comparisons and thus being able to use all methodologies and tools from the single-objective domain such as empirical distributions of runtimes. We will also discuss the advantages and drawbacks of some widely used multiobjective test suites that we have all become familiar with over the years and explain how we can do better: by going back to the roots of what a multi-objective problem is in practice, namely the simultaneous optimization of multiple objective functions. Finally, we discuss recent advances in the visualization of (multiobjective) problem landscapes and compare the previous and newly proposed benchmark problems in the context of those landscape visualizations.
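As a concrete example of the quality-indicator view, the hypervolume indicator maps a whole solution set to a single scalar, which can then be benchmarked like any single-objective quantity. A minimal 2-D (minimization) version:

```python
def hypervolume_2d(points, ref):
    # Hypervolume of a set of 2-D objective vectors (minimization) w.r.t. a
    # reference point: the area dominated by the set and bounded by `ref`.
    # First keep only nondominated points, sorted by the first objective.
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(points):
        if f2 < best_f2:            # strictly improves the second objective
            front.append((f1, f2))
            best_f2 = f2
    # Sum the axis-aligned rectangles between consecutive front points.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For the front {(1,3), (2,2), (3,1)} with reference point (4,4) this gives 3 + 2 + 1 = 6; adding a dominated point such as (2.5, 2.5) leaves the value unchanged, as an indicator should.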
Dimo Brockhoff
Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France one). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general. Dimo has co-organized all BBOB workshops since 2013 and was EMO track co-chair at GECCO'2013 and GECCO'2014.
Tea Tušar
Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jozef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jozef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She has completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single-objective and multiobjective optimization, with emphasis on visualizing and benchmarking their results and applying them to real-world problems.
CMA-ES and Advanced Adaptation Mechanisms
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for optimization of non-separable, ill-conditioned, and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately.
This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important. We will in particular present the rank-one update, rank-mu update, active CMA for the covariance matrix adaptation. Different step-size mechanisms (CSA and TPA) as well as the scenario where they should be applied will be discussed.
We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters such as the initial mean, the initial step-size, and the population size will be discussed in relation with the ruggedness of functions. Restart strategies that automatize the input parameter tuning will be presented.
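To fix ideas, here is a heavily stripped-down sketch of the sampling-selection-recombination loop with only the rank-mu covariance update; the rank-one update, evolution paths, and proper step-size control (CSA/TPA) covered in the tutorial are replaced by a crude fixed sigma decay, and the learning rate is a simplification rather than the CMA-ES default:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def rank_mu_es(f, x0, sigma0=1.0, iterations=300, seed=0):
    # Stripped-down (mu/mu_w, lambda)-ES with only the rank-mu covariance
    # update. The sigma decay below is a crude stand-in for step-size
    # adaptation, chosen for illustration only.
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))              # population size
    mu = lam // 2                             # number of selected parents
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                              # positive recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)
    c_mu = min(1.0, mu_eff / n ** 2)          # simplified learning rate
    mean, sigma, C = np.array(x0, dtype=float), sigma0, np.eye(n)
    for _ in range(iterations):
        A = np.linalg.cholesky(C)
        Z = rng.standard_normal((lam, n))
        Y = Z @ A.T                           # steps with covariance C
        X = mean + sigma * Y
        order = np.argsort([f(x) for x in X])
        Y_sel = Y[order[:mu]]                 # the mu best offspring's steps
        mean = mean + sigma * (w @ Y_sel)     # weighted recombination
        C = (1 - c_mu) * C + c_mu * (Y_sel.T * w) @ Y_sel  # rank-mu update
        sigma *= 0.98                         # fixed decay, NOT CSA/TPA
    return mean
```

On the sphere function this toy loop already converges toward the optimum; the adaptation mechanisms discussed in the tutorial are what make the full CMA-ES robust on non-separable and ill-conditioned problems.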
Youhei Akimoto
Youhei Akimoto is an associate professor at University of Tsukuba, Japan. He received the B.S. degree in computer science in 2007, and the M.S. and Ph.D. degrees in computational intelligence and systems science from Tokyo Institute of Technology, Japan, in 2008 and 2011, respectively. From 2010 to 2011, he was a Research Fellow of JSPS in Japan, and from 2011 to 2013, he was a Post-Doctoral Research Fellow with INRIA in France. From 2013 to 2018, he was an Assistant Professor with Shinshu University, Japan. Since 2018, he has been an Associate Professor with University of Tsukuba, Japan as well as a Visiting Researcher with the Center for Advanced Intelligence Projects, RIKEN. He served as a Track Chair for the continuous optimization track of GECCO in 2015 and 2016. He is an Associate Editor of ACM TELO and is on the editorial board of the ECJ. He won the Best Paper Award at GECCO 2018 and FOGA 2019. His research interests include design principles, theoretical analyses, and applications of stochastic search heuristics.
Nikolaus Hansen
Nikolaus Hansen is a research director at Inria and the Institut Polytechnique de Paris, France. After studying medicine and mathematics, he received a PhD in civil engineering from the Technical University Berlin and the Habilitation in computer science from the University Paris-Sud. His main research interests are stochastic search algorithms in continuous, high-dimensional search spaces, learning and adaptation in evolutionary computation, and meaningful assessment and comparison methodologies. His research is driven by the goal to develop algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).
Coevolutionary Computation for Adversarial Deep Learning
In recent years, machine learning with Generative Adversarial Networks (GANs) has been recognized as a powerful method for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning. They have also been extended to handle semi-supervised and fully supervised learning paradigms. GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution, photograph editing, and text-to-image translation), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fakes generation, and AI-assisted video calls), finding application in domains of multimedia information, engineering, science, design, art, and games.
GANs are an adversarial paradigm. Two neural networks compete with each other, using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution, because evolution also exhibits adversarial engagements and competitive coevolution. In fact, the evolutionary computation community's study of coevolutionary pathologies and its work on competitive and cooperative coevolutionary algorithms offers a means of resolving the convergence impasses often encountered in GAN training.
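A miniature of this adversarial dynamic (a toy illustration invented for this sketch, not a GAN and not the Lipizzaner setup): two populations coevolve on a zero-sum game, one rewarded for matching the other, the other for evading it. The same pursuit-evasion structure can produce the oscillation pathology seen in GAN training:

```python
import random

def coevolve(gens=60, pop=20, seed=1):
    # Competitive coevolution on a toy zero-sum game over values in [0, 1]:
    # population A is rewarded for matching B's values, B for evading A.
    rng = random.Random(seed)
    A = [rng.random() for _ in range(pop)]
    B = [rng.random() for _ in range(pop)]
    clamp = lambda x: min(1.0, max(0.0, x))
    for _ in range(gens):
        # fitness = mean payoff against a sample of current opponents
        opp = rng.sample(B, 5)
        fit_a = {i: -sum((a - b) ** 2 for b in opp) for i, a in enumerate(A)}
        opp = rng.sample(A, 5)
        fit_b = {i: +sum((a - b) ** 2 for a in opp) for i, b in enumerate(B)}
        # truncation selection plus Gaussian mutation in each population
        keep = sorted(range(pop), key=fit_a.get, reverse=True)[: pop // 2]
        A = [A[i] for i in keep] + [clamp(A[i] + rng.gauss(0, 0.1)) for i in keep]
        keep = sorted(range(pop), key=fit_b.get, reverse=True)[: pop // 2]
        B = [B[i] for i in keep] + [clamp(B[i] + rng.gauss(0, 0.1)) for i in keep]
    return A, B
```

Tracking the population means over generations typically shows B fleeing toward a boundary with A in pursuit, then flipping sides, i.e., cycling rather than converging, which is exactly the kind of pathology the coevolutionary-algorithm enhancements discussed in the tutorial aim to address.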
In this tutorial we will explain:
In this tutorial we will explain:
- The main concepts of generative modeling and adversarial learning.
- GAN gradient-based training and the main pathologies that prevent ideal convergence. Specifically, we will explain mode collapse, oscillation, and vanishing gradients.
- Coevolutionary algorithms and how they can be applied to train GANs. Specifically, we will explain how algorithm enhancements address non-ideal convergence.
- To demonstrate, we will draw upon the open-source Lipizzaner framework (http://lipizzaner.csail.mit.edu/). This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.
- Students will be given the opportunity to set up and use the Lipizzaner framework during the tutorial by means of a Jupyter notebook expressly developed for teaching purposes.

Jamal Toutouh
Jamal Toutouh is a Research Assistant Professor at the University of Málaga (Spain). Previously, he was a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in the USA, at MIT CSAIL. He obtained his Ph.D. in Computer Engineering at the University of Malaga (Spain), which was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. His dissertation focused on the application of machine learning methods inspired by nature to address smart mobility problems. His current research explores the combination of nature-inspired gradient-free and gradient-based methods to address generative modeling and adversarial machine learning. The main idea is to devise new algorithms that improve the efficiency and efficacy of state-of-the-art methods, mainly by applying evolutionary computation and related techniques, such as particle swarm optimization, in the form of evolutionary machine learning approaches. He also works on the application of machine learning to problems related to smart mobility, smart cities, and climate change.
Una-May O'Reilly
Una-May O'Reilly is the leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM Sig-EVO. She now serves as Vice-Chair of ACM SigEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.
Constraint-Handling Techniques used with Evolutionary Algorithms
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
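As a minimal sketch of one of the proposals covered, the tournament rules based on feasibility (in the spirit of Deb's feasibility rules), assuming constraints are given in the form g(x) <= 0:

```python
# Binary tournament with feasibility rules: a feasible solution beats an
# infeasible one; two feasible solutions compare by objective value; two
# infeasible ones compare by total constraint violation. The toy problem
# below is illustrative.

def violation(x, constraints):
    # Constraints are given as g(x) <= 0; sum how far each one is violated.
    return sum(max(0.0, g(x)) for g in constraints)

def better(x, y, f, constraints):
    vx, vy = violation(x, constraints), violation(y, constraints)
    if vx == 0 and vy == 0:
        return x if f(x) <= f(y) else y   # both feasible: lower objective wins
    if vx == 0 or vy == 0:
        return x if vx == 0 else y        # feasible beats infeasible
    return x if vx <= vy else y           # both infeasible: less violation wins

# Example: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = [lambda x: 1.0 - x]
```

Note that no penalty factor needs tuning here, which is exactly the attraction of this family of techniques.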
Carlos Coello Coello
Carlos Artemio Coello Coello received a PhD in Computer Science from Tulane University (USA) in 1996. His research has mainly focused on the design of new multi-objective optimization algorithms based on bio-inspired metaheuristics, which is an area in which he has made pioneering contributions. He currently has over 500 publications which, according to Google Scholar, report over 58,800 citations (with an h-index of 96). He has received several awards, including the National Research Award (in 2007) from the Mexican Academy of Science (in the area of exact sciences), the 2009 Medal to the Scientific Merit from Mexico City's congress, the Ciudad Capital: Heberto Castillo 2011 Award for scientists under the age of 45, in Basic Science, the 2012 Scopus Award (Mexico's edition) for being the most highly cited scientist in engineering in the 5 years previous to the award and the 2012 National Medal of Science in Physics, Mathematics and Natural Sciences from Mexico's presidency (this is the most important award that a scientist can receive in Mexico). He also received the Luis Elizondo Award from the Instituto Tecnológico de Monterrey in 2019. He is the recipient of the prestigious 2013 IEEE Kiyo Tomiyasu Award, "for pioneering contributions to single- and multiobjective optimization techniques using bioinspired metaheuristics", and of the 2016 The World Academy of Sciences (TWAS) Award in Engineering Sciences. He has been an IEEE Fellow since January 2011, and is also the recipient of the 2021 IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award. He is also Associate Editor of several international journals including Evolutionary Computation and the IEEE Transactions on Emerging Topics in Computational Intelligence. Since January 2021, he has been the Editor-in-Chief of the IEEE Transactions on Evolutionary Computation. He is currently Full Professor with distinction at the Computer Science Department of CINVESTAV-IPN in Mexico City, Mexico.
Evolution of Neural Networks
Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network design, i.e. its architecture, hyperparameters, and other elements. While many such designs are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
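A minimal sketch of the first class of methods, fixed-topology neuroevolution (the toy XOR task and all parameters are illustrative): a (1+20) evolution strategy evolves the nine weights of a 2-2-1 tanh network, with no gradients involved.

```python
import numpy as np

# Fixed-topology neuroevolution in miniature: only the weights evolve.
# Topology-evolving methods (e.g. NEAT) and evolutionary AutoML build on
# the same evaluate-mutate-select loop.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0, 1, 1, 0], float)

def forward(w, x):
    w1, b1 = w[:4].reshape(2, 2), w[4:6]
    w2, b2 = w[6:8], w[8]
    h = np.tanh(x @ w1 + b1)        # hidden layer
    return np.tanh(h @ w2 + b2)     # one output per input row

def fitness(w):
    return -np.sum((forward(w, X) - Y) ** 2)   # negative squared error

rng = np.random.default_rng(0)
parent = rng.normal(0, 1, 9)
start = fitness(parent)
for _ in range(500):
    children = parent + rng.normal(0, 0.2, (20, 9))
    best = max(children, key=fitness)
    if fitness(best) > fitness(parent):   # elitist replacement
        parent = best
```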
Risto Miikkulainen
Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision; he is an author of over 450 articles in these research areas. At Cognizant, and previously as CTO of Sentient Technologies, he is scaling up these approaches to real-world problems. Risto is an IEEE Fellow, recipient of the IEEE CIS EC Pioneer Award, INNS Gabor Award, ISAL Outstanding Paper of the Decade Award, as well as 10 Best-Paper Awards at GECCO.
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.
Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of hardware and computing environments such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.
The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, image classification and recognition. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, evolutionary multi-objective optimisation as well as memetic/hybrid paradigms. We will focus on the use of evolutionary deep learning for image analysis --- this includes automatically learning the architectures, parameters and transfer functions of convolutional neural networks (and autoencoders and genetic programming if time allows). The use of GPU boxes will be discussed for real-time/fast object classification. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results. A deep network-like approach will also be discussed for GP, in which classification is improved by co-evolving an embedding of the input pattern and a classifier at the same time.
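As a toy illustration of evolving image operators (not a method presented in the tutorial; the synthetic image, edge mask and GA settings are all illustrative), a small elitist GA can evolve a 3x3 convolution kernel that responds strongly on the edges of a test image and weakly elsewhere:

```python
import numpy as np

# Evolve a 3x3 edge-detection kernel on a synthetic image: a bright
# square on a dark background. Fitness rewards strong absolute response
# on known edge pixels and weak response on flat regions.

rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = np.zeros_like(img, dtype=bool)
edges[4:12, [4, 11]] = True       # vertical edge pixels of the square
edges[[4, 11], 4:12] = True       # horizontal edge pixels

def convolve(image, k):
    out = np.zeros_like(image)
    for i in range(1, 15):
        for j in range(1, 15):
            out[i, j] = np.sum(image[i-1:i+2, j-1:j+2] * k)
    return np.abs(out)

def fitness(flat_kernel):
    r = convolve(img, flat_kernel.reshape(3, 3))
    return r[edges].mean() - r[~edges].mean()

pop = rng.normal(0, 1, (20, 9))
for _ in range(40):
    scores = np.array([fitness(k) for k in pop])
    elite = pop[np.argsort(scores)[-5:]]                       # keep best 5
    pop = np.vstack([elite, elite[rng.integers(0, 5, 15)] + rng.normal(0, 0.2, (15, 9))])

best = max(pop, key=fitness)
```

Selection pressure pushes the kernel toward a zero-sum, Laplacian-like shape, since flat regions then produce no response.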
Mengjie Zhang
Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for eight international conferences.
He has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of IEEE CEC, SSCI, and the EvoIASP/EvoApplications conference (he has been involved in major EC conferences such as GECCO, CEC, EvoStar, SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered keynote/plenary talks for IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017 and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Stefano Cagnoni
Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include: a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academy cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and a EU-funded "Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018, he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, then a track of the EvoApplications conference. From 2005 to 2020, he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has co-edited journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines". He has been awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.
Evolutionary Computation and Tunneling at the Edge of Quantum Computing
There is considerable interest in quantum search and optimization algorithms.
Many quantum optimization tools, particularly quantum annealing, require that the optimization problem be transformed to QUBO form: Quadratic Unconstrained Binary Optimization. However, many evolutionary computation algorithms can also greatly benefit from using transforms to create QUBO problems or other k-bounded optimization problems. Transforms are also closely related to "reductions", which are used in NP-Completeness proofs. Some classic problems can be solved faster and more easily after transforms are applied. The Walsh/Hadamard transform, which has been widely used in evolutionary computation theory, also plays a key role in quantum computing and the construction of quantum gates.
This talk provides a brief overview of Quantum Computing, but it primarily looks at how tools used in Quantum Computing can also be used to improve evolutionary optimization methods. The talk also shows how genetic recombination can tunnel between local optima without using quantum effects. New results suggest that these tunnels may provide a theoretical foundation for understanding the "Big Valley" behavior of some classic optimization problems, and why both evolutionary algorithms and Stochastic Local Search methods are able to effectively exploit the "Big Valley" distribution of local optima. The talk also highlights how some Quantum Computing researchers are sometimes trying to solve problems that have already been solved using traditional deterministic methods. This talk will also address how evolutionary methods and tunneling methods can be applied to classic problems such as MAX-3SAT. Tunneling can sometimes occur in O(1) time, even using classic deterministic methods.
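A small example of the kind of transform mentioned above, sketched under illustrative weights and penalty strength: a constrained selection problem is rewritten as an unconstrained QUBO by folding the constraint into a quadratic penalty.

```python
import itertools
import numpy as np

# QUBO transform sketch: maximize sum(w_i * x_i) subject to "pick exactly
# one item", encoded by adding the penalty P * (sum(x) - 1)^2 and expanding
# into an upper-triangular matrix Q, so the problem becomes:
# minimize x^T Q x + c over binary x.

w = np.array([1.0, 3.0, 2.0])   # item values (illustrative)
P = 10.0                        # penalty weight; must dominate the objective
n = len(w)

Q = np.zeros((n, n))
for i in range(n):
    # For binary x, x_i^2 == x_i, so the penalty's squared and linear
    # terms fold into the diagonal: -w_i + P*(1 - 2) = -w_i - P.
    Q[i, i] = -w[i] - P
    for j in range(i + 1, n):
        Q[i, j] = 2 * P         # cross terms of (sum(x) - 1)^2

def qubo_value(x):
    x = np.asarray(x, float)
    return float(x @ Q @ x + P)  # constant c = P comes from the (-1)^2 term

best = min(itertools.product([0, 1], repeat=n), key=qubo_value)
# Minimizing the QUBO picks exactly one item, the most valuable: (0, 1, 0).
```

A quantum annealer (or an evolutionary algorithm) would then minimize x^T Q x directly, with the constraint enforced implicitly by the penalty.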
Darrell Whitley
Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 250 papers. These papers have garnered more than 28,000 citations. Dr. Whitley’s H-index is 68. He introduced the first “steady state genetic algorithm” with rank based selection, published some of the earliest papers on neuroevolution, and has worked on dozens of real world applications of evolutionary algorithms. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM Sigevo from 2007 to 2011. He is a Fellow of the ACM recognized for his contributions to Evolutionary Computation, and he has been awarded the 2022 IEEE PIONEER Award in Evolutionary Computation.
Evolutionary Computation for Feature Selection and Feature Construction
In data mining/big data and machine learning, many real-world problems, such as bio-data classification and biomarker detection, image analysis, and text mining, often involve a large number of features/attributes. However, not all the features are essential, since many of them are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results due to the high dimensionality and the large search space. This problem can be solved by feature selection, which selects a small subset of the original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features.
Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most of the existing methods still suffer from stagnation in local optima and/or high computational cost. Owing to their global search potential and heuristic guidance, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means of handling big dimensionality issues where feature selection and construction are required. Furthermore, feature selection and dimensionality reduction have also been a main approach to explainable machine learning and interpretable AI.
The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.
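A toy sketch of evolutionary feature selection (not the tutorial's specific PSO/GP methods; the data, fitness, and GA settings are all illustrative): individuals are bitmasks over features, and a filter-style fitness stands in for a wrapper's classifier score.

```python
import numpy as np

# Bitmask GA for feature selection: fitness rewards the mean absolute
# correlation of selected features with the target and penalizes subset
# size. In the synthetic data, only features 0 and 4 carry signal.

rng = np.random.default_rng(1)
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
y = 2 * X[:, 0] - 3 * X[:, 4] + 0.1 * rng.normal(size=n_samples)

def fitness(mask):
    if not mask.any():
        return -1.0
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
    return float(np.mean(corr)) - 0.01 * int(mask.sum())  # relevance minus size penalty

pop = rng.integers(0, 2, (30, n_features)).astype(bool)
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # truncation selection
    children = parents[rng.integers(0, 10, 20)].copy()
    children ^= rng.random(children.shape) < 0.05         # bit-flip mutation
    pop = np.vstack([parents, children])                  # elitist replacement

best = max(pop, key=fitness)
```

The size penalty is what drives the search toward small, interpretable subsets; here it prunes the redundant features while keeping the strongly correlated one.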
Bing Xue
Bing Xue is currently Professor of Artificial Intelligence, and Deputy Head of School in the School of Engineering and Computer Science at Victoria University of Wellington. Her research focuses mainly on evolutionary computation, machine learning, big data, feature selection/learning, evolving neural networks, explainable AI and their real-world applications. Bing has over 300 papers published in fully refereed international journals and conferences including many highly cited papers and top most popular papers.
Bing is currently the Editor of the IEEE CIS Newsletter, Chair of the Evolutionary Computation Technical Committee, a member of the ACM SIGEVO Executive Committee, and Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. She also chaired the IEEE CIS Data Mining and Big Data Technical Committee and the Students Activities committee, and has been a member of many other committees. She founded and chaired the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, and co-founded and chaired the IEEE CIS Task Force on Evolutionary Transfer Learning and Transfer Optimisation. She has also won a number of awards, including Best Paper Awards from international conferences, the Early Career Award, Research Excellence Award and Supervisor Award from her University, the IEEE CIS Outstanding Early Career Award, the IEEE TEVC Outstanding Associate Editor award, and others. Bing has also served as an Associate/Guest Editor or Editorial Board Member for more than 10 international journals, including IEEE TEVC, ACM TELO, IEEE TETCI, IEEE TAI, and IEEE CIM. She is a key organiser for many international conferences, e.g. Conference Chair of IEEE CEC 2024, Co-ambassador for Women in Data Science NZ 2023, Tutorial Chair for IEEE WCCI 2022, Publication Chair of EuroGP 2022, Track Chair for ACM GECCO 2019-2022, Workshop Chair for IEEE ICDM 2021, General Co-Chair of IVCNZ 2020, Program Co-Chair for KETO 2020, Senior PC of IJCAI 2019-2021, Finance Chair of IEEE CEC 2019, Program Chair of AJCAI 2018, IEEE CIS FASLIP Symposium founder and Chair since 2016, and others. More can be seen at: https://homepages.ecs.vuw.ac.nz/~xuebing/index.html
Mengjie Zhang
Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for eight international conferences.
He has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has been co-chairing several parts of IEEE CEC, SSCI, and the EvoIASP/EvoApplications conference (he has been involved in major EC conferences such as GECCO, CEC, EvoStar, SEAL). Since 2014, he has been co-organising and co-chairing the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has also delivered keynote/plenary talks for IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017 and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications, the IEEE CIS Emergent Technologies Technical Committee, and the IEEE CIS Evolutionary Computation Technical Committee; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and also the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Evolutionary computation for stochastic problems
Many optimization problems involve stochastic and uncertain characteristics, and evolutionary algorithms have been successfully applied to a wide range of such problems.
This tutorial will give an overview of different evolutionary computation approaches for dealing with stochastic problems. It will cover theoretical foundations as well as a wide range of concepts and approaches on how to deal with stochastic objective functions and/or stochastic constraints.
The material presented will range from theoretical studies involving runtime analysis to applications of evolutionary algorithms for stochastic problems in real-world scenarios such as engineering and mine planning.
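One standard way of dealing with a stochastic objective, resampling and averaging repeated evaluations, can be sketched as follows (the noisy sphere function and all parameters are illustrative):

```python
import random

# (1+1)-style search on a noisy objective: each candidate is evaluated
# several times and the average is used for acceptance, trading evaluation
# budget for noise reduction (averaging k samples shrinks the noise
# standard deviation by a factor of sqrt(k)).

def noisy_f(x, sigma=1.0):
    return sum(v * v for v in x) + random.gauss(0, sigma)

def averaged_f(x, samples=20):
    return sum(noisy_f(x) for _ in range(samples)) / samples

def one_plus_one_es(dim=5, steps=500, samples=20):
    random.seed(42)
    x = [random.uniform(-2, 2) for _ in range(dim)]
    fx = averaged_f(x, samples)
    for _ in range(steps):
        y = [v + random.gauss(0, 0.1) for v in x]
        fy = averaged_f(y, samples)
        if fy < fx:              # accept based on the averaged estimate
            x, fx = y, fy
    return x
```

How to divide a fixed budget between more generations and more resamples per individual is exactly the kind of trade-off the theoretical results covered here address.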
Frank Neumann
Frank Neumann is a professor and the leader of the Optimisation and Logistics group at the University of Adelaide and an Honorary Professorial Fellow at the University of Melbourne. His current position is funded by the Australian Research Council through a Future Fellowship and focuses on AI-based optimisation methods for problems with stochastic constraints. Frank has been the general chair of the ACM GECCO 2016 and co-organised ACM FOGA 2013 in Adelaide. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and ACM Transactions on Evolutionary Learning and Optimization. In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of cybersecurity, renewable energy, logistics, and mining.
Aneta Neumann
Aneta Neumann is a researcher in the School of Computer and Mathematical Sciences at the University of Adelaide, Australia, and focuses on real world problems using evolutionary computation methods. She is also part of the Integrated Mining Consortium at the University of Adelaide. Aneta graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany, and received her PhD from the University of Adelaide, Australia. She served as the co-chair of the Real-World Applications track at GECCO 2021 and GECCO 2022, and is a co-chair of the Genetic Algorithms track at GECCO 2023. Her main research interests are bio-inspired computation methods, with a particular focus on dynamic and stochastic multi-objective optimization for real-world problems that occur in the mining industry, defence, cybersecurity, creative industries, and public health.
Hemant Kumar Singh
Dr. Hemant Kumar Singh completed his Ph.D. at the University of New South Wales (UNSW), Australia, in 2011, and his B.Tech in Mechanical Engineering at the Indian Institute of Technology (IIT) Kanpur in 2007. Since 2013, he has worked with UNSW Australia as a Lecturer (2013-2017) and Senior Lecturer (2017-) in the School of Engineering and Information Technology. He also worked with GE Aviation at the John F. Welch Technology Centre as a Lead Engineer during 2011-13. His research interests include the development of evolutionary computation methods with a focus on engineering design optimization problems. He has over 50 refereed publications in this area. He is the recipient of the Australia Bicentennial Fellowship 2016, the WCSMO Early Career Researcher Fellowship 2015 and The Australian Society for Defence Engineering Prize 2011, among others.
Exploratory Landscape Analysis
Whenever one performs optimization on a (continuous) problem, having at least a vague idea of its landscape's structure is usually very beneficial, as this, for instance, allows one to select and/or configure a suitable, well-performing optimization algorithm. However, characterizing problem landscapes poses (at least) two challenges:
1. one wants to avoid spending major parts of the overall budget on the characterization of the landscape (as this would reduce the budget that is left for the optimizer), and
2. the properties should be automatically measurable/computable (otherwise, one would always depend on the availability of some expert, i.e., a real person).
As a result, over the last decade, an entire research area denoted "Exploratory Landscape Analysis (ELA)" has developed around the topic of automated and feature-based problem characterization for continuous optimization problems. Our tutorial aims at providing a soft entry into this topic, as we therein cover the developments from the beginnings to the current state of the art.
Within this tutorial, we will first introduce the general concepts of automated algorithm selection — being one of the primary use cases of ELA — along with an introduction of ELA itself, followed by an overview of the historical evolution of this research domain. Next, we will provide a detailed summary of its state of the art, as well as an overview of exemplary success stories for ELA usage. Amongst others, this will include studies on the detection of a problem's structural properties (such as funnels), algorithm selection and configuration, or the comparison of newly designed benchmark suites with well-established optimization benchmarks. Moreover, we will address how ELA helps to improve our understanding of problem characteristics (and thus the problems themselves) and of the search behavior of optimization algorithms. At last, we offer pointers to promising perspectives for future research in this field.
Throughout the tutorial, we aim to engage all participants actively. For this purpose, we demonstrate suitable software frameworks (which are available in R, python, as well as a platform-independent web application), which the participants may then use to tackle simple problems (see Activities).
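To give a flavor of feature-based problem characterization, the sketch below estimates a single classic feature, fitness-distance correlation (FDC), from a random sample. Real ELA toolboxes such as flacco compute many richer feature sets; this stand-alone toy only illustrates the sample-based idea.

```python
import numpy as np

# Estimate fitness-distance correlation from a random sample of the
# search space: correlate each point's fitness with its distance to the
# best sampled point (used as a proxy for the optimum). Values near 1
# indicate a globally convex, unimodal structure.

def fdc(f, dim, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    y = np.apply_along_axis(f, 1, X)
    x_best = X[np.argmin(y)]                     # proxy for the optimum
    d = np.linalg.norm(X - x_best, axis=1)       # distance to best sample
    return np.corrcoef(y, d)[0, 1]

sphere = lambda x: np.sum(x ** 2)
# On the unimodal sphere, fitness grows with distance to the optimum,
# so the FDC estimate is strongly positive.
```

Note that the feature is computed from samples alone, without exploiting any analytic knowledge of the function, which is the point of the cheap, automated characterization motivated above.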
Pascal Kerschke
Pascal Kerschke is professor of Big Data Analytics in Transportation at TU Dresden, Germany and member of the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig. His research interests cover various topics in the context of benchmarking, data science, machine learning, and optimization - including automated algorithm selection, Exploratory Landscape Analysis, as well as continuous single- and multi-objective optimization. Moreover, he is the main developer of flacco, co-authored further R-packages such as smoof and moPLOT, co-organized numerous tutorials and workshops in the context of Exploratory Landscape Analysis and/or benchmarking, and is an active member of the Benchmarking Network and the COSEAL group.
Mike Preuss
Mike Preuss is assistant professor at LIACS, the Computer Science department of Leiden University. He works in AI, namely game AI, natural computing, and social media computing. Mike received his PhD in 2013 from the Chair of Algorithm Engineering at TU Dortmund, Germany, and was with ERCIS at the WWU Muenster, Germany, from 2013 to 2018. His research interests focus on the field of evolutionary algorithms for real-valued problems, namely on multi-modal and multi-objective optimization, and on computational intelligence and machine learning methods for computer games. Recently, he has also become involved in Social Media Computing, and he was publications chair of the multi-disciplinary MISDOOM conference in 2019. He is associate editor of the IEEE ToG journal and has been a member of the organizational teams of several conferences in recent years, in various functions, such as general co-chair, proceedings chair, competition chair, and workshops chair.
Generative Hyper-heuristics
The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce generative hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms, i.e., any general heuristic method for sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives from which the generative hyper-heuristic composes new algorithmic solutions, and how to employ various types of genetic programming to execute the composition process, i.e., the search of program space.
This tutorial will place generative hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as starting point and improves on it (a recent direction introduced by William Langdon).
The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of the approach, as the template can be any executable program and the primitive set can be chosen to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refinement of the primitives allows for gradual and directed enlargement of the search space until convergence.
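The template-plus-variant-components idea can be sketched in code. The following is an illustrative toy of our own (all names and the OneMax example are assumptions, not from the tutorial): the search loop is the invariant framework, while `perturb` and `accept` are the slots a generative hyper-heuristic would fill with evolved code composed from algorithmic primitives.

```python
import random

def search_template(objective, init, perturb, accept, iters=2000, seed=0):
    """Invariant framework: the loop structure is fixed; the variant
    components `perturb` and `accept` are the slots a generative
    hyper-heuristic could fill with evolved code."""
    rng = random.Random(seed)
    x = init(rng)
    fx = objective(x)
    for _ in range(iters):
        y = perturb(x, rng)
        fy = objective(y)
        if accept(fx, fy, rng):
            x, fx = y, fy
    return x, fx

# Hand-written instances of the variant components (a hyper-heuristic
# would instead generate these from mined primitives):
onemax = lambda bits: sum(bits)                       # maximize number of ones
init = lambda rng: [rng.randint(0, 1) for _ in range(20)]

def flip_one(bits, rng):
    """Perturbation: flip a single random bit."""
    y = bits[:]
    i = rng.randrange(len(y))
    y[i] ^= 1
    return y

greedy_accept = lambda fx, fy, rng: fy >= fx          # accept non-worsening moves

best, score = search_template(onemax, init, flip_one, greedy_accept)
print(score)  # 20 on this simple OneMax instance
```

Swapping in a different `accept` (e.g. a simulated-annealing criterion) yields a different member of the same algorithm family without touching the framework.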
This leads to a technique for mass-producing algorithms that can be customized to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a traveling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains, including data mining/machine learning, combinatorial problems such as bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).
This tutorial will provide a step-by-step guide that takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition, as the cost of human design will only increase in line with inflation, while the speed of processors increases in line with Moore's law, making automatic design attractive for industrial application. Basic knowledge of genetic programming will be assumed.
Daniel Tauritz
Daniel R. Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), the Director for National Laboratory Relationships in AU's Samuel Ginn College of Engineering, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), the founding director of AU’s Biomimetic National Security Artificial Intelligence Laboratory (BONSAI Lab), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics, competitive coevolution, and parameter control, and the application of computational intelligence techniques in security and defense. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.
John Woodward
John R. Woodward is a lecturer at the Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/) and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering which include both theoretical and empirical contributions, and given over 50 talks at International Conferences and as an invited speaker at Universities. He has worked in industrial, military, educational and academic settings, and been employed by EDS, CERN and RAF and three UK Universities.
Genetic Improvement: Taking real-world source code and improving it using computational search methods
Within the large field of Search-Based Software Engineering (SBSE), the relatively young area of Genetic Improvement (GI) aims to improve the properties of software at the level of source code. This means that it operates directly on Java or C source code, and it typically starts with real-world software. This makes GI attractive for industrial applications, in contrast to, e.g., Program Synthesis, which aims to create applications computationally from scratch. In this tutorial, we demonstrate how we can apply GI methodologies to improve functional properties, such as fixing bugs and increasing accuracy, and also to optimise the physical properties of code, such as power consumption, size of code, bandwidth, and execution time. We will, furthermore, show and discuss the importance of involving end users in the research and development of GI tools. This applies especially to industry software developers and to how the tools deliver or suggest their improvements.
The aim of the tutorial is to:
• examine the motives for evolving source code directly, rather than a language built from a function set and terminal set which has to be interpreted after a program has been evolved
• understand different approaches to implementing GI, including operating directly on text files and operating on abstract syntax trees
• appreciate the new research questions that can be addressed when operating on actual source code
• understand some of the issues involved in measuring non-functional properties such as execution time and power consumption
• examine some early examples of GI as well as current state-of-the-art examples
• understand the links between GI and other techniques such as hyper-heuristics, automatic parameter tuning, and deep parameter tuning
• highlight some of the multi-objective research where programs have been evolved that lie on the Pareto front, with axes representing different non-functional properties
• discuss the issue of search-space sparseness and flatness in GI, and more effective targeting of mutations to improve efficiency
• discuss the human-computer interaction between software developers and tools that provide automatic-programming-type services
• give an introduction to Gin, a simple open-source micro-framework for GI (https://github.com/gintool/gin), including a live demonstration
Alexander Brownlee
Alexander (Sandy) Brownlee is a Lecturer in the Division of Computing Science and Mathematics at the University of Stirling. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex And Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.
Saemundur Haraldsson
Saemundur O. Haraldsson is a Lecturer at the University of Stirling. He has co-organised every iteration of this tutorial and a number of the GI workshop iterations. He has multiple publications on Genetic Improvement, including two that have received best paper awards. Additionally, he co-authored the first comprehensive survey on GI, which was published in 2017. He has been invited to give talks on the subject at two CREST Open Workshops and multiple times to industrial audiences in Iceland. His PhD thesis (submitted in May 2017) details his work on the world's first live GI integration in an industrial application.
John Woodward
John R. Woodward is a lecturer at the Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/) and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering which include both theoretical and empirical contributions, and given over 50 talks at International Conferences and as an invited speaker at Universities. He has worked in industrial, military, educational and academic settings, and been employed by EDS, CERN and RAF and three UK Universities.
Genetic Programming: A Tutorial Introduction
Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms, and it has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithmic principles of a genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a ‘program’ to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming paradigm. It explains how the powerful capability of genetic programming derives from its modular algorithmic components: executable representations such as parse trees; variation operators that preserve syntax while exploring a variable-length, hierarchical solution space; appropriately chosen programming functions; and fitness function specification. It provides demos and walks through an example of GP software.
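To make the executable-genome idea concrete, here is a small illustrative sketch of our own (not from the tutorial): a parse tree as nested tuples, an evaluator, and a subtree crossover that preserves syntax because any subtree is itself a valid expression.

```python
import random

# A parse tree as nested tuples: ('+', left, right) at internal nodes,
# the variable 'x' or numeric constants at the leaves.
FUNCS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(tree, x):
    if isinstance(tree, tuple):
        op, left, right = tree
        return FUNCS[op](evaluate(left, x), evaluate(right, x))
    return x if tree == 'x' else tree  # terminal: variable or constant

def subtrees(tree, path=()):
    """Enumerate every node together with the path leading to it."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of `tree` with the node at `path` replaced by `new`."""
    if not path:
        return new
    i = path[0]
    t = list(tree)
    t[i] = replace(tree[i], path[1:], new)
    return tuple(t)

def crossover(a, b, rng):
    """Subtree crossover: swap a random subtree of `a` with one of `b`."""
    pa, sa = rng.choice(list(subtrees(a)))
    pb, sb = rng.choice(list(subtrees(b)))
    return replace(a, pa, sb), replace(b, pb, sa)

rng = random.Random(1)
parent1 = ('+', ('*', 'x', 'x'), 1)   # x*x + 1
parent2 = ('*', ('+', 'x', 2), 'x')   # (x+2)*x
child1, child2 = crossover(parent1, parent2, rng)
print(evaluate(parent1, 3))  # 10
```

The offspring remain syntactically valid, variable-length expressions, illustrating why this operator explores a hierarchical solution space without ever producing an unparsable program.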
Una-May O'Reilly
Una-May O'Reilly is the leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.
Erik Hemberg
Erik Hemberg is a Research Scientist in the AnyScale Learning For All (ALFA) group at Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab, USA. He has a PhD in Computer Science from University College Dublin, Ireland and an MSc in Industrial Engineering and Applied Mathematics from Chalmers University of Technology, Sweden. His work focuses on developing autonomous, pro-active cyber defenses that are anticipatory and adapt to counter attacks. He is also interested in automated semantic parsing of law, and data science for education and healthcare.
Introduction to Quantum Optimization
Scope:
Quantum computers are rapidly becoming more powerful and increasingly applicable to solving problems in the real world. They have the potential to solve extremely hard computational problems that are currently intractable for conventional computers. A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The algorithms that quantum computers use for optimization can be regarded as general types of stochastic heuristic optimization algorithms.
There are two main types of quantum computers, quantum annealers and quantum gate computers, which have very different architectures. To solve optimization problems on quantum computers, the problems need to be reformulated in a format suitable for the specific architecture. Quantum annealers are specially tailored to solve combinatorial optimization problems once these are reformulated as Quadratic Unconstrained Binary Optimisation (QUBO) problems. On quantum gate computers, the Quantum Approximate Optimization Algorithm (QAOA) can be used to solve optimization problems approximately. In this case, a classical algorithm can be used on top of the quantum computer to guide the search for parameters.
The tutorial is aimed at researchers in optimization who have no previous knowledge of quantum computers and want to learn how to solve optimization problems on them. The tutorial will demonstrate how to solve, in practice, a simple combinatorial optimization problem on the two main quantum computer architectures (quantum gate computers and quantum annealers) using the Amazon Web Services cloud platform for quantum computing. All attendees will be provided with a free AWS account to experiment hands-on with solving simple optimization problems on quantum computer simulators.
Content:
Part 1 (theory):
Quantum Computing Background
Optimisation on Quantum Annealers
Optimisation on Quantum Gate Computers
Part 2 (practice):
Amazon Web Services (Braket) Environment
Solving the Max Cut problem on a Quantum Annealer
Solving the Max Cut problem on a Quantum Gate Computer (via QAOA)
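To make the reformulation step concrete, here is a minimal sketch of how Max Cut is cast as a QUBO. This is our own toy illustration, independent of the tutorial's AWS Braket material: the graph is hypothetical, and a brute-force enumeration stands in for what the annealer would do on large instances.

```python
import itertools
import numpy as np

# Max Cut on a small graph, reformulated as a QUBO: minimize x^T Q x over
# binary x, where x_i = 1 means vertex i is on one side of the cut.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # a 4-vertex example graph
n = 4

Q = np.zeros((n, n))
for i, j in edges:
    # Each edge contributes -(x_i + x_j - 2*x_i*x_j), i.e. -1 per cut edge,
    # so minimizing x^T Q x maximizes the number of cut edges.
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

def qubo_value(x):
    return x @ Q @ x

# Exhaustive search over all 2^n assignments (the annealer's job at scale):
best = min((np.array(x) for x in itertools.product([0, 1], repeat=n)),
           key=qubo_value)
print(int(-qubo_value(best)))  # size of the maximum cut: 3
```

The same Q matrix, handed to an annealer or encoded into a QAOA cost Hamiltonian, defines the problem for either architecture; only the solver changes.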
Alberto Moraglio
Alberto Moraglio is a Senior Lecturer at the University of Exeter, UK. He holds a PhD in Computer Science from the University of Essex and Master and Bachelor degrees (Laurea) in Computer Engineering from the Polytechnic University of Turin, Italy. He is the founder of a Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design and rigorous theoretical analysis of new successful search algorithms. He gave several tutorials at GECCO, IEEE CEC and PPSN, and has an extensive publication record on this subject. He has served as co-chair for the GP track, the GA track and the Theory track at GECCO. He also co-chaired twice the European Conference on Genetic Programming, and is an associate editor of the Genetic Programming and Evolvable Machines journal. He has applied his geometric theory to derive a new form of Genetic Programming based on semantics with appealing theoretical properties, which is rapidly gaining popularity in the GP community. In the last three years, Alberto has been collaborating with Fujitsu Laboratories on optimisation on Quantum Annealing machines. He has formulated dozens of combinatorial optimisation problems in a format suitable for the quantum hardware. He is also the inventor of a patented software tool (a compiler) aimed at making these machines usable without specific expertise, by automating the translation of high-level descriptions of combinatorial optimisation problems into a low-level format suitable for the quantum hardware.
Francisco Chicano
Francisco Chicano holds a PhD in Computer Science from the University of Málaga and a Degree in Physics from the National Distance Education University. Since 2008 he has been with the Department of Languages and Computing Sciences of the University of Málaga. His research interests include quantum computing, the application of search techniques to Software Engineering problems, and the use of theoretical results to efficiently solve combinatorial optimization problems. He is on the editorial boards of Evolutionary Computation Journal, Engineering Applications of Artificial Intelligence, Journal of Systems and Software, ACM Transactions on Evolutionary Learning and Optimization, and Mathematical Problems in Engineering. He has also served as programme chair and Editor-in-Chief at international events.
Landscape Analysis of Optimization Problems and Algorithms
The notion of a fitness landscape was first introduced in 1932 to understand natural evolution, but the concept was later applied in the context of evolutionary computation to understand algorithm behavior on different problems. In the last decade, the field of fitness landscapes has experienced a large upswing in research, evident in the increased number of published papers on the topic as well as regular tutorials, workshops and special sessions at all the major evolutionary computation conferences. More recently, landscape analysis has been used in contexts beyond evolutionary computation in areas such as neural architecture search, feature selection for data mining, hyperparameter optimization and neural network training.
A further recent advance has been the analysis of landscapes through the trajectories of algorithms. The search paths of algorithms provide samples of the search space that can be seen as a view of the landscape from the perspective of the algorithm. What algorithms "see" as they move through the search space of different problems can help us understand how evolutionary and other search algorithms behave on problems with different characteristics.
The content of the tutorial will include foundational work in fitness landscape analysis as well as new techniques that have been developed in the last few years. We will cover recent results on local optima networks (LONs), a network-based model of fitness landscapes where nodes are local optima and edges are possible search transitions among these optima. We also cover search trajectory networks (STNs) as a tool to analyze and visualize the behavior of metaheuristics through modelling the search trajectories of algorithms. Both LONs and STNs allow us to visualize realistic search spaces in ways not previously possible and bring a whole new set of quantitative network metrics for characterizing and understanding computational search.
Case studies will be presented of recent applications of landscape analysis in both discrete and continuous domains. These include constrained search spaces, neural network training landscapes, feature selection search spaces for data mining, and neural architecture search spaces.
Katherine Malan
Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria in 2014 and her MSc & BSc degrees from the University of Cape Town. She has over 25 years' lecturing experience, mostly in Computer Science, at three different South African universities. Her research interests include automated algorithm selection in optimisation and learning, fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is editor-in-chief of South African Computer Journal, associate editor for Engineering Applications of Artificial Intelligence, and has served as a reviewer for over 20 Web of Science journals.
Gabriela Ochoa
Gabriela Ochoa is a Professor of Computing Science at the University of Stirling in Scotland, UK. Her research lies in the foundations and applications of evolutionary algorithms and metaheuristics, with emphasis on adaptive search, fitness landscape analysis and visualisation. She holds a PhD from the University of Sussex, UK, and has worked at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her Google Scholar h-index is 40, and her work on network-based models of computational search spans several domains and has obtained 4 best-paper awards and 8 other nominations. She collaborates cross-disciplines to apply evolutionary computation in healthcare and conservation. She has been active in organisation and editorial roles in venues such as the Genetic and Evolutionary Computation Conference (GECCO), Parallel Problem Solving from Nature (PPSN), the Evolutionary Computation Journal (ECJ) and the ACM Transactions on Evolutionary Learning and Optimisation (TELO). She is a member of the executive board for the ACM interest group in evolutionary computation, SIGEVO, and the editor of the SIGEVOlution newsletter. In 2020, she was recognised by the leading European event on bio-inspired algorithms, EvoStar, for her outstanding contributions to the field.
Large-Scale Optimization and Learning
Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning tasks. Recent advances in machine learning have likewise involved very large-scale optimization problems encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this "curse of dimensionality" that makes large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical, metaheuristic, and learning-based optimization algorithms being developed to overcome this scalability issue. This tutorial is dedicated to exploring the recent advances in this field, and is divided into two parts: (I) large-scale black-box continuous optimization and (II) large-scale combinatorial optimization. In Part I, we focus on methods that learn and exploit problem structures to tackle large-scale black-box optimization problems, such as decomposition methods based on variable interaction analysis. The opportunities and challenges of applying Evolutionary Algorithms (EAs) to deep learning are also discussed.
In Part II, we introduce traditional problem reduction and decomposition techniques to solve large-scale combinatorial optimization problems, with a special focus on the emerging topic of leveraging machine learning and data mining for combinatorial optimization.
Part I: Large-Scale Black-Box Continuous Optimization
In this part of the tutorial, we provide an overview of the recent advances in the field of evolutionary large-scale global optimization with an emphasis on the divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches including the non-decomposition based approaches such as memetic algorithms and sampling methods to deal with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms including the differential grouping algorithm and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms. Overall, this tutorial takes the form of a critical survey of the existing methods with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
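As a rough illustration of the variable interaction analysis behind decomposition methods, here is a simplified, toy version of a differential-grouping style pairwise check (our own sketch; the actual differential grouping algorithm and its improved derivatives are considerably more sophisticated, e.g. in how they choose sample points and thresholds).

```python
import numpy as np

def interacts(f, i, j, n, delta=1.0, eps=1e-9):
    """Differential-grouping style check (simplified sketch): variables
    i and j interact if the effect of perturbing x_i on f changes when
    x_j takes a different value."""
    x = np.zeros(n)
    e_i = np.zeros(n)
    e_i[i] = delta
    d1 = f(x + e_i) - f(x)            # effect of moving x_i with x_j = 0
    x2 = x.copy()
    x2[j] = delta
    d2 = f(x2 + e_i) - f(x2)          # the same move with x_j shifted
    return abs(d1 - d2) > eps

# Partially separable test function: x0 and x1 interact; x2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2

print(interacts(f, 0, 1, 3))  # True  (the x0*x1 term couples them)
print(interacts(f, 0, 2, 3))  # False (no cross term)
```

Running such checks over all variable pairs partitions the variables into independent groups, which cooperative co-evolution can then optimize as separate lower-dimensional subproblems.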
Part II: Large-Scale Combinatorial Optimization
In this part, we focus on large-scale Mixed Integer Programs (MIPs), a general mathematical formulation for many real-world combinatorial optimization problems. In particular, we introduce two important categories of methods for solving large-scale MIPs. (1) Problem reduction: We briefly describe the existing metaheuristics that incorporate the idea of problem reduction for solving large-scale MIPs, including the Construct, Merge, Solve & Adapt, Merge Search, and Large Neighbourhood Search algorithms. We also introduce an emerging topic of leveraging machine learning and data mining for problem reduction, and show how this can be incorporated into EAs to boost their performance for solving large-scale MIPs. (2) Problem decomposition: We give a general description of problem decomposition techniques such as Lagrangian relaxation and Dantzig-Wolfe decomposition, and show how EAs can be combined with traditional mathematical programming techniques to solve large-scale MIPs more efficiently. Importantly, we discuss the potential of using EAs and machine learning techniques to automatically identify good decompositions for general MIPs. Overall, this part of the tutorial takes the form of a critical survey of existing and emerging algorithms with an emphasis on techniques used, and future challenges and opportunities in this field.
Mohammad Nabi Omidvar
Nabi Omidvar is a University Academic Fellow (Assistant Professor) with the School of Computing, University of Leeds, and Leeds University Business School, UK. He is an expert in large-scale global optimization and is currently a senior member of the IEEE and the chair of IEEE Computational Intelligence Society's Taskforce on Large-Scale Global Optimization. He has made several award winning contributions to the field including the state-of-the-art variable interaction analysis algorithm which won the IEEE Computational Intelligence Society's best paper award in 2017. He also coauthored a paper which won the large-scale global optimization competition in the IEEE Congress on Evolutionary Computation in 2019. Dr. Omidvar's current research interests are high-dimensional (deep) learning and the applications of artificial intelligence in financial services.
Yuan Sun
Yuan Sun is a Research Fellow in the School of Computing and Information Systems, the University of Melbourne, and the Vice-Chair of the IEEE CIS Taskforce on Large-Scale Global Optimization. He completed his Ph.D. degree at the University of Melbourne and a Bachelor's degree at Peking University. His research interests include artificial intelligence, evolutionary computation, operations research, and machine learning. He has published more than twenty research papers in these areas, and his research has been nominated for the best paper award at GECCO 2020 and won the CEC 2019 Competition on Large-Scale Global Optimization.
Xiaodong Li
Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Computing Technologies, RMIT University, Melbourne, Australia. His research interests include machine learning, evolutionary computation, neural networks, deep learning, data analytics, multiobjective optimization, operational research, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a former vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award". He is an IEEE Fellow.
Lexicase Selection
Lexicase parent selection chooses individuals in an EC population not on the basis of a single, aggregated fitness value, but using random sequences of test cases (or objectives) that filter the population down to a single individual. By never aggregating performance on separate cases, lexicase selection effectively makes use of additional information for parent selection. In Genetic Programming, this results in semantic-aware parent selection, although lexicase selection has also proven useful in other EC systems without a genotypic/semantic distinction. Since lexicase selection places importance on performing well on subsets of the test cases rather than on all cases at once, it often selects specialists: individuals who perform well on some parts of a problem while performing poorly on others. These specialists contribute to the maintenance of increased diversity in populations evolved under lexicase selection. However, unlike some methods that focus entirely on diversity, lexicase selection also provides strong selection pressure toward individuals that perform well on many combinations of test cases, which can lead to excellent overall performance.
Lexicase selection has recently been used in various evolutionary computation systems, often outperforming tournament selection as well as other more advanced parent selection techniques. While originally formulated for Genetic Programming, it has also been used for evolutionary many-objective optimization in Genetic Algorithms, in Learning Classifier Systems, in Evolutionary Robotics, and as a method of building machine learning ensembles. Many variants of lexicase selection have been proposed to tackle different issues. In particular, epsilon lexicase selection has performed well in problem domains where errors on test cases are real numbers. We will discuss these variants and when you might want to use them.
This tutorial will give an introduction to lexicase selection and will discuss major research directions involving both advances in its use and explanations for why it performs well. We will discuss its effects on population diversity, its ability to select specialist individuals, and its propensity toward hyperselection, where it selects the same individual many times in one generation. Participants will come away with a strong understanding of lexicase selection and how it can be applied to any test case-based evolutionary computation system. In addition, they will gain insight into the research frontiers in lexicase selection.
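The core selection loop is compact enough to sketch directly. The following is an illustrative implementation of our own (the toy population and all names are assumptions, not tutorial material), showing how lexicase filters candidates case by case and why specialists survive while a never-elite generalist does not:

```python
import random

def lexicase_select(population, errors, rng):
    """One lexicase parent-selection event.
    `errors[i][c]` is individual i's error on test case c (lower is better).
    Cases are considered in random order; only candidates elite on each
    case in turn survive to the next case."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]  # tie-break at random

# Toy example: three individuals, four test cases.
pop = ['generalist', 'specialist_A', 'specialist_B']
errs = [
    [1, 1, 1, 1],   # decent everywhere, elite nowhere
    [0, 0, 3, 3],   # elite on cases 0 and 1
    [3, 3, 0, 0],   # elite on cases 2 and 3
]
rng = random.Random(42)
picks = {lexicase_select(pop, errs, rng) for _ in range(50)}
print(sorted(picks))  # only the two specialists are ever selected
```

Epsilon lexicase, used for real-valued errors, relaxes the `== best` filter to `<= best + epsilon` so that near-elite candidates also survive a case.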
William La Cava
William La Cava is an Assistant Professor in the Computational Health Informatics Program (CHIP) at Boston Children’s Hospital and Harvard Medical School. He received his PhD from UMass Amherst with a focus on interpretable modeling of dynamical systems. Prior to joining CHIP, he was a post-doctoral fellow and research associate in the Institute for Biomedical Informatics at the University of Pennsylvania.
Thomas Helmuth
Thomas Helmuth is an assistant professor of computer science at Hamilton College in New York. He received his Ph.D. from the University of Massachusetts Amherst, working with professor Lee Spector. His research focuses on applications of genetic programming to automatic software synthesis. He has contributed to genetic programming in the areas of parent selection, stack-based program representations (in particular Push), and the benchmarking of program synthesis systems.
Model-Based Evolutionary Algorithms
In model-based evolutionary algorithms (MBEAs) the variation operators are guided by the use of a model that conveys problem-specific information so as to increase the chances that combining the currently available solutions leads to improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and subsequent exploitation of these regularities, thereby enabling the design of optimization techniques that can automatically adapt to a given problem. This is an especially useful feature when considering optimization in a black-box setting. The use of models can furthermore also have major implications for grey-box settings where not everything about the problem is considered to be unknown a priori.
Well-known types of MBEAs are Estimation-of-Distribution Algorithms (EDAs) where probabilistic models of promising solutions are built and samples are subsequently drawn from these models to generate new solutions. A more recent class of MBEAs is the family of Optimal Mixing EAs such as the Linkage Tree GA and, more generally, various GOMEA variants. The tutorial will mainly focus on the latter types of MBEAs.
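As a concrete illustration of the build-model/sample loop common to EDAs, here is a minimal sketch of the Univariate Marginal Distribution Algorithm (UMDA) for bit strings — a far simpler model than the linkage trees used by GOMEA variants, but enough to show how crossover and mutation are replaced by model estimation and sampling. Parameter values and the margin clamping are illustrative choices, not a reference implementation:

```python
import random

def umda(fitness, n_bits, pop_size=50, mu=25, generations=100):
    """Univariate Marginal Distribution Algorithm: a simple EDA.

    Each generation (1) selects the best `mu` solutions, (2) estimates
    per-bit marginal probabilities from them, and (3) samples a new
    population from that probabilistic model.
    """
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        selected = pop[:mu]
        # model: marginal probability of a 1 at each position,
        # clamped away from 0/1 so the model cannot fixate permanently
        p = [min(max(sum(ind[i] for ind in selected) / mu, 1 / n_bits),
                 1 - 1 / n_bits) for i in range(n_bits)]
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop + [best], key=fitness)
    return best
```

Because the model here treats every bit independently, UMDA cannot capture linkage between variables — exactly the limitation that linkage-learning MBEAs such as the Linkage Tree GA address.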
Dirk Thierens
Dr. Dirk Thierens is a lecturer/senior researcher at the Department of Information and Computing Sciences at Utrecht University, where he is teaching courses on Evolutionary Computation and Computational Intelligence. He has (co)-authored over 100 peer reviewed papers in Evolutionary Computation. His main current research interests are focused on the design and application of structure learning techniques in the framework of population-based, stochastic search. Dirk contributed to the organization of previous GECCO conferences as track chair, workshop organizer, Editor-in-Chief, and past member of the SIGEVO ACM board.
Peter A. N. Bosman
Peter Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter obtained both his MSc and PhD degrees on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 150 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair, late-breaking-papers chair, (co-)workshop organizer, (co-)local chair (2013) and general chair (2017).
Modern Applications of Evolutionary Rule-based Machine Learning
Evolutionary Rule-based Machine Learning (ERBML) has a rich history spanning over 40 years; however, this unique family of machine learning algorithms is often overlooked in favor of more popular approaches such as neural networks and random forests. This tutorial will highlight modern ERBML concepts and implementations to demonstrate how they can be ideal for current applications, especially in the age of interpretable and eXplainable AI.
Rule-based Machine Learning (RBML) is a family of machine learning approaches that automatically (without a human in the loop) identify, evolve, and learn rules to solve a certain problem or a part of the problem. These rules collectively provide knowledge of the environment in a 'piece-wise' manner. Evolutionary Computation (EC) is a paradigm consisting of population-based problem solving techniques inspired by neo-Darwinian evolution. Evolutionary Machine Learning is continuing to grow in popularity due to its combination of open-ended genetic search with effective machine learning. This tutorial presents how RBML approaches, in particular, Learning Classifier Systems (LCS), embed evolution with machine learning to produce transparent, interpretable/explainable and flexible AI systems for solving complex real-life problems; for example, how LCS can be adapted to problems ranging from bioinformatics to robotic navigation.
The goals of this tutorial are three-fold: first, to provide a gentle introduction to ERBML-based systems, how they work, what types of problems they can solve, and to discuss the advantages, disadvantages, and limitations of these systems with a particular focus on modern modalities and knowledge perception; second, to provide a hands-on experience of one of the state-of-the-art ERBML systems (ExSTraCS) to solve a real-world bioinformatics problem, interpretation and statistical analysis of the experimental results; third, to highlight the recent advances in ERBML systems, especially adaptation to real-world problems, ranging from bioinformatics to robotic navigation domains.
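To make the "rules" concrete: LCS-style systems typically represent a rule as a condition over the input plus an action, using a "don't care" symbol that lets rules generalize across inputs. The sketch below is illustrative only (it is not code from ExSTraCS) and shows the matching and variation machinery on which the evolutionary search operates:

```python
import random

# A rule's condition over a binary input uses the ternary alphabet
# {'0', '1', '#'}, where '#' ("don't care") matches either input value.

def matches(condition, state):
    """Does a ternary condition match a binary input state?"""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def match_set(rules, state):
    """The subset of the rule population whose conditions match `state`."""
    return [r for r in rules if matches(r['condition'], state)]

def mutate(rule, rate=0.1):
    """Evolutionary variation on a rule's condition: re-draw symbols
    from {0, 1, #} — the kind of operator an LCS's GA applies."""
    cond = [random.choice('01#') if random.random() < rate else c
            for c in rule['condition']]
    return {'condition': cond, 'action': rule['action']}

# Two hand-written rules for a 2-bit toy problem, each covering
# half of the input space in a 'piece-wise' manner:
rules = [
    {'condition': ['0', '#'], 'action': 0},
    {'condition': ['1', '#'], 'action': 1},
]
```

A full LCS adds fitness bookkeeping, niching, and reinforcement of accurate rules on top of this representation; the resulting rule populations are what make the evolved models transparent and interpretable.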
Abubakar Siddique
Dr. Siddique's main research lies in creating novel machine learning systems, inspired by the principles of cognitive neuroscience, to provide efficient and scalable solutions for challenging and complex problems in different domains, such as Boolean, computer vision, navigation and Bioinformatics. He has provided a tutorial on Learning Classifier Systems: Cognitive inspired machine learning for eXplainable AI at GECCO 2022. He is engaged as an author and reviewer for different journals and international conferences including IEEE Transactions on Cybernetics, IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, GECCO, IEEE CEC and EuroGP.
Dr. Siddique did his Bachelor's in Computer Science at Quaid-i-Azam University, his Master's in Computer Engineering at U.E.T Taxila, and his Ph.D. in Computer Engineering at Victoria University of Wellington. He was the recipient of the VUWSA Gold Award and the "Student Of The Session" award during his Ph.D. and bachelor studies, respectively. He spent nine years at Elixir Pakistan, a leading California (USA) based software company, where his final role was Principal Software Engineer, leading a team of software developers. He developed enterprise-level software for customers such as Xerox, IBM and Adobe. Currently, he is a lecturer at the Wellington Institute of Technology and a Postdoctoral Research Fellow at Victoria University of Wellington.
Will N. Browne
Prof. Will Browne's research focuses on applied cognitive systems. Specifically, how to use inspiration from natural intelligence to enable computers/machines/robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and the co-chair for the Evolutionary Machine Learning track at the Genetic and Evolutionary Computation Conference. He has also provided tutorials on Rule-Based Machine Learning and Advanced Learning Classifier Systems at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs), and lectured graduate courses on LCSs. He has co-authored the first textbook on LCSs Introduction to Learning Classifier Systems, Springer 2017. Currently, he is Professor and Chair in Manufacturing Robotics at Queensland University of Technology, Brisbane, Queensland, Australia.
Ryan Urbanowicz
Dr. Ryan Urbanowicz is an Assistant Professor of Computational Biomedicine at the Cedars Sinai Medical Center. His research focuses on the development of machine learning, artificial intelligence automation, data mining, and informatics methodologies as well as their application to biomedical and clinical data analyses. This work is driven by the challenges presented by large-scale data, complex patterns of association (e.g. epistasis and genetic heterogeneity), data integration, and the essential demand for interpretability, reproducibility, and efficiency in machine learning. His research group has developed a number of machine learning software packages including ReBATE, GAMETES, ExSTraCS, STREAMLINE, and FIBERS. He has been a regular contributor to GECCO since 2009 having (1) provided tutorials on learning classifier systems and the application of evolutionary algorithms to biomedical data analysis, (2) co-chaired the International Workshop on Learning Classifier Systems and a workshop on benchmarking evolutionary algorithms, and (3) co-chaired various tracks. He is also an invested educator, with dozens of educational videos and lectures available on his YouTube channel, and co-author of the textbook, `Introduction to Learning Classifier Systems'.
Optimization Challenges at the European Space Agency
Challenging optimization problems arise in the context of space exploration. These include optimization of trajectories, spacecraft maneuvers, mission planning, etc. Tackling these optimization problems often requires expensive computations and simulations of astrodynamics. Techniques used in the specialized literature include Evolutionary Algorithms, Particle Swarm Optimization and other black-box optimizers. These problems are, however, quite different from those typically tackled in the black-box optimization literature, having a large number of decision variables, very rugged landscapes, and dynamic behavior. Expert knowledge about the application scenario is often useful to understand the behavior of optimization algorithms.
This tutorial will introduce the basic knowledge required to understand optimization problems in Space at a level that the audience of GECCO can understand. Several optimization problems will be described in detail and possible solution approaches will be discussed. Software tools and example codes will be provided to demonstrate the problems and potential solutions.
The goal of the tutorial is to introduce these problems to the audience of GECCO in an approachable manner that fosters interest and further advances both the application area and the evolutionary computation field. We believe these problems provide challenging real-world examples of the type of optimization problems that are of interest to the GECCO community, but the “bar to entry” prevents many members in the community from engaging with this application area. The aim of the tutorial is to lower this bar as much as possible.
Dario Izzo
Dr. Izzo graduated in Aeronautical Engineering from the University Sapienza of Rome in 1999 and later obtained a second master's in “Satellite Platforms” at the University of Cranfield in the UK and a Ph.D. in Mathematical Modelling in 2003 at the University Sapienza of Rome. In 2004, he moved to the European Space Agency (ESA) in the Netherlands as a research fellow in Mission Analysis. Dr. Izzo now heads the Advanced Concepts Team and manages its interface to the rest of ESA. During his years with the ACT, he has led studies in interplanetary trajectory design and artificial intelligence and has taken part in several other innovative research efforts in diverse fields. He started the Global Trajectory Optimization Competition events, ESA’s Summer of Code in Space, and the Kelvins competition platform (https://kelvins.esa.int/). Dr. Izzo has published more than 150 papers in journals, conferences and books. At GECCO 2013, he received the Humies Gold Medal for work on grand tours of the Galilean moons and, the following year, he won the 8th edition of the Global Trajectory Optimization Competition, organized by NASA/JPL, leading a mixed team of ESA/JAXA scientists. His interests span computer science, open-source software development, interplanetary trajectory optimization, biomimetics and artificial intelligence.
Manuel López-Ibáñez
Dr. López-Ibáñez is a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. Between 2020 and 2022, he was also a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace) and the EAF package for the analysis of multi-objective optimizers (https://mlopez-ibanez.github.io/eaf/).
Quality-Diversity Optimization
A fascinating aspect of natural evolution is its ability to produce a diversity of organisms that are all high performing in their niche. By contrast, the main artificial evolution algorithms are focused on pure optimization, that is, finding a single high-performing solution.
Quality-Diversity optimization (or illumination) is a recent type of evolutionary algorithm that bridges this gap by generating large collections of diverse solutions that are all high-performing. This concept was introduced by the “Generative and Developmental Systems” community between 2011 (Lehman & Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the “Novelty Search with Local Competition” and “MAP-Elites” evolutionary algorithms. The main differences from multi-modal optimization algorithms are that (1) Quality-Diversity typically works in the behavioural space (or feature space), not in the genotypic space, and (2) Quality-Diversity attempts to fill the whole behaviour space, even if a niche is not a peak in the fitness landscape. In the last 5 years, more than 100 papers have been written about quality diversity, many of them in the GECCO community (a non-exhaustive list is available at https://quality-diversity.github.io/papers).
The collections of solutions obtained by Quality-Diversity algorithms open many new applications for evolutionary computation. In robotics, they were used to create repertoires of neural network policies (Nilsson & Cully, 2021 — best paper of the NE track) and to allow robots to adapt to damage in a few minutes (Cully et al., 2015, Nature; Allard et al., 2022 — best paper of the CS track); in engineering, they can be used to propose a diversity of optimized aerodynamic shapes (Gaier et al., 2018 — best paper of the CS track); they were also recently used in video games (Khalifa et al., 2018) and for Workforce Scheduling and Routing Problems (WSRP) (Urquhart & Hart, 2018).
This tutorial will give an overview of the various questions addressed in Quality-Diversity optimization, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in Quality-Diversity optimization will be described. The tutorial will in particular focus on:
• what is Quality-Diversity optimization?
• similarities and differences with traditional evolutionary algorithms;
• existing variants of Quality-Diversity algorithms;
• example applications: learning behavioural repertoires in robotics, evolving 3D shapes for aerodynamic design;
• open questions and future challenges.
The tutorial will effectively complement the Complex Systems track, which usually contains several papers about Quality-Diversity algorithms. For instance, 47% (7/15) of the papers accepted in the CS track in 2021 used Quality-Diversity optimization, 56% (5/9) in 2020, 36% (4/11) in 2019 and 20% (3/15) in 2018. After the conference, the video of the tutorial will be hosted on the quality-diversity website (https://quality-diversity.github.io/), which gathers papers and educational content related to quality-diversity algorithms.
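The core MAP-Elites loop is compact enough to sketch here. This illustrative Python version (the function names and the bootstrap threshold are our own assumptions, not a reference implementation) maintains an archive mapping each behavioural niche to the best solution found there:

```python
import random

def map_elites(fitness, behavior, sample, mutate, iterations=2000, n_init=100):
    """Minimal MAP-Elites: keep the best ("elite") solution per niche.

    fitness(x)  -> quality of solution x (higher is better)
    behavior(x) -> the index of x's niche in a discretized behaviour space
    sample()    -> a random initial solution
    mutate(x)   -> a perturbed copy of x
    """
    archive = {}  # niche index -> current elite for that niche
    for i in range(iterations):
        if i < n_init or not archive:
            x = sample()  # bootstrap the archive with random solutions
        else:
            # vary a randomly chosen elite, regardless of its niche
            x = mutate(random.choice(list(archive.values())))
        cell = behavior(x)
        # replace the elite only if the niche is empty or x is better
        if cell not in archive or fitness(x) > fitness(archive[cell]):
            archive[cell] = x
    return archive
```

Unlike a pure optimizer, the result is the whole archive — a collection of diverse, locally high-performing solutions — rather than a single best individual.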
Antoine Cully
Antoine Cully is Lecturer (Assistant Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, on robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and the Ph.D. degrees in robotics and artificial intelligence from the Sorbonne Université in Paris, France, in 2012 and 2015, respectively, and the engineer degree from the School of Engineering Polytech’Sorbonne, in 2012. His Ph.D. dissertation has received three Best-Thesis awards. He has published several journal papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015), received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), and two Best-Paper awards from GECCO (2021, 2022).
Jean-Baptiste Mouret
Jean-Baptiste Mouret is a senior researcher ("directeur de recherche") at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor ("maître de conférences") at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie - Paris 6 (UPMC, now Sorbonne Université). He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005, and a Ph.D. in computer science from the same university in 2008. He was the principal investigator of an ERC grant (ResiBots - Robots with animal-like resilience, 2015-2020) and was the recipient of a French ANR young researcher grant (Creadapt - Creative adaptation by Evolution, 2012-2015). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was featured on the cover of Nature (Cully et al., 2015) and received the 2017 ISAL Award for Distinguished Young Investigator in the field of Artificial Life, the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), 3 GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), and the IEEE CEC best student paper award (2009). He co-chaired the "Evolutionary Machine Learning" track at GECCO 2019 and the "Generative and Developmental Systems" track in 2015.
Stéphane Doncieux
Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne University, Paris, France. He is an engineer of the ENSEA, a French electronic engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999 and defended a PhD in Computer Science in 2003. He was responsible, with Bruno Gas, for the SIMA research team from its creation in 2007 until 2011. From 2011 to 2018, he was the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 post-docs and engineers, and 11 PhD students. Since January 2019, he is deputy director of the ISIR lab, one of the largest robotics labs in France. He has organized several workshops on ER at conferences like GECCO or IEEE-IROS and has edited 2 books. Stéphane Doncieux was co-chair of the GECCO complex systems track in 2019 and 2020. His research is in cognitive robotics, with a focus on the use of evolutionary algorithms in the context of the synthesis of robot controllers. He worked on selective pressures and on the use of evolutionary methods in a developmental robotics approach, in which evolutionary algorithms are used for their creativity to bootstrap a cognitive process and allow it to acquire an experience that can later be redescribed in another representation for faster and more effective task resolution. This was the goal of the H2020 DREAM European project that he coordinated (http://dream.isir.upmc.fr).
Representations for Evolutionary Algorithms
Successful and efficient use of evolutionary algorithms depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.
Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on the performance of evolutionary algorithms. Relevant concepts are the locality and redundancy of representations. Locality describes whether neighboring genotypes are also neighboring phenotypes. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes, so that several genotypes can encode the same phenotype. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor certain kinds of phenotypes.
The tutorial gives a brief overview about existing guidelines for representation design, illustrates different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.
It is expected that the participants have a basic understanding of EA principles.
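As a small illustration of the locality concept (a toy example of our own, not material from the tutorial), one can compare the standard binary and Gray encodings of integers by measuring how far, on average, a single-bit mutation of the genotype moves the phenotype:

```python
def binary_to_int(bits):
    """Standard binary encoding: genotype (MSB-first bit list) -> integer."""
    return int(''.join(map(str, bits)), 2)

def gray_to_int(bits):
    """Gray decoding: adjacent integers differ in exactly one Gray bit."""
    value = 0
    for b in bits:
        value = (value << 1) | (b ^ (value & 1))
    return value

def avg_locality(decode, n_bits):
    """Average phenotype distance caused by a single-bit mutation.

    Lower values mean higher locality: genotype neighbours map to
    phenotype neighbours.
    """
    total, count = 0, 0
    for g in range(2 ** n_bits):
        bits = [(g >> i) & 1 for i in reversed(range(n_bits))]
        for i in range(n_bits):
            neighbour = bits.copy()
            neighbour[i] ^= 1  # one-bit mutation
            total += abs(decode(neighbour) - decode(bits))
            count += 1
    return total / count
```

Perhaps counter-intuitively, the two encodings have identical average locality in this measurement — flipping a high-order bit can move the phenotype far in either case — even though Gray coding guarantees that adjacent integers are genotype neighbours; the two encodings differ in the distribution of phenotype jumps, not in the mean.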
Franz Rothlauf
He received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively. Since 2007, he has been Professor of Information Systems at the University of Mainz. He has published more than 100 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". At the University of Mainz, he is Academic Director of the Executive MBA program (since 2013) and Chief Information Officer (since 2016). His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ), ACM Transactions on Evolutionary Learning and Optimization (TELO), and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO. He was treasurer of ACM SIGEVO between 2011 and 2019 and has served as its chair since 2019. He has been organizer of a number of workshops and tracks on heuristic optimization, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.
Runtime Analysis of Population-based Evolutionary Algorithms - Part I: Steady State EAs
Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This two-part tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.
In this first part, the techniques employed for the analysis of elitist steady-state EAs will be introduced. The tutorial begins with a brief overview of the population-based EAs that are covered by the techniques. We introduce fundamental methods and analytical tools, such as fitness-based partitions and drift analysis, for obtaining upper and lower bounds on the expected runtime of EAs. We illustrate the use of the techniques in the analysis of simple (1+1) EAs and random local search, and build upon these for the analysis of population-based EAs with and without crossover. In addition to custom-tailored methods for the analysis of evolutionary algorithms, we also introduce the relevant tools and notions from probability theory in an accessible form. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field.
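To illustrate the fitness-based partition method on its standard textbook example, consider the (1+1) EA (standard bit mutation, mutation rate 1/n) on OneMax, and partition the search space into levels $A_i$ containing all bitstrings with exactly $i$ ones. If $s_i$ lower-bounds the probability of leaving level $i$ for a better level, then:

```latex
% From level i it suffices to flip one of the (n-i) zero-bits and
% leave the remaining n-1 bits unchanged:
s_i \;\ge\; (n-i)\cdot\frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1} \;\ge\; \frac{n-i}{e\,n}.
% Summing the expected waiting times over all levels gives the runtime bound:
E[T] \;\le\; \sum_{i=0}^{n-1}\frac{1}{s_i}
     \;\le\; \sum_{i=0}^{n-1}\frac{e\,n}{n-i}
     \;=\; e\,n\sum_{k=1}^{n}\frac{1}{k}
     \;=\; e\,n\,H_n \;=\; O(n\log n).
```

The same argument template — partition, lower-bound the level-leaving probability, sum the waiting times — underlies many of the population-based analyses the tutorial builds up to.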
Per Kristian Lehre
Dr Lehre is a Professor of Evolutionary Computation in the School of Computer Science at the University of Birmingham. Before joining Birmingham, he had been an Assistant Professor at the University of Nottingham since 2011. He obtained MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim. He completed the PhD in 2006 under the supervision of Prof Pauline Haddow, joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao, and was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, a member of the editorial board of Evolutionary Computation, and an associate editor of IEEE Transactions on Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation at summer schools (UK: 2007, 2015, 2016; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013-2020, CEC 2013 and 2015, PPSN 2016, 2018, 2020, and ThRaSH 2013). He was the main coordinator of the 2M euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.
Pietro Simone Oliveto
Pietro S. Oliveto is a Professor of Computer Science and Chair of Algorithms at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. He has been EPSRC PhD+ Fellow (2009-2010) and EPSRC Postdoctoral Fellow (2010-2013) at Birmingham and Vice-Chancellor's Fellow (2013-2016) and EPSRC Early Career Fellow at Sheffield.
His main research interest is the performance analysis of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems, hyper-heuristics and algorithm configuration. He has guest-edited journal special issues of Computer Science and Technology, Evolutionary Computation, Theoretical Computer Science, IEEE Transactions on Evolutionary Computation and Algorithmica. He has co-chaired the IEEE Symposium on Foundations of Computational Intelligence (FOCI) from 2015 to 2021 and has been co-program chair of the ACM Conference on Foundations of Genetic Algorithms (FOGA 2021) and Theory Track co-chair at GECCO 2022. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), was Leader of the Benchmarking Working Group of the COST Action ImAppNIO, is a member of the EPSRC Peer Review College, and is an Associate Editor of IEEE Transactions on Evolutionary Computation.
Single and Multi-Objective Bilevel Optimization
Bilevel optimization problems model hierarchical non-cooperative decision processes in which the upper-level decision maker (the leader) and the lower-level decision maker (the follower) control different sets of variables and have their own objective functions subject to interdependent constraints. The lower-level problem is embedded in the constraints of the upper-level problem. Decisions are made sequentially: the leader decides first by selecting values for his variables, and the follower then reacts by optimizing his objective function(s) over the feasible choices restricted by the leader’s decisions. Thus, the leader needs to consider the follower’s reaction to the setting of his variables, since it influences feasibility and the value(s) of the leader’s objective function(s). Sequential decision-making processes that can be modelled by bilevel optimization problems arise in many aspects of resource planning, management and policy-making, such as the design of pricing policies.
A multiobjective bilevel problem (MOBP) may have multiple objective functions at one or both levels. A special case of MOBP is the semivectorial bilevel problem (SVBP), in which there is a single objective function at the upper level and multiple objectives at the lower level. The existence of multiple objective functions at the lower level problem adds further challenges and difficulties to a bilevel problem because the leader has to deal with the uncertainty related to the follower’s reaction. For each leader’s decision, the follower has a set of efficient solutions. If the leader has no (or has little) knowledge about the follower’s preferences, it may be very difficult for him to anticipate the follower’s choice among his efficient set.
This tutorial aims to present the main concepts and algorithmic approaches in single and multi-objective bilevel optimization using illustrative graphical examples, as well as applications to real-world problems, particularly in the energy sector. A particular application in the electricity retail market will be presented, which studies the interaction between electricity retailers and consumers engaging in demand response. Attention is devoted to hybrid approaches that combine population-based meta-heuristics, such as evolutionary algorithms, for the upper-level search with mathematical programming solvers to tackle the lower-level problems once the upper-level variables are instantiated.
Topics covered:
• Formulations and main concepts of single-objective bilevel optimization;
• Algorithmic approaches to solve bilevel optimization models - exact, metaheuristic and hybrid approaches;
• An application in the energy sector: optimization of time-of-use electricity pricing considering demand response;
• Challenges of multi-objective bilevel optimization problems;
• The semivectorial bilevel problem (SVBP);
• Different types of solutions to the SVBP: optimistic, pessimistic, deceiving and rewarding solutions;
• A SVBP hybrid approach to optimize electricity dynamic retail pricing.
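The hybrid scheme described in the abstract, evolutionary search over the leader's variables with the follower's problem solved exactly for each candidate, can be sketched on a deliberately tiny toy problem. Everything below (the objective functions, the closed-form follower solution, the parameter values) is a hypothetical illustration, not material from the tutorial:

```python
import random

def follower_best_response(x):
    # Lower-level problem, solved exactly (here it has a closed form):
    # minimize (y - x)^2 subject to 0 <= y <= 1
    return min(max(x, 0.0), 1.0)

def leader_objective(x):
    # The leader evaluates its objective at the follower's optimal reaction.
    y = follower_best_response(x)
    return (x - 1.0) ** 2 + y

def evolve(pop_size=20, generations=50, sigma=0.1, seed=0):
    # (mu + lambda) evolutionary search over the leader variable x in [0, 2].
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [min(max(x + rng.gauss(0.0, sigma), 0.0), 2.0) for x in pop]
        merged = sorted(pop + offspring, key=leader_objective)
        pop = merged[:pop_size]  # truncation selection keeps the best
    return pop[0]

x_star = evolve()
```

For this toy problem the bilevel optimum is x = 0.5 with leader objective 0.75, which the sketch locates; in realistic instances the closed-form follower solution is replaced by a call to a mathematical programming solver.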
Carlos Antunes
Carlos Henggeler Antunes holds a PhD in Electrical Engineering (specialization in Optimization and Systems Theory) from the University of Coimbra (UC). He is a Full Professor at the Department of Electrical and Computer Engineering, University of Coimbra, Director of the R&D institute INESC Coimbra, and member of the coordination committee of the Energy for Sustainability Initiative at UC. His areas of interest include multi-objective optimization, meta-heuristics, multi-criteria analysis, bilevel optimization, applications in energy systems and policies with focus on energy efficiency and demand response. He is co-author of the book “Multiobjective Linear and Integer Programming“ and co-editor of the book “Energy and Behaviors” (Academic Press, 2019). He is senior editor of Energy Policy.
Maria João Alves
Maria João Alves holds a PhD in Management (specialization in Operational Research) from the University of Coimbra. She is currently an Associate Professor at the Faculty of Economics of the University of Coimbra, a researcher at the Centre for Business and Economics Research of the University of Coimbra (CeBER) and collaborator at the Institute for Systems Engineering and Computers at Coimbra (INESC Coimbra). Her main research interests include multiobjective programming, bilevel optimization, meta-heuristics, applications in energy systems and decision support systems. She is co-author of the book “Multiobjective Linear and Integer Programming” (Springer, 2016).
Theory and Practice of Population Diversity in Evolutionary Computation
Divergence of character is a cornerstone of natural evolution. By contrast, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas of the search space. The problem is usually labeled with the oxymoron "premature convergence" and has different consequences in different applications, almost all of them deleterious. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity.
This tutorial will give an introduction to the area of "diversity promotion": we will define the term "diversity" in the context of Evolutionary Computation and show how practitioners have tried, with mixed results, to promote it.
Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration and enhancing the power of crossover. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
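One common way to make "diversity" concrete for bitstring populations is the average pairwise Hamming distance. The helper below is a hypothetical illustration of that measure, not code from the tutorial:

```python
from itertools import combinations

def avg_pairwise_hamming(population):
    """Average Hamming distance over all pairs of equal-length genotypes;
    0.0 means the population has fully converged to a single point."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    total = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs)
    return total / len(pairs)

converged = ["1010", "1010", "1010"]   # no diversity left
diverse = ["0000", "1111", "0101"]     # spread across the search space
```

Here `avg_pairwise_hamming(converged)` is 0.0, while the diverse population scores 8/3; diversity mechanisms discussed in the tutorial aim to keep such a measure from collapsing to zero too early.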
Dirk Sudholt
Dirk Sudholt is a Full Professor and Chair of Algorithms for Intelligent Systems at the University of Passau, Germany. He previously held a post as Senior Lecturer at the University of Sheffield, UK, and founding head of the Algorithms research group. He obtained his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence. Most relevant to this tutorial is his work on runtime analysis of diversity mechanisms and the benefits of crossover in genetic algorithms. Dirk has served as chair of FOGA 2017, the GECCO Theory track in 2016 and 2017 and as guest editor for Algorithmica. He is a member of the editorial board of Evolutionary Computation and The Computer Journal and associate editor for Natural Computing. He has more than 100 refereed publications and won 9 best paper awards at GECCO and PPSN.
Giovanni Squillero
Giovanni Squillero is an associate professor of computer science at Politecnico di Torino, Department of Control and Computer Engineering. His research mixes computational intelligence and machine learning, with particular emphasis on evolutionary computation, bio-inspired meta-heuristics, and multi-agent systems; in more down-to-earth activities, he studies approximate optimization techniques able to achieve acceptable solutions with reasonable resources. The industrial applications of his work range from electronic CAD to bio-informatics. As of April 2022, he is credited as an author of 3 books, 36 journal articles, 11 book chapters, and 154 papers in conference proceedings; he is also listed among the editors of 16 volumes.
Squillero has been a Senior Member of the IEEE since 2014; currently, he is serving in the technical committee of the IEEE Computational Intelligence Society Games, and in the editorial board of Genetic Programming and Evolvable Machines. He was the program chair of the European Conference on the Applications of Evolutionary Computation in 2016 and 2017, and he is now a member of the EvoApplications steering committee. In 2018 he co-organized EvoML, the workshop on Evolutionary Machine Learning at PPSN; in 2016 and 2017, MPDEA, the workshop on Measuring and Promoting Diversity in Evolutionary Algorithms at GECCO; and from 2004 to 2014, EvoHOT, the Workshops on Evolutionary Hardware Optimization Techniques.
Since 1998, Squillero has lectured in 66 university courses (15 Ph.D. and 51 M.S./B.Sc.; 36 in Italian and 30 in English) and contributed to an additional 37 courses as an assistant or in other subsidiary roles. He has presented his research at 14 international events, including invited talks, seminars and tutorials.
Transfer Learning in Evolutionary Spaces
Abstract:
Evolutionary algorithms have been effectively applied to various search spaces. Traditionally, evolutionary algorithms explore a solution space; since their inception, however, their application has been extended to other spaces, including the program, heuristic and design spaces. More recently, the potential of transfer learning in evolutionary algorithms, focusing predominantly on the solution and program spaces, has been established. This tutorial examines the use of transfer learning in the application of evolutionary algorithms to four spaces, namely the solution, program, heuristic (hyper-heuristics) and design (automated design of machine learning and search algorithms) spaces. The tutorial will provide an overview of transfer learning for the four spaces in terms of what to transfer, when to transfer and how to transfer knowledge. A case study for each of the spaces will be presented, and the benefits of transfer learning for each of the four spaces will be highlighted. Determining what knowledge to transfer, when to transfer it and how to transfer it for the different spaces is itself an optimization problem; traditionally this has been done manually. The tutorial will also look at how this process can be automated. A Python library, ATLEA (Automated Transfer Learning for Evolutionary Algorithms), for the automated design of transfer learning in evolutionary algorithms will be presented, along with the use of explainable artificial intelligence in ATLEA.
Topic Overview:
Since its introduction in neural networks, transfer learning has grown rapidly and has more recently begun to make an impact in evolutionary algorithms. This can be seen in the increase in special sessions at CEC and GECCO and in special issues of computational intelligence journals on evolutionary transfer learning, as well as in the IEEE CIS task forces established on this topic. The tutorial will examine transfer learning in various evolutionary spaces, including the solution, program, heuristic and design spaces. It will also look at automating the decisions of what knowledge to transfer, when to transfer it and how to transfer it.
Outline of Tutorial Structure:
1. Evolutionary Spaces
This tutorial first provides an overview of the evolutionary spaces it will be focusing on:
1.1 Solution space
1.2 Program space
1.3 Heuristic space
1.4 Design space
2. Introduction to Transfer Learning in Search
The tutorial will then provide an overview of the use of transfer learning in search:
2.1 An overview of transfer learning
2.2 How can transfer learning be used in search?
2.3 Benefits of using transfer learning in search
3. Transfer learning in the solution space
This part of the tutorial will focus on transfer learning in evolutionary solution space:
3.1 An overview of transfer learning in the evolutionary solution space (ESS)
3.2 A case study of transfer learning in the ESS
4. Transfer learning in the program space
This part of the tutorial will focus on transfer learning in evolutionary program space:
4.1 An overview of transfer learning in the evolutionary program space (EPS)
4.2 A case study of transfer learning in the EPS
5. Transfer learning in the heuristic space
This part of the tutorial will focus on transfer learning in evolutionary heuristic space:
5.1 An overview of transfer learning in the evolutionary heuristic space (EHS)
5.2 A case study of transfer learning in the EHS
6. Transfer learning in the design space
This part of the tutorial will focus on transfer learning in evolutionary design space:
6.1 An overview of transfer learning in the evolutionary design space (EDS)
6.2 A case study of transfer learning in the EDS
7. Transfer learning for different evolutionary spaces
Based on 3-6, this part of the tutorial will highlight differences and similarities between transfer learning in the four different spaces, namely, ESS, EPS, EHS and EDS, and introduce the concept of inter-space transfer learning.
8. Automated transfer learning in evolutionary spaces
This part of the tutorial will focus on the automated design of transfer learning in evolutionary spaces:
8.1 An overview of the automated design of transfer learning
8.2 Demonstration of ATLEA (Automated Transfer Learning in Evolutionary Algorithms) Python library for the automated design of transfer learning
8.3 XAI in ATLEA
9. Future research directions and discussion
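The what/when/how questions running through the outline above can be illustrated by one of the simplest transfer mechanisms in the solution space: seeding part of a target task's initial population with elite solutions from a related source task. The function below is purely illustrative and does not show ATLEA's actual interface:

```python
import random

def seeded_initial_population(source_elites, pop_size, genome_len,
                              transfer_fraction=0.25, seed=0):
    """Build a target-task initial population: a fraction is copied from the
    source task's elites (the 'what'), at initialization time (the 'when'),
    by direct genotype injection (the 'how'); the rest is random."""
    rng = random.Random(seed)
    n_transfer = min(int(pop_size * transfer_fraction), len(source_elites))
    transferred = [list(g) for g in source_elites[:n_transfer]]
    randoms = [[rng.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size - n_transfer)]
    return transferred + randoms

# Two elite bitstrings carried over from a hypothetical source task:
pop = seeded_initial_population([[1] * 8, [0] * 8], pop_size=20, genome_len=8)
```

Automating such a design means searching over exactly these choices, e.g. the transfer fraction, the transfer trigger, and the injection operator, rather than fixing them by hand.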
Nelishia Pillay
Nelishia Pillay is a Professor at the University of Pretoria, South Africa. She holds the Multichoice Joint-Chair in Machine Learning and the SARChI Chair in Artificial Intelligence for Sustainable Development. She is chair of the IEEE Technical Committee on Intelligent Systems Applications, the IEEE CIS WCI sub-committee and the IEEE Task Force on Automated Algorithm Design, Configuration and Selection. Her research areas include hyper-heuristics, automated design of machine learning and search techniques, combinatorial optimization, genetic programming, genetic algorithms and deep learning, and more generally machine learning and optimization for sustainable development. These are the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she established.
What You Always Wanted to Know About Evolution Strategies, But Never Dared to Ask
While Evolution Strategies (ES) are widely known as one of the streams of evolutionary computation and are now regarded as a competitive alternative to standard learning techniques in Reinforcement Learning, most people associate ES with the covariance matrix adaptation evolution strategy (CMA-ES). However, there is more than just this particular evolutionary algorithm that performs superbly in unconstrained real-valued search spaces.
This introductory tutorial provides a broader perspective, stressing that the design philosophy of Evolution Strategies is not restricted to a specific search space, such as unconstrained real-valued optimization, but also covers discrete and combinatorial search spaces. This philosophy is best understood through the history of ES, which started with the evolution of material objects - nowadays often referred to as hardware-in-the-loop evolutionary optimization. That is, evolution is done on the "phenotype." Accepting the constraints involved in such optimizations, one can naturally derive design principles for mutation and recombination operators and for the control of those operators by self-adaptation - one of the great inventions of ES. Special emphasis will be put on a vivid understanding of how the ES explores the search space. Recent findings will be presented and supported by live experiments to explain the ES's ability to locate global optima in landscapes with a huge number of local optima. The tutorial will also investigate the reasons why ESs are now regarded as a scalable alternative to Reinforcement Learning.
The tutorial will include live computer experiments demonstrating the relevance of the design principles discussed. It will be at an introductory level, requiring only a minimum of maths.
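The self-adaptation principle mentioned above, where each individual mutates its own step size before mutating its object variables, can be sketched as a (1, λ)-ES on the sphere function. This is a minimal illustrative sketch; the parameter choices (λ = 10, the learning rate τ) are common textbook defaults, not prescriptions from the tutorial:

```python
import math
import random

def one_comma_lambda_es(dim=10, lam=10, generations=200, seed=1):
    """(1, lambda)-ES on the sphere function with log-normal self-adaptation:
    each offspring first mutates its own step size sigma, then uses it to
    mutate the object variables; comma selection keeps only the best child."""
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(dim)              # self-adaptation learning rate
    sphere = lambda x: sum(v * v for v in x)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    sigma = 1.0
    for _ in range(generations):
        best_f, best_x, best_s = None, None, None
        for _ in range(lam):
            s = sigma * math.exp(tau * rng.gauss(0.0, 1.0))  # mutate sigma first
            child = [v + s * rng.gauss(0.0, 1.0) for v in parent]
            f = sphere(child)
            if best_f is None or f < best_f:
                best_f, best_x, best_s = f, child, s
        parent, sigma = best_x, best_s      # comma selection: parent is discarded
    return sphere(parent)

final_fitness = one_comma_lambda_es()
```

The step size needs no external schedule: offspring carrying a well-matched sigma tend to produce better object variables, so selection on fitness implicitly selects useful step sizes as well.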
Hans-Georg Beyer
Hans-Georg Beyer is best known for his theoretical analyses and the design of Evolution Strategies based on the stochastic dynamical systems approach. Dr. Beyer received the Diploma degree in Theoretical Electrical Engineering from the Ilmenau Technical University, Germany, in 1982, the Ph.D. in physics from Bauhaus-University Weimar, Weimar, Germany, in 1989, and the Habilitation degree in computer science from the University of Dortmund, Dortmund, Germany, in 1997.
He was an R&D Engineer with the Reliability Physics Department, VEB Gleichrichterwerk, Stahnsdorf, Germany, from 1982 to 1984. From 1984 to 1989, he was a Research and Teaching Assistant and later a Post-Doctoral Research Fellow with the Physics Department and the Computer Science Department, Bauhaus-University Weimar. From 1990 to 1992, he was a Senior Researcher with the Electromagnetic Fields Theory Group, Darmstadt University of Technology, Darmstadt, Germany. From 1993 to 2004, he was with the Computer Science Department, University of Dortmund. In 1997, he became a DFG (German Research Foundation) Heisenberg Fellow. He was leader of a working group and a Professor of Computer Science from 2003 to 2004. Since 2004, he has been a professor with the Vorarlberg University of Applied Sciences, Dornbirn, Austria.
He authored the book "The Theory of Evolution Strategies" (Heidelberg: Springer-Verlag, 2001) and authored/coauthored about 200 papers. Dr. Beyer was the Editor-in-Chief of the MIT Press journal "Evolutionary Computation" from 2010 to 2016. He was an Associate Editor for the IEEE "Transactions on Evolutionary Computation" from 1997 to 2021 and is a member of the advisory board of the Elsevier journal "Swarm and Evolutionary Computation."