Call for Workshop Papers

Instructions for Workshop papers

Workshop papers must be submitted through the GECCO submission site. After logging in, authors need to select the "Workshop Paper" submission form and, in the form, select the workshop they are submitting to. To see a sample of the "Workshop Paper" submission form, go to GECCO's submission site and choose "Sample Submission Forms".

Submitted papers must not exceed 8 pages (excluding references) and must comply with the GECCO 2023 Papers Submission Instructions. Please review the individual workshop's instructions, since some workshops may impose a lower page limit. It is recommended to use the same templates as for papers submitted to the main tracks. Author information need not be removed if the workshop does not use a double-blind review process (please check the workshop description or ask the workshop organizers).

All accepted papers will be presented at the corresponding workshop and appear in the GECCO Conference Companion Proceedings. By submitting a paper, the author(s) agree that, if their paper is accepted, they will:

• Register at least one author before May 10, 2023 to attend the conference
• Provide a pre-recorded version of the talk or poster and be present during the assigned slot to present the work and/or answer questions from the audience. Most workshops will support hybrid participation (again, please check the workshop website for details).

Important Dates

• Submission opening: February 13, 2023
• Submission deadline: April 14, 2023
• Author's mandatory registration: May 10, 2023

Each accepted paper must have at least one author registered before the author registration deadline. An author presenting more than one paper at the conference does not pay any additional registration fees.

List of Workshops

AABOH — Analysing algorithmic behaviour of optimisation heuristics
• Anna V Kononova LIACS, Leiden University, The Netherlands
• Bas van Stein LIACS, Leiden University, The Netherlands
• Daniela Zaharie West University of Timisoara, Romania
• Fabio Caraffini Institute of Artificial Intelligence, De Montfort University, Leicester, UK
• Thomas Bäck LIACS, Leiden University, The Netherlands
BBOB 2023 — Black Box Optimization Benchmarking 2023
• Anne Auger Inria, France
• Dimo Brockhoff Inria and Ecole Polytechnique, France
• Paul Dufossé Inria and Thales Defense Mission Systems
• Nikolaus Hansen Inria and Ecole Polytechnique, France
• Olaf Mersmann Technische Hochschule Köln
• Petr Pošík Czech Technical University, Czech Republic
• Tea Tušar Jožef Stefan Institute, Slovenia
BENCH@GECCO23 — Good Benchmarking Practices for Evolutionary Computation
• Boris Naujoks Cologne University of Applied Sciences, Germany
• Carola Doerr CNRS and Sorbonne University, France
• Pascal Kerschke TU Dresden, Germany
• Mike Preuss Leiden Institute of Advanced Computer Science
• Vanessa Volz modl.ai (Denmark)
• Olaf Mersmann Technische Hochschule Köln
EC + DM — Evolutionary Computation and Decision Making
• Tinkle Chugh University of Exeter, UK
• Richard Allmendinger The University of Manchester, UK
• Jussi Hakanen Silo AI
• Julia Handl Alliance Manchester Business School, University of Manchester, UK
ECADA 2023 — 13th Workshop on Evolutionary Computation for the Automated Design of Algorithms
• Daniel Tauritz Auburn University, USA
• John Woodward Queen Mary University of London, UK
• Emma Hart Edinburgh Napier University
ECCBI — Evolutionary Computation in Computational Biology and Bioinformatics
• José Santos University of A Coruña
• Julia Handl Alliance Manchester Business School, University of Manchester, UK
• Amarda Shehu George Mason University, USA
ECXAI — Workshop on Evolutionary Computing and Explainable AI
• Giovanni Iacca University of Trento, Italy
• David Walker University of Plymouth, UK
• Alexander Brownlee University of Stirling
• Stefano Cagnoni University of Parma
• John McCall Robert Gordon University, UK
• Jaume Bacardit Newcastle University, UK
EGML-EC — 2nd GECCO Workshop on Enhancing Generative Machine Learning with Evolutionary Computation
• Jamal Toutouh University of Málaga, Málaga, Spain
• Una-May O'Reilly MIT, USA
• João Correia University of Coimbra, Portugal
• Penousal Machado University of Coimbra, CISUC, DEI
• Sergio Nesmachnow Universidad de la República, Uruguay
ERBML — 26th International Workshop on Evolutionary Rule-based Machine Learning
• David Pätzel University of Augsburg, Germany
• Alexander Wagner University of Hohenheim, Germany
• Michael Heider University of Augsburg
• Abubakar Siddique Wellington Institute of Technology, Te Pūkenga – Whitireia WelTec
EvoRL — Evolutionary Reinforcement Learning Workshop
• Giuseppe Paolo Huawei Technologies France
• Antoine Cully Imperial College London, UK
• Adam Gaier Autodesk Research, London, UK
EvoSoft — Evolutionary Computation Software Systems
• Stefan Wagner University of Applied Sciences Upper Austria
• Michael Affenzeller University of Applied Sciences Upper Austria
GEWS2023 — Grammatical Evolution Workshop - 25 years of GE
• Conor Ryan University of Limerick, Ireland
• Mahsa Mahdinejad University of Limerick
• Aidan Murphy University College Dublin, Ireland
GGP — Graph-based Genetic Programming
• Roman Kalkreuth TU Dortmund University
• Thomas Bäck LIACS, Leiden University, The Netherlands
• Dennis G. Wilson ISAE-SUPAERO, University of Toulouse, France
• Paul Kaufmann Westphalian University of Applied Sciences, Germany
• Leo Francoso Dal Piccol Sotto Fraunhofer Institute for Algorithms and Scientific Computing, Sankt Augustin, Germany
• Timothy Atkinson NNAISENSE, Lugano, Switzerland
IAM 2023 — 8th Workshop on Industrial Applications of Metaheuristics
• Silvino Fernández Alzueta Arcelormittal, Spain
• Pablo Valledor Pellicer ArcelorMittal Global R&D
• Thomas Stützle Université Libre de Bruxelles, Belgium
iGECCO — Interactive Methods at GECCO
• Matthew Johns University of Exeter, UK
• Ed Keedwell University of Exeter, UK
• Nick Ross University of Exeter, UK
• David Walker University of Plymouth, UK
Keep Learning — Keep Learning: Towards optimisers that continually improve and/or adapt
• Emma Hart Edinburgh Napier University
• Ian Miguel University of St Andrews, UK
• Christopher Stone University of St Andrews, UK
• Quentin Renau Edinburgh Napier University, UK
LAHS — Landscape-Aware Heuristic Search
• Sarah L. Thomson University of Stirling
• Nadarajen Veerapen Université de Lille, France
• Katherine Malan University of South Africa
• Arnaud Liefooghe University of Lille, France
• Sébastien Verel Univ. Littoral Côte d'Opale, France
• Gabriela Ochoa University of Stirling, UK
LEOL — Large-Scale Evolutionary Optimization and Learning
• Mohammad Nabi Omidvar University of Leeds, United Kingdom
• Yuan Sun University of Melbourne
• Xiaodong Li RMIT University, Australia
NEWK — Neuroevolution at work
• Ernesto Tarantino Institute on High Performance Computing - National Research Council of Italy
• Edgar Galvan Naturally Inspired Computation Research Group, Computer Science, Maynooth University, Ireland
• Ivanoe De Falco Institute of High-Performance Computing and Networking (ICAR-CNR), Italy
• Antonio Della Cioppa Natural Computation Lab, DIEM, University of Salerno, Italy
• Umberto Scafuri Institute of High-Performance Computing and Networking (ICAR-CNR), Italy
• Mengjie Zhang Victoria University of Wellington, New Zealand
QD-Benchmarks — Workshop on Quality Diversity Algorithm Benchmarks
• Antoine Cully Imperial College London, UK
• Stéphane Doncieux ISIR, Université Pierre et Marie Curie-Paris 6, CNRS UMR 7222, Paris
• Matthew C. Fontaine University of Southern California
• Stefanos Nikolaidis University of Southern California
• Adam Gaier Autodesk Research, London, UK
• Amy K Hoover New Jersey Institute of Technology
• Jean-Baptiste Mouret Inria Nancy - Grand Est, CNRS, Université de Lorraine, France
• John Rieffel Union College
• Julian Togelius New York University
QuantOpt — Workshop on Quantum Optimization
• Alberto Moraglio University of Exeter, UK
• Mayowa Ayodele Fujitsu Laboratories of Europe, UK
• Francisco Chicano University of Malaga, Spain
• Oleksandr Kyriienko University of Exeter, UK
• Ofer Shir Tel-Hai College and Migal Institute, Israel
• Lee Spector Amherst College, Hampshire College, and the University of Massachusetts, Amherst
SAEOpt — Workshop on Surrogate-Assisted Evolutionary Optimisation
• Alma Rahat Swansea University
• Richard Everson University of Exeter
• Jonathan Fieldsend University of Exeter, UK
• Handing Wang Xidian University, China
• Yaochu Jin Bielefeld University, Germany
• Tinkle Chugh University of Exeter, UK
SBOX-COST — Strict box-constraint optimization studies
• Anna V Kononova LIACS, Leiden University, The Netherlands
• Olaf Mersmann Technische Hochschule Köln
• Diederick Vermetten Leiden Institute for Advanced Computer Science
• Manuel López-Ibáñez University of Manchester, UK
• Richard Allmendinger The University of Manchester, UK
• Youngmin Kim Alliance Manchester Business School, University of Manchester, UK
SWINGA — Swarm Intelligence Algorithms: Foundations, Perspectives and Challenges
• Roman Senkerik Tomas Bata University in Zlin, Faculty of Applied Informatics, Czech Republic
• Ivan Zelinka VSB - Technical University of Ostrava
• Pavel Kromer VSB Technical University of Ostrava, Czech Republic
• Swagatam Das Indian Statistical Institute
SymReg — Symbolic Regression Workshop
• Michael Kommenda University of Applied Sciences Upper Austria
• William La Cava Harvard, Boston Children’s Hospital, USA
• Gabriel Kronberger University of Applied Sciences Upper Austria
• Steven Gustafson Noonum, Inc

AABOH — Analysing algorithmic behaviour of optimisation heuristics

Summary

Optimisation and machine learning tools are among the most widely used tools in the modern world of omnipresent computing devices. Yet, while both rely on search processes (the search for a solution, or for a model able to produce solutions), their dynamics are not fully understood. This scarcity of knowledge about the inner workings of heuristic methods is largely attributed to the complexity of the underlying processes, which cannot be subjected to a complete theoretical analysis. However, it is also partially due to superficial experimental setups and, therefore, superficial interpretation of numerical results. In fact, researchers and practitioners typically look only at the final result produced by these methods, while a great deal of information generated during the run is discarded. In light of these considerations, it is becoming more evident that such information can be useful, and that design principles should be defined that allow for online or offline analysis of the processes taking place in the population and of their dynamics.
Hence, with this workshop we call for both theoretical and empirical contributions that identify the desired features of optimisation and machine learning algorithms, quantify the importance of such features, spot the presence of intrinsic structural biases and other undesired algorithmic flaws, and study transitions in algorithmic behaviour in terms of convergence, any-time behaviour, traditional and alternative performance measures, robustness, the exploration-exploitation balance, diversity, algorithmic complexity, etc. The goal is to gather the most recent advances that fill the aforementioned knowledge gap and to disseminate the current state of the art within the research community.
Thus, we encourage submissions that exploit carefully designed experiments or data-heavy approaches to analyse primary algorithmic behaviours and to model the internal dynamics that cause them.
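As a toy illustration of the kind of run-time information the workshop is concerned with, the sketch below (our own minimal example, with invented parameter settings, not a method from any submission) runs a simple elitist (mu+lambda) EA on the sphere function and logs per-generation statistics such as best fitness and population diversity, rather than reporting only the final result:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def run_ea_with_logging(dim=5, pop_size=20, generations=30, seed=0):
    """Minimal elitist (mu+lambda) EA on the sphere function that records
    per-generation statistics instead of only the final best value."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    log = []
    for gen in range(generations):
        offspring = [[v + rng.gauss(0, 0.3) for v in parent] for parent in pop]
        pop = sorted(pop + offspring, key=sphere)[:pop_size]  # elitist selection
        fits = [sphere(ind) for ind in pop]
        centroid = [sum(ind[i] for ind in pop) / pop_size for i in range(dim)]
        diversity = sum(sphere([ind[i] - centroid[i] for i in range(dim)])
                        for ind in pop) / pop_size            # mean squared spread
        log.append({"gen": gen, "best": min(fits), "diversity": diversity})
    return log

log = run_ea_with_logging()
```

Traces like `log` are exactly the kind of data that online or offline behaviour analyses can be built on, e.g. to detect premature diversity loss long before the final result is reached.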

Organizers

Anna V Kononova

Anna V. Kononova is an Assistant Professor at the Leiden Institute of Advanced Computer Science. She received her MSc degree in Applied Mathematics from Yaroslavl State University (Russia) in 2004 and her PhD degree in Computer Science from the University of Leeds (UK) in 2010. After a total of 5 years of postdoctoral experience at Eindhoven University of Technology (The Netherlands) and Heriot-Watt University (Edinburgh, UK), Anna spent a number of years working as a mathematician in industry. Her current research interests include the analysis of optimisation algorithms and machine learning.

Bas van Stein

Bas van Stein received his PhD degree in Computer Science in 2018, from the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands.
From 2018 until 2021 he was a Postdoctoral Researcher at LIACS, Leiden University and he is currently an Assistant Professor at LIACS. His research interests lie in surrogate-assisted optimisation, surrogate-assisted neural architecture search and explainable AI techniques for industrial applications.

Daniela Zaharie

Daniela Zaharie is a Professor at the Department of Computer Science from the West University of Timisoara (Romania) with a PhD degree on a topic related to stochastic modelling of neural networks and a Habilitation thesis on the analysis of the behaviour of differential evolution algorithms. Her current research interests include analysis and applications of metaheuristic algorithms, interpretable machine learning models and data mining.

Fabio Caraffini

Fabio Caraffini is an Associate Professor in Computer Science at De Montfort University (Leicester, UK). Fabio holds a PhD in Computer Science (De Montfort University, UK, 2014) and a PhD in Mathematical Information Technology (University of Jyväkylä, Finland, 2016) and was awarded a BSc in Electronics Engineering and an MSc in Telecommunications Engineering by the University of Perugia (Italy) in 2008 and 2011 respectively. His research interests include theoretical and applied computational intelligence with a strong emphasis on metaheuristics for optimisation.

Thomas Bäck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

BBOB 2023 — Black Box Optimization Benchmarking 2023

Summary

Benchmarking optimization algorithms is a crucial part of their design and application in practice. Since 2009, the Black-Box Optimization Benchmarking workshop at GECCO has been a place to discuss recent advances in general benchmarking practices as well as concrete results from actual benchmarking experiments with a large variety of (black-box) optimizers.

The Comparing Continuous Optimizers platform (COCO, [1], https://github.com/numbbo/coco) has been developed in this context to support algorithm developers and practitioners alike by automating benchmarking experiments for black-box optimization algorithms on single- and bi-objective, unconstrained continuous problems in exact and noisy, as well as expensive and non-expensive, scenarios.

In the BBOB 2023 edition of the workshop, we again invite participants to discuss all kinds of aspects of (black-box) benchmarking. As in previous years, presenting benchmarking results on the test suites supported by COCO is a focus, but submissions are not limited to these topics:

- single-objective unconstrained problems (bbob)
- single-objective unconstrained problems with noise (bbob-noisy)
- biobjective unconstrained problems (bbob-biobj)
- large-scale single-objective problems (bbob-largescale) and
- mixed-integer single- and bi-objective problems (bbob-mixint and bbob-biobj-mixint)
- constrained single-objective problems (bbob-constrained)

We particularly encourage submissions about algorithms from outside the evolutionary computation community and papers analyzing the large amount of already publicly available algorithm data in COCO (see https://numbbo.github.io/data-archive/). Comparing algorithms on the newly released bbob-constrained test suite will be another focus in 2023. As in previous editions, we will provide source code in various languages (C/C++, Matlab/Octave, Java, and Python) to benchmark algorithms on the test suites mentioned above. Postprocessing data and comparing algorithm performance are equally automated with COCO (up to already prepared ACM-compliant LaTeX templates for writing papers).
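The fixed-target measurement underlying COCO's data can be illustrated in a few lines of plain Python. This is a hedged sketch of the general idea only (not COCO's actual API; real experiments should use the platform from the repository above): count the number of function evaluations each run needs to reach a target value, over several independent runs.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def random_search(problem, dim, budget, target, rng):
    """Run pure random search; return the evaluation count at which
    `target` was first reached, or None if the budget was exhausted."""
    for evals in range(1, budget + 1):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        if problem(x) <= target:
            return evals
    return None

def fixed_target_benchmark(problem, dim=2, budget=2000, target=0.5, runs=15):
    """Collect runtimes (in evaluations) to a fixed target over
    independent runs, plus the fraction of successful runs."""
    runtimes = [random_search(problem, dim, budget, target, random.Random(run))
                for run in range(runs)]
    hits = [t for t in runtimes if t is not None]
    return hits, len(hits) / runs

hits, success_rate = fixed_target_benchmark(sphere)
```

From such per-run runtimes one can build empirical runtime distributions and expected running time estimates, which is the kind of aggregation COCO's postprocessing automates.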

For details, please see the separate BBOB-2023 web page at
https://numbbo.github.io/workshops/BBOB-2023/index.html (available upon acceptance of the workshop)

[1] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. "COCO: A platform for comparing continuous optimizers in a black-box setting." Optimization Methods and Software (2020): 1-31.

Organizers

Anne Auger

Anne Auger is a research director at the French National Institute for Research in Computer Science and Control (Inria), heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from Paris VI University. Before joining Inria, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects, algorithm design, and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO in 2019. She co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010, as well as all seven previous BBOB workshops at GECCO since 2009, and is co-organizing the forthcoming Dagstuhl seminar on benchmarking.

Dimo Brockhoff

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France one). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general. Dimo has co-organized all BBOB workshops since 2013 and was EMO track co-chair at GECCO'2013 and GECCO'2014.

Paul Dufossé

Paul Dufossé received his diploma in statistics and machine learning in 2017 from Université Paris-Dauphine and ENS Paris-Saclay. Since late 2018, he has been pursuing a PhD in computer science at the Institut Polytechnique de Paris, France, with the RandOpt team under Nikolaus Hansen and the industrial partner Thales Defense Mission Systems. His research interests are optimization, machine learning, and digital signal processing. In particular, he aims to design evolutionary algorithms to solve constrained optimization problems emerging from radar and antenna signal processing applications.

Nikolaus Hansen

Nikolaus Hansen is a research director at Inria and the Institut Polytechnique de Paris, France. After studying medicine and mathematics, he received a PhD in civil engineering from the Technical University Berlin and the Habilitation in computer science from the University Paris-Sud. His main research interests are stochastic search algorithms in continuous, high-dimensional search spaces, learning and adaptation in evolutionary computation, and meaningful assessment and comparison methodologies. His research is driven by the goal to develop algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Olaf Mersmann

Olaf Mersmann is a Professor for Data Science at TH Köln - University of Applied Sciences. He received his BSc, MSc and PhD in Statistics from TU Dortmund. His research interests include using statistical and machine learning methods on large benchmark databases to gain insight into the structure of the algorithm choice problem.

Petr Pošík

Petr Pošík works as a lecturer at the Czech Technical University in Prague, where he also received his Ph.D. in Artificial Intelligence and Biocybernetics in 2007. From 2001 to 2004 he worked as a statistician, analyst, and lecturer for StatSoft, Czech Republic. Since 2005 he has worked at the Department of Cybernetics, Czech Technical University. Working at the boundary of optimization, statistics, and machine learning, he aims to improve the characteristics of evolutionary algorithms with techniques of statistical machine learning. He serves as a reviewer for several journals and conferences in the evolutionary computation field. Petr served as the student chair at GECCO 2014, tutorials chair at GECCO 2017, and local chair at GECCO 2019.

Tea Tušar

Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jožef Stefan Institute in Ljubljana, Slovenia. She was awarded her PhD degree in Information and Communication Technologies by the Jožef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single-objective and multiobjective optimization, with an emphasis on visualizing and benchmarking their results and applying them to real-world problems.

BENCH@GECCO23 — Good Benchmarking Practices for Evolutionary Computation

Summary

Benchmarking plays a vital role in understanding the performance and search behavior of sampling-based optimization techniques such as evolutionary algorithms. This workshop continues our series of workshops on good benchmarking practices, held at different conferences in the context of EC since 2020. The core theme is benchmarking evolutionary computation methods and related sampling-based optimization heuristics, but the focus changes each year.

The focus in 2023 will be problem representation and representativeness. The following questions present potential directions for the workshop:

• Do we consider the right benchmark problems in our studies?
• How can we measure problem characteristics (features), and are the challenges that we encounter in practice sufficiently represented in our classic benchmark collections?
• Can we distill properties that we would like to study in isolation? And can we find problems that encode these properties in isolation, such that a benchmarking study would help us understand how the algorithms behave for that given characteristic?
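One concrete example of a measurable problem characteristic is fitness-distance correlation. The sketch below is an illustrative, simplified feature computation (it assumes the global optimum is known, which holds for benchmark problems but rarely in practice) estimated from uniform random samples:

```python
import math
import random

def fitness_distance_correlation(problem, optimum, dim, n_samples=500, seed=1):
    """Estimate fitness-distance correlation (FDC): the Pearson correlation
    between f(x) and the distance of x to a known optimum, over random
    samples. Values near 1 suggest a globally 'easy', single-funnel landscape."""
    rng = random.Random(seed)
    fs, ds = [], []
    for _ in range(n_samples):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        fs.append(problem(x))
        ds.append(math.dist(x, optimum))
    mf = sum(fs) / n_samples
    md = sum(ds) / n_samples
    cov = sum((f - mf) * (d - md) for f, d in zip(fs, ds))
    sf = math.sqrt(sum((f - mf) ** 2 for f in fs))
    sd = math.sqrt(sum((d - md) ** 2 for d in ds))
    return cov / (sf * sd)

# The sphere function is strongly fitness-distance correlated around its optimum.
fdc = fitness_distance_correlation(lambda x: sum(v * v for v in x), [0.0] * 3, 3)
```

Features of this kind are what allow a benchmark collection to be audited for representativeness: if all problems in a suite score similarly on such measures, the suite may under-represent the characteristics encountered in practice.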

Organizers

Boris Naujoks

Boris Naujoks is a professor for Applied Mathematics at TH Köln - Cologne University of Applied Sciences (CUAS). He joined CUAS directly after receiving his PhD from Dortmund Technical University in 2011. During his time in Dortmund, Boris worked as a research assistant on different projects and gained industrial experience working for several SMEs. He now enjoys combining the teaching of mathematics and computer science with exploring EC and CI techniques at the Gummersbach campus of CUAS. He focuses on multiobjective (evolutionary) optimization, in particular hypervolume-based algorithms, and the (industrial) applicability of the explored methods.

Carola Doerr

Carola Doerr, formerly Winzen, is a permanent CNRS research director at Sorbonne Université in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, both by mathematical and by empirical means. Carola is associate editor of IEEE Transactions on Evolutionary Computation, ACM Transactions on Evolutionary Learning and Optimization (TELO) and board member of the Evolutionary Computation journal. She is/was program chair for the GECH track at GECCO 2023, for PPSN 2020, FOGA 2019 and for the theory tracks of GECCO 2015 and 2017. She has organized Dagstuhl seminars and Lorentz Center workshops. Her works have received several awards, among them the CNRS bronze medal, the Otto Hahn Medal of the Max Planck Society, best paper awards at EvoApplications, CEC, and GECCO.

Pascal Kerschke

Pascal Kerschke is professor of Big Data Analytics in Transportation at TU Dresden, Germany. His research interests cover various topics in the context of benchmarking, data science, machine learning, and optimization - including automated algorithm selection, Exploratory Landscape Analysis, as well as continuous single- and multi-objective optimization. Moreover, he is the main developer of flacco, co-authored further R-packages such as smoof and moPLOT, co-organized numerous tutorials and workshops in the context of Exploratory Landscape Analysis and/or benchmarking, and is an active member of the Benchmarking Network and the COSEAL group.

Mike Preuss

Mike Preuss is an assistant professor at LIACS, the computer science institute of Leiden University. He works in AI, namely game AI, natural computing, and social media computing. Mike received his PhD in 2013 from the Chair of Algorithm Engineering at TU Dortmund, Germany, and was with ERCIS at WWU Münster, Germany, from 2013 to 2018. His research interests focus on evolutionary algorithms for real-valued problems, namely multi-modal and multi-objective optimization, and on computational intelligence and machine learning methods for computer games. More recently, he has also been involved in social media computing, and he was publications chair of the multi-disciplinary MISDOOM conference in 2019. He is an associate editor of the IEEE ToG journal and has in recent years been a member of the organizing teams of several conferences in various functions: general co-chair, proceedings chair, competition chair, and workshops chair.

Vanessa Volz

Vanessa Volz is an AI researcher at modl.ai (Copenhagen, Denmark), with focus in computational intelligence in games. She received her PhD in 2019 from TU Dortmund University, Germany, for her work on surrogate-assisted evolutionary algorithms applied to game optimisation. She holds B.Sc. degrees in Information Systems and in Computer Science from WWU Münster, Germany. She received an M.Sc. with distinction in Advanced Computing: Machine Learning, Data Mining and High Performance Computing from University of Bristol, UK, in 2014. Her current research focus is on employing surrogate-assisted evolutionary algorithms to obtain balance and robustness in systems with interacting human and artificial agents, especially in the context of games.

Olaf Mersmann

Olaf Mersmann is a Professor for Data Science at TH Köln - University of Applied Sciences. He received his BSc, MSc and PhD in Statistics from TU Dortmund. His research interests include using statistical and machine learning methods on large benchmark databases to gain insight into the structure of the algorithm choice problem.

EC + DM — Evolutionary Computation and Decision Making

Summary

Solving real-world optimisation problems typically involves an expert or decision maker. Decision making (DM) tools have been found useful in many such applications, e.g., health care, education, environment, transportation, business, and production. In recent years, there has also been growing interest in merging Evolutionary Computation (EC) and DM techniques for several applications. This has raised, among other things, the need to account for explainability, fairness, ethics, and privacy aspects in optimisation and DM. This workshop will showcase research at the interface of EC and DM.

The workshop on Evolutionary Computation and Decision Making (EC + DM), to be held at GECCO 2023, aims to promote research on theory and applications in the field. Topics of interest include:

• Interactive multiobjective optimisation or decision-maker in the loop
• Visualisation to support DM in EC
• Aggregation/trade-off operators & algorithms to integrate decision maker preferences
• Fuzzy logic-based DM techniques
• Bayesian and other DM techniques
• Interactive multiobjective optimisation for (computationally) expensive problems
• Using surrogates (or metamodels) in DM
• Hybridisation of EC and DM
• Scalability in EC and DM
• DM and machine learning
• DM in a big data context
• DM in real-world applications
• Use of psychological tools to aid the decision-maker
• Fairness, ethics and societal considerations in EC and DM
• Explainability in EC and DM
• Accounting for trust and security in EC and DM
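One standard bridge between EC and DM in the topics above is a reference-point-based achievement scalarizing function, which converts a decision maker's aspiration levels into a single objective an evolutionary algorithm can optimise. The sketch below uses the classic Wierzbicki-style construction; the candidate solutions and reference point are made-up numbers for illustration:

```python
def achievement_scalarizing(objectives, reference, weights, rho=1e-4):
    """Wierzbicki-style achievement scalarizing function (for minimisation):
    maps an objective vector to a scalar measuring how well it meets the
    decision maker's reference (aspiration) point. Smaller is better."""
    terms = [w * (f - r) for f, r, w in zip(objectives, reference, weights)]
    return max(terms) + rho * sum(terms)

# Rank candidate trade-off solutions by how well they match the aspirations.
candidates = [(1.0, 5.0), (3.0, 3.0), (5.0, 1.0)]
reference = (2.5, 2.5)
best = min(candidates,
           key=lambda f: achievement_scalarizing(f, reference, (1.0, 1.0)))
# best is the balanced solution (3.0, 3.0), closest to the reference point.
```

In interactive methods, the decision maker updates the reference point between optimisation rounds, steering the search toward preferred regions of the Pareto front.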

Organizers

Tinkle Chugh

Dr Tinkle Chugh is a Lecturer in Computer Science at the University of Exeter. He is an Associate Editor of the Complex and Intelligent Systems journal. Between February 2018 and June 2020, he worked as a Postdoctoral Research Fellow on the "BIG data methods for improving windstorm FOOTprint prediction" project funded by the Natural Environment Research Council, UK. He obtained his PhD degree in Mathematical Information Technology in 2017 from the University of Jyväskylä, Finland. His thesis was part of the Decision Support for Complex Multiobjective Optimization Problems project, in which he collaborated with Finland Distinguished Professor (FiDiPro) Yaochu Jin from the University of Surrey, UK. His research interests are machine learning, data-driven optimization, evolutionary computation, and decision making.

Richard Allmendinger

Richard is a Senior Lecturer in Data Science and the Business Engagement Lead of Alliance Manchester Business School, The University of Manchester, and a Fellow of The Alan Turing Institute, the UK's national institute for data science and artificial intelligence. Richard has a background in Business Engineering (Diplom, Karlsruhe Institute of Technology, Germany + Royal Melbourne Institute of Technology, Australia), Computer Science (PhD, The University of Manchester, UK), and Biochemical Engineering (Research Associate, University College London, UK). Richard's research interests are in the field of data and decision science, in particular in the development and application of optimization, learning, and analytics techniques to real-world problems arising in areas such as management, engineering, healthcare, sports, music, and forensics. Richard is known for his work on non-standard expensive optimization problems comprising, for example, heterogeneous objectives, ephemeral resource constraints, changing variables, and lethal optimization environments. Much of his research has been funded by grants from various UK funding bodies (e.g. Innovate UK, EPSRC, ESRC, ISCF) and industrial partners. Richard is a member of the editorial board of several international journals, Vice-Chair of the IEEE CIS Bioinformatics and Bioengineering Technical Committee, Co-Founder of the IEEE CIS Task Force on Optimization Methods in Bioinformatics and Bioengineering, and contributes regularly to conference organisation and to special issues as guest editor.

Jussi Hakanen

Dr Jussi Hakanen is a Senior Researcher at the Faculty of Information Technology at the University of Jyväskylä, Finland. He received his MSc degree in mathematics and his PhD degree in mathematical information technology, both from the University of Jyväskylä. His research is focused on multiobjective optimization and decision making with an emphasis on interactive multiobjective optimization methods, data-driven decision making, computationally expensive problems, explainable/interpretable machine learning, and visualization aspects related to many-objective problems. He has participated in several industrial projects involving different applications of multiobjective optimization, e.g. in chemical engineering. He has been a visiting researcher at Cornell University, Carnegie Mellon, the University of Surrey, the University of Wuppertal, the University of Malaga and the VTT Technical Research Centre of Finland. He holds the title of Docent (similar to Adjunct Professor in the US) in Industrial Optimization at the University of Jyväskylä, Finland.

Julia Handl

Julia Handl obtained a BSc (Hons) in Computer Science from Monash University in 2001, an MSc degree in Computer Science from the University of Erlangen-Nuremberg in 2003, and a PhD in Bioinformatics from the University of Manchester in 2006. From 2007 to 2011, she held an MRC Special Training Fellowship at the University of Manchester, and she is now a Professor in Decision Sciences at Alliance Manchester Business School. A core strand of her work explores the use of multiobjective optimization in unsupervised and semi-supervised classification. She has developed multiobjective algorithms for clustering and feature selection tasks in these settings, and her work has highlighted some of the theoretical and empirical advantages of this approach.

ECADA 2023 — 13th Workshop on Evolutionary Computation for the Automated Design of Algorithms

Summary

Mode: hybrid

Scope

The main objective of this workshop is to discuss hyper-heuristics and algorithm configuration methods for the automated generation and improvement of algorithms, with the goal of producing solutions (algorithms) that are applicable to multiple instances of a problem domain. The areas of application of these methods include optimization, data mining and machine learning.

Automatically generating and improving algorithms by means of other algorithms has been the goal of several research fields, including artificial intelligence in the early 1950s, genetic programming since the early 1990s, and more recently automated algorithm configuration and hyper-heuristics. The term hyper-heuristics generally describes meta-heuristics applied to a space of algorithms. While genetic programming has most famously been used to this end, other evolutionary algorithms and meta-heuristics have successfully been used to automatically design novel (components of) algorithms. Automated algorithm configuration grew from the necessity of tuning the parameter settings of meta-heuristics and it has produced several powerful (hyper-heuristic) methods capable of designing new algorithms by either selecting components from a flexible algorithmic framework or recombining them following a grammar description.

Although most evolutionary algorithms are designed to generate specific solutions to a given instance of a problem, one of the defining goals of hyper-heuristics is to produce solutions that solve more generic problems. For instance, while there are many examples of evolutionary algorithms for evolving classification models in data mining and machine learning, a genetic programming hyper-heuristic has been employed to create a generic classification algorithm which in turn generates a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic operates at a higher level of abstraction compared to how most search methodologies are currently employed; i.e., it searches the space of algorithms as opposed to directly searching the problem solution space, raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. In contrast to standard genetic programming, which attempts to build programs from scratch from a typically small set of atomic functions, hyper-heuristic methods specify an appropriate set of primitives (e.g., algorithmic components) and allow evolution to combine them in novel ways as appropriate for the targeted problem class. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of the approach, as the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refinement of the primitives allows for gradual, directed enlargement of the search space until convergence.
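As a purely illustrative sketch of the idea above — searching the space of algorithms rather than the space of solutions — the following toy Python example evolves a simple OneMax heuristic by recombining primitive move operators; each candidate *algorithm* is scored across several problem instances, which is what rewards generality. All primitives, names, and parameters here are hypothetical and not drawn from any system cited in this call.

```python
import random

# Hypothetical primitive components the hyper-heuristic may combine.
def flip_one(bits):
    """Flip one random bit."""
    b = bits.copy()
    b[random.randrange(len(b))] ^= 1
    return b

def flip_first_zero(bits):
    """Greedy primitive: set the first zero bit found."""
    b = bits.copy()
    for i, v in enumerate(b):
        if v == 0:
            b[i] = 1
            break
    return b

def shuffle_segment(bits):
    """Shuffle a random segment (sum-preserving diversification)."""
    b = bits.copy()
    i, j = sorted(random.sample(range(len(b)), 2))
    seg = b[i:j]
    random.shuffle(seg)
    b[i:j] = seg
    return b

PRIMITIVES = [flip_one, flip_first_zero, shuffle_segment]

def run_algorithm(recipe, n=20, steps=50):
    """Apply an evolved recipe (sequence of primitives) to one OneMax instance."""
    bits = [random.randint(0, 1) for _ in range(n)]
    for step in range(steps):
        cand = recipe[step % len(recipe)](bits)
        if sum(cand) >= sum(bits):  # accept if not worse
            bits = cand
    return sum(bits)

def fitness(recipe, instances=5):
    # An algorithm's fitness is its mean performance over several instances:
    # this pushes the hyper-heuristic toward generally applicable solvers.
    return sum(run_algorithm(recipe) for _ in range(instances)) / instances

# (1+1)-style search in algorithm space: mutate the recipe, keep if not worse.
random.seed(0)
recipe = [random.choice(PRIMITIVES) for _ in range(4)]
best = fitness(recipe)
for _ in range(30):
    mutant = recipe.copy()
    mutant[random.randrange(len(mutant))] = random.choice(PRIMITIVES)
    f = fitness(mutant)
    if f >= best:
        recipe, best = mutant, f

print([op.__name__ for op in recipe], best)
```

The outer loop never touches bit strings directly; it only rewrites the recipe, which is exactly the higher level of abstraction the paragraph describes.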

As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007, genetic programming was used to evolve mate selection in evolutionary algorithms; in 2011, linear genetic programming was used to evolve crossover operators; more recently, genetic programming was used to evolve complete black-box search algorithms, SAT solvers, and FuzzyART category functions. Moreover, hyper-heuristics may be applied before deploying an algorithm (offline) or while problems are being solved (online), or even continuously learn by solving new problems (life-long). Offline and life-long hyper-heuristics are particularly useful for real-world problem solving where one can afford a large amount of a priori computational time to subsequently solve many problem instances drawn from a specified problem domain, thus amortizing the a priori computational time over repeated problem solving. Recently, the design of multi-objective evolutionary algorithm components was automated.

Very little is known yet about the foundations of hyper-heuristics, such as the impact of the meta-heuristic exploring algorithm space on the performance of the thus automatically designed algorithm. An initial study compared the performance of algorithms generated by hyper-heuristics powered by five major types of genetic programming. Another avenue for research is investigating the potential performance improvements obtained through the use of asynchronous parallel evolution to exploit the typical large variation in fitness evaluation times when executing hyper-heuristics.

Content

We welcome original submissions on all aspects of Evolutionary Computation for the Automated Design of Algorithms, in particular, evolutionary computation methods and other hyper-heuristics for the automated design, generation or improvement of algorithms that can be applied to any instance of a target problem domain. Relevant methods include methods that evolve whole algorithms given some initial components as well as methods that take an existing algorithm and improve it or adapt it to a specific domain. Another important aspect in automated algorithm design is the definition of the primitives that constitute the search space of hyper-heuristics. These primitives should capture the knowledge of human experts about useful algorithmic components (such as selection, mutation and recombination operators, local searches, etc.) and, at the same time, allow the generation of new algorithm variants. Examples of the application of hyper-heuristics, including genetic programming and automatic configuration methods, to such frameworks of algorithmic components are of interest to this workshop, as well as the (possibly automatic) design of the algorithmic components themselves and the overall architecture of metaheuristics. Therefore, relevant topics include (but are not limited to):
- Applications of hyper-heuristics, including general-purpose automatic algorithm configuration methods for the design of metaheuristics, in particular evolutionary algorithms, and other algorithms for application domains such as optimization, data mining, machine learning, image processing, engineering, cyber security, critical infrastructure protection, and bioinformatics.
- Novel hyper-heuristics, including but not limited to genetic programming based approaches, automatic configuration methods, and online, offline and life-long hyper-heuristics, with the stated goal of designing or improving the design of algorithms.
- Empirical comparison of hyper-heuristics.
- Theoretical analyses of hyper-heuristics.
- Studies on primitives (algorithmic components) that may be used by hyper-heuristics as the search space when automatically designing algorithms.
- Automatic selection/creation of algorithm primitives as a preprocessing step for the use of hyper-heuristics.
- Analysis of the trade-off between generality and effectiveness of different hyper-heuristics or of algorithms produced by a hyper-heuristic.
- Analysis of the most effective representations for hyper-heuristics (e.g., Koza style Genetic Programming versus Cartesian Genetic Programming).
- Asynchronous parallel evolution of hyper-heuristics.

Organizers

Daniel Tauritz

Daniel R. Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University (AU), the Director for National Laboratory Relationships in AU's Samuel Ginn College of Engineering, the founding Head of AU’s Biomimetic Artificial Intelligence Research Group (BioAI Group), the founding director of AU’s Biomimetic National Security Artificial Intelligence Laboratory (BONSAI Lab), a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), and founding academic director of the LANL/AU Cyber Security Sciences Institute (CSSI). He received his Ph.D. in 2002 from Leiden University. His research interests include the design of generative hyper-heuristics, competitive coevolution, and parameter control, and the application of computational intelligence techniques in security and defense. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

John Woodward

John R. Woodward is a lecturer at Queen Mary University of London. Formerly he was a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and was employed on the DAASE project (http://daase.cs.ucl.ac.uk/). Before that he was a lecturer for four years at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.

ECCBI — Evolutionary Computation in Computational Biology and Bioinformatics

Summary

In the last three decades, many computer scientists in Artificial Intelligence have made significant contributions to modeling biological systems as a means of understanding the molecular basis of mechanisms in the healthy and diseased cell. The field of computational biology includes the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The focus of this workshop is the use of nature-inspired approaches to central problems in computational biology and bioinformatics, including optimization methods under the umbrella of evolutionary computation.
In recent years, significant progress has been made in the development of novel and powerful algorithms to solve outstanding structure-centric problems at the heart of molecular biology, such as structure modeling, prediction, and analysis, molecular optimization and design, characterization of supramolecular complexes, structure-driven prediction of variant effects on stability, function, and dysfunction, and more. These problems often pose difficult search and optimization tasks on modular systems with vast, high-dimensional, continuous search spaces, often underpinned by non-linear multimodal energy surfaces.
One of the main objectives of this workshop is to bring together researchers from different communities and disciplines to exchange the latest knowledge and expertise on computational treatments of small and large molecules for structure-centric problems in molecular biology. The workshop will allow for a broader focus on structure-related problems that necessitate the design of novel evolutionary computation approaches. A particular focus will be on connecting this body of research with the latest developments in deep learning to spur the design of novel frameworks, as well as on overcoming prediction problems present in recent successful deep learning architectures (e.g., RoseTTAFold and DeepMind's AlphaFold2, which are capturing the attention of the worldwide research community).
Following the previous editions in GECCO 2015, GECCO 2016 and GECCO 2017 (the first two editions focused on computational structural biology), one of the objectives of this workshop is to aid evolutionary computation researchers to disseminate recent findings and progress. In this edition, we expand this objective to connect the deep learning community with the evolutionary computation community, as well as broaden our focus beyond protein-centric problems to include a larger community of researchers making rapid advances in small molecule optimization and design for novel therapeutics and biotechnology applications.
The workshop will provide a meeting point for authors and attendants of the GECCO conference who have a current or developing interest in computational biology and bioinformatics. We believe that the workshop will also attract researchers working at the intersection of deep learning and bioinformatics to join the GECCO attendance and community and spur novel collaborations. We hope this workshop will stimulate the free exchange and discussion of novel ideas and results and further advance scientific enquiry and progress.
Areas of interest include (but are not restricted to):
• Genome and sequence analysis with nature-inspired approaches.
• Biological network modeling and analysis.
• Use of artificial life models such as cellular automata or Lindenmayer systems in the modeling of biological problems.
• Study and analysis of properties of biological systems such as self-organization, self-assembled systems, emergent behavior or morphogenesis.
• Hybrid approaches and memetic algorithms in the modeling of computational biology problems.
• Multi-objective approaches in the modeling of computational biology problems.
• Use of natural and evolutionary computation algorithms in protein structure classification and prediction (secondary and tertiary).
• Integration of evolutionary computation algorithms with deep-learning architectures.
• Mapping of protein and peptide energy landscapes.
• Modeling of temporal folding of proteins.
• Molecular optimization and design.
• Binding, docking, and complexation.
• Prediction of variant effects on stability, function, and dysfunction.
• Evolutionary search strategies to assist cryo-electron microscopy and other experimental techniques in model building.
• Surrogate models and stochastic approximations of computationally expensive fitness functions of biomolecular systems.

Organizers

José Santos

Prof. José Santos has an MSc in Physics (1989, specialization in Electronics) from the University of Santiago de Compostela (Spain), and a PhD from the same institution (1996, specialization in Artificial Intelligence). He is a Professor in the Department of Computer Science and Information Technologies, University of A Coruña (Spain). His research interests include artificial life, neural computation, evolutionary computation, autonomous robotics, and computational biology. In recent years his research has focused on computational biology, applying the knowledge acquired in his other research lines to the computational modeling of biological problems.

Julia Handl

Julia Handl obtained a BSc (Hons) in Computer Science from Monash University in 2001, an MSc degree in Computer Science from the University of Erlangen-Nuremberg in 2003, and a PhD in Bioinformatics from the University of Manchester in 2006. From 2007 to 2011, she held an MRC Special Training Fellowship at the University of Manchester, and she is now a Professor in Decision Sciences at Alliance Manchester Business School. A core strand of her work explores the use of multiobjective optimization in unsupervised and semi-supervised classification. She has developed multiobjective algorithms for clustering and feature selection tasks in these settings, and her work has highlighted some of the theoretical and empirical advantages of this approach.

Amarda Shehu

Prof. Amarda Shehu is the Associate Vice President of Research for the Institute of Digital InnovAtion (IDIA) and a Professor in the Department of Computer Science in the School of Computing in the College of Engineering and Computing at George Mason University. She is also the Inaugural Founding Co-Director of George Mason University's Transdisciplinary Center for Advancing Human-Machine Partnerships (CAHMP). Shehu served as an NSF Program Director in the Information and Intelligent Systems Division of the Directorate for Computer and Information Science and Engineering during 2019-2022. She is a Fellow of the American Institute for Medical and Biological Engineering (AIMBE) and has received several awards, including the 2022 Outstanding Faculty Award from the State Council of Higher Education for Virginia, the 2021 Beck Family Presidential Medal for Faculty Excellence in Research and Scholarship, and the 2012 NSF CAREER Award. Her research is regularly supported by various NSF programs, the Department of Defense, as well as state and private research awards.

ECXAI — Workshop on Evolutionary Computing and Explainable AI

Summary

Explainable artificial intelligence (XAI) has gained significant traction in the machine learning community in recent years because of the need to generate “explanations”, accessible to a wide range of users, of how these typically black-box tools operate. Nature-inspired optimisation techniques are also often black box in nature, and the attention of the explainability community has begun to turn to explaining their operation too. Many of the processes that drive nature-inspired optimisers are stochastic and complex, presenting a barrier to understanding how solutions to a given optimisation problem have been generated.

Explainable optimisation can address some of the questions that arise during the use of an optimiser: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? By providing mechanisms that enable a decision maker to interrogate an optimiser and answer these questions, trust in the system is built. On the other hand, many approaches to XAI in machine learning are based on search algorithms that interrogate or refine the model to be explained, and these have the potential to draw on the expertise of the EC community. Furthermore, many of the broader questions (such as what kinds of explanation are most appealing or useful to end users) are faced by XAI researchers in general.

From an application perspective, answering such questions can be crucial. The goal of XAI and related research is to develop methods to interrogate AI processes, supporting decision makers while also building trust in AI decision support through more readily understandable explanations.

We seek contributions on a range of topics related to this theme, including but not limited to:
- Interpretability vs explainability in EC and their quantification
- Landscape analysis and XAI
- Contributions of EC to XAI in general
- Use of EC to generate explainable/interpretable models
- XAI in real-world applications of EC
- Possible interplay between XAI and EC theory
- Applications of existing XAI methods to EC
- Novel XAI methods for EC
- Legal and ethical considerations
- Case studies / applications of EC & XAI technologies

Papers will be double blind reviewed by members of our technical programme committee.

Authors can submit short contributions, including position papers, of up to 4 pages and regular contributions of up to 8 pages, each following the GECCO paper formatting guidelines. Software demonstrations will also be welcome.

Organizers

Giovanni Iacca

Giovanni Iacca is an Associate Professor in Computer Engineering at the Department of Information Engineering and Computer Science of the University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016), and The Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He is currently co-PI of the PATHFINDER-CHALLENGE project "SUSTAIN" (2022-2026). Previously, he was co-PI of the FET-Open project "PHOENIX" (2015-2019). He has received two best paper awards (EvoApps 2017 and UKCI 2012). His research focuses on computational intelligence, distributed systems, and explainable AI applied e.g. to medicine. In these fields, he co-authored more than 130 peer-reviewed publications. He is actively involved in the organization of tracks and workshops at some of the top conferences in the field of computational intelligence, and he regularly serves as reviewer for several journals and conference committees.

David Walker

David Walker is a Lecturer in Computer Science at the University of Plymouth. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving hard optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of Evolutionary Computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters. While working as a postdoctoral research associate at the University of Exeter, his work involved the development of hyper-heuristics and, more recently, investigating the use of interactive EAs in the water industry. Since joining Plymouth, Dr Walker has built a research group that includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012, in addition to his work as a reviewer for journals such as IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and the Journal of Hydroinformatics. He is a member of the IEEE Taskforce on Many-objective Optimisation. At the University of Plymouth he is a member of both the Centre for Robotics and Neural Systems (CRNS) and the Centre for Secure Communications and Networking.

Alexander Brownlee

Alexander (Sandy) Brownlee is a Lecturer in the Division of Computing Science and Mathematics at the University of Stirling. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 70 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex And Intelligent Systems. He has been an organiser of several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data Analysis; the co-management of industry/academy cooperation projects, namely the development, with Protec srl, of a new-generation computer vision-based fruit sorter and, with the Italian Railway Network Society (RFI) and Camlin Italy, of an automatic inspection system for train pantographs; and an EU-funded “Marie Curie Initial Training Network” grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing. He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018 he was chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, later a track of the EvoApplications conference. From 2005 to 2020 he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has co-edited journal special issues dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”. He was awarded the "Evostar 2009 Award" in recognition of the most outstanding contribution to Evolutionary Computation.

John McCall

John McCall is Head of Research for the National Subsea Centre at Robert Gordon University. He has researched in machine learning, search and optimisation for 25 years, making novel contributions to a range of nature-inspired optimisation algorithms and predictive machine learning methods, including EDA, PSO, ACO and GA. He has 150+ peer-reviewed publications in books, international journals and conferences. These have received over 2400 citations with an h-index of 22. John and his research team specialise in industrially-applied optimization and decision support, working with major international companies including BT, BP, EDF, CNOOC and Equinor as well as a diverse range of SMEs. Major application areas for this research are: vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John and his team attract direct industrial funding as well as grants from UK and European research funding councils and technology centres. John is a founding director and CEO of Celerum, which specialises in freight logistics. He is also a founding director and CTO of PlanSea Solutions, which focuses on marine logistics planning. John has served as a member of the IEEE Evolutionary Computing Technical Committee, an Associate Editor of IEEE Computational Intelligence Magazine and the IEEE Systems, Man and Cybernetics Journal, and he is currently an Editorial Board member for the journal Complex And Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences, including several GECCO workshops in recent years.

Jaume Bacardit

Jaume Bacardit is Reader in Machine Learning at Newcastle University in the UK. He received a BEng and MEng in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000 and 2004, respectively. Bacardit's research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge and improve the interpretability of machine learning algorithms (currently known as Explainable AI), and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads/has led the data analytics efforts of several large interdisciplinary consortia: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EI-IMI, €15M, focusing on disease phenotype identification) and PORTABOLOMICS (UK EPSRC, £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL'14), been co-chair of the EML track in 2009, 2013, 2014, 2020 and 2021, and Workshops co-chair in 2010 and 2011. He has 90+ peer-reviewed publications that have attracted 4600+ citations, with an H-index of 31 (Google Scholar).

EGML-EC — 2nd GECCO workshop on Enhancing Generative Machine Learning with Evolutionary Computation (EGML-EC) 2023

Summary

Generative Machine Learning has become a key field in machine learning and deep learning. In recent years, this field of research has proposed many deep generative models (DGMs), ranging over a broad family of methods such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive (AR) models and stable diffusion (SD) models. These models combine advanced deep neural networks with classical density estimation (either explicit or implicit), mainly to generate synthetic data samples. Although these methods have achieved state-of-the-art results in the generation of synthetic data of different types, such as images, speech, text, molecules and video, deep generative models remain difficult to train.
There are still open problems, such as the vanishing gradient and mode collapse in DGMs, which limit their performance. Although there are strategies to minimize the effect of these problems, they remain fundamentally unsolved. In recent years, evolutionary computation (EC) and related bio-inspired techniques (e.g., particle swarm optimization), often in the form of Evolutionary Machine Learning approaches, have been successfully applied to mitigate the problems that arise when training DGMs, raising the quality of the results to impressive levels. Among other approaches, these new solutions include GAN, VAE, AR, and SD training methods or fine-tuning optimization based on evolutionary and coevolutionary algorithms, the combination of deep neuroevolution with training approaches, and the evolutionary exploration of latent space.
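To make one of these ideas concrete, evolutionary exploration of a generator's latent space can be sketched as follows. This is a toy illustration only: the `generator` function is a hypothetical stand-in for a trained GAN or VAE decoder, and the target-matching objective, names, and parameters are all assumptions for the example, not any published method.

```python
import random
import math

# Stand-in for a trained generator G: latent vector z -> sample.
# In practice this would be a trained GAN/VAE decoder network.
def generator(z):
    return [math.tanh(z[0] + 0.5 * z[1]), math.tanh(z[1] - 0.3 * z[0])]

# Hypothetical objective on generated samples: match a target output.
TARGET = [0.7, -0.2]

def score(z):
    x = generator(z)
    return -sum((a - b) ** 2 for a, b in zip(x, TARGET))

# Simple elitist (mu + lambda) evolution strategy over the latent space:
# no gradients of the generator are needed, only sample evaluations.
random.seed(1)
mu, lam, dim, sigma = 4, 12, 2, 0.3
pop = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(mu)]
for gen in range(40):
    offspring = [[g + random.gauss(0, sigma) for g in random.choice(pop)]
                 for _ in range(lam)]
    pop = sorted(pop + offspring, key=score, reverse=True)[:mu]

best_z = pop[0]
print(best_z, score(best_z))
```

Because the search treats the generator as a black box, the same loop applies unchanged to any decoder and any differentiable or non-differentiable objective, which is precisely what makes EC attractive for latent-space exploration.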
This workshop aims to act as a medium for debate, exchange of knowledge and experience, and encourage collaboration for researchers focused on DGMs and the EC community. Bringing these two communities together will be essential for making significant advances in this research area. Thus, this workshop provides a critical forum for disseminating the experience in the topic of enhancing generative modeling with EC, to present new and ongoing research in the field, and to attract new interest from our community.
Particular topics of interest are (not exclusively):

+ Evolutionary and co-evolutionary algorithms to train deep generative models
+ EC-based optimization of hyper-parameters for deep generative models
+ Neuroevolution applied to train deep generative architectures
+ Dynamic EC-based evolution of deep generative models training parameters
+ Evolutionary latent space exploration
+ Real-world applications of EC-based deep generative models solutions
+ Multi-criteria adversarial training of deep generative models
+ Evolutionary generative adversarial learning models
+ Software libraries and frameworks for deep generative models applying EC

Organizers

Jamal Toutouh

Jamal Toutouh is an Assistant Professor at the University of Málaga (Spain). Previously, he was a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) CSAIL Lab in the USA. He obtained his Ph.D. in Computer Engineering at the University of Málaga (Spain), which was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. His dissertation focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems. His current research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Generative Modelling and Adversarial Machine Learning. The main idea is to devise new algorithms that improve the efficiency and efficacy of state-of-the-art methods, mainly by applying evolutionary computation and related techniques, such as particle swarm optimization, in the form of Evolutionary Machine Learning approaches. He also works on the application of Machine Learning to problems related to Smart Mobility, Smart Cities, and Climate Change.

Una-May O'Reilly

Una-May O'Reilly is the leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning, and frameworks for large-scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society for Genetic and Evolutionary Computation, which has evolved into ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.

João Correia

João Correia is an Assistant Professor at the University of Coimbra, a researcher at the Computational Design and Visualization Lab, and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the same university. He holds a PhD in Information Science and Technology from the University of Coimbra, as well as an MSc and a BSc in Informatics Engineering from the same university. His main research interests include Evolutionary Computation, Machine Learning, Adversarial Learning, Computer Vision and Computational Creativity. He serves on international program committees of conferences in the areas of Evolutionary Computation, Artificial Intelligence, Computational Art and Computational Creativity, and as a reviewer for various conferences and journals in these areas, namely GECCO and EvoStar. More recently, he was invited as a remote reviewer for a European Research Council grant. He was publicity chair and chair of the International Conference on Evolutionary Art, Music and Design, and is currently the publicity chair for EvoStar - The Leading European Event on Bio-Inspired Computation. Furthermore, he has authored and co-authored several articles in international conferences and journals on Artificial Intelligence and Evolutionary Computation, and is involved in national and international projects concerning Evolutionary Computation, Machine Learning, Computational Creativity and Data Science.

Penousal Machado

Penousal Machado leads the Cognitive and Media Systems group at the University of Coimbra. His research interests include Evolutionary Computation, Computational Creativity, and Evolutionary Machine Learning. In addition to the numerous scientific papers in these areas, his works have been presented in venues such as the National Museum of Contemporary Art (Portugal) and the “Talk to me” exhibition of the Museum of Modern Art, NY (MoMA).

Sergio Nesmachnow

Sergio Nesmachnow is a Full Professor and Researcher at Universidad de la República, Uruguay. His main research areas are metaheuristics, computational intelligence, high-performance computing, and smart cities. He has published more than 300 research articles in journals and conferences.

ERBML — 26th International Workshop on Evolutionary Rule-based Machine Learning

Summary

This workshop is a continuation of the International Workshop on Learning Classifier Systems (IWLCS) and will be its 26th edition. The IWLCS has become an essential part of GECCO, inspiring the next generation of evolutionary rule-based machine learning (ERBML) researchers to explore and enhance the field's methods, especially Learning Classifier Systems (LCSs). The motivations for the new name are to broaden the scope, make the workshop more discoverable, and reach a wider audience. This renaming is part of an ongoing process that has been underway for several years; for example, the 20th edition of IWLCS at GECCO 2017 was also named International Workshop on Evolutionary Rule-based Machine Learning.

ERBML is a set of machine learning (ML) methods that leverage the strengths of various metaheuristics to find an optimal set of rules to solve a problem. There are ERBML methods for all kinds of learning tasks, i.e., supervised learning, unsupervised learning and reinforcement learning. The main ERBML methods include Learning Classifier Systems, Ant-Miner, artificial immune systems, as well as evolving fuzzy rule-based systems. They have in common that models or model structures are optimized using evolutionary, symbolic or swarm-based methods. The key feature of the models built is an inherent comprehensibility (explainability, transparency, interpretability), a property that has recently become a matter of high interest for many ML communities as part of the eXplainable AI movement. This workshop will provide an opportunity to highlight research trends in the field of ERBML, demonstrate modern implementations for real-life applications, show effectiveness in creating flexible and eXplainable AI systems, and attract new interest in this alternative and often advantageous modelling paradigm.
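To make the rule-based model family concrete, here is a minimal, illustrative sketch (not any specific LCS implementation; all names are hypothetical) of the core ingredients: rules with interval conditions over the input features, matching, prediction, and evolutionary variation of the rule bounds. The rule set itself is directly human-readable, which is the comprehensibility property discussed above.

```python
import random

# A rule is a pair (intervals, action): a list of (low, high) bounds over
# the input features plus a predicted class. A rule matches an input if
# every feature falls inside its interval.
def matches(rule, x):
    intervals, _action = rule
    return all(lo <= xi <= hi for (lo, hi), xi in zip(intervals, x))

def predict(rules, x, default=0):
    # Use the first matching rule; real LCSs instead mix the predictions
    # of the whole match set, weighted by rule fitness.
    for rule in rules:
        if matches(rule, x):
            return rule[1]
    return default

def mutate(rule, rng, step=0.1):
    # Evolutionary variation: jitter the interval bounds, then reorder
    # each pair so that low <= high still holds.
    intervals, action = rule
    jittered = [(lo + rng.gauss(0, step), hi + rng.gauss(0, step))
                for lo, hi in intervals]
    return ([(min(lo, hi), max(lo, hi)) for lo, hi in jittered], action)

rules = [([(0.0, 0.5), (0.0, 0.5)], 0),   # "both features small" -> class 0
         ([(0.5, 1.0), (0.5, 1.0)], 1)]   # "both features large" -> class 1
print(predict(rules, [0.2, 0.3]), predict(rules, [0.8, 0.9]))
```

A genetic algorithm would apply `mutate` (and crossover) to a population of such rules or rule sets, selecting on prediction accuracy and generality.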

Topics of interest include but are not limited to:

- Advances in ERBML methods: local models, problem space partitioning, rule mixing, …

- Applications of ERBML: medical, navigation, bioinformatics, computer vision, games, cyber-physical systems, …

- State-of-the-art analysis: surveys, sound comparative experimental benchmarks, carefully crafted reproducibility studies, …

- Formal developments in ERBML: provably optimal parametrization, time bounds, generalization, …

- Comprehensibility of evolved rule sets: knowledge extraction, visualization, interpretation of decisions, eXplainable AI, …

- Advances in ERBML paradigms: Michigan/Pittsburgh style, hybrids, iterative rule learning, …

- Hyperparameter optimization for ERBML: hyperparameter selection, online self-adaptation, …

- Optimizations and parallel implementations: GPU acceleration, matching algorithms, …

Due to the rather disjointed ERBML research community, in addition to full papers (8 pages excluding references) on novel ERBML research, we plan to allow submission of extended abstracts (2 pages excluding references) that summarize recent high-value ERBML research by the authors, showcasing its practical significance. These will then be presented in a dedicated short paper segment with short presentations.

Organizers

David Pätzel

David Pätzel is a doctoral candidate at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2015 and his M.Sc. in the same field in 2017. His main research is directed towards Learning Classifier Systems, with a focus on developing a more formal, probabilistic understanding of LCSs that can, for example, be used to improve existing algorithms. Besides that, his research interests include reinforcement learning, evolutionary machine learning algorithms and pure functional programming. He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2020.

Alexander Wagner

Alexander Wagner is a doctoral candidate at the Department of Artificial Intelligence in Agricultural Engineering at the University of Hohenheim, Germany. He received his B.Sc. and M.Sc. degrees in computer science from the University of Augsburg in 2018 and 2020, respectively. His bachelor's thesis already dealt with the field of Learning Classifier Systems. This sparked his interest, and he continued working on Learning Classifier Systems, especially XCS, during his master's studies, dedicating his master's thesis to this topic in greater depth. His current research focuses on the application of Learning Classifier Systems, in particular XCS and its derivatives, to self-learning adaptive systems designed to operate in real-world environments, especially in agricultural domains. In this context, the emphasis of his research is on increasing the reliability of XCS, and of LCSs in general. His research interests also include reinforcement learning, evolutionary machine learning algorithms, neural networks and neuroevolution. He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2021.

Michael Heider

Michael Heider is a doctoral candidate at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2016 and his M.Sc. in Computer Science and Information-oriented Business Management in 2018. His main research is directed towards Learning Classifier Systems, especially those following the Pittsburgh style, with a focus on regression tasks encountered in industrial settings, where solutions need to be both accurate and comprehensible. To achieve comprehensibility he focuses on compact and simple rule sets. Besides that, his research interests include optimization techniques and unsupervised learning (e.g. for data augmentation or feature extraction). He has been an elected organizing committee member of the International Workshop on Learning Classifier Systems since 2021.

Abubakar Siddique

Dr. Siddique's main research lies in creating novel machine learning systems, inspired by the principles of cognitive neuroscience, to provide efficient and scalable solutions for challenging and complex problems in different domains, such as Boolean problems, computer vision, navigation, and bioinformatics. He presented a tutorial on Learning Classifier Systems: Cognitive Inspired Machine Learning for eXplainable AI at GECCO 2022. He is engaged as an author and reviewer for different journals and international conferences, including IEEE Transactions on Cybernetics, IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, GECCO, IEEE CEC, and EuroGP.

Dr. Siddique received his Bachelor's in Computer Science from Quaid-i-Azam University, his Master's in Computer Engineering from U.E.T Taxila, and his Ph.D. in Computer Engineering from Victoria University of Wellington. He was the recipient of the VUWSA Gold Award and the "Student Of The Session" award during his Ph.D. and bachelor's studies, respectively. He spent nine years at Elixir Pakistan, a leading California (USA) based software company, most recently as a Principal Software Engineer leading a team of software developers. He developed enterprise-level software for customers such as Xerox, IBM, and Adobe.

EvoRL — Evolutionary Reinforcement Learning Workshop

Summary

In recent years reinforcement learning (RL) has received a lot of attention thanks to its performance and ability to address complex tasks. At the same time, multiple recent papers, notably work from OpenAI, have shown that evolution strategies (ES) can be competitive with standard RL algorithms on some problems while being simpler and more scalable. Similar results were obtained by researchers from Uber, this time using a gradient-free genetic algorithm (GA) to train deep neural networks on complex control tasks. Moreover, recent research in the field of evolutionary algorithms (EA) has led to the development of algorithms like Novelty Search and Quality Diversity, capable of efficiently addressing complex exploration problems and finding a wealth of different policies while improving the external reward (QD) or without relying on any reward at all (NS). All these results and developments have sparked a strong renewed interest in such population-based computational approaches.
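The evolution strategies popularized by the OpenAI work mentioned above can be sketched in a few lines: sample Gaussian perturbations of the policy parameters, evaluate the return of each perturbed policy, and move the parameters along the return-weighted average of the noise. The `episode_return` function below is a toy stand-in for a real RL rollout, chosen only so the sketch is self-contained.

```python
import random

# Toy stand-in for an episode return: higher when the policy parameters
# are close to an optimum that is unknown to the algorithm.
OPTIMUM = [0.3, -0.2, 0.7]
def episode_return(theta):
    return -sum((t - o) ** 2 for t, o in zip(theta, OPTIMUM))

def es_step(theta, rng, pop=50, sigma=0.1, lr=0.05):
    # One OpenAI-style ES update: perturb, evaluate, and estimate the
    # gradient as the (baseline-subtracted) return-weighted noise average.
    dim = len(theta)
    noises, returns = [], []
    for _ in range(pop):
        eps = [rng.gauss(0, 1) for _ in range(dim)]
        noises.append(eps)
        returns.append(episode_return([t + sigma * e
                                       for t, e in zip(theta, eps)]))
    mean_r = sum(returns) / pop
    grad = [sum((r - mean_r) * eps[i]
                for r, eps in zip(returns, noises)) / (pop * sigma)
            for i in range(dim)]
    return [t + lr * g for t, g in zip(theta, grad)]

rng = random.Random(0)
theta = [0.0, 0.0, 0.0]
for _ in range(300):
    theta = es_step(theta, rng)
```

Only episode returns are needed, never gradients of the policy, which is what makes the method trivially parallelizable and robust to sparse or non-differentiable rewards.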

Nevertheless, even if EAs can perform well on hard exploration problems, they still suffer from low sample efficiency. This limitation is less pronounced in RL methods, notably because of sample reuse, although they in turn struggle in hard exploration settings. The complementary characteristics of RL algorithms and EAs have pushed researchers to explore new approaches merging the two in order to harness their respective strengths while avoiding their shortcomings.

Some recent papers already demonstrate that the interaction between these two fields can lead to very promising results. We believe that this is a nascent field where new methods can be developed to address problems like sparse and deceptive rewards, open-ended learning, and sample efficiency, while expanding the range of applicability of such approaches.
In this workshop, we want to highlight this newly developing field and provide an outlet at GECCO for the two communities (RL and EA) to meet and interact, encouraging researchers to discuss past and new challenges and to develop new applications.

The workshop will focus on the following topics:

- Evolutionary reinforcement learning
- Population-based methods for policy search
- Evolution strategies
- Neuroevolution
- Hard exploration and sparse reward problems
- Deceptive rewards
- Novelty and diversity search methods
- Divergent search
- Sample-efficient direct policy search
- Intrinsic motivation, curiosity
- Building or designing behaviour characterisations
- Meta-learning, hierarchical learning
- Evolutionary AutoML
- Open-ended learning

Organizers

Giuseppe Paolo

Giuseppe Paolo is a research scientist at Huawei Technologies France. His research focuses on the intersection between evolutionary algorithms and model-based reinforcement learning algorithms.
Giuseppe Paolo received his PhD in Robotics and Artificial Intelligence from Sorbonne University in Paris in 2021; he received his M.Sc. in Robotics, Systems and Control at ETH Zurich in 2018 and the engineering degree from the Politecnico di Torino in 2015.

Antoine Cully

Antoine Cully is a Lecturer (Assistant Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, like evolutionary algorithms, on robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and the Ph.D. degrees in robotics and artificial intelligence from Sorbonne Université in Paris, France, in 2012 and 2015, respectively, and the engineer degree from the School of Engineering Polytech’Sorbonne, in 2012. His Ph.D. dissertation has received three Best-Thesis awards. He has published several papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015), received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), and two Best-Paper awards from GECCO (2021, 2022).

Adam Gaier

Adam Gaier is a Senior Research Scientist at the Autodesk AI Lab pursuing basic research in evolutionary and machine learning and the application of these techniques to problems in design and robotics. He received master's degrees in Evolutionary and Adaptive Systems from the University of Sussex and in Autonomous Systems from the Bonn-Rhein-Sieg University of Applied Sciences, and a PhD from Inria and the University of Lorraine — where his dissertation focused on tackling expensive design problems through the fusion of machine learning, quality diversity, and neuroevolution approaches. His PhD work received recognition at top venues across these fields: including a spotlight talk at NeurIPS (machine learning), multiple best paper awards at GECCO (evolutionary computation), a best student paper at AIAA (aerodynamics design optimization), and a SIGEVO Dissertation Award.

EvoSoft — Evolutionary Computation Software Systems

Summary

Evolutionary computation (EC) methods are applied in many different domains. Therefore, soundly engineered, reusable, flexible, user-friendly, and interoperable software systems are more than ever required to bridge the gap between theoretical research and practical application. However, due to the heterogeneity of application domains and the large number of EC methods, the development of such systems is both time-consuming and complex. Consequently, many EC researchers still implement individual and highly specialized software which is often developed from scratch, concentrates on a specific research question, and does not follow state-of-the-art software engineering practices. As a result, the opportunity to reuse existing systems, and to provide systems for others to build their work on, is not sufficiently seized within the EC community. In many cases the developed systems are not even publicly released, which makes the comparability and traceability of research results very difficult. This workshop concentrates on the importance of high-quality software systems and professional software engineering in the field of EC and provides a platform for EC researchers to discuss the following and other related topics:

• development and application of generic and reusable EC software systems
• architectural and design patterns for EC software systems
• software modeling of EC algorithms and problems
• open-source EC software systems
• expandability, interoperability, and standardization
• comparability and traceability of research results
• graphical user interfaces and visualization
• comprehensive statistical and graphical results analysis
• parallelism and performance
• usability and automation
• comparison and evaluation of EC software systems

Organizers

Stefan Wagner

Stefan Wagner received his MSc in computer science in 2004 and his PhD in technical sciences in 2009, both from Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as associate professor for software project engineering and since 2009 as full professor for complex software systems at the Campus Hagenberg of the University of Applied Sciences Upper Austria. From 2011 to 2018 he was also CEO of the FH OÖ IT GmbH, which is the IT service provider of the University of Applied Sciences Upper Austria. Dr. Wagner is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is project manager and head architect of the open-source optimization environment HeuristicLab. He works as project manager and key researcher in several R&D projects on production and logistics optimization and his research interests are in the area of combinatorial optimization, evolutionary algorithms, computational intelligence, and parallel and distributed computing.

Michael Affenzeller

Michael Affenzeller has published several papers, journal articles and books dealing with theoretical and practical aspects of evolutionary computation, genetic algorithms, and meta-heuristics in general. In 2001 he received his PhD in engineering sciences and in 2004 he received his habilitation in applied systems engineering, both from the Johannes Kepler University Linz, Austria. Michael Affenzeller is professor at the University of Applied Sciences Upper Austria, Campus Hagenberg, head of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL), head of the Master degree program Software Engineering, vice-dean for research and development, and scientific director of the Softwarepark Hagenberg.

GEWS2023 — Grammatical Evolution Workshop - 25 years of GE

Summary

Grammatical Evolution (GE) is an evolutionary algorithm that can be used to evolve programs described by grammars. It does so by using a simple binary string to represent individuals, which are then mapped into more complex structures. Since it was first introduced 25 years ago, it has enjoyed continued popularity and is now among the most popular variants of genetic programming. Eighteen years after the previous workshop, this year sees the reintroduction of the GEWS to celebrate this milestone.
The GEWS aims to present cutting-edge research and to be a premier forum for practitioners and researchers to discuss recent advances in GE and propose new research directions.
All aspects of GE, including its foundations, expansions, analyses, applications, and latest software implementations, will be covered in the workshop. We welcome full-length and short-length research papers, and we especially encourage contributions of industry practice research.
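The genotype-to-phenotype mapping at the heart of GE can be sketched as follows: each integer codon (decoded from the binary string) selects a production for the leftmost non-terminal via the modulo rule, with wrapping when the codons run out. The grammar below is a toy example for illustration, not drawn from any specific GE system.

```python
# Toy grammar: each non-terminal maps to its list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"]],
    "<var>":  [["x"], ["y"]],
}

def ge_map(codons, start="<expr>", max_wraps=2):
    """Standard GE mapping: codon % n_choices picks each production."""
    seq, i = [start], 0
    budget = len(codons) * (max_wraps + 1)   # cap derivation length
    while any(s in GRAMMAR for s in seq) and i < budget:
        # Expand the leftmost non-terminal.
        pos = next(j for j, s in enumerate(seq) if s in GRAMMAR)
        choices = GRAMMAR[seq[pos]]
        # The codon modulo rule; i % len(codons) implements wrapping.
        rule = choices[codons[i % len(codons)] % len(choices)]
        seq = seq[:pos] + rule + seq[pos + 1:]
        i += 1
    return "".join(seq)

print(ge_map([0, 1, 0, 1, 1]))   # derives the expression "x+y"
```

Because variation operators act on the codon string rather than on the derived program, any grammar-conforming language can be targeted without changing the search machinery.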

Organizers

Conor Ryan

Prof. Conor Ryan is a Professor of Machine Learning in the Computer Science and Information Systems (CSIS) department at the University of Limerick. He is interested in applying Machine Learning techniques to medical diagnosis, particularly in semi-automated mammography, and studied with Prof. László Tabár in 2005 to obtain American Medical Association accreditation in Breast Cancer Screening. He also uses Machine Learning to perform data analytics on medical data (including so-called "Big Data") to extract insights from large quantities of data. Current health-related projects include an Enterprise Ireland Commercialisation Fund project to develop a Stage 1 Breast Cancer Detection system, involving Cork University Hospital and the Royal Surrey County Hospital, as well as a longer-term project looking at cardiotocograph (CTG) interpretation.

Mahsa Mahdinejad

Mahsa Mahdinejad is a Ph.D. student in artificial intelligence at the University of Limerick. Her research interests are deep learning, evolutionary algorithms and grammatical evolution, hybrid algorithms, and bioinformatics. She received her bachelor's degree in Physics from Isfahan University of Technology. She has also worked as an intern at the Department of Mathematics and Statistics at the University of Limerick.

Aidan Murphy

Dr. Aidan Murphy received the bachelor's degree in theoretical physics from the Trinity College Dublin, the H.Dip. degree in statistics from the University College Dublin, and the Ph.D. degree in explainable Artificial Intelligence (AI) (X-AI) from the BDS Laboratory, University of Limerick. He is currently a Postdoctoral Research Fellow with the Complex Software Laboratory, University College Dublin, researching software testing and mutation analysis. His research interests include grammatical evolution, transfer learning, fuzzy logic, and X-AI.

GGP — Graph-based Genetic Programming

Summary

While the classical way to represent programs in Genetic Programming (GP) is using an expression tree, different GP variants with graph-based representations have been proposed and studied throughout the years. Graph-based representations have led to novel applications of GP in circuit design, cryptography, image analysis, and more. This workshop aims to encourage this form of GP by considering graph-based methods from a unified perspective and bringing together researchers in this subfield of GP research.
The GECCO’22 graph-based GP tutorial launched an exchange among graph-based GP researchers. Given the positive outcome of the tutorial, the organizers believe that the first workshop on graph-based GP at GECCO’23 would be an excellent opportunity to promote the further development of visions and the exchange of ideas.
We invite submissions that present recent developments in graph-based Genetic Programming. Submitting work that is in an early stage or in progress is welcomed and appreciated.

Organizers

Roman Kalkreuth

Roman Kalkreuth is currently a research associate at TU Dortmund University in Germany. His research is located in the field of graph-based Genetic Programming. Primarily, his research focuses on the development and analysis of genetic operators and selection strategies for Cartesian Genetic Programming. After receiving a Master of Science in Computer Vision and Computational Intelligence (2012) from South Westphalia University of Applied Sciences, he started his Ph.D. studies in 2014 at the Department of Computer Science of TU Dortmund University. Since 2015, he has been a research associate in the computational intelligence research group of Prof. Dr. Günter Rudolph. Roman Kalkreuth defended his dissertation in July this year and subsequently became a postdoc in Professor Rudolph's group.

Thomas Bäck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 - 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 350 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

Dennis G. Wilson

Dennis G. Wilson is an Assistant Professor of AI and Data Science at ISAE-SUPAERO in Toulouse, France. He obtained his PhD at the Institut de Recherche en Informatique de Toulouse (IRIT) on the evolution of design principles for artificial neural networks. Prior to that, he worked in the Anyscale Learning For All group in CSAIL, MIT, applying evolutionary strategies and developmental models to the problem of wind farm layout optimization. His current research focuses on genetic programming, neural networks, and the evolution of learning.

Paul Kaufmann

Paul Kaufmann is a professor for Embedded Systems at the Westphalian University of Applied Sciences, Germany. His research focuses on nature-inspired optimization techniques and their application to adaptive and reconfigurable systems.

Leo Francoso Dal Piccol Sotto

Leo Sotto works as a research scientist at the Fraunhofer Institute for Algorithms and Scientific Computing (SCAI), Germany. He received his bachelor's degree (2015) and his Ph.D. (2020) in Computer Science from the Federal University of Sao Paulo, Brazil. He has worked with Linear Genetic Programming (LGP) applied, for example, to regression and to the classification of cardiac arrhythmia, as well as with Estimation of Distribution Algorithms. He has also investigated properties of the DAG representation in LGP, such as the role of non-effective code, and DAG-based representations for genetic programming in general.

Timothy Atkinson

Timothy Atkinson did his PhD at the University of York, UK, where he developed the graph-based Genetic Programming algorithm Evolving Graphs by Graph Programming. Following this, he did a six-month post-doc at the University of Manchester, UK, where he focused on applications of graph-based GP to circuit synthesis in the context of embedded machine learning. He has spent the last two years working at NNAISENSE in Lugano, Switzerland, where he has focused on applications of evolutionary algorithms in an industrial setting, for control and black-box optimization. During this time, he worked as a leading developer on the open-source EvoTorch project, which provides implementations of state-of-the-art evolutionary algorithms and distributed fitness evaluation within the PyTorch ecosystem.

IAM 2023 — 8th Workshop on Industrial Applications of Metaheuristics

Summary

Metaheuristics have been applied successfully to many areas of applied mathematics and science, showing their ability to deal effectively with problems that are complex and otherwise difficult to solve. A number of factors make the use of metaheuristics in industrial applications increasingly attractive: the flexibility of these techniques, the increased availability of high-performing algorithmic techniques, the increased knowledge of their particular strengths and weaknesses, ever-increasing computing power, and the adoption of computational methods in applications. In fact, metaheuristics have become a powerful tool for solving a large number of real-life optimization problems in different fields and, of course, also in many industrial applications such as production scheduling, distribution planning, inventory management and others.

IAM will present and debate the current achievements of applying these techniques to solve real-world problems in industry, as well as the future challenges, focusing on the (always) critical step from the laboratory to the shop floor. A special focus will be given to discussing which elements can be transferred from academic research to industrial applications and how industrial applications may open new ideas and directions for academic research.
As in the previous edition, the workshop, together with the rest of the conference, will be held in hybrid mode to promote participation.
Topic areas of IAM 2023 include (but are not restricted to):

• Success stories for industrial applications of metaheuristics
• Pitfalls of industrial applications of metaheuristics.
• Metaheuristics to optimize dynamic industrial problems.
• Multi-objective optimization in real-world industrial problems.
• Metaheuristics in highly constrained industrial optimization problems: assuring feasibility, constraint-handling techniques.
• Reduction of computing times through parameter tuning and surrogate modelling.
• Parallelism and/or distributed design to accelerate computations.
• Algorithm selection and configuration for complex problem solving.
• Advantages and disadvantages of metaheuristics when compared to other techniques such as integer programming or constraint programming.
• New research topics for academic research inspired by real (algorithmic) needs in industrial applications.

Organizers

Silvino Fernández Alzueta

He has been an R&D Engineer at the Global R&D Division of ArcelorMittal for more than 15 years. He develops his activity at the ArcelorMittal R&D Centre of Asturias (Spain), in the framework of the Business and TechnoEconomic Department. He has a Master of Science degree in Computer Science and a Ph.D. in Engineering Project Management, both obtained at the University of Oviedo in Spain. His main research interests are in analytics, metaheuristics and swarm intelligence, and he has broad experience in using these techniques in industrial environments to optimize production processes. His paper "Scheduling a Galvanizing Line by Ant Colony Optimization" obtained the best paper award at the ANTS conference in 2014.

Pablo Valledor Pellicer

He is a research engineer at the Global R&D Asturias Centre at ArcelorMittal (the world's leading integrated steel and mining company), working in the Business & Technoeconomic department. He obtained his MS degree in Computer Science in 2006 and his PhD in Business Management in 2015, both from the University of Oviedo. He worked for the R&D department of CTIC Foundation (Centre for the Development of Information and Communication Technologies in Asturias) until February 2007, when he joined ArcelorMittal. His main research interests are metaheuristics, multi-objective optimization, analytics and operations research.

Thomas Stützle

Thomas Stützle is a research director of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received his PhD and his habilitation in computer science, both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, Stochastic Local Search: Foundations and Applications and Ant Colony Optimization, and he has published extensively in the wider area of metaheuristics, including 22 edited proceedings or books, 11 journal special issues, and more than 250 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Evolutionary Computation and Applied Mathematics and Computation, and is on the editorial board of seven other journals. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS.

iGECCO — Interactive Methods at GECCO

Summary

As nature-inspired methods have evolved, it has become clear that optimising towards a quantified fitness function is not always feasible, particularly where part or all of the evaluation of a candidate solution is inherently subjective. This is particularly the case when applying search algorithms to problems such as the generation of art and music. In other cases, optimising to a fitness function might result in a highly optimal solution that is not well suited to implementation in the real world. Incorporating a human into the optimisation process can yield useful results in both examples, and as such the work on interactive evolutionary algorithms (IEAs) has matured in recent years. This proposed workshop will provide an outlet for this research for the GECCO audience. Particular topics of interest are:

* Interactive generation of solutions.
* Interactive evaluation of solutions.
* Psychological aspects of IEAs.
* Multi- and many-objective optimisation with IEAs.
* Machine learning approaches within IEAs.
* Novel applications of IEAs.

Most IEAs focus on either asking the user to generate solutions to a problem with which they are interacting, or asking them to evaluate solutions that have been generated by an evolutionary process. To enable users to generate solutions it is necessary to develop mechanisms by which they can interact with a given solution representation. Solution evaluation requires the display of the solution (e.g., with a visualisation of the chromosome) so that the user can choose between two or more solutions having identified characteristics that best suit them.

As well as the basic interaction and solution evaluation, IEAs bring with them additional considerations through the inclusion of the user. A prime example of such a consideration is "user fatigue". The many iterations required by most nature-inspired methods can equate to a very large number of interactions between the user and system. Over many repeated interactions the user can become fatigued, so methods aimed at addressing this (and other similar effects) are of great importance to the future development of IEAs.

iGECCO 2023 will be offered as a hybrid workshop.

Organizers

Matthew Johns

Dr Matt Johns is a Research Software Engineer at the University of Exeter. He obtained a PhD in Computer Science from the University of Exeter developing methods for incorporating domain expertise into evolutionary algorithms. His research is focused on developing new approaches to the design and management of complex engineering systems by combining visual analytics, heuristic optimisation, and machine learning. His research interests include evolutionary optimisation, engineering systems optimisation, human-computer interaction, and interactive visualisation.

Ed Keedwell

Ed Keedwell is Professor of Artificial Intelligence and a Fellow of the Alan Turing Institute. He joined the Computer Science discipline in 2006 and was appointed as a lecturer in 2009. He has research interests in optimisation (e.g. genetic algorithms, swarm intelligence, hyper-heuristics), machine learning and AI-based simulation, and their application to a variety of difficult problems in bioinformatics and engineering, yielding over 160 journal and conference publications. He leads a research group focusing on applied artificial intelligence and has been involved with successful funding applications totalling over £3.5 million from the EPSRC, Innovate UK, EU and industry. Particular areas of current interest are the optimisation of transportation systems, the development of sequence-based hyper-heuristics and human-in-the-loop optimisation methods for applications in engineering.

Nick Ross

Nick Ross is a Computer Science PhD Student in the College of Engineering, Mathematics and Physical Sciences at the University of Exeter. He is researching the gamification of optimisation and how it might apply to water distribution systems. His research interests include nature-inspired computing, capturing user heuristics, serious games and gamification, and artificial intelligence.

David Walker

David Walker is a Lecturer in Computer Science at the University of Plymouth. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving hard optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of Evolutionary Computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters. While working as a postdoctoral research associate at the University of Exeter his work involved the development of hyper-heuristics and, more recently, investigating the use of interactive EAs in the water industry. Since joining Plymouth Dr Walker’s research group includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012 in addition to his work as a reviewer for journals such as IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and the Journal of Hydroinformatics. He is a member of the IEEE Taskforce on Many-objective Optimisation. At the University of Plymouth he is a member of both the Centre for Robotics and Neural Systems (CRNS) and the Centre for Secure Communications and Networking.

Keep Learning — Keep Learning: Towards optimisers that continually improve and/or adapt

Summary

Combinatorial problems are ubiquitous across many sectors, and delivering optimised solutions can lead to considerable economic benefits in many fields. In a typical scenario, instances arrive in a continual stream and a solution needs to be quickly produced. Although there are many well-known approaches to developing optimisation algorithms, most suffer from a problem that is now becoming apparent across the breadth of Artificial Intelligence: systems are limited to performing well on data that is similar to that encountered in their design process, and are unable to adapt when encountering situations outside of their original programming.

For real-world optimisation this is particularly problematic. If optimisers are trained in a one-off process and then deployed, the system remains static --- despite the fact that optimisation occurs in a dynamic world of changing instance characteristics, changing user requirements and changes in operating environments that influence solution quality (e.g. changes in staff availability, breakdowns in a factory, or traffic in a city). Such changes may be either gradual or sudden. In the best case this leads to systems that deliver sub-optimal performance; at worst, to systems that are completely unfit for purpose. Moreover, a system that does not adapt wastes an obvious opportunity to improve its own performance over time as it solves more and more instances.

The goal of this workshop is to discuss mechanisms by which optimisers can “keep on learning”. This includes mechanisms to enable an optimisation system to:
● Improve with practice as it solves more and more instances
● Learn & adapt from its experience of solving problem instances
● Detect drift in instance characteristics and respond accordingly, e.g. by tuning solvers and/or models
● Detect “surprise” in instance characteristics and respond accordingly, e.g. generation of new solvers
● Predict empty regions of an instance-space where future instances might appear; generate new synthetic instances in this space to provide training data for solvers
● Learn across multiple domains, e.g. transfer learning
● Learn to optimise in unseen domains

Developing such a system will likely require an interdisciplinary approach that mixes machine-learning and optimisation techniques. The workshop solicits short papers that address mechanisms by which any of the above can be achieved. We also invite short position papers that do not contain results but propose novel avenues of work that might enable the creation of life-long learners.

Possible topics include but are not limited to:
● Per-instance Algorithm Selection
● Developing dynamic algorithm portfolios
● Algorithm Generation
● Algorithm Tuning
● Methods for Warm-Starting Optimisers
● Methods for detecting change in instance characteristics
● Feature-generation and selection
● Synthetic Instance generation
● Creating instance space maps

The workshop will be hybrid.

Organizers

Ian Miguel

Ian Miguel is a Professor and Head of the School of Computer Science at the University of St Andrews, where he also held a five-year Royal Academy of Engineering/EPSRC Research Fellowship. Ian's research focuses on Constraint Programming, and in particular the automation of constraint modelling: the task of deriving an encoding of a problem of interest so as to lead to the best solver performance. This work is situated in the Essence language for specifying combinatorial optimisation problems and the Conjure and Savile Row automated constraint modelling systems. Ian has attracted research funding of over £4M.

Christopher Stone

Christopher Stone is a Research Fellow at the School of Computer Science, where he works on instance generation methods in the CSP/SAT domains. His PhD (supervised by Prof. Hart at ENU) developed new methods to automatically generate heuristics and problem instances for combinatorial optimisation problems over multiple domains, using graph-based representations and graph rewriting systems that made use of both synthesised and real-world data.

Quentin Renau

Quentin Renau is a Research Fellow at Edinburgh Napier University. He obtained his Engineering diploma in applied mathematics from the Institut National des Sciences Appliquées in Rouen (2017) and his PhD in computer science from the French École Polytechnique in collaboration with Sorbonne Université and Thales Research and Technology (2022). He was named an Outstanding Student at the EvoStar 2021 conference. His research interests are in optimisation, search heuristics, algorithm selection and configuration, and lifelong learning systems.

LAHS — Landscape-Aware Heuristic Search

Summary

This workshop will run in hybrid format. Fitness landscape analysis and visualisation can provide significant insights into problem instances and algorithm behaviour. The aim of the workshop is to encourage and promote the use of landscape analysis to improve the understanding, the design and, eventually, the performance of search algorithms. Examples include landscape analysis as a tool to inform the design of algorithms, landscape metrics for online adaptation of search strategies, mining landscape information to predict instance hardness and algorithm runtime. The workshop will focus on, but not be limited to, topics such as:

• Visualisations
• Local optima networks
• Exploiting problem structure
• Hyperparameter optimisation
• Feature selection search spaces
• Informed search strategies
• Neural architecture search spaces
• Multi-objective fitness landscapes
• Performance and failure prediction
• Neural network loss landscape analysis
• Landscape metrics for automated algorithm selection
• Proposal of new fitness landscape features
• Real-world applications of fitness landscape analysis

We will invite submissions of three types of articles:

• research papers (up to 8 pages)
• software libraries/packages (up to 4 pages)
• position papers (up to 2 pages)

Organizers

Sarah L. Thomson

Sarah L. Thomson is a lecturer at the University of Stirling in Scotland. Her PhD was in fitness landscape analysis, with a strong focus on algorithm performance prediction. She has published extensively in this field and her work has received recognitions of its quality (shortlisted nominee for best SICSA PhD thesis in Scotland; best paper nomination at EvoCOP; being named an outstanding student of EvoSTAR on two occasions). Her research interests include fractal analysis of landscapes, explainable artificial intelligence, and real-world evolutionary computation applications.

Nadarajen Veerapen

Nadarajen Veerapen is an Associate Professor (maître de conférences) at the University of Lille, France. Previously he was a research fellow at the University of Stirling in Scotland. He holds a PhD in Computing Science from the University of Angers, France, where he worked on adaptive operator selection. His research interests include local search, hybrid methods, search-based software engineering and visualisation. He was the Electronic Media Chair for GECCO 2020 and 2021, Publicity Chair for GECCO 2019, and Student Affairs Chair for GECCO 2017 and 2018. He has previously co-organised the workshop on Landscape-Aware Heuristic Search at PPSN 2016 and GECCO 2017-2019.

Katherine Malan

Katherine Malan is an associate professor in the Department of Decision Sciences at the University of South Africa. She received her PhD in computer science from the University of Pretoria in 2014 and her MSc & BSc degrees from the University of Cape Town. She has over 25 years' lecturing experience, mostly in Computer Science, at three different South African universities. Her research interests include automated algorithm selection in optimisation and learning, fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is editor-in-chief of South African Computer Journal, associate editor for Engineering Applications of Artificial Intelligence, and has served as a reviewer for over 20 Web of Science journals.

Arnaud Liefooghe

Arnaud Liefooghe has been an Associate Professor (Maître de Conférences) with the University of Lille, France, since 2010. He is a member of the CRIStAL research center, CNRS, and of the Inria Lille-Nord Europe research center. He is also the Co-Director of the MODŌ international lab between Shinshu University, Japan, and the University of Lille. He received a PhD degree from the University of Lille in 2009, and the Habilitation in 2022. In 2010, he was a Postdoctoral Researcher with the University of Coimbra, Portugal. In 2020, he was on CNRS sabbatical at JFLI, and an Invited Professor with the University of Tokyo, Japan. Since 2021, he has been appointed as a Collaborative Professor at Shinshu University, Japan. His research activities deal with the foundations, the design and the analysis of stochastic local search and evolutionary algorithms, with a particular interest in multi-objective optimization and landscape analysis. He has co-authored over ninety scientific papers in international journals and conferences. He was a recipient of the best paper award at EvoCOP 2011 and at GECCO 2015. He has recently served as the co-Program Chair for EvoCOP 2018 and EvoCOP 2019, as the Proceedings Chair for GECCO 2018, as the co-EMO Track Chair for GECCO 2019, and as the Virtualization Chair for GECCO 2021.

Sébastien Verel

Sébastien Verel is a professor in Computer Science at the Université du Littoral Côte d'Opale, Calais, France; he was previously at the University of Nice Sophia-Antipolis, France, from 2006 to 2013. He received a PhD in computer science from the University of Nice Sophia-Antipolis, France, in 2005. His PhD work was related to fitness landscape analysis in combinatorial optimization. He was an invited researcher in the DOLPHIN Team at INRIA Lille Nord Europe, France, from 2009 to 2011. His research interests are in the theory of evolutionary computation, multiobjective optimization, adaptive search, and complex systems. A large part of his research is related to fitness landscape analysis. He has co-authored a number of scientific papers in international journals, book chapters, a book on complex systems, and international conference proceedings. He is also involved in the co-organization of EC summer schools, conference tracks and workshops, a special issue on EMO at EJOR, as well as special sessions at different international conferences.

Gabriela Ochoa

Gabriela Ochoa is a Professor of Computing Science at the University of Stirling in Scotland, UK. Her research lies in the foundations and applications of evolutionary algorithms and metaheuristics, with emphasis on adaptive search, fitness landscape analysis and visualisation. She holds a PhD from the University of Sussex, UK, and has worked at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her Google Scholar h-index is 40, and her work on network-based models of computational search spans several domains and has obtained 4 best-paper awards and 8 other nominations. She collaborates across disciplines to apply evolutionary computation in healthcare and conservation. She has been active in organisation and editorial roles in venues such as the Genetic and Evolutionary Computation Conference (GECCO), Parallel Problem Solving from Nature (PPSN), the Evolutionary Computation Journal (ECJ) and the ACM Transactions on Evolutionary Learning and Optimisation (TELO). She is a member of the executive board for the ACM interest group in evolutionary computation, SIGEVO, and the editor of the SIGEVOlution newsletter. In 2020, she was recognised by the leading European event on bio-inspired algorithms, EvoStar, for her outstanding contributions to the field.

LEOL — Large-Scale Evolutionary Optimization and Learning

Summary

Machine learning for optimization has attracted significant attention in both the machine learning and operations research communities. Novel machine learning techniques have been developed to effectively solve high-dimensional and complex optimization problems. These include automatic learning of heuristics, direct prediction of high-quality solutions, learning to branch for branch-and-bound algorithms, and learning to reduce or decompose optimization problems. Conversely, population-based metaheuristics in general, and evolutionary algorithms in particular, have also been used for high-dimensional learning tasks. Neuro-evolution, for instance, has shown promising results in tackling complex supervised and reinforcement learning problems. The aim of this workshop is to explore the synergy between machine learning and evolutionary algorithms to tackle high-dimensional optimization and learning problems. The workshop broadly covers novel techniques to enhance evolutionary algorithms via machine learning for solving complex large-scale optimization problems and/or novel algorithmic advancements of population-based metaheuristics for solving high-dimensional learning problems. Potential topics include (but are not limited to):

- automatic meta-heuristic design using machine learning (or hyper-heuristic),
- predicting high-quality solutions to warm-start evolutionary algorithms,
- predicting unknown parameters for optimization problems via machine learning,
- algorithm selection using machine learning,
- problem structure learning,
- surrogate models for expensive optimization problems,
- neural architecture search using evolutionary methods,
- deep neuro-evolution.

Organizers

Nabi Omidvar

Nabi Omidvar is a University Academic Fellow (Assistant Professor) with the School of Computing, University of Leeds, and Leeds University Business School, UK. He is an expert in large-scale global optimization and is currently a senior member of the IEEE and the chair of the IEEE Computational Intelligence Society's Taskforce on Large-Scale Global Optimization. He has made several award-winning contributions to the field, including the state-of-the-art variable interaction analysis algorithm, which won the IEEE Computational Intelligence Society's best paper award in 2017. He also coauthored a paper which won the large-scale global optimization competition at the IEEE Congress on Evolutionary Computation in 2019. Dr. Omidvar's current research interests are high-dimensional (deep) learning and the applications of artificial intelligence in financial services.

Yuan Sun

Yuan Sun is a Research Fellow in the School of Computing and Information Systems, University of Melbourne, and the Vice-Chair of the IEEE CIS Taskforce on Large-Scale Global Optimization. He completed his Ph.D. degree at the University of Melbourne and a Bachelor's degree at Peking University. His research interests include artificial intelligence, evolutionary computation, operations research, and machine learning. He has published more than twenty research papers in these areas, and his research has been nominated for the best paper award at GECCO 2020 and won the CEC 2019 Competition on Large-Scale Global Optimization.

Xiaodong Li

Xiaodong Li (M’03-SM’07) received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is a Professor with the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, data analytics, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".

NEWK — Neuroevolution at work

Summary

In recent years, inspired by the fact that natural brains are themselves the products of an evolutionary process, the quest for evolving and optimizing artificial neural networks through evolutionary computation has enabled researchers to successfully apply neuroevolution to many domains such as strategy games, robotics, big data, and so on. The reason behind this success lies in important capabilities that are typically unavailable to traditional approaches, including the ability to evolve neural network building blocks, hyperparameters, architectures, and even the learning algorithms themselves (meta-learning).
Although promising, the use of neuroevolution poses important problems and challenges for its future development.
Firstly, many of its paradigms suffer from a lack of parameter-space diversity, meaning a failure to provide diversity in the behaviors generated by the different networks.
Moreover, harnessing neuroevolution to optimize deep neural networks requires noticeable computational power and, consequently, the investigation of new trends in enhancing computational performance.

This workshop aims:
- to bring together researchers working in the fields of deep learning, evolutionary computation, and optimization to exchange new ideas about potential directions for future research;
- to create a forum of excellence on neuroevolution that will help interested researchers from various areas, ranging from computer scientists and engineers on the one hand to application-devoted researchers on the other, to gain a high-level view of the current state of the art.

Since interest in neuroevolution seems likely to keep increasing in the coming years, a workshop on this topic will not only be of immediate relevance for gaining insight into future trends, but will also provide a common ground to encourage novel paradigms and applications. Therefore, researchers putting emphasis on neuroevolution in their work are encouraged to submit. This event is also ideal for informal contacts, exchanging ideas, and discussions with fellow researchers.

The scope of the workshop is to receive high-quality contributions on topics related to neuroevolution, ranging from theoretical works to innovative applications in the context of (but not limited to):
- theoretical and experimental studies involving neuroevolution on machine learning in general, and on deep and reinforcement learning in particular
- development of innovative neuroevolution paradigms
- parallel and distributed neuroevolution methods
- new search operators for neuroevolution
- hybrid methods for neuroevolution
- surrogate models for fitness estimation in neuroevolution
- adoption of evolutionary multi-objective and many-objective optimisation techniques in neuroevolution
- new benchmark problems for neuroevolution
- applications of neuroevolution to Artificial Intelligence agents and to real-world problems.

Organizers

Ernesto Tarantino

Ernesto Tarantino was born in S. Angelo a Cupolo, Italy, in 1961. He received the Laurea degree in Electrical Engineering in 1988 from the University of Naples, Italy. He is currently a researcher at the National Research Council of Italy. After completing his studies, he conducted research in parallel and distributed computing. During the past decade his research interests have been in the fields of theory and application of evolutionary techniques and related areas of computational intelligence. He is the author of more than 100 scientific papers in international journals, books and conference proceedings. He has served as referee and organizer for several international conferences in the area of evolutionary computation.

Edgar Galvan

Edgar Galvan is a Senior Researcher in the Department of Computer Science, Maynooth University. He is the Artificial Intelligence and Machine Learning Cluster Leader at the Innovation Value Institute and at the Naturally Inspired Computation Research Group. Prior to this, he held multiple research positions at Essex University, University College Dublin, Trinity College Dublin and INRIA Paris-Saclay. He is an expert in the properties of encodings, such as neutrality and locality, in Genetic Programming, as well as a pioneer in the study of Semantic-based Genetic Programming. His research interests also include applications to combinatorial optimisation, games, software engineering and deep neural networks. Dr. Galvan has been independently ranked as one of the all-time top 1% researchers in Genetic Programming, according to University College London. He has published nearly 70 peer-reviewed publications, has over 2,300 citations, and has an h-index of 27.

De Falco Ivanoe

Ivanoe De Falco received his degree in Electrical Engineering "cum laude" from the University of Naples "Federico II", Naples, Italy, in 1987. He is currently a senior researcher at the Institute for High-Performance Computing and Networking (ICAR) of the National Research Council of Italy (CNR), where he leads the Innovative Models for Machine Learning (IMML) research group. His main fields of interest include Computational Intelligence, with particular attention to Evolutionary Computation, Swarm Intelligence and Neural Networks, Machine Learning, Parallel Computing, and their application to real-world problems, especially in the medical domain. He is a member of the World Federation on Soft Computing (WFSC), the IEEE SMC Technical Committee on Soft Computing, the IEEE ComSoc Special Interest Research Group on Big Data for e-Health, and the IEEE Computational Intelligence Society Task Force on Evolutionary Computer Vision and Image Processing, and is an Associate Editor of the Applied Soft Computing journal (Elsevier). He is the author of more than 120 papers in international journals and in the proceedings of international conferences.

Antonio Della Cioppa

Antonio Della Cioppa received the Laurea degree in Physics and the Ph.D. degree in Computer Science, both from the University of Naples "Federico II," Naples, Italy, in 1993 and 1999, respectively. From 1999 to 2003, he was a Postdoctoral Fellow at the Department of Computer Science and Electrical Engineering, University of Salerno, Salerno, Italy. In 2004, he joined the Department of Information Engineering, Electrical Engineering and Mathematical Applications, University of Salerno, where he is currently Associate Professor of Computer Science and Artificial Intelligence. His main fields of interest are in the Computational Intelligence area, with particular attention to Evolutionary Computation, Swarm Intelligence and Neural Networks, Machine Learning, Parallel Computing, and their application to real-world problems. Prof. Della Cioppa is a member of the Association for Computing Machinery (ACM), the ACM Special Interest Group on Genetic and Evolutionary Computation, the IEEE Computational Intelligence Society, and the IEEE Computational Intelligence Society Task Force on Evolutionary Computer Vision and Image Processing. He serves as Associate Editor for the Applied Soft Computing journal (Elsevier), Evolutionary Intelligence (Elsevier), and Algorithms (MDPI). He has been part of the Organizing or Scientific Committees of dozens of international conferences and workshops, and has authored or co-authored about 100 papers in international journals, books, and conference proceedings.

Umberto Scafuri

Umberto Scafuri was born in Baiano (AV) on May 21, 1957. He received his Laurea degree in Electrical Engineering at the University of Naples “Federico II” in 1985. He currently works as a technologist at the Institute of High Performance Computing and Networking (ICAR) of the National Research Council of Italy (CNR). His research activity is mainly devoted to parallel and distributed architectures and evolutionary models.

Mengjie Zhang

Mengjie Zhang is a Fellow of Royal Society of New Zealand, a Fellow of IEEE, and currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, evolutionary deep learning and transfer learning, job shop scheduling, multi-objective optimisation, and clustering and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 700 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over 10 international journals including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, the Evolutionary Computation Journal (MIT Press), ACM Transactions on Evolutionary Learning and Optimisation, Genetic Programming and Evolvable Machines (Springer), IEEE Transactions on Emerging Topics in Computational Intelligence, Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer of over 30 international journals. He has been a major chair for eight international conferences.
He has also been serving as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten world genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He was the Tutorial Chair for GECCO 2014, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021. Since 2012, he has co-chaired several parts of the IEEE CEC, SSCI, and EvoIASP/EvoApplications conferences (he has been involved in major EC conferences such as GECCO, CEC, EvoStar, and SEAL). Since 2014, he has co-organised and co-chaired the special session on evolutionary feature selection and construction at IEEE CEC and SEAL, and has delivered keynote/plenary talks at IEEE CEC 2018, IEEE ICAVSS 2018, DOCSA 2019, IES 2017, and the Chinese National Conference on AI in Law 2017. Prof Zhang was the Chair of the IEEE CIS Intelligent Systems Applications, the IEEE CIS Emergent Technologies, and the IEEE CIS Evolutionary Computation Technical Committees; a Vice-Chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing and the IEEE CIS Task Force on Evolutionary Deep Learning and Applications; and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

QD-Benchmarks — Workshop on Quality Diversity Algorithm Benchmarks

The workshop will have a dedicated page on the QD website: https://quality-diversity.github.io.

Summary

Quality Diversity (QD) algorithms are a recent family of evolutionary algorithms that aim at generating a large collection of high-performing solutions to a problem. They originated in the Generative and Developmental Systems community of GECCO between 2011 (Lehman & Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the Novelty Search with Local Competition and MAP-Elites evolutionary algorithms. Since then, many algorithms have been introduced (mostly at GECCO), inspired, for example, by surrogate modeling (Gaier et al., 2018, best paper of the CS track), by CMA-ES (Fontaine et al., 2019) or by deep neuroevolution (Colas et al., 2020; Nilsson et al., 2021, best paper of the NE track). Notably, 47% (7/15) of the papers accepted in the GECCO CS track in 2021 used or introduced novel Quality-Diversity optimization algorithms, and 56% (5/9) in 2020 (see https://quality-diversity.github.io for a list of QD papers).

In 2022 we hosted a full-day GECCO Workshop with 11 contributors. The objective of that workshop was to develop a first set of benchmark functions alongside quantitative and qualitative metrics to compare QD algorithms. The intent was to systematize and normalize these comparisons, facilitating comparison of algorithms and validation of various implementations. Similar sets of indicators and functions were developed for multi-objective algorithms (ZDT set of functions, Zitzler, Deb and Thiele, 2000) and single-objective algorithms (BBOB series of workshops at GECCO, since 2009). These benchmark suites catalysed research in these fields — we aim to do the same for quality diversity algorithms.

The objective of this important follow-up workshop is to refine and publish a preliminary set of benchmarks for QDAs, and to share and discuss teams' preliminary results on those benchmarks. Prior to the workshop, the organizers will publish a common set of benchmarking tools on our website. The major outcome of this workshop will be a joint journal article exploring the approaches and results produced by the workshop, thereby providing the first unified benchmark results with as many algorithms as possible.

Quality Diversity algorithms differ from multi-objective, multi-modal and single-objective algorithms, which is why dedicated benchmarks are needed. In particular, (1) they use a behavior space in addition to the genotype space and the fitness value(s), and (2) they aim at both covering the behavior space and finding high-performing solutions, which are often two antagonistic objectives.

This workshop will invite several types of contributions, in the form of short papers (1 to 2 pages):
1. Proposals of new or modified benchmark functions since the last workshop. These benchmarks should ideally be fast to run, easy to implement, and test specific properties (e.g., invariance to a rotation of the behavioral space, alignment between the behavior space and the fitness function, number of local optima, relevance to real-world applications, etc.); for each function, the short paper will at least describe:
• the genotype space (bounds, etc.);
• the behavior space;
• the fitness function.
2. Proposals of new or modified indicators to compare algorithms; for instance, the MAP-Elites paper (Mouret & Clune, 2015) introduced global performance, global reliability and opt-in reliability, but other papers used different indicators. "Confronting the challenge of quality diversity" (Pugh et al., 2015) introduced the QD-score indicator often used to compare NSLC-based algorithms.
3. Discussion and analysis of the results of running the existing benchmarking suite on existing or new QDA implementations.
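To make these ingredients concrete, the sketch below shows what a minimal benchmark definition (genotype space, behavior space, fitness function) and the QD-score indicator might look like in code. It is purely illustrative: the toy function, bounds, grid resolution and offset are assumptions of this example, not benchmarks proposed by the workshop.

```python
import numpy as np

def toy_benchmark(x):
    """Toy QD benchmark (hypothetical example): genotype space is
    [-1, 1]^n, the behavior descriptor is the first two genes, and
    fitness is the negated sphere function (higher is better)."""
    assert np.all((-1 <= x) & (x <= 1)), "genotype out of bounds"
    fitness = -float(np.sum(x ** 2))
    behavior = (float(x[0]), float(x[1]))   # 2-D behavior space
    return fitness, behavior

def qd_score(archive, offset=0.0):
    """QD-score (Pugh et al., 2015): sum of the fitnesses of all elites
    in the archive; an offset can make every contribution positive."""
    return sum(f + offset for f in archive.values())

# Fill a tiny grid archive (MAP-Elites style) with random genotypes.
rng = np.random.default_rng(0)
archive = {}                                # cell index -> elite fitness
for _ in range(1000):
    x = rng.uniform(-1, 1, size=5)
    f, (b0, b1) = toy_benchmark(x)
    cell = (int((b0 + 1) * 5), int((b1 + 1) * 5))  # 10x10 discretisation
    if cell not in archive or f > archive[cell]:
        archive[cell] = f
print(len(archive), qd_score(archive, offset=5.0))
```

Coverage (number of filled cells) and QD-score are exactly the kind of quantities that indicator proposals would standardise.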

The papers will be reviewed by the organizers of the workshop.

Organizers

Antoine Cully

Antoine Cully is a Lecturer (Assistant Professor) at Imperial College London (United Kingdom). His research is at the intersection between artificial intelligence and robotics. He applies machine learning approaches, such as evolutionary algorithms, to robots to increase their versatility and their adaptation capabilities. In particular, he has recently developed Quality-Diversity optimization algorithms to enable robots to autonomously learn large behavioural repertoires. For instance, this approach enabled legged robots to autonomously learn how to walk in every direction or to adapt to damage situations. Antoine Cully received the M.Sc. and the Ph.D. degrees in robotics and artificial intelligence from the Sorbonne Université in Paris, France, in 2012 and 2015, respectively, and an engineering degree from the School of Engineering Polytech’Sorbonne, in 2012. His Ph.D. dissertation has received three Best-Thesis awards. He has published several journal papers in prestigious journals including Nature, IEEE Transactions on Evolutionary Computation, and the International Journal of Robotics Research. His work was featured on the cover of Nature (Cully et al., 2015), received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), and two Best-Paper awards from GECCO (2021, 2022).

Stéphane Doncieux

Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne University, Paris, France. He is a graduate engineer of ENSEA, a French electronics engineering school. He obtained a Master's degree in Artificial Intelligence and Pattern Recognition in 1999 and defended a PhD in Computer Science in 2003. He was responsible, with Bruno Gas, for the SIMA research team from its creation in 2007 until 2011. From 2011 to 2018, he was the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 postdocs and engineers, and 11 PhD students. Since January 2019, he has been deputy director of the ISIR lab, one of the largest robotics labs in France. He has organized several workshops on ER at conferences like GECCO or IEEE-IROS and has edited 2 books. Stéphane Doncieux was co-chair of the GECCO complex systems track in 2019 and 2020. His research is in cognitive robotics, with a focus on the use of evolutionary algorithms for the synthesis of robot controllers. He has worked on selective pressures and on the use of evolutionary methods in a developmental robotics approach, in which evolutionary algorithms are used for their creativity to bootstrap a cognitive process and allow it to acquire experience that can later be redescribed in another representation for faster and more effective task resolution. This is the goal of the H2020 DREAM European project that he coordinated (http://dream.isir.upmc.fr).

Matthew C. Fontaine

Matthew C. Fontaine is a PhD candidate at the University of Southern California (2019-present). His research blends the areas of discrete optimization, generative models, quality diversity, neuroevolution, procedural content generation, scenario generation in training, and human-robot interaction (HRI) into powerful scenario generation systems that enhance safety when robots interact with humans. In the field of quality diversity, Matthew has made first-author contributions including the Covariance Matrix Adaptation MAP-Elites (CMA-ME) algorithm, and recently introduced the Differentiable Quality Diversity (DQD) problem together with the first DQD algorithm, MAP-Elites via a Gradient Arborescence (MEGA). He is also a maintainer of the Pyribs library, which implements many quality diversity algorithms for continuous optimization. Matthew received his BS (2011) and MS (2013) degrees from the University of Central Florida (UCF) and first studied quality diversity algorithms through coursework with Ken Stanley. He was a research assistant in the Interactive Realities Lab (IRL) at the Institute for Simulation and Training (IST) at UCF from 2008-2014 studying human training, a teaching faculty member at UCF from 2014 to 2017, and a software engineer in simulation at Drive.ai working on scenario generation in autonomous vehicles from 2017-2018.

Stefanos Nikolaidis

Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California and leads the Interactive and Collaborative Autonomous Robotics Systems (ICAROS) lab. His research focuses on stochastic optimization approaches for learning and evaluation of human-robot interactions. His work leads to end-to-end solutions that enable deployed robotic systems to act optimally when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo and a BS from the National Technical University of Athens. His research has been recognized with an oral presentation at NeurIPS and best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.

Adam Gaier is a Senior Research Scientist at the Autodesk AI Lab pursuing basic research in evolutionary computation and machine learning and the application of these techniques to problems in design and robotics. He received master's degrees in Evolutionary and Adaptive Systems from the University of Sussex and in Autonomous Systems from the Bonn-Rhein-Sieg University of Applied Sciences, and a PhD from Inria and the University of Lorraine, where his dissertation focused on tackling expensive design problems through the fusion of machine learning, quality diversity, and neuroevolution approaches. His PhD work received recognition at top venues across these fields, including a spotlight talk at NeurIPS (machine learning), multiple best paper awards at GECCO (evolutionary computation), a best student paper award at AIAA (aerodynamic design optimization), and a SIGEVO Dissertation Award.

Jean-Baptiste Mouret

Jean-Baptiste Mouret is a senior researcher ("directeur de recherche") at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor ("maître de conférences") at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie - Paris 6 (UPMC, now Sorbonne Université). He obtained an M.S. in computer science from EPITA in 2004, an M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005, and a Ph.D. in computer science from the same university in 2008. He was the principal investigator of an ERC grant (ResiBots - Robots with animal-like resilience, 2015-2020) and was the recipient of a French ANR young researcher grant (Creadapt - Creative adaptation by Evolution, 2012-2015). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was featured on the cover of Nature (Cully et al., 2015) and received the 2017 ISAL Award for Distinguished Young Investigator in the field of Artificial Life, the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), 3 GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), and the IEEE CEC best student paper award (2009). He co-chaired the Evolutionary Machine Learning track at GECCO 2019 and the Generative and Developmental Systems track in 2015.

John Rieffel

John Rieffel is an Associate Professor of Computer Science at Union College in Schenectady, NY, USA. Prior to joining Union he was a postdoc at Cornell University and Tufts University. He received his Ph.D. in Computer Science from Brandeis University in 2006. His undergraduate-driven research lab at Union College focuses on soft robotics, tensegrity robotics, and evolutionary fabrication. John has published at the GECCO, ALIFE/ECAL, and IEEE RoboSoft conferences, and in Soft Robotics, Artificial Life, and Proceedings of the Royal Society Interface, among others.

Julian Togelius

Julian Togelius is an Associate Professor at New York University. His research interests include AI, player modeling, procedural content generation, coevolution, neuroevolution, and genetic programming. He has co-invented some early Quality-Diversity methods, like DeLeNoX, and has recently had a hand in creating the CMA-ME algorithm. Additionally, he has been active in inventing ways of using QD for game-playing and game content generation applications. Julian received a BA in Philosophy from Lund University in 2002, an MSc in Evolutionary and Adaptive Systems from the University of Sussex in 2003, and a PhD in Computer Science from the University of Essex in 2007.

QuantOpt — Workshop on Quantum Optimization

Summary

Scope

Quantum computers are rapidly becoming more powerful and increasingly applicable to solve problems in the real world. They have the potential to solve extremely hard computational problems, which are currently intractable by conventional computers. Quantum optimization is an emerging field that focuses on using quantum computing technologies to solve hard optimization problems.

There are two main types of quantum computers: quantum annealers and quantum gate computers.

Quantum annealers are specially tailored to solve combinatorial optimization problems: they have a simpler architecture, are more easily manufactured, and can currently tackle larger problems because they offer more qubits. These computers find (near-)optimal solutions of a combinatorial optimization problem via quantum annealing, which is similar to traditional simulated annealing. Whereas simulated annealing uses ‘thermal’ fluctuations to converge to the state of minimum energy (the optimal solution), quantum annealing adds quantum tunnelling, which provides a faster mechanism for moving between states and faster processing.
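For readers unfamiliar with the classical baseline, the following minimal simulated-annealing loop illustrates the 'thermal fluctuation' mechanism described above on a toy binary problem. The problem, schedule and parameters are illustrative assumptions; quantum annealing replaces the thermal acceptance rule with tunnelling in hardware.

```python
import math
import random

def simulated_annealing(energy, n_bits, steps=20000, t0=2.0, t_end=0.01):
    """Minimal simulated annealing over binary states: 'thermal' moves
    accept uphill bit flips with probability exp(-dE / T), with T cooled
    geometrically. (Illustrative sketch only.)"""
    random.seed(42)
    x = [random.randint(0, 1) for _ in range(n_bits)]
    e = energy(x)
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)   # geometric cooling schedule
        i = random.randrange(n_bits)
        x[i] ^= 1                              # propose a single bit flip
        e_new = energy(x)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new                          # accept the move
        else:
            x[i] ^= 1                          # reject: undo the flip
    return x, e

# Toy energy: number of bits differing from the all-ones string.
best_x, best_e = simulated_annealing(lambda x: x.count(0), n_bits=30)
print(best_e)
```

The same loop applies unchanged to a QUBO energy function; only `energy` needs to be swapped.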

Quantum gate computers are general purpose quantum computers. These use quantum logic gates, a basic quantum circuit operating on a small number of qubits, for computation. Constructing an algorithm involves a fixed sequence of quantum logic gates. Some quantum algorithms, e.g., Grover's algorithm, have provable quantum speed-up. These computers can be used to solve combinatorial optimization problems using the quantum approximate optimization algorithm.

Quantum computers have also given rise to quantum-inspired computers and quantum-inspired optimisation algorithms.

Quantum-inspired computers use dedicated conventional hardware technology to emulate/simulate quantum computers. These computers offer a programming interface similar to that of quantum computers; they can currently solve much larger combinatorial optimization problems than quantum computers, and much faster than traditional computers.

Quantum-inspired optimisation algorithms use classical computers to simulate physical phenomena such as superposition and entanglement, in an attempt to retain some of the benefits of quantum computation on conventional hardware when searching for solutions.

To solve optimization problems on a quantum annealer, or on a quantum gate computer using the quantum approximate optimization algorithm, we need to reformulate them in a format suitable for the quantum hardware, in terms of qubits, biases and couplings between qubits. In mathematical terms, this requirement translates to reformulating the optimization problem as a Quadratic Unconstrained Binary Optimisation (QUBO) problem. This is closely related to the renowned Ising model. It constitutes a universal class, since in principle all combinatorial optimization problems can be formulated as QUBOs. In practice, some classes of optimization problems can be naturally mapped to a QUBO, whereas others are much more challenging to map. In quantum gate computers, Grover’s algorithm can be used to optimize a function by transforming the optimization problem into a series of decision problems. The most challenging part in this case is to select an appropriate representation of the problem so as to obtain the quadratic speedup of Grover’s algorithm over classical algorithms for the same problem.
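As a concrete illustration of the QUBO reformulation step, the sketch below encodes the classic Max-Cut problem as a QUBO matrix and minimises it by brute force, which stands in for the quantum hardware. The graph and encoding are a standard textbook example, not tied to any particular machine.

```python
import itertools
import numpy as np

def maxcut_qubo(edges, n):
    """Build the QUBO matrix Q for Max-Cut so that minimising x^T Q x over
    binary x maximises the cut: each edge (i, j) contributes
    -(x_i + x_j - 2 x_i x_j), i.e. -1 when the edge is cut."""
    Q = np.zeros((n, n))
    for i, j in edges:
        Q[i, i] -= 1
        Q[j, j] -= 1
        Q[i, j] += 2          # off-diagonal coupling (upper triangular)
    return Q

def brute_force_qubo(Q):
    """Exhaustively minimise x^T Q x; stands in for a quantum annealer."""
    n = Q.shape[0]
    return min((np.array(x) @ Q @ np.array(x), x)
               for x in itertools.product((0, 1), repeat=n))

# 4-cycle graph: the maximum cut contains all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
energy, x = brute_force_qubo(maxcut_qubo(edges, n=4))
print(int(-energy), x)   # cut size 4, e.g. x = (0, 1, 0, 1)
```

Replacing 0/1 variables `x_i` with spins `s_i = 2 x_i - 1` turns the same objective into the Ising form mentioned above.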

Content

A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The aim of the workshop is to provide a forum for both scientific presentations and discussion of issues related to quantum optimization.

As the algorithms that quantum computers use for optimization can be regarded as general types of heuristic optimization algorithms, there are potentially great benefits and synergies in bringing together the communities of quantum computing and heuristic optimization for mutual learning.

The workshop aims to be as inclusive as possible, and welcomes contributions from all areas broadly related to quantum optimization, and by researchers from both academia and industry.

Particular topics of interest include, but are not limited to:

• Formulation of optimisation problems as QUBOs (including handling of non-binary representations and constraints)
• Fitness landscape analysis of QUBOs
• Novel search algorithms to solve QUBOs
• Experimental comparisons on QUBO benchmarks
• Theoretical analysis of search algorithms for QUBOs
• Speed-up experiments on traditional hardware vs quantum(-inspired) hardware
• Decomposition of optimisation problems for quantum hardware
• Application of the quantum approximate optimization algorithm
• Application of Grover's algorithm to solve optimisation problems
• Novel quantum-inspired optimisation algorithms
• Optimization/discovery of quantum circuits
• Quantum optimisation for machine learning problems
• Optical annealing
• Dealing with noise in quantum computing
• Quantum gate optimisation and quantum coherent control

Organizers

Alberto Moraglio

Alberto Moraglio is a Senior Lecturer at the University of Exeter, UK. He holds a PhD in Computer Science from the University of Essex and Master and Bachelor degrees (Laurea) in Computer Engineering from the Polytechnic University of Turin, Italy. He is the founder of a Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design and rigorous theoretical analysis of new successful search algorithms. He gave several tutorials at GECCO, IEEE CEC and PPSN, and has an extensive publication record on this subject. He has served as co-chair for the GP track, the GA track and the Theory track at GECCO. He also co-chaired twice the European Conference on Genetic Programming, and is an associate editor of the Genetic Programming and Evolvable Machines journal. He has applied his geometric theory to derive a new form of Genetic Programming based on semantics with appealing theoretical properties, which is rapidly gaining popularity in the GP community. In the last three years, Alberto has been collaborating with Fujitsu Laboratories on optimisation on Quantum Annealing machines. He has formulated dozens of combinatorial optimisation problems in a format suitable for the quantum hardware. He is also the inventor of a patented software tool (a compiler) aimed at making these machines usable without specific expertise, by automating the translation of high-level descriptions of combinatorial optimisation problems into a low-level format suitable for the quantum hardware.

Mayowa Ayodele

Mayowa Ayodele is a Principal Researcher at Fujitsu Research of Europe, United Kingdom. She holds a PhD in Evolutionary Computation from Robert Gordon University, Scotland. In the last 10 years, a significant part of her research has been on applying different categories of algorithms for solving problems in logistics such as the scheduling of trucks and trailers, ships, and platform supply vessels. In the last few years, her research has focused on formulating single and multi-objective constrained optimisation problems as Quadratic Unconstrained Binary Optimisation (QUBO) and developing new technology as well as adapting existing quantum-inspired technology for solving such problems.

Francisco Chicano

Francisco Chicano holds a PhD in Computer Science from the University of Málaga and a Degree in Physics from the National Distance Education University. Since 2008 he has been with the Department of Languages and Computing Sciences of the University of Málaga. His research interests include quantum computing, the application of search techniques to Software Engineering problems, and the use of theoretical results to efficiently solve combinatorial optimization problems. He is on the editorial boards of the Evolutionary Computation Journal, Engineering Applications of Artificial Intelligence, the Journal of Systems and Software, ACM Transactions on Evolutionary Learning and Optimization, and Mathematical Problems in Engineering. He has also served as programme chair and Editor-in-Chief for international events.

Oleksandr Kyriienko

Dr. Oleksandr Kyriienko is a theoretical physicist and the leader of the Quantum Dynamics, Optics, and Computing group (https://kyriienko.github.io/). He is a Lecturer (Assistant Professor) in the Physics department of the University of Exeter. Oleksandr obtained his PhD degree in 2014 from the University of Iceland, and was a visiting PhD student at several institutions, including Nanyang Technological University in Singapore. From 2014 to 2017 he did postdoctoral research at the Niels Bohr Institute, University of Copenhagen. In 2017-2019 he was a Fellow at the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm, Sweden. Oleksandr’s research encompasses various areas of quantum technologies, from designing quantum algorithms and simulators to nonlinear quantum optics in two-dimensional materials. Recently, he has been working towards developing quantum machine learning and quantum-based solvers of nonlinear differential equations. Dr. Kyriienko has a strong interest in approaches to quantum optimisation, which represents one of the pinnacles of modern quantum computing.

Ofer Shir

Ofer Shir is an Associate Professor of Computer Science at Tel-Hai College and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel. Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both MSc and PhD in Computer Science from Leiden University, The Netherlands (conferred 2004, 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry – where he specialized in computational aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics. His current topics of interest include Statistical Learning within Optimization and Deep Learning in Practice, Self-Supervised Learning, Algorithmically-Guided Experimentation, Combinatorial Optimization and Benchmarking (White/Gray/Black-Box), Quantum Optimization and Quantum Machine Learning.

Lee Spector

Dr. Lee Spector is a Professor of Computer Science at Amherst College, an Adjunct Professor and member of the graduate faculty in the College of Information and Computer Sciences at the University of Massachusetts, Amherst, and an affiliated faculty member at Hampshire College, where he taught for many years before moving to Amherst College. He received a B.A. in Philosophy from Oberlin College in 1984, and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. At Hampshire College he held the MacArthur Chair, served as the elected faculty member of the Board of Trustees, served as the Dean of the School of Cognitive Science, served as Co-Director of Hampshire’s Design, Art and Technology program, supervised the Hampshire College Cluster Computing Facility, and served as the Director of the Institute for Computational Intelligence. At Amherst College he teaches computer science and directs an initiative on Artificial Intelligence and the Liberal Arts. His research and teaching focus on artificial intelligence and intersections of computer science with cognitive science, philosophy, physics, evolutionary biology, and the arts. He is the Editor-in-Chief of the Springer journal Genetic Programming and Evolvable Machines and a member of the editorial boards of the MIT Press journal Evolutionary Computation and the ACM journal Transactions on Evolutionary Learning and Optimization. He is a member of the Executive Committee of the ACM Special Interest Group on Evolutionary Computation (SIGEVO) and he has produced over 100 scientific publications. He serves regularly as a reviewer and as an organizer of professional events, and his research has been supported by the U.S. National Science Foundation and DARPA among other funding sources. Among the honors that he has received is the NSF Director's Award for Distinguished Teaching Scholars, the highest honor bestowed by the U.S. National Science Foundation for excellence in both teaching and research.

SAEOpt — Workshop on Surrogate-Assisted Evolutionary Optimisation

Summary

In many real-world optimisation problems evaluating the objective function(s) is expensive, perhaps requiring days of computation for a single evaluation. Surrogate-assisted optimisation attempts to alleviate this problem by employing computationally cheap 'surrogate' models to estimate the objective function(s) or the ranking relationships of the candidate solutions.
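As a minimal illustration of this idea, the sketch below alternates between fitting a cheap surrogate to all expensive evaluations made so far and spending the next expensive evaluation where the surrogate predicts the best value. A quadratic fit stands in for the Gaussian-process or ensemble surrogates used in practice; the objective, budget and model-management rule are illustrative assumptions.

```python
import numpy as np

def expensive_f(x):
    """Stand-in for an expensive objective (e.g. a long simulation)."""
    return (x - 0.7) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(1)
X = list(rng.uniform(0, 1, size=5))      # initial expensive evaluations
y = [expensive_f(x) for x in X]

for _ in range(10):
    # Fit a cheap quadratic surrogate to all evaluations so far.
    coeffs = np.polyfit(X, y, deg=2)
    # Search the surrogate exhaustively (cheap), then spend one expensive
    # evaluation at its predicted minimum. How candidates are chosen is
    # the 'model management' question real methods focus on.
    cand = np.linspace(0, 1, 201)
    x_new = float(cand[np.argmin(np.polyval(coeffs, cand))])
    X.append(x_new)
    y.append(expensive_f(x_new))

print(min(zip(y, X)))   # best (value, location) found so far
```

Only 15 expensive evaluations are used in total; the thousands of surrogate predictions cost essentially nothing, which is the whole appeal of the approach.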

Surrogate-assisted approaches have been widely used across the field of evolutionary optimisation, including continuous and discrete variable problems, although little work has been done on combinatorial problems. Surrogates have been employed in solving a variety of optimization problems, such as multi-objective optimisation, dynamic optimisation, and robust optimisation. Surrogate-assisted methods have also found successful applications to aerodynamic design optimisation, structural design optimisation, data-driven optimisation, chip design, drug design, robotics, and many more. Most interestingly, the need for on-line learning of the surrogates has led to a fruitful crossover between the machine learning and evolutionary optimisation communities, where advanced learning techniques such as ensemble learning, active learning, semi-supervised learning and transfer learning have been employed in surrogate construction.

Despite recent successes in using surrogate-assisted evolutionary optimisation, there remain many challenges. This workshop aims to promote the research on surrogate assisted evolutionary optimization including the synergies between evolutionary optimisation and learning. Thus, this workshop will be of interest to a wide range of GECCO participants. Particular topics of interest include (but are not limited to):

• Bayesian optimisation
• Advanced machine learning techniques for constructing surrogates
• Model management in surrogate-assisted optimisation
• Multi-level, multi-fidelity surrogates
• Complexity and efficiency of surrogate-assisted methods
• Small and big data-driven evolutionary optimization
• Model approximation in dynamic, robust, and multi-modal optimisation
• Model approximation in multi- and many-objective optimisation
• Surrogate-assisted evolutionary optimisation of high-dimensional problems
• Comparison of different modelling methods in surrogate construction
• Surrogate-assisted identification of the feasible region
• Comparison of evolutionary and non-evolutionary approaches with surrogate models
• Test problems for surrogate-assisted evolutionary optimisation
• Performance improvement techniques in surrogate-assisted evolutionary computation
• Performance assessment of surrogate-assisted evolutionary algorithms

Organizers

Alma Rahat

Dr Alma Rahat is a Senior Lecturer in Data Science at Swansea University. He is an expert in Bayesian search and optimisation for computationally expensive problems (for example, geometry optimisation using computational fluid dynamics). His particular expertise is in developing effective acquisition functions for single and multi-objective problems, and locating the feasible space. He is one of the twenty-four members of the IEEE Computational Intelligence Society Task Force on Data-Driven Evolutionary Optimization of Expensive Problems, and he has been the lead organiser for the popular Surrogate-Assisted Evolutionary Optimisation workshop at the prestigious Genetic and Evolutionary Computation Conference (GECCO) since 2016. He has a strong track record of working with industry on a broad range of optimisation problems. His collaborations have resulted in numerous articles in top journals and conferences, including a best paper in Real World Applications track at GECCO and a patent. Dr Rahat has a BEng (Hons.) in Electronic Engineering from the University of Southampton, UK, and a PhD in Computer Science from the University of Exeter, UK. He worked as a product development engineer after his bachelor's degree, and held post-doctoral research positions at the University of Exeter. Before moving to Swansea, he was a Lecturer in Computer Science at the University of Plymouth, UK.

Richard Everson

Richard Everson is Professor of Machine Learning and Director of the Institute of Data Science and Artificial Intelligence at the University of Exeter. His research interests lie in statistical machine learning and multi-objective optimisation, and the links between them. Current research is on surrogate methods, particularly Bayesian optimisation, for large expensive-to-evaluate optimisation problems, especially computational fluid dynamics design optimisation.

Jonathan Fieldsend

Jonathan Fieldsend is Professor of Computational Intelligence at the University of Exeter, UK. He has a degree in Economics from Durham University, a Masters in Computational Intelligence from the University of Plymouth and a PhD in Computer Science from the University of Exeter. He has over 100 peer-reviewed publications in the evolutionary computation and machine learning domains, with particular interests in multiple-objective optimisation and the interface between optimisation and machine learning. Over the years, he has co-organised a number of Workshops at GECCO (VizGEC, SAEOpt and EAPwU), and served as EMO Track Chair for GECCO 2019 and GECCO 2020. He is an Associate Editor of IEEE Transactions on Evolutionary Computation and ACM Transactions on Evolutionary Learning and Optimization, and on the Editorial Board of Complex & Intelligent Systems. He is a vice-chair of the IEEE Computational Intelligence Society (CIS) Task Force on Data-Driven Evolutionary Optimisation of Expensive Problems, and sits on the IEEE CIS Task Force on Multi-modal Optimisation and the IEEE CIS Task Force on Evolutionary Many-Objective Optimisation.

Handing Wang

Handing Wang received the B.Eng. and Ph.D. degrees from Xidian University, Xi'an, China, in 2010 and 2015, respectively. She is currently a professor with the School of Artificial Intelligence, Xidian University, Xi'an, China. Dr. Wang is an Associate Editor of IEEE Computational Intelligence Magazine and Complex & Intelligent Systems, and chair of the Task Force on Intelligent Systems for Health within the Intelligent Systems Applications Technical Committee of the IEEE Computational Intelligence Society. Her research interests include nature-inspired computation, multi- and many-objective optimization, multiple criteria decision making, and real-world problems. She has published over 10 papers in international journals, including IEEE Transactions on Evolutionary Computation (TEVC), IEEE Transactions on Cybernetics (TCYB), and Evolutionary Computation (ECJ).

Yaochu Jin

Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996, respectively, and the Dr.-Ing. degree from Ruhr University Bochum, Germany, in 2001. He is a Professor in Computational Intelligence at the Department of Computer Science, University of Surrey, Guildford, U.K., where he heads the Nature Inspired Computing and Engineering Group. He is also a Finland Distinguished Professor at the University of Jyvaskyla, Finland, and a Changjiang Distinguished Professor at Northeastern University, China. His main research interests include evolutionary computation, machine learning, computational neuroscience, and evolutionary developmental systems, with applications to data-driven optimization and decision-making, self-organizing swarm robotic systems, and bioinformatics. He has (co)authored over 200 peer-reviewed journal and conference papers and has been granted eight patents on evolutionary optimization. Dr Jin is the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems and Complex & Intelligent Systems. He was an IEEE Distinguished Lecturer (2013-2015) and Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015). He was the recipient of the Best Paper Award of the 2010 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology and the 2014 IEEE Computational Intelligence Magazine Outstanding Paper Award. He is a Fellow of the IEEE.

Tinkle Chugh

Dr Tinkle Chugh is a Lecturer in Computer Science at the University of Exeter. He is an Associate Editor of the Complex & Intelligent Systems journal. Between Feb 2018 and June 2020, he worked as a Postdoctoral Research Fellow in the BIG data methods for improving windstorm FOOTprint prediction project funded by the Natural Environment Research Council UK. He obtained his PhD degree in Mathematical Information Technology in 2017 from the University of Jyväskylä, Finland. His thesis was part of the Decision Support for Complex Multiobjective Optimization Problems project, in which he collaborated with Finland Distinguished Professor (FiDiPro) Yaochu Jin from the University of Surrey, UK. His research interests are machine learning, data-driven optimization, evolutionary computation, and decision-making.

SBOX-COST — Strict box-constraint optimization studies

Summary

Benchmarking plays a critical role in the design and development of optimization algorithms. The way in which benchmark suites are set up thus influences the set of algorithms recommended to practitioners and biases the goals of algorithm designers. While relying on such a procedure is generally beneficial, as it allows choosing algorithms which perform well on problems with similar characteristics to the practical problem at hand, there are potential ways in which it can bias algorithm choices. One particular aspect in which this can happen is related to the domain of the search space. In practical applications, evaluating points outside of the domain is often impossible, or not sensible, and as such, should be avoided. However, in benchmarking as practiced today, problems are well-defined even outside of the stated parameter ranges, which translates to the algorithms being able to safely ignore these ranges and operate on infeasible solutions.
Setting upper and lower limits on input variables represents the simplest type of constraint and gives rise to so-called box-constrained problems. In this workshop, we will discuss in more detail how the treatment of box-constraints imposed on the search space impacts the performance of optimization algorithms. We will provide participants with a variant of the BBOB suite for continuous, single-objective, noiseless optimization, which is originally considered by the COCO platform to be unconstrained. In practice, however, there are commonly used bounds, which are given as values to use, e.g., for initialization. For this workshop, we will convert these recommended values into tight box-constraints, outside of which evaluation returns no useful information.
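Concretely, such a conversion can be thought of as a thin wrapper around the objective: inside the box the function behaves as usual, while outside it the evaluation is consumed but returns no usable value. Below is a minimal sketch; the `[-5, 5]` range, the NaN return value, and the function names are illustrative choices, not the workshop's official implementation:

```python
import math

LOWER, UPPER = -5.0, 5.0   # illustrative bounds, in the style of BBOB's suggested ranges

def make_strict_box(objective):
    """Wrap an objective so that points outside the box yield no useful information."""
    def wrapped(x):
        if any(not (LOWER <= xi <= UPPER) for xi in x):
            return math.nan        # evaluation is counted, but no information is returned
        return objective(x)
    return wrapped

def sphere(x):                     # stand-in for a benchmark function
    return sum(xi * xi for xi in x)

f = make_strict_box(sphere)
```

An algorithm that silently evaluates out-of-bounds points now observes NaN fitness values, so its bound-handling strategy becomes directly visible in the benchmark results.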
With the inclusion of box-constraints, another aspect which biases the benchmarking-based design of algorithms for box-constrained problems is the location of the optimum relative to these domain boundaries. Functions included in the BBOB/COCO setup are rather limited in this sense, as by design they all have substantial optimum-free regions close to the aforementioned commonly used boundaries for all generated problem instances.
We encourage submissions containing benchmark results on the provided box-constrained variation of the BBOB suite of test functions. Discussion of the impact of box-constraints on algorithm performance or on reproducibility is also encouraged.

Organizers

Anna V Kononova

Anna V. Kononova is an Assistant Professor at the Leiden Institute of Advanced Computer Science. She received her MSc degree in Applied Mathematics from Yaroslavl State University (Russia) in 2004 and her PhD degree in Computer Science from the University of Leeds (UK) in 2010. After a total of 5 years of postdoctoral experience at Technical University Eindhoven (The Netherlands) and Heriot-Watt University (Edinburgh, UK), Anna spent a number of years working as a mathematician in industry. Her current research interests include the analysis of optimisation algorithms and machine learning.

Olaf Mersmann

Olaf Mersmann is a Professor for Data Science at TH Köln - University of Applied Sciences. He received his BSc, MSc and PhD in Statistics from TU Dortmund. His research interests include using statistical and machine learning methods on large benchmark databases to gain insight into the structure of the algorithm choice problem.

Diederick Vermetten

Diederick Vermetten is a PhD student at LIACS. He is part of the core development team of IOHprofiler, with a focus on the IOHanalyzer. His research interests include benchmarking of optimization heuristics, dynamic algorithm selection and configuration as well as hyperparameter optimization.

Manuel López-Ibáñez

Dr. López-Ibáñez is a senior lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. Between 2020 and 2022, he was also a "Beatriz Galindo" Senior Distinguished Researcher at the University of Málaga, Spain. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 32 journal papers, 9 book chapters and 54 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, multi-objective optimization, and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace) and the EAF package for the analysis of multi-objective optimizers (https://mlopez-ibanez.github.io/eaf/).

Richard Allmendinger

Richard is a Senior Lecturer in Data Science and the Business Engagement Lead of Alliance Manchester Business School, The University of Manchester, and a Fellow of The Alan Turing Institute, the UK's national institute for data science and artificial intelligence. Richard has a background in Business Engineering (Diplom, Karlsruhe Institute of Technology, Germany + Royal Melbourne Institute of Technology, Australia), Computer Science (PhD, The University of Manchester, UK), and Biochemical Engineering (Research Associate, University College London, UK). Richard's research interests are in the field of data and decision science, and in particular in the development and application of optimization, learning and analytics techniques to real-world problems arising in areas such as management, engineering, healthcare, sports, music, and forensics. Richard is known for his work on non-standard expensive optimization problems comprising, for example, heterogeneous objectives, ephemeral resource constraints, changing variables, and lethal optimization environments. Much of his research has been funded by grants from various UK funding bodies (e.g. Innovate UK, EPSRC, ESRC, ISCF) and industrial partners. Richard is a Member of the Editorial Board of several international journals, Vice-Chair of the IEEE CIS Bioinformatics and Bioengineering Technical Committee, Co-Founder of the IEEE CIS Task Force on Optimization Methods in Bioinformatics and Bioengineering, and contributes regularly to conference organisation and to special issues as a guest editor.

Youngmin Kim

Youngmin Kim is a PhD student at Alliance Manchester Business School (AMBS), The University of Manchester. His research focuses on the development and application of benchmark routines and algorithms for safe optimization. Youngmin has an MSc degree in Statistics from the University of Glasgow, and an MSc degree in Business Analytics from AMBS.

SWINGA — Swarm Intelligence Algorithms: Foundations, Perspectives and Challenges

Summary

Evolutionary algorithms (based on the Darwinian theory of evolution and the Mendelian theory of genetic processes) and swarm algorithms (based on the emergent behavior of natural swarms) are popular and widely used for solving various optimization tasks. These algorithms are, in general, subject to hybridization with original (unconventional) techniques that improve their performance, and to special tools/frameworks that help select the best configuration or analyze and better understand their inner dynamics, which can be very complex or exhibit various interesting patterns. Currently, many researchers are investigating performance, efficiency, convergence speed, population diversity, and dynamics, as well as original population models and visualizations, for a broad class of swarm and evolutionary algorithms.

This special workshop is focused on swarm intelligence algorithms such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Self-Organizing Migrating Algorithm (SOMA), and Artificial Bee Colony (ABC), as well as original algorithms that were not created from metaphors alone, but built on a solid foundation of balancing exploration and exploitation, techniques to prevent stagnation in local extrema, competitive-cooperative phases, and self-adaptation of movement over the search space.

This workshop invites original research papers discussing new results and novel algorithmic improvements tested on widely accepted benchmarks. It aims to bring together experts from fundamental research and various application fields to develop and introduce a fusion of techniques, deeper insights into population dynamics, and automatic configuration tools. Such research has become a vitally important part of science and engineering at both the theoretical and practical levels. A discussion of real-problem-solving experiences will also be carried out to define new open problems and challenges in this interesting and fast-growing field of research, which is currently undergoing a re-exploration of methods due to neuro-evolution. The scope will be focused on, but not limited to, the topics listed below.

List of topics:
• The theoretical aspect of swarm intelligence.
• The performance improvement, testing, and efficiency of the swarm intelligence based algorithms.
• Autoconfiguration for swarm algorithms
• Component-wise analysis of swarm algorithms
• Population dynamics analysis for swarm algorithms.
• Boundary and constraints handling strategies.
• Visualization of population dynamics in swarms.
• Explorative landscape analysis and relation with swarm algorithm performance.
• Reinforcement learning and swarm algorithms.
• Population diversity measure, control, and analysis.
• Complex systems for swarm algorithms.
• Original models of population dynamics.
• Swarm intelligence and its parallelization
• Swarm intelligence for discrete optimization
• Mutual relations amongst swarm dynamics, complex networks, and its analysis.
• Randomness, chaos, and fractals in evolutionary dynamics and their impact on algorithm performance.
• Recent advances in better understanding, fine-tuning, and adaptation for swarm/evolutionary algorithms.
• Applications (not limited to):
-- constrained optimization
-- multi-objective optimization
-- many-objective optimization
-- multimodal optimization and niching
-- expensive and surrogate-assisted optimization
-- dynamic and uncertain optimization
-- large-scale optimization
-- combinatorial optimization

Organizers

Roman Senkerik

Roman Senkerik was born in Zlin, the Czech Republic, in 1981. He received an MSc degree in technical cybernetics from the Tomas Bata University in Zlin, Faculty of Applied Informatics, in 2004, a Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Assoc. Prof. degree in Informatics from VSB – Technical University of Ostrava in 2013.

From 2008 to 2013 he was a Research Assistant and Lecturer at the Tomas Bata University in Zlin, Faculty of Applied Informatics. Since 2014 he has been an Associate Professor and, since 2017, Head of the A.I.Lab (https://ailab.fai.utb.cz/) at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters and editorial notes. His research interests are the development of evolutionary algorithms, their modifications and benchmarking, soft computing methods, and their interdisciplinary applications in optimization and cyber-security, machine learning, neuro-evolution, data science, the theory of chaos, and complex systems. He is a recognized reviewer for many leading journals in computer science/computational intelligence. He has been part of the organizing teams for special sessions/workshops/symposia at GECCO, IEEE WCCI, CEC, and SSCI events.

Ivan Zelinka

Ivan Zelinka (born in 1965, ivanzelinka.eu) is currently associated with the Technical University of Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated successively from the Technical University in Brno (1995, MSc.), UTB in Zlin (2001, Ph.D.), again from the Technical University in Brno (2004, Assoc. Prof.) and from VSB-TU (2010, Professor). Prof. Zelinka is the responsible supervisor/co-supervisor of several research projects focused on, among others, unconventional control of complex systems, security of mobile devices and communication, and the Laboratory of Parallel Computing. He has also worked on numerous grants and two EU projects, as a team member (FP5 - RESTORM) and as supervisor of the Czech team (FP7 - PROMOEVO). He is also head of the research team NAVY (http://navy.cs.vsb.cz/). His research interests are computational intelligence, cyber-security, the development of evolutionary algorithms, applications of the theory of chaos, and the control of complex systems. Prof. Zelinka was awarded the Siemens Award for his Ph.D. thesis, as well as an award from the journal Software News for his book on artificial intelligence. He is a member of the British Computer Society, Machine Intelligence Research Labs (MIR Labs), the IEEE (committee of the Czech section on Computational Intelligence), several international program committees of various conferences, and of several well-respected journals.

Pavel Kromer

Pavel Krömer graduated in Computer Science from the Faculty of Electrical Engineering and Computer Science (FEECS) of VSB-Technical University of Ostrava (VSB-TUO). He worked as an analyst, developer, and trainer in the private sector between 2005 and 2010. Since 2010, he has been with the Department of Computer Science, FEECS VSB-TUO. In 2014, he was a Postdoctoral Fellow at the University of Alberta. In 2015, he was awarded the title Assoc. Professor of Computer Science. He was a Researcher at the IT4Innovations (National Supercomputing Center) between 2011 and 2016 and has been a member of its scientific council since February 2017. Since September 1, 2017, he has been the Vice Dean for International Cooperation at FEECS. Since 2018, he has been a Senior Member of the IEEE. In his research, he focuses on computational intelligence, information retrieval, data mining, machine learning, soft computing, and real-world applications of intelligent methods. He was the principal contributor to a broad range of research projects with results published in high-impact international journals such as Soft Computing (Springer), and others published by Elsevier, Oxford University Press, and Wiley. In this field, he has contributed to a number of major conferences organized by the IEEE and ACM. He has been a reviewer for Information Sciences, IEEE Transactions on Evolutionary Computation, Swarm and Evolutionary Computation, Neurocomputing, Scientific Reports, and other scientific journals. He also acts as a project reviewer for the Research Agency (Slovakia), National Science Centre (Poland), National Research Foundation (South Africa), and the European Commission (DG CONNECT, REA).

Swagatam Das

Swagatam Das received the B. E. Tel. E., M. E. Tel. E. (Control Engineering specialization) and Ph.D. degrees, all from Jadavpur University, India, in 2003, 2005, and 2009, respectively. He is currently serving as an associate professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include evolutionary computing, deep learning, and non-convex optimization in general. Dr. Das has published more than 300 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has also served, or is serving, as an associate editor of IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, Pattern Recognition (Elsevier), Neurocomputing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Information Sciences (Elsevier). He has been a founding Section Editor of the Springer Nature Computer Science journal since 2019. Dr. Das has 18,000+ Google Scholar citations and an H-index of 63 to date. He has been associated with the international program committees of several regular international conferences, including IEEE CEC, IEEE SSCI, SEAL, GECCO, AAAI, and SEMCCO. He has acted as guest editor for special issues in journals such as IEEE Transactions on Evolutionary Computation and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE), and of the 2015 Thomson Reuters Research Excellence India Citation Award as the highest-cited researcher from India in the Engineering and Computer Science category between 2010 and 2014.

SymReg — Symbolic Regression Workshop

Summary

Symbolic regression is the search for symbolic models that describe a relationship in provided data. Symbolic regression has been one of the first applications of genetic programming and as such is tightly connected to evolutionary algorithms. However, in recent years several non-evolutionary techniques for solving symbolic regression have emerged. Especially with the focus on interpretability and explainability in AI research, symbolic regression takes a leading role among machine learning methods, whenever model inspection and understanding by a domain expert is desired. Examples where symbolic regression already produces outstanding results include modeling where interpretability is desired, modeling of non-linear dependencies, modeling with small data sets or noisy data, modeling with additional constraints, or modeling of differential equation systems.
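As a toy illustration of the core idea (searching over model structures, rather than only fitting the parameters of one fixed structure), the snippet below selects among a handful of hand-enumerated symbolic candidates by their fit to data. Real symbolic regression systems search a vastly larger expression space, typically with genetic programming; the candidate list and data-generating function here are entirely hypothetical:

```python
# toy data from a hidden relationship y = x**2 + x (hypothetical example)
xs = [i / 10 for i in range(-10, 11)]
ys = [x * x + x for x in xs]

# hand-enumerated candidate model structures: (printable form, callable)
candidates = [
    ("x",          lambda x, a: x),
    ("a*x",        lambda x, a: a * x),
    ("x**2",       lambda x, a: x * x),
    ("x**2 + a",   lambda x, a: x * x + a),
    ("x**2 + a*x", lambda x, a: x * x + a * x),
]

def mse(model, a):
    # mean squared error of a candidate model with coefficient a
    return sum((model(x, a) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# exhaustive search over structures and a small grid of coefficients
best = min(
    ((name, a, mse(model, a)) for name, model in candidates for a in (-2, -1, 0, 1, 2)),
    key=lambda t: t[2],
)
# best identifies the structure "x**2 + a*x" with a = 1 and zero error
```

The output of the search is a readable formula rather than an opaque parameter vector, which is exactly the interpretability property emphasised above.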

The focus of this workshop is to further advance the state-of-the-art in symbolic regression by gathering experts in the field of symbolic regression and facilitating an exchange of novel research ideas. Therefore, we encourage submissions presenting novel techniques or applications of symbolic regression, theoretical work on issues of generalization, size and interpretability of the models produced, or algorithmic improvements to make the techniques more efficient, more reliable and generally better controlled. Furthermore, we invite participants of the symbolic regression competition to present their algorithms and results in detail at this workshop.

Particular topics of interest include, but are not limited to:

• evolutionary and non-evolutionary algorithms for symbolic regression
• improving stability of symbolic regression algorithms
• uncertainty estimation in symbolic regression
• integration of side-information (physical laws, constraints, ...)
• benchmarking symbolic regression algorithms
• symbolic regression for scientific machine learning
• innovative symbolic regression applications

Organizers

Michael Kommenda

Michael Kommenda is a senior researcher and project manager at the University of Applied Sciences Upper Austria, where he leads several applied research projects with a focus on machine learning and data-based modeling. He received his PhD in technical sciences in 2018 from the Johannes Kepler University Linz, Austria. The title of his dissertation is Local Optimization and Complexity Control for Symbolic Regression, which condenses his research on symbolic regression so far. Michael's current research interest is improving symbolic regression so that it becomes an established regression and machine learning technique. Additionally, Michael is one of the architects of the HeuristicLab optimization framework and contributed significantly to its genetic programming and symbolic regression implementation.

William La Cava

William La Cava is an Assistant Professor in the Computational Health Informatics Program (CHIP) at Boston Children’s Hospital and Harvard Medical School. He received his PhD from UMass Amherst with a focus on interpretable modeling of dynamical systems. Prior to joining CHIP, he was a post-doctoral fellow and research associate in the Institute for Biomedical Informatics at the University of Pennsylvania.

Gabriel Kronberger

Gabriel Kronberger is a full professor at the University of Applied Sciences Upper Austria and has been working on algorithms for symbolic regression since his PhD thesis, which he defended in 2010. He is currently heading the Josef Ressel Center for Symbolic Regression (https://symreg.at), a five-year nationally funded effort focused on developing improved SymReg methods and applications in collaboration with several Austrian company partners. His current research interests are symbolic regression for scientific machine learning and industrial applications. Gabriel has authored or co-authored 94 publications (SCOPUS) and has been a member of the Program Committee for the GECCO Genetic Programming track since 2016.

Steven Gustafson

Steven Gustafson received his PhD in Computer Science and Artificial Intelligence, and shortly thereafter was named one of IEEE Intelligent Systems' "AI's 10 to Watch" for his work in algorithms that discover algorithms. For more than 10 years at GE's corporate R&D center he was a leader in AI and a successful technical lab manager, all while inventing and deploying state-of-the-art AI systems for almost every GE business, from GE Capital to NBC Universal and GE Aviation. He has over 50 publications and 13 patents, and was a co-founder and Technical Editor-in-Chief of the Memetic Computing Journal. Steven has chaired various conferences and workshops, including the first Symbolic Regression and Modeling (SRM) Workshop at GECCO 2009 and subsequent workshops from 2010 to 2014. As the Chief Scientist at Maana, a Knowledge Platform software company, he invented and architected new AutoML and NLP techniques with publications in AAAI and IJCAI. Steven is currently the CTO of Noonum, an investment intelligence company, that is pushing the state-of-the-art of large scale knowledge graph, NLP and machine learning decision support systems.