Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113668
Title: Evolutionary computation in the era of large language model: survey and roadmap
Authors: Wu, X 
Wu, SH 
Wu, J 
Feng, L
Tan, KC 
Issue Date: Apr-2025
Source: IEEE transactions on evolutionary computation, Apr. 2025, v. 29, no. 2, p. 534-554
Abstract: Large language models (LLMs) have not only revolutionized natural language processing but also extended their prowess to various domains, marking a significant stride toward artificial general intelligence. Despite differing in objectives and methodologies, LLMs and evolutionary algorithms (EAs) share a common pursuit of applicability to complex problems. On the one hand, EAs can provide an optimization framework for further enhancing LLMs under closed-box settings, empowering LLMs with flexible global search capacities. On the other hand, the abundant domain knowledge inherent in LLMs can enable EAs to conduct more intelligent searches. Furthermore, the text-processing and generative capabilities of LLMs aid in deploying EAs across a wide range of tasks. Building on these complementary advantages, this article provides a thorough review and a forward-looking roadmap, categorizing the reciprocal inspiration into two main avenues: 1) LLM-enhanced EA and 2) EA-enhanced LLM. Integrated synergy methods are further introduced to exemplify the complementarity between LLMs and EAs in diverse scenarios, including code generation, software engineering, neural architecture search, and various generation tasks. As the first comprehensive review focused on EA research in the era of LLMs, this article provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs. The identified challenges and future directions offer guidance for researchers and practitioners seeking to unlock the full potential of this collaboration in propelling advances in optimization and artificial intelligence. We have created a GitHub repository to index the relevant papers: https://github.com/wuxingyu-ai/LLM4EC.
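The "LLM-enhanced EA" avenue described in the abstract can be illustrated with a minimal sketch: an evolutionary loop in which an LLM would serve as the variation operator, proposing offspring from parent solutions for a closed-box objective. This is an assumption-laden illustration, not the paper's method: `llm_propose` below is a hypothetical placeholder for a real LLM call and simply flips one bit so the sketch runs end to end, and OneMax stands in for the closed-box objective.

```python
import random

def llm_propose(parent: str) -> str:
    """Hypothetical stand-in for an LLM-driven mutation.

    In the surveyed setting, an LLM would be prompted with the parent
    solution and asked to suggest a promising variation; here we just
    flip one random bit so the example is self-contained.
    """
    i = random.randrange(len(parent))
    bit = "1" if parent[i] == "0" else "0"
    return parent[:i] + bit + parent[i + 1:]

def fitness(candidate: str) -> int:
    """Closed-box objective (OneMax as a toy example): count of ones."""
    return candidate.count("1")

def evolve(pop_size: int = 10, length: int = 16, generations: int = 50) -> str:
    """Elitist (mu + lambda) loop with the 'LLM' as variation operator."""
    population = ["".join(random.choice("01") for _ in range(length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Ask the variation operator for one offspring per parent,
        # then keep the best pop_size individuals overall.
        offspring = [llm_propose(p) for p in population]
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return population[0]

if __name__ == "__main__":
    random.seed(0)
    best = evolve()
    print(best, fitness(best))
```

Replacing `llm_propose` with an actual model query is where the survey's taxonomy applies: the EA supplies the global search scaffold, while the LLM contributes domain knowledge to each variation step.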
Keywords: Algorithm generation
Evolutionary algorithm (EA)
Large language model (LLM)
Neural architecture search (NAS)
Optimization problem
Prompt engineering
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on evolutionary computation 
ISSN: 1089-778X
EISSN: 1941-0026
DOI: 10.1109/TEVC.2024.3506731
Rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication X. Wu, S. -H. Wu, J. Wu, L. Feng and K. C. Tan, "Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap," in IEEE Transactions on Evolutionary Computation, vol. 29, no. 2, pp. 534-554, April 2025 is available at https://doi.org/10.1109/TEVC.2024.3506731.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Wu_Evolutionary_Computation_Era.pdf
Description: Pre-Published version
Size: 2.99 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
