LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization

Abstract

Large language models (LLMs) have emerged as powerful tools capable of accomplishing a broad spectrum of tasks. One area where they have made a significant impact is code generation. Here, we propose to use the coding abilities of LLMs to introduce meaningful variations to code defining neural networks. Meanwhile, Quality-Diversity (QD) algorithms are known to discover diverse and robust solutions. By merging the code-generating abilities of LLMs with the diversity and robustness of QD solutions, we introduce LLMatic, a Neural Architecture Search (NAS) algorithm. While LLMs struggle to conduct NAS directly through prompts, LLMatic uses a procedural approach, leveraging QD for both prompts and network architectures to create diverse and high-performing networks. We test LLMatic on the CIFAR-10 and NAS-bench-201 benchmarks, demonstrating that it can produce competitive networks while evaluating just 2,000 candidates, even without prior knowledge of the benchmark domain or exposure to any previously top-performing models for the benchmark. The open-source code is available at https://github.com/umair-nasir14/LLMatic.
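To make the loop the abstract describes concrete, below is a minimal, illustrative sketch of how an LLM-driven mutation operator can sit inside a MAP-Elites-style QD archive. It is not the paper's implementation: the function names (`mutate_with_llm`, `evaluate`, `descriptor`) and the depth/size descriptor are assumptions for illustration, and the LLM call and training step are replaced with stand-ins so the sketch runs on its own.

```python
"""Sketch of an LLMatic-style loop: a MAP-Elites archive whose mutation
operator asks an LLM to rewrite network-defining code. All helper names
and the archive descriptor are illustrative assumptions."""

import random

SEED_NETWORK = """\
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
        )

    def forward(self, x):
        return self.layers(x)
"""

def mutate_with_llm(code: str) -> str:
    """Stand-in for the LLM call: the prompt would ask the model to add,
    remove, or alter a layer in `code`. Returned unchanged here so the
    sketch runs without an API key."""
    prompt = f"Improve this PyTorch network by changing one layer:\n{code}"
    _ = prompt  # would be sent to a code-generating LLM
    return code

def evaluate(code: str) -> float:
    """Stand-in for briefly training the candidate (e.g. on CIFAR-10)
    and returning validation accuracy."""
    return random.random()

def descriptor(code: str) -> tuple[int, int]:
    """Map a network to an archive cell; here a crude (depth, size)
    proxy counted from the source text."""
    depth = code.count("nn.")
    return (min(depth // 2, 9), min(len(code) // 200, 9))

# Archive cell -> (fitness, network code); each cell keeps its best elite.
archive: dict[tuple[int, int], tuple[float, str]] = {}

def try_insert(code: str) -> None:
    fit, cell = evaluate(code), descriptor(code)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, code)

try_insert(SEED_NETWORK)
for _ in range(100):  # the paper's budget is 2,000 candidate evaluations
    _, parent = random.choice(list(archive.values()))  # pick a random elite
    try_insert(mutate_with_llm(parent))

best = max(archive.values())
print(f"best fitness {best[0]:.3f} across {len(archive)} filled cells")
```

The key design point this sketch captures is that quality and diversity are maintained jointly: a mutated network only displaces the elite in its own archive cell, so weaker but structurally different architectures survive as future parents.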

Publication
In Proceedings of the Genetic and Evolutionary Computation Conference
Muhammad Umair Nasir
Christopher Cleghorn
Steven James