Has NEAT changed in 20 years?


This algorithm is more than 20 years old, so has it changed? In that time the prevailing architectures in text generation and classification have shifted (LSTM -> Transformer), and AlexNet and ResNet have appeared. But NEAT has stayed the same? If it has changed, then how?

Upd.: I have heard of HyperNEAT, its continuation ES-HyperNEAT (which can also evolve the substrate), and CoDeepNEAT. But it seemed to me that they were created for other domains where plain NEAT is not used and that they are not interchangeable with it (not least because I still often come across ordinary NEAT). Is that wrong, and are they simply its logical continuation?

Evgenia Papavasileiou et al. published a 2021 literature review titled "A Systematic Literature Review of the Successors of 'NeuroEvolution of Augmented Topologies'", which identifies 60 methods descended from the original paper. The review appears to be freely available on MIT Press Direct.

But for the sake of future explorers I will list a selection of descendant algorithms in no particular order: odNEAT(v2), ES-HyperNEAT(-LEO), Multiagent HyperNEAT, NEATfields, Coevolutionary NEAT, and HA-NEA.

NEAT, essentially a subset of genetic algorithms (GAs), excels where smooth gradient-based learning struggles, for example when the objective being optimized, such as a network's architecture, is non-differentiable. Like traditional GAs, NEAT relies on population-based training, which is inherently computationally expensive compared to gradient-based optimizers like Adam, and that is unlikely to change without a major breakthrough in non-differentiable optimization theory. This makes it a less competitive area than mainstream deep learning tasks such as image classification or text generation.
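To make the "population-based, gradient-free" point concrete, here is a minimal sketch using the neat-python package on the classic XOR toy task. The config file name and the fitness formula are placeholders, and the genome/population settings live in that config file rather than in code; the point is only that fitness is an arbitrary score, so nothing needs to be differentiable.

```python
import neat  # pip install neat-python

# Toy dataset: XOR, a classic NEAT demo.
XOR_DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
            ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def eval_genomes(genomes, config):
    # Every genome in the population is decoded into a network and scored;
    # selection, crossover, speciation, and topology mutation all happen
    # inside Population.run(), with no gradients anywhere.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        error = sum((net.activate(x)[0] - y) ** 2 for x, y in XOR_DATA)
        genome.fitness = 4.0 - error  # higher is better

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat-config.ini")  # hypothetical config file path
pop = neat.Population(config)
winner = pop.run(eval_genomes, 50)  # evolve for up to 50 generations
```

The expensive part is exactly what this loop shows: every generation, every genome in the population has to be evaluated on the task, which is why NEAT-style methods scale poorly next to a single Adam-trained model.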

Having said that, there have been some big changes for NEAT in its 20-year history so far. NEAT struggled with scalability on high-dimensional problems such as image processing or multi-agent RL systems. Stanley et al. (2009) proposed HyperNEAT, which maps spatial connectivity patterns across a substrate instead of evolving weights directly, enabling it to handle much larger neural networks efficiently (see the sketch below). Traditional NEAT also struggled to evolve deep architectures effectively; Miikkulainen et al. (2017) proposed CoDeepNEAT to bridge that gap and enable the design of more modular and scalable deep architectures that include layers similar to CNNs and LSTMs.
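The following toy sketch illustrates only the core HyperNEAT idea, not the full algorithm: a small evolved function (the CPPN, here faked with fixed parameters) is queried with pairs of node coordinates and "paints" the weights of an arbitrarily large substrate, so genome size is decoupled from network size. All names and numbers below are made up for illustration.

```python
import numpy as np

def toy_cppn(x1, y1, x2, y2, params):
    # Stand-in for an evolved CPPN: a fixed composition of smooth functions
    # of the source (x1, y1) and target (x2, y2) node coordinates.
    a, b, c = params
    return a * np.sin(b * (x1 - x2)) + c * np.exp(-((y1 - y2) ** 2))

def build_substrate_weights(in_coords, out_coords, params, threshold=0.2):
    # Query the CPPN once per (input node, output node) pair.
    W = np.zeros((len(out_coords), len(in_coords)))
    for i, (x2, y2) in enumerate(out_coords):
        for j, (x1, y1) in enumerate(in_coords):
            w = toy_cppn(x1, y1, x2, y2, params)
            if abs(w) > threshold:   # weak expressions become "no connection"
                W[i, j] = w
    return W

# A 16x16 input grid mapped to a 4-unit output layer: 1024 potential weights,
# but the evolved genome only has to encode the CPPN (here, 3 parameters).
in_coords = [(x, y) for x in np.linspace(-1, 1, 16) for y in np.linspace(-1, 1, 16)]
out_coords = [(x, 0.0) for x in np.linspace(-1, 1, 4)]
W = build_substrate_weights(in_coords, out_coords, params=(1.0, 3.0, 0.5))
print(W.shape)  # (4, 256)
```

Because the weights are generated from node geometry, the same CPPN can express regular, repeating connectivity patterns (symmetry, locality) across substrates of different sizes, which is what lets HyperNEAT scale to inputs like images.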

From the abstract of the CoDeepNEAT paper ("Evolving Deep Neural Networks", Miikkulainen et al., 2017): "As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future."
