Evolutionary algorithm outperforms deep-learning machines at video games

(technologyreview.com)

125 points | by hunglee2 2098 days ago

8 comments

  • mindcrime 2098 days ago
    There are some things about this article that irk me, like this:

    Neural networks have garnered all the headlines, but a much more powerful approach is waiting in the wings.

    Granted, it's probably written for lay people who don't know much about AI/ML techniques, but this is still pretty sloppy. It hasn't been shown that EAs are "more powerful" than NNs, and I'm not even sure it makes sense to use a term like "more powerful" at all in this context. And EA approaches aren't some new up-and-comer that hasn't had its chance yet... those techniques have been around for a long time as well.

    All of that said, I'm a big fan of EAs, dating back to when I implemented a parallel GA optimizer in a parallel programming class in college. I've been fascinated with them ever since, and I do think that evolutionary approaches have a lot of potential. So I'm happy to see some positive press directed their way, but I still get annoyed with some of the hand-wavy stuff and sloppy language.

    Anyway, one thing about (some|many|most?) EA approaches is that they parallelize very well. And depending on what you're doing, you aren't necessarily doing linear algebra / matrix math, so you can likely accomplish a lot without spending beaucoup $$ on GPUs. A Beowulf cluster of CPU machines can be pretty effective.
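
    To make that concrete, here's a minimal sketch of what I mean (toy one-max fitness, all names mine): the expensive fitness evaluations fan out across CPU cores with nothing fancier than a process pool.

      import random
      from multiprocessing import Pool

      # Toy fitness: count the 1-bits in a fixed-length genome.
      def fitness(genome):
          return sum(genome)

      def mutate(genome, rate=0.01):
          return [b ^ 1 if random.random() < rate else b for b in genome]

      def evolve(pop_size=200, genome_len=100, generations=50):
          pop = [[random.randint(0, 1) for _ in range(genome_len)]
                 for _ in range(pop_size)]
          with Pool() as pool:  # one worker per CPU core
              for _ in range(generations):
                  scores = pool.map(fitness, pop)  # the embarrassingly parallel step
                  ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
                  parents = ranked[:pop_size // 2]
                  pop = parents + [mutate(random.choice(parents)) for _ in parents]
          return max(pop, key=fitness)

      if __name__ == "__main__":
          print(sum(evolve()), "of 100 bits set")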

    • eggy 2098 days ago
      I read Koza's "Genetic Programming: On the Programming of Computers by Means of Natural Selection" back in 1993, and a lot of the holdups then were that computers and memory just weren't up to large solution-space trials, plus dead ends due to the choice of starting functions to evolve. Today, TWEANNs (Topology and Weight Evolving Neural Networks) and other methods are really a good way to go. I still like the early examples in Koza's GP where you give it a few functions and it evolves a boolean function. I had started studying neural networks way back in the 80s and hit pay dirt with Mark Watson's 1991 "Common LISP Modules. Artificial Intelligence in the Era of Neural Networks and Chaos Theory", which threw me into NLP, chaos, and ANNs.
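
      For anyone who hasn't seen those early Koza examples, here's a toy sketch in the same spirit (my own throwaway representation, mutation-only for brevity, where real GP also uses subtree crossover): give it and/or/not and let it evolve a tree matching the XOR truth table.

        import random

        FUNCS = {'and': 2, 'or': 2, 'not': 1}   # the primitive set you "give it"
        TERMS = ['a', 'b']

        def random_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(TERMS)
            f = random.choice(list(FUNCS))
            return (f,) + tuple(random_tree(depth - 1) for _ in range(FUNCS[f]))

        def evaluate(tree, env):
            if isinstance(tree, str):
                return env[tree]
            op, *args = tree
            vals = [evaluate(a, env) for a in args]
            if op == 'and':
                return vals[0] and vals[1]
            if op == 'or':
                return vals[0] or vals[1]
            return not vals[0]

        CASES = [{'a': a, 'b': b} for a in (0, 1) for b in (0, 1)]

        def fitness(tree):  # truth-table rows matched (4 = a perfect XOR)
            return sum(evaluate(tree, c) == bool(c['a'] ^ c['b']) for c in CASES)

        def mutate(tree):   # replace a random subtree with a fresh one
            if isinstance(tree, str) or random.random() < 0.3:
                return random_tree(2)
            op, *args = tree
            i = random.randrange(len(args))
            args[i] = mutate(args[i])
            return (op,) + tuple(args)

        pop = [random_tree() for _ in range(300)]
        for gen in range(50):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == len(CASES):
                break
            pop = pop[:100] + [mutate(random.choice(pop[:100])) for _ in range(200)]
        print(pop[0], fitness(pop[0]))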
  • aub3bhat 2098 days ago
    To quote Ben Recht

    "When you end up with a bunch of papers showing that genetic algorithms are competitive with your methods, this does not mean that we’ve made an advance in genetic algorithms. It is far more likely that this means that your method is a lousy implementation of random search."

    http://www.argmin.net/2018/02/20/reinforce/
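
    His point is easy to see in a toy sketch (mine, not from the post). Both loops below only ever sample f, the way an RL method only ever samples episode returns, and the "policy gradient" update is just a smoothed difference along random directions:

      import numpy as np

      def f(theta):  # a black box we can only sample, like an episode return
          return -np.sum((theta - 3.0) ** 2)

      theta_rs = np.zeros(5)   # (a) pure random search
      theta_pg = np.zeros(5)   # (b) REINFORCE-flavored estimator
      sigma, lr = 0.1, 0.02

      for _ in range(2000):
          eps = np.random.randn(5)
          # (a) hill-climb on sampled perturbations
          if f(theta_rs + sigma * eps) > f(theta_rs):
              theta_rs += sigma * eps
          # (b) estimate the gradient from two samples along eps
          g = (f(theta_pg + sigma * eps) - f(theta_pg - sigma * eps)) / (2 * sigma) * eps
          theta_pg += lr * g

      print(f(theta_rs), f(theta_pg))   # both walk theta toward 3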

    • soVeryTired 2098 days ago
      The article seems reasonable and well-argued. But policy gradients are a major cornerstone of reinforcement learning - just about every textbook will dedicate some time to them.

      So how can we reconcile that observation with the arguments in the article? Is Recht overstating his case, or is this a big screw-up by the field in general?

      Can anyone who knows about reinforcement learning weigh in?

  • wholemoley 2097 days ago
    I've been using the python-neat library in OpenAI's Retro with some success. And while it works quickly, it normally finds local maxima. It seems to struggle with long sequences. And defining the fitness function/parameters is an art form.

    Here's a video of Donkey Kong Country played by python-neat in OpenAI's Retro. It took 8 generations of 20 genomes to beat level one. I'll post the code if anyone's interested.

    https://vimeo.com/280611464
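
    Until then, the shape of it looks roughly like this (a sketch, not my actual code: the game id and config path are placeholders, and the cumulative-reward fitness is exactly the part that takes all the tuning):

      import neat
      import retro

      def eval_genomes(genomes, config):
          env = retro.make(game='DonkeyKongCountry-Snes')  # placeholder id
          for genome_id, genome in genomes:
              net = neat.nn.FeedForwardNetwork.create(genome, config)
              obs = env.reset()
              genome.fitness, done = 0.0, False
              while not done:
                  # crude downsample so the evolved net stays small;
                  # num_inputs in the config file must match this size
                  x = obs[::8, ::8].mean(axis=2).flatten() / 255.0
                  buttons = [o > 0.5 for o in net.activate(x)]
                  obs, rew, done, info = env.step(buttons)
                  genome.fitness += rew   # the art-form part
          env.close()

      config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                           neat.DefaultSpeciesSet, neat.DefaultStagnation,
                           'neat-config')   # population size etc. live here
      pop = neat.Population(config)
      pop.add_reporter(neat.StdOutReporter(True))
      winner = pop.run(eval_genomes, 8)     # 8 generations, as above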

  • rdlecler1 2097 days ago
    It seems clear to me that the biggest advances will be made through evolutionary and developmental neural networks, where evolution lays down the algorithm that builds the gross neural network architecture and learning then refines it. However, this will need massive amounts of computational power, because you have G generations of population size P, and each individual phenotype needs to go through a developmental step (neurogenesis) and then an evaluation step. On top of that, we need a good genotype-to-phenotype map specifically for neural networks.

    Conveniently, the gene regulatory networks that would control cell growth, division, and wiring up the neurons are themselves represented mathematically as neural networks, so in effect you're evolving one class of neural networks that builds another class of neural networks. Nature is quite elegant.
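
    A toy skeleton of that loop (every piece is a stand-in: the outer product plays the role of a GRN running neurogenesis, and matching a fixed target matrix plays the role of the task), just to show where the G x P x (develop + evaluate) cost comes from:

      import numpy as np

      rng = np.random.default_rng(0)
      TARGET = rng.standard_normal((8, 8))   # stand-in "task"

      def develop(genotype):                 # genotype-to-phenotype map
          return np.outer(genotype, genotype)

      def evaluate(phenotype):               # task trial
          return -np.sum((phenotype - TARGET) ** 2)

      G, P = 100, 50                         # G generations of population size P
      population = [rng.standard_normal(8) for _ in range(P)]
      for generation in range(G):            # every genotype is developed,
          population.sort(key=lambda g: evaluate(develop(g)), reverse=True)
          parents = population[:P // 4]      # then evaluated, every generation
          population = [parents[rng.integers(len(parents))]
                        + 0.1 * rng.standard_normal(8) for _ in range(P)]
      print(max(evaluate(develop(g)) for g in population))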

    • MrQuincle 2097 days ago
      Check out HyperNEAT and, for example, the work from Josh Bongard on GRNs. The shorthand is evo-devo.

      https://m.youtube.com/playlist?list=PLAuiGdPEdw0iRhEnF5yPuqe...

      I once built a simulator that used evo-devo to evolve the morphology of modular robot organisms in response to environmental factors, completing evo-devo-eco. If I remember right, it created a snake at night and disassembled it during the day. The fitness function becomes complicated, though. I think it was something like a distance function over body morphology plus an environmental condition (light).

    • Eridrus 2097 days ago
      Everyone who has an approach that hasn't worked likes to say "the right thing to do is to combine the best of both worlds", and it's sort of tautologically true that if you have orthogonal approaches that can be meaningfully combined you will do better.

      The unanswered implicit question is how to do this combination, and whether the actual gain will be large enough to justify a more complicated system.

      So, yes, we should try combining symbolic AI, decision trees, genetic algorithms, etc with deep learning, but the results from these combinations haven't all been super convincing.

  • Drdrdrq 2098 days ago
    It would be interesting to see how good the combination of EA and NN would be... This is basically us, humans: evolution + learning. Have there been any attempts to combine the two?
    • ufo 2098 days ago
      Despite the catchy names, Genetic Algorithms and Neural Networks don't work quite the same way as their biological counterparts.
      • sprt 2098 days ago
        I thought GAs did, but I do lack knowledge in the area. Could you highlight some differences?
        • ufo 2097 days ago
          "Genetic algorithms" describes a wide range of techniques for adding some extra variety to search algorithms. You maintain a population of candidate solutions and at each step you improve them by a combination of local search (like gradient descent or greedy neighbor search) and solution mixing (to help escape local minima). There are lots of ways to go about this: how to decide what population to keep, what local optimization to use, and how to mix the candidate solutions.

          If you look closely, all of these are not quite how biology works. In biology the population is decided by natural or artificial selection while in a GA there may be more factors at play (such as favoring a more diverse pool). The way solutions are represented is also different. In biology you have genes which behave according to the laws of genetics. In a GA, on the other hand, the solution representation and the operations to mutate and mix solutions are carefully planned by an intelligent designer.
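
          For example, for a tour-ordering problem the designer picks a permutation representation and hand-builds operators that keep it a permutation by construction, which is nothing like the laws of genetics (toy sketch):

            import random

            def mutate(tour):        # swap mutation: stays a permutation
                i, j = random.sample(range(len(tour)), 2)
                tour = tour[:]
                tour[i], tour[j] = tour[j], tour[i]
                return tour

            def crossover(a, b):     # order crossover: also permutation-safe
                i, j = sorted(random.sample(range(len(a)), 2))
                middle = a[i:j]
                rest = [c for c in b if c not in middle]
                return rest[:i] + middle + rest[i:]

            a = random.sample(range(10), 10)
            b = random.sample(range(10), 10)
            child = mutate(crossover(a, b))
            assert sorted(child) == list(range(10))   # invariant held by design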

      • benp84 2097 days ago
        "All models are wrong. Some are useful."
    • flopska 2098 days ago
      There are some papers from Prof. Braun describing experiments where he uses mutations from genetic / evolutionary algorithms to find the optimal structure of a neural net.
    • mrfusion 2097 days ago
      In my NN class, Thomas tried using an EA to evolve the weights for an NN instead of using backpropagation. Surprisingly, it performed way worse.
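
      Roughly what that experiment looks like, as a toy reconstruction of my own (not Thomas's actual code): a fixed 2-2-1 network on XOR, weights evolved with truncation selection and Gaussian mutation instead of backprop.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        Y = np.array([0.0, 1.0, 1.0, 0.0])             # XOR targets

        def forward(w, x):   # fixed 2-2-1 net, all 9 weights in one vector
            W1, b1 = w[:4].reshape(2, 2), w[4:6]
            W2, b2 = w[6:8], w[8]
            h = np.tanh(x @ W1 + b1)
            return 1 / (1 + np.exp(-(h @ W2 + b2)))

        def loss(w):
            return np.mean((forward(w, X) - Y) ** 2)

        pop = [rng.standard_normal(9) for _ in range(50)]
        for gen in range(300):
            pop.sort(key=loss)                          # keep the 10 best,
            pop = pop[:10] + [e + 0.2 * rng.standard_normal(9)
                              for e in pop[:10] for _ in range(4)]
        pop.sort(key=loss)
        print(loss(pop[0]), forward(pop[0], X).round(2))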
    • nothis 2098 days ago
      For what it’s worth, the final paragraph says so, I believe. No links, though.
    • awb 2098 days ago
      Was just thinking the same thing: Deep Evolution
    • EGreg 2098 days ago
      Isn’t that what AlphaZero did with Monte Carlo Tree Search?

      It basically looked only at outcomes, backed them up through a Monte Carlo tree, and assigned weights without preconceived training sets.

      And it outperformed the bespoke programs built on existing training data.
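
      The core loop really is just "sample outcomes, back the values up the tree". Here's a toy UCT sketch on Nim rather than Go, without AlphaZero's network guidance (my own example):

        import math, random

        # Toy Nim: state is (stones, player); take 1-3, taking the last wins.
        def moves(s):
            return [m for m in (1, 2, 3) if m <= s[0]]

        def play(s, m):
            return (s[0] - m, 1 - s[1])

        class Node:
            def __init__(self, state):
                self.state, self.children, self.n, self.w = state, {}, 0, 0.0

        def rollout(s):                  # random playout; returns the winner
            while s[0] > 0:
                s = play(s, random.choice(moves(s)))
            return 1 - s[1]              # whoever just took the last stone

        def uct(node):
            s = node.state
            if s[0] == 0:                                  # terminal: outcome
                winner = 1 - s[1]
            elif len(node.children) < len(moves(s)):       # expand + rollout
                m = [m for m in moves(s) if m not in node.children][0]
                child = node.children[m] = Node(play(s, m))
                winner = rollout(child.state)
                child.n += 1
                child.w += (winner != child.state[1])
            else:                                          # select by UCB1
                m = max(node.children, key=lambda m: (
                    node.children[m].w / node.children[m].n
                    + math.sqrt(2 * math.log(node.n) / node.children[m].n)))
                winner = uct(node.children[m])
            node.n += 1                                    # back the value up:
            node.w += (winner != s[1])                     # wins for the mover
            return winner

        root = Node((10, 0))
        for _ in range(5000):
            uct(root)
        print(max(root.children, key=lambda m: root.children[m].n))  # take 2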

  • jaclaz 2097 days ago
    Maybe largely off-topic, but I still remember the fun I had as a kid actually building MENACE (the Machine Educable Noughts And Crosses Engine) after reading a Martin Gardner article in Scientific American:

    http://www.mscroggs.co.uk/blog/19

    For the beads I used Smarties, so each time the machine lost I ate the "wrong colour one".
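
    The whole scheme fits in a few lines (a toy sketch of mine with opaque state labels; in the real MENACE each "box" is a noughts-and-crosses position and the beads are per-move):

      import random
      from collections import defaultdict

      boxes = defaultdict(lambda: {m: 3 for m in range(9)})  # 3 beads per move

      def choose(state):               # draw a bead from the state's box
          beads = boxes[state]
          legal = [m for m, n in beads.items() if n > 0]
          return random.choices(legal, weights=[beads[m] for m in legal])[0]

      def reinforce(history, won):     # add beads on a win, eat one on a loss
          for state, move in history:
              boxes[state][move] = max(boxes[state][move] + (3 if won else -1), 0)

      # toy usage with opaque state labels:
      game = [("s0", choose("s0")), ("s1", choose("s1"))]
      reinforce(game, won=False)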

  • plainOldText 2098 days ago
    I've been meaning to read Handbook of Neuroevolution Through Erlang. Has anyone done it? If so, what's your opinion about it?
    • mindcrime 2098 days ago
      I'd like to read that book, but the pricing... wow. On Amazon the cheapest new copy is ~ $138.00. And this is one of those cases that shows how goofy pricing for marketplace sellers gets: the cheapest used copy is even worse, at a whopping $230.59. And that's for one in "Good" condition.

      Probably all of these resellers are using some brain-dead stupid bot to set their prices for them. Too bad - if somebody were selling a used copy for, say, $75.00, I'd probably order it right now. Instead, it's going to sit in their store forever, because who would pay $230 for a used copy of a book when they can get a new copy for $138?

      SMH.

      • p1esk 2098 days ago
        • avshyz 2097 days ago
          Thanks mate!
      • eggy 2098 days ago
        Yes, that's a lot, especially if you don't make six figures today. However, the amount of information it conveys, and its potential to round out your skills if this is your area, or if you just like to read and learn like I do, seems fine to me. I don't even work in ML or a related area, and I've spent $1000s on books in this field. I just enjoy studying and practicing it. A friend of mine who criticized what I spent on another book spends well over that at Starbucks alone in six weeks! There are PDFs available to see if you think the book is for you before you decide to buy. I like having a hard copy to read when I'm not pecking at keys.
        • mindcrime 2097 days ago
          Yes, that's a lot, especially if you don't make six figures today. However, the amount of information it conveys, and its potential to round out your skills if this is your area, or if you just like to read and learn like I do

          Fair enough. I probably would have ordered the $138.00 copy, if I had not just spent a couple of hundred dollars buying other books earlier in the day. :-)

          I don't exactly splurge on lots of expensive stuff in my life, but I always figure three things (in particular) are worth spending on:

          1. books

          2. food

          3. tools

          That said, it does seem like book prices have been surging lately. And it is a little frustrating that, more and more, you can't even find a used copy for a reasonable price.

    • fractallyte 2098 days ago
      It's very comprehensive, taking the reader from the simplest concepts right up to the state of the art (in 2012). Very friendly, confident, and (oddly enough for the subject) exciting! There's a collection of detailed examples, with code. However, unusually for Springer, it clearly wasn't proofread: lots of repetition and grammatical errors in the text (the code seems solid). It deserves a second edition!

      That said, I haven't come across a better book. I'd buy it again without hesitation, regardless of expense. Erlang is way ahead of other popular platforms for neural architectures.

      • hellofunk 2098 days ago
        2012 was the year the modern ML era really began, and the explosion of techniques, research, and applications since then has been tremendous -- sounds like this book ends right when things started getting interesting?
        • fractallyte 2098 days ago
          The modern ML era is mired in hype and statistics. Most of the time, all there is to show for it is yet another expert system with baubles.

          Neuroevolution is one of those overlooked niches where I think we'll see real progress in real AI.

      • hellofunk 2098 days ago
        How much of the book requires Erlang or focuses on that language's features, and how much can be generalized to other languages?
        • fractallyte 2098 days ago
          The book relies on Erlang's unique features. In Sher's words: "Erlang was created to develop distributed, process based, message passing paradigm oriented, robust, fault tolerant, concurrent systems. All of these features are exactly what a programming language created specifically for developing neural network based systems would have."

          If another language can do that (Go, perhaps?), I suppose the code can be made to work adequately.

          • guskel 2097 days ago
            Modeling each neuron in an NN as a process is incredibly computationally wasteful, and we often see the same thing in neuroevolution implementations where neurons are represented as objects. It's clear from advances in DL that neurons and weights should be represented as matrices.

            I've never given much credence to Sher's book because of this. Switching to Erlang just isn't necessary given current techniques.
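
            The difference in miniature (same weights, two forward passes; the per-neuron version makes 512 Python-level calls, the matrix version one BLAS call):

              import numpy as np

              class Neuron:                       # object-per-neuron style
                  def __init__(self, n_in):
                      self.w = np.random.randn(n_in)
                  def fire(self, x):
                      return np.tanh(self.w @ x)

              x = np.random.randn(256)
              layer = [Neuron(256) for _ in range(512)]
              out_objects = np.array([n.fire(x) for n in layer])

              W = np.stack([n.w for n in layer])  # same weights, one matrix
              out_matrix = np.tanh(W @ x)

              assert np.allclose(out_objects, out_matrix)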

            • plainOldText 2097 days ago
              How about using Erlang because it provides fault tolerance, and any faults resulting during the neuroevolutionary process will be contained and won't bring down the whole system? Or evolving groups of neurons in parallel, and across a distributed cluster? These things are much easier in Erlang.
          • eggy 2098 days ago
            I have worked through it in Erlang, but others have tried to translate it to LFE (Lisp Flavoured Erlang) and Elixir. It's still relying on the Erlang ecosystem.
  • 0xBA5ED 2098 days ago
    I didn't know this was new. Most of the neat NN experiments you find on YouTube from the past several years use genetic algorithms. Good ol' MarIO, for example. The rigged 3D human models "learning" how to walk and run. The various "navigate the maze" ones. Etc.
    • bobbean 2097 days ago
      There's an entertaining guy on YouTube, Code Bullet, who does a bunch of stuff like this with genetic algorithms and neural networks. It's pretty interesting because he rewrites all the games from scratch.

      Here's a video of him playing with The World's Hardest Game: https://youtu.be/Yo2SepcNyw4