Hype or Not? Some Perspective on OpenAI’s DotA 2 Bot

(wildml.com)

170 points | by dennybritz 2420 days ago

18 comments

  • dvt 2420 days ago
    Pretty much agree with everything in here. As I said in my earlier posting (and this blog post reiterates), a 1v1 Shadowfiend mid is highly technical and does not require a huge search space (like in Go or Chess) or any judgment; all it takes is a few tactics (e.g. creep blocking) and good aim for the razes.

    Also, the bot was already beaten 50+ times[1]. There are at least 3 strategies that work. It just goes to show how primitive AI is, as it took the AI team thousands of generations to get it to this stage, but a few determined gamers outsmarted it (using a few cheap meta-strategies) in less than 6 hours after release.

    [1] https://www.reddit.com/r/DotA2/comments/6t8qvs/openai_bots_w...

    • nimos 2420 days ago
      If you look at this problem realistically it should be offensively easy for AI. The entire world state is available to you with perfect accuracy at a timescale that is trivial for computers but well beyond human reach. Also, there are clear goals and absolute limits to the functionality of the world.

      But the human players quickly adapted and developed new strategies and the bots just weren't able to adapt as quickly. You can see the players recognizing the idea of a training set and trying things the bots probably hadn't seen before to see if they could confuse the bots.

      As much as there is a lot of impressive work being done in machine learning, I can't help but be generally a bit skeptical of the whole "AI revolution" and people being replaced en masse. I'm far from an expert, but from everything I've seen, machine learning seems like an awesome tool to augment human ability rather than replace it.

      I'm not super interested in having a computer doctor, but I'd be 100% on board with my doctor having a smarter computer.

      I think the greatest challenge to date in machine learning is identifying how to use it to create value.

      • the8472 2420 days ago
        If AI can augment the ability of 1 human then that 1 human + 1 AI can still replace people en masse. That's what automation does. The factories usually are not devoid of people, they just have fewer people and higher productivity than they used to.
    • frik 2420 days ago
      This whole OpenAI thing gets hyped a lot on HN - it's interesting who is behind OpenAI. They are misusing the "Open" name, as there is nothing substantially open about OpenAI - it's not an open source project either - it's not related to OpenGL, OpenAL, etc at all!

      I don't get the hype around AI bots for StarCraft 2 and DotA 2. There is nothing special here; we've had such bots for many years.

      • Twirrim 2420 days ago
        https://github.com/openai

        While not everything has been opened, they do have a bunch of things they've opened up, particularly the simulations for training AI in.

      • silvershadow 2413 days ago
        We have had hard-coded bots that perform decently well. We haven't had ML-trained bots that perform at this level, which is what is exciting about this. It's not about having a computer play DOTA at a high level, it's about having AI learn something in a real-time environment, as opposed to turn-based games like Go or chess.
      • nopinsight 2420 days ago
        Have you seen any bots perform competitively with human pros in Dota, even in the easier 1v1 case?
    • blazespin 2420 days ago
      Yeah, but the method of learning is interesting - unsupervised training in a VR world. As CPUs become faster, these types of problems will fall much faster.
      • xfer 2420 days ago
        I am not sure it is completely unsupervised; iirc they said it was "coached". I don't think a bot can learn by itself to fake razes/do perfect creep blocking.
        • gpm 2420 days ago
          They talked about this twice, the exact text of both follows

          > We didn't program it to understand the rules of Dota, we just let it play lifetimes of 1v1 against itself and coach it on what we thought was good or bad.

          > We've coached it to learn just from playing against itself. We didn't hard code in any strategy. We didn't have it learn from human experts. Just from the very beginning it just playing against a copy of itself. It starts from complete randomness and then it makes very small improvements and eventually reaches the pro level. [... goes on to talk about how its first "improvement" was to just never leave the base because it kept dying]

          I think you're reading too much into the word coached. Particularly the second quote implies that it wasn't supervised at all.

          My personal explanation for "Fake razes" is that it isn't a fake-out to the bot; it's a technique to lower the casting time to zone out the opponent/hit faster if the opponent chooses to dive. Seems like a pretty easy trick to learn.

          Creep blocking is just learning "positioning myself here (in front of the creeps) tends to improve my chances of winning", where "here" isn't far from where it would be positioning itself without creep blocking. It's a minor adjustment to the behavior with a substantial effect on the game outcome; of course it learned it.
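
          As a toy illustration of that point (my own sketch, nothing to do with OpenAI's actual training code): a REINFORCE update on a three-option "positioning" bandit, where the option standing for "in front of the creeps" simply wins more often. The positions and win rates are made up; the point is that the blocking position gets reinforced without any explicit creep-block rule.

              import random, math

              # three positional offsets from the default lane position;
              # +1 stands for "in front of the creeps" (the block)
              positions = [-1, 0, 1]
              win_prob = {-1: 0.40, 0: 0.45, 1: 0.60}   # made-up win rates

              prefs = {p: 0.0 for p in positions}       # softmax preferences

              def sample():
                  z = sum(math.exp(v) for v in prefs.values())
                  r, acc = random.random(), 0.0
                  for p, v in prefs.items():
                      acc += math.exp(v) / z
                      if r <= acc:
                          return p
                  return positions[-1]

              lr, baseline = 0.1, 0.0
              for step in range(20000):
                  p = sample()
                  reward = 1.0 if random.random() < win_prob[p] else 0.0
                  baseline += 0.01 * (reward - baseline)   # running-average baseline
                  z = sum(math.exp(v) for v in prefs.values())
                  for q in positions:                      # REINFORCE update
                      grad = (1.0 if q == p else 0.0) - math.exp(prefs[q]) / z
                      prefs[q] += lr * (reward - baseline) * grad

              print(prefs)   # the +1 ("blocking") offset ends up strongly preferred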

          • mannykannot 2420 days ago
            In this post, a member of the OpenAI team characterizes this form of coached self-play as being supervised. If you are assigning a value to an action that is not an explicit goal of the game and is not a consequence of violating the game's rules, it would seem to fit the definition of 'supervised'.

            https://news.ycombinator.com/item?id=15002150

    • teej 2420 days ago
      What hero do you think would be a better test?
      • justicezyx 2420 days ago
        It's not about which hero. 1v1 is not a competitive part of Dota; if you claim to beat pro players it has to be 5v5 with ban/pick.

        Otherwise just say it's 1v1 mid...

        • thomasahle 2420 days ago
          Sure, but it hadn't been done before. They say they'll go for 5v5 next year, and if it wins I'm sure we'll move the goal posts further again.
          • cthor 2420 days ago
            I'll precommit to being very impressed if a bot can achieve >5k MMR in solo ranked or if a bot-controlled team wins a tier 6 (or higher) battle cup.

            When Checkers and Chess and Go pros were beaten by AI at the actual game, people were impressed, because it was impressive. (The goalpost moving involved claiming each consecutive game could not be solved.)

            I don't think anyone here is saying Dota 2 won't be solved eventually or that its complexity is beyond the realm of AI (as was claimed of Chess/Go in the past). They're just saying this particular achievement isn't actually meaningful progress. It's using known techniques to do something those techniques are known to do.

            • TrickyRick 2420 days ago
              > I'll precommit to being very impressed if a bot can achieve >5k MMR in solo ranked or if a bot-controlled team wins a tier 6 (or higher) battle cup.

              Me too, but it will never happen. Granted, I haven't played DOTA, but I've played many other competitive multiplayer games and they all require one thing which bots currently lack: communication. A bot playing the entire 5-man team though, that's a different story!

              • Buge 2419 days ago
                Surely the different bots would communicate via RPCs or some other API. It wouldn't be much different from 1 single bot, especially if 1 bot decided to coordinate everything and the other 4 bots decided to just follow orders.
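
                A hypothetical sketch of that "1 coordinator + 4 followers" idea, just to show that inter-bot "communication" reduces to ordinary function calls (every name below is made up; this is not OpenAI's or Valve's code):

                    from dataclasses import dataclass

                    @dataclass
                    class Order:
                        hero: str
                        action: str
                        target: tuple

                    class Captain:
                        def plan(self, state):
                            # trivial "strategy": everyone pushes the weakest lane
                            lane = min(state["lanes"], key=state["lanes"].get)
                            return [Order(h, "push", state["towers"][lane])
                                    for h in state["heroes"]]

                    class Follower:
                        def __init__(self, hero):
                            self.hero = hero

                        def execute(self, order):
                            print(f"{self.hero}: {order.action} -> {order.target}")

                    state = {"heroes": ["sf", "puck", "np", "es", "cm"],
                             "lanes": {"top": 0.9, "mid": 0.5, "bot": 0.2},
                             "towers": {"top": (1, 9), "mid": (5, 5), "bot": (9, 1)}}
                    captain = Captain()
                    followers = {h: Follower(h) for h in state["heroes"]}
                    for order in captain.plan(state):   # the "RPC" is a method call
                        followers[order.hero].execute(order)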
              • sillysaurus3 2419 days ago
                Using four skills in one hero is similar to using 20 skills across 5 heroes. It's not communication, just an extension of a single bot. (It's like a single player micro'ing all 5 bots.)
      • dvt 2420 days ago
        Either, (a) high risk/reward heroes: Pudge, Huskar, maybe Weaver or Puck, or (b) highly complex heroes: Invoker, Morphling, Earth Spirit, Nature's Prophet, Techies, maybe Meepo.
        • unrealhoang 2420 days ago
          Would love to see courier snipe from lv 1 by NP or even man fight in the base
        • teen 2420 days ago
          i would love to see 2 meepo bots go head to head mid
      • thenomad 2420 days ago
        I know it'd need a much larger training set, but honestly you're not playing DOTA unless the full hero pool is in play.

        Never mind 5v5, a 1v1 bot that would let Dendi/Sumail/RTZ etc choose any hero and still win would be much more impressive.

        • tertius 2420 days ago
          And when that's done, 5v5 will be much more impressive, so we move the goal posts again.
      • HappyTypist 2420 days ago
        Techies.
  • AndrewKemendo 2420 days ago
    > We did not make sudden progress in AI because our algorithms are so smart – it worked because our researchers are smart about setting up the problem in just the right way to work around the limitations of current techniques.

    This statement is like putting wheels and a motor at the base of the goalposts.

    Everyone who practices ML knows that while we're not going to see AGI for a while, and these systems are massively hard to build and do very narrow, bounded things, they are also making massive progress in "intelligent" outputs at a pace we've never seen.

    Yes, there is hype, but there are pretty solid reasons to be hyped.

    We'll keep seeing people saying "oh well, it's not that impressive" probably until AGI has clearly taken everyone's job in 2100 and we're all just providing training data for it.

    • dennybritz 2420 days ago
      I agree. The point I was trying to make is not whether it is impressive or not (it is impressive!), but that the general press hype about "AI breakthroughs that will soon kill us all and need regulation" is over the top and misleading. The probably massive engineering effort that went into this is, to me, more impressive than the algorithmic innovation, if any.

      Progress is made with small incremental improvements, including this one, and there have been few real algorithmic "breakthroughs" over the past few years. That's why I think it is important to give some perspective to the hype.

      • sweden 2420 days ago
        I feel that you are missing the whole point of "AI breakthroughs that will soon kill us all and need regulation". It's not about AI becoming self-aware and starting to proactively take over humanity starting from a Dota game, it's about how we are finally capable of putting AI anywhere.

        The point Elon Musk was trying to make is that nowadays, we have the technology and the research to replace humans with AI for making judgement calls, no matter how difficult they are. And this was proven with a "simple" game of Dota. And if we are able to build a system to play Dota, we are also able to build AIs for anything at all.

        You claim that there have not been any real breakthroughs over the past years, but truth be told, today we have AI playing Go, AI managing self-driving systems, AI playing games of Dota against top players. All of this happened over the last few years; 5 years ago this all felt like a distant future.

        • debatem1 2420 days ago
          "nowadays, we have the technology and the research to replace humans by AI for making judgement calls, no matter how difficult it is"

          "if we are able to build a system to play Dota, we are also able to build AIs for anything at all"

          These are stated like facts, but are not facts.

          There are decisions made by humans which no AI system today can make well, and there are problems for which we have no idea how to build even mediocre AIs. The claim that someday we will have AGI is plausible but not yet certain to be true.

          • sweden 2420 days ago
            "There are decisions made by humans which no AI system today can make well, (...)"

            Exactly, that is precisely the point. Companies and businesses will just rush to implement the next big neural network to boost their business, no matter how immature the technology is.

            And the claim is not about AGI, it's more about the little things. For some reason people like to think really big and exaggerated scenarios when it comes to AI.

            Let's take web apps as an example. We can all agree that the state of the art of current web programming is very poor: JavaScript, NodeJS, Electron, CSS, etc. The technologies are bloated, slow, full of hacks and workarounds, and so on. And yet... people use them for everything. It's like Atwood's Law says: "any application that can be written in JavaScript, will eventually be written in JavaScript"

            I imagine that a similar scenario will happen with AI and neural networks. Can you imagine, for example, having to deal with an AI instead of a human when it comes to customer service? Maybe it is already happening; if you look for stories about Google's customer service on the internet, you would think everything is run by some sort of AI there.

            Another example: look at what Microsoft is doing with Visual Studio's telemetry data: https://blogs.msdn.microsoft.com/dotnet/2017/07/21/what-weve...

            I wouldn't be surprised if they built a neural network to feed all that data to and have it produce a UX "optimized" for mass consumption.

            We are getting into a time in which everything will be powered by "AI", and I think that's what OpenAI is trying to regulate before we get every single business pestered with a half-assed implementation of neural networks and data analytics.

        • scribu 2420 days ago
          > AI playing games of Dota against the top players

          This example weakens your argument. As TFA explains, the bot can only play a very restricted version of Dota - much simpler than chess - which means it was thinkable ever since the '90s, when Deep Blue beat Kasparov.

        • lottin 2420 days ago
          Self-driving cars have been around since the 1980s.
      • gnaritas 2420 days ago
        When should people attempt to regulate AI, when it's too late? Stopping a problem before it happens is called foresight. And if you pay attention to the press, it's not about killing us all, it's about disrupting the world, i.e. destroying jobs.
        • semi-extrinsic 2420 days ago
          The reason I don't buy the "destroying jobs" argument is that we have tens of thousands of jobs just in the small country where I live that could be automated away by 1980s technology. For a US-specific example, look at the way you submit your taxes. That shit could be 99% automated away, like it is in many European countries already, and you have several large companies that would come crashing down, together with many thousands of government employees suddenly without a job. There's no AI required. Why hasn't it been done already? Why should AI change that?
          • gnaritas 2419 days ago
            Those jobs exist due to regulatory capture and the difficulty of changing tax law in a bitterly divided nation. Just because a few industries found a way to use government to prevent progress and automation doesn't mean progress and automation isn't a danger; most industries won't be protected by regulatory capture. The "destroying jobs" thing is not an argument, it's a fact - it's what automation does. There's nothing to "buy"; you either understand it or you don't.
            • chii 2419 days ago
              > most industries won't be protected by regulatory capture.

              And that's a good thing. It's unfortunate for those whose jobs are automated away, but without automation, goods and services can't keep up with the increase in population and demand. What's needed isn't prevention of automation, but a way to increase the number of people with enough education to implement even more automation.

              • gnaritas 2418 days ago
                I agree that it's a good thing, but your last statement is simply not in accordance with reality. The median IQ is 100, half the population is of below average intelligence, the problem we face is what to do when labor is no longer valued because machines do it all and half the population or more has no skills of value to the market. Education helps the top few percent stay ahead of the machines, it does nothing for the bulk of society who are not and will not ever be smart. Thus it's an economic problem of resource distribution, we must move away from a labor based economy if we allow automation to continue. We cannot handle massive automation with our current economic system.
                • chii 2418 days ago
                  > bulk of society who are not and will not ever be smart.

                  It used to be that most people were illiterate, and couldn't do maths either. I'm sure if you had measured IQ back then, it'd have been quite low. And yet the population got education, and lo and behold, the average IQ increased.

                  Making the assumption that intelligence remains constant is wrong. I'd even suggest that most people are capable of doing things that you might consider require high intelligence, like writing code, or doing research, or designing solutions. Those who currently can't are merely so because of lack of opportunity (especially during formative years).

                  Putting money into raising the level of education will have the indirect effect of curbing the automation and unemployment problem.

                • mmirate 2417 days ago
                  > the problem we face is what to do when labor is no longer valued because machines do it all and half the population or more has no skills of value to the market

                  Need anything be done? You said it yourself: that portion of the population has no skills of value to the market. They're worthless.

      • AndrewKemendo 2420 days ago
        I somewhat agree with that but I think the biggest (underappreciated) algorithmic improvement recently was MaskRCNN. The ability to do Segmentation and Detection in the same net is huge.
        • visarga 2420 days ago
          But isn't that slow? It doesn't parallelize well for real time processing, because it goes sequentially pixel by pixel while CNNs process all pixels in parallel.
    • visarga 2420 days ago
      AIs would be better served by other sources of data - such as the game Dota2 - because they can get unlimited data to test out their strategies.

      What AIs need is simulators - in other words - a world/an environment for them, where they can freely move about, interact and learn. The success of AI is linked to the development of realistic sims. Fortunately, the happiness of many people is also linked to the development of realistic sims (games). They go hand in hand.

      • AndrewKemendo 2420 days ago
        My take is that AIs need the real world, with real-world reinforcement loops. That's why I'm so focused on AR; it's a perfect human-machine interface for training - specifically vision.
    • flamedoge 2420 days ago
      > AGI has clearly taken everyone's job in 2100 and we're all just providing training data for it.

      This got me spooked. What will we do afterwards?

      • AndrewKemendo 2420 days ago
        Your guess is as good as mine. Best case is that we accept our fate as a species (that our purpose was to build AGI) and then the last 10 billion of us are comforted in a species-wide hospice type scenario until we die off.

        Worst case, we kill each other with weaponized AGIs for a few years until it gets smarter and abandons earth, leaving us basically where we started.

        • thinkfurther 2420 days ago
          I'm not worried about actual AGI; infinitely powerful, not bound to the whims of any person or conglomerate. I see human atrocities born out of weakness, so I'm not worried about something that would gain nothing from torturing or destroying us. That's assuming it can do and think everything we can, and doesn't need us as slaves. E.g. I like sparrows lots. I don't understand them, I wouldn't want them in my room, but I like seeing them do stuff on the periphery of my life. I can imagine AGI looking at us with a similarly friendly eye.

          But that won't happen; whatever grows, it will grow out of the diseased now. Today Trump says "I'm a very instinctual person, but my instinct turns out to be right. Hey, look, in the meantime, I guess I can't be doing so badly, because I'm President, and you're not", tomorrow's Trump will have infinite power over anyone at the press of a button and no uncomfortable questions to even give non-answers to. They'll hand their power over to others who really want it, people who by definition will also be dysfunctional.

          As black and white as it may be, I think Erich Fromm's stuff about biophilia and necrophilia applies here, and necrophilia will win out, things staying the same and people staying as obedient as they are. No fate about it, just cowardice.

          • AndrewKemendo 2420 days ago
            Well I think that's a reasonable thought but in my opinion it's not really describing AGI. Rather, it's describing something near AGI that can be controlled.

            That's why I am vociferously against the "Friendly AI" movement, because fundamentally humans aren't friendly - and that doesn't mean friendly in terms of agreeable or altruistic. Rather, "Friendly" as used in the FAI sense means "does not reject or override human values." In effect, the FAI movement wants to figure out how to make AGI without it having its own goal system - so that it would always be subservient to humans. Which is only marginally a human trait. We pride ourselves as a species on not being beholden to or enslaved by someone else's ethos. To try and extend that to human-level machines is wrong ethically, and would just create unenlightened super soldiers.

            Could a sufficiently independent AGI decide to wipe out all humans? Maybe, but in that case it would have come to that conclusion, and taken the steps necessary to achieve it (harder than AGI), on its own. I don't find the paperclip maximizer or gray goo scenario a plausible counterargument.

            • OtterCoder 2420 days ago
              We pride ourselves on being unbeholden, but it's mostly self-deception. Take a look at this, and realize that even in the west, we aren't that far removed in the way our power structures operate.

              https://aeon.co/essays/this-is-what-slavery-looks-like-today...

            • thinkfurther 2420 days ago
              If I had only the two choices, I would absolutely choose unconstrained, actual AGI over something controllable.

              But that's because the way things are going, I see mostly doom and gloom anyway; controlled AI will increase that, uncontrollable ones are at least wildcards. I'm not proud to feel that way, but I do. I have little hope for the human resistance against other humans that has been necessary for so long and just isn't forthcoming -- and even either AGI or humanity spreading so far over the galaxy that some pockets get cut off from the MCP seem more likely. But that's still the lame option, the honorable one is to get our stuff together and then go to space and create AI. As long as I live, I will "work" on that, I certainly will not go into the night gently.

              Unless that "paradigm shift" happens, technology will continue to serve capital, and unless you can make the "jump" in total seclusion, so nobody knows what you're doing until it's too late, there will always be people with guns to have a word with you. Still godspeed but that's how I see it.

              • nopinsight 2420 days ago
                Technology inventors/project leaders can turn into the majority owners of capital. We already see the shift: 5 of the 10 richest people in the world built their wealth from tech; 7 of the 10 largest public companies in the world by market capitalization are tech companies.

                https://en.wikipedia.org/wiki/List_of_public_corporations_by...

                The shift is still ongoing. Old(er) capital still has much control of the world's economy, but as technological development accelerates and becomes more valuable, the balance could tilt more towards this tech-based capital.

                I am not saying they are saints, but at least many of these self-made tech billionaires have pledged to give much of their wealth away for philanthropy instead of leaving it to their heirs.

              • AndrewKemendo 2420 days ago
                > unless you can make the "jump" in total seclusion

                Not sure if this is possible, but I think we can get pretty close.

                > there will always be people with guns to have a word with you

                Yes, well it helps to already be one of those people. I spent 12 years in military intelligence and remain a reserve adviser to the Joint Chiefs and SECDEF on emerging technology.

      • landon32 2420 days ago
        Probably the work that is considered hobbies now. And lots more music/arts festivals, and other fun purely human things.

        That's assuming that the wealth distribution is designed carefully. Obviously if 1-10 organizations dominate the AGI market and they aren't paying massive taxes/donating massive amounts of money, then things will be rough. It's really worth it to ensure this doesn't happen, and it's also the sort of thing that if it did start to happen it would be pretty obviously a problem.

    • justicezyx 2420 days ago
      > Yes, there is hype, but there are pretty solid reasons to be hyped.

      Can you list the “pretty solid reasons” to be hyped?

    • thinkfurther 2420 days ago
      > AGI has clearly taken everyone's job in 2100

      What would be impressive about that? How's that something to get hyped for? The ways people try to shed their humanity are already well described, and all that's left is to outsource the doublethink to machines that won't mind and that can't be asked uncomfortable questions about their childhood and such things. It's a straight line, moving decision making from the public sphere to private corporations and from there to black boxes. The cause is still that some people haven't been acknowledged as babies or whatever trauma caused it; the more fanciful the cathedral, the more banal the lie it covers up.

      All the interesting things about that have been written decades ago, these days it's back to dwarves cheerleading naked emperors. So the people who accumulate capital because of their stunted development will win out and buy it all, removing any persons worth a damn, and any chance for any to ever get born, for good; and being at each other's throats for the rest of eternity. It's the failure of humanity, not its apex.

      Just so you know.

      • AndrewKemendo 2420 days ago
        > What would be impressive about that? How's that something to get hyped for?

        I was being somewhat facetious, as that is a common scare scenario. However, I do believe that it should be our goal as a human species to create AGI as our intellectual successor. In the same way someone's child replaces them and they die, I think AGI should be our offspring, and we as a species should go peacefully into oblivion. It's my life's work to help see that happen.

        It's certainly an "out there" philosophy, but so far I haven't had anyone really challenge the logic when I spell it all out.

        • goatlover 2420 days ago
          I don't understand the logic. Why should humanity go peacefully into oblivion with the advent of smarter than human machines? Do you feel like all the other species on this planet should have gone extinct with the arrival of homo sapiens?

          Your argument is that since B is better than A, only B should exist. But why can't both exist? And what makes your valuation of B objective and universal, such that our grandchildren will go peacefully into oblivion?

          • Joeri 2420 days ago
            Humanity currently is like a young child. We throw temper tantrums, make big messes and refuse to clean them up. We're terrible stewards of the earth and we serve no purpose as a species higher than existing and reproducing. We must grow up. But the fact that we're talking animals driven by primal urges makes this impossible through nurture alone. Either we change ourselves to the point of being a different species, or we build the better people we should have been and go quietly into the night. Either way, humanity as it exists today can't be the template for the future. Whatever comes next may call itself human, but it won't be homo sapiens.
            • scribu 2420 days ago
              > we serve no purpose as a species higher than existing and reproducing

              You say that as if there are (or can ever be) species which do have a higher purpose.

              • Joeri 2420 days ago
                Basically any common purpose at all would suffice, like protecting life against harm. Fundamentally, we haven't agreed on the basic question: why are we here? Not 'why' as in how did we come to be, but why as in what do we want to achieve.
            • lttlrck 2420 days ago
              Yet despite recognizing all these shortcomings we still have the arrogance to think we can create a better ‘human’? That would be a better steward without primal urges?

              That’s distinctly human egotism.

              I feel we would already have to change ourselves to the point of becoming a different species before we were even capable of successfully building such a being. At which point it would be moot.

              • AndrewKemendo 2419 days ago
                I'd be curious how you define "better"

                > I feel we would already have to change ourselves to the point of becoming a different species before we were even capable of successfully building such a being

                Well that's what the whole transhumanist movement is about - however it goes hand in hand with AGI. You can get there together in theory.

          • AndrewKemendo 2420 days ago
            The human species will end some day, as most species have. Historically speciation happens through slow pseudo random environmentally pressured transition. My position is that the logical next evolutionary transition will be engineered rather than "natural." And engineered in a way as to accelerate the next transition and so on.

            So it's less that one should be dominant and more that the progression would effectively end speciation for sapiens.

            It's not related really to other life. However in effect, our acceleration of extinction of species, through resource use etc tells me that it would likely happen to us similarly.

        • rapind 2420 days ago
          Sounds pretty lonely no? Wouldn't the most intelligent AI be the only one left standing (just extended globally)?
          • AndrewKemendo 2420 days ago
            Lonely for who? Mathematically, yes I would expect one would be left standing, but no reason it would have the same sense of loneliness. Or maybe it would - we wouldn't know.
            • goatlover 2420 days ago
              Only one intelligence left standing is an outcome that most people would find negative. It's like if Skynet was the last survivor of the Terminator franchise. Why would anyone want that outcome?
              • AndrewKemendo 2420 days ago
                I think the Borg, if successful in assimilating every known intelligence universally, would be a more appropriate analogy. As to why they would want it, I can only speak for myself but I think it's the most logical outcome if you extend the concept of self awareness to the universe.
                • goatlover 2419 days ago
                  But why would we want that outcome? The Borg are universally despised on ST by all other sentient races, and assimilation is resisted whenever possible.

                  It's the same as arguing that the replicators on Stargate SG1 should be the logical outcome. Technologically, they are superior, but they're not a preferred outcome. Similarly, one could argue that gray goo or Carpenter's creature from The Thing are superior, but we don't want a world of gray goo or things.

                  • AndrewKemendo 2419 days ago
                    I argue because it's the logical progression of our species and continues the trajectory of our highest ideal: understanding the universe.

                    We can't do it with our biology, limits to understanding, reasoning and perception, so something that is not biologically restricted needs to be our successor.

        • thinkfurther 2420 days ago
          I'd rather first deal with the excesses of greed resulting from psychopathology, because any successor created without doing that would multiply the failure to do that by "infinite". I don't consider us even born; we're still in a holding pattern, about to be aborted.

          > I think AGI should be our offspring and we as a species go peacefully into oblivion.

          This isn't "out there". Just start with religious apocalyptic visions of destruction, e.g. with some square shaped "city of God" floating down from the sky to save the day. There are many ways for people to give up, and many reasons why they do, many rationalizations for it. As I said, this has already been described from all sorts of angles, the more interesting things about that have already been written.

          It's like I write about memory corruption and how it can lead to all sorts of random values, and you are interested in this particular random value, seeing how it's so unique from all the others. I see the underlying dysfunction.

          I'll now try my hand at a bad, literal translation of "Volle Entfaltung" by Erich Fried (1921-1988)

              those who love life
              often just say
              that they love a woman
              or her genital area
              or her voice
              or they love the scent
              of freshly baked bread
              or the sun in the evening
              
              in those cases love means
              a lot of things but always
              kind of also
              that they love life
              
              those who don't
              love life
              but only the idea
              say loudly
              they love life
              the greatness of nature
              and the humanity
              that masters it
              
              because of this love
              they put it on themselves
              to murder
              those who loved life
          
          If I said no to your grand vision of dying off after a shameful history, would you go peacefully into oblivion? I doubt it.

          Intelligence without a personality would not be a successor, it would be a blind, endless maw, eating information and producing nothing. Intelligence with a personality would have a lot of questions, a lot of needs, and before the growing up period comes the time where "humanity" would be the parents. You know what abusive or "just" too weak parents produce? Pain and the means to bring more pain. We're hardly being fit parents for human children, we're already doing our best to dissolve young minds in acid baths of nonsense. And we want to raise some sparkly clean sane AGI? Nah. If it wouldn't entail so much suffering the idea would be hilarious though.

          And then what? Ultimately, heat death of the universe, AGI likely goes into oblivion, too (and no, that Asimov story won't change that). So all that happened is that we outsourced our inability to accept that, and to live in dignity right now, to some future point that then never comes.

          Why would it even have to be our successor? Why would it mean we die off? Ever noticed how bacteria and all sorts of things are still around? Why wouldn't it just be something additional in the world? There's more holes to this than substance.

          > so far I haven't had anyone really challenge the logic when I spell it all out.

          No matter how valid or silly you might find what I said, this is no longer true. I'm happy to be the first, and if it really was true until a minute ago, it says a lot. Either that you keep your ideas to yourself mostly, or that you're around some weird people.

          • AndrewKemendo 2420 days ago
            I'd hardly call a few sentences spelling it all out.

            Your point is one I hear often, especially the "what's the point if the heat death is coming anyway."

            Basically your argument lies in the "virtue ethics" category of philosophy which argues that man-qua-man should be the best man possible. Eudaimonia in the Stoic tradition and all of that.

            > Either that you keep your ideas to yourself mostly

            Yes this is the case.

            • thinkfurther 2420 days ago
              Well, do you have it spelled out somewhere? Want to spell it out? Otherwise, what's the point of bringing it up? I, too, am completely unchallenged in my actual viewpoints, but I gotta run. That's pointless.

              > Basically your argument lies in the "virtue ethics" category of philosophy which argues that man-qua-man should be the best man possible.

              No, my argument is what I actually wrote, in short that I want the psychological problems and the obedience to them sorted out before we magnify it all so much it becomes impossible to sort out.

              It's like we're driving this car on a perfectly straight road at 80 mph, and some say "steering locked in, activate nitros" and I'm saying "actually, we're slightly skewed, if we use the nitro before correcting that we'll smash into a tree". And in response I get "oh, so you don't like Jazz" or something equally non-reassuring.

              • AndrewKemendo 2420 days ago
                No, I don't have it written out anywhere. I brought it up to test the waters somewhat and see what kind of response I would get. I'm hesitant to write it all out because I expect such a document would just be the source of opposition to implementation. Like putting war plans on the internet. If we make the progress we want I expect heavy opposition. However it's hard to build a movement in the shadows. I'd rather just be building things, and making progress toward the goal like I am now, than doing the politics of creating a movement before we have significant powers.

                To your argument, it's the perfectability of man problem. You state the decoupling of psychological problems with some other undefined idealized personhood. It lacks an understanding of cognitive science, behavioral economics and neuroscience in my opinion. I would argue, and the science generally agrees, that those "psychological problems" however defined are the other edge on the sword of intelligence and relative autonomy.

                So while you argue that yours is a simple argument, it is really rooted in an old ideal, as I stated, which is neither incompatible with (through transhumanism), nor sufficient for, a practical material transcendent philosophy.

                • thinkfurther 2420 days ago
                  I'm not talking about perfection, I'm talking about not being covered in blood and shit so much of the time is all, about not being scared and driven.

                  > You state the decoupling of psychological problems with some other undefined idealized personhood.

                  Can you rephrase that? English isn't my first language, and "stating decoupling of" doesn't parse at all for me. It's like there's a sentence fragment missing.

                  I can't easily define personhood. I know I'm one, that other people are, and I even know animals I recognize as persons, too. With super small animals and plants it gets tricky, but I don't need to fully define it first for it to exist; there is something there. Any theory is an abstraction that never fully matches a reality which would remain unchanged if that theory didn't exist. Any intelligence we see and can make theories about results from actual living entities. There is absolutely no reason to outright assume it can make sense in a vacuum. What would the AGI be intelligent for? Why wouldn't it just calculate whether the heat death of the universe will happen, and shut off if it comes to the conclusion it will? Nothing it would do would make a difference then. Humans are different, more irrational if you will. I don't believe immortality will ever be achieved, I don't even find it desirable, and still I get out of bed in the morning. Everyone I love will die, and everything I do will turn to dust. I still love and make things.

                  So, what would that "spark" be for AGI? I just had the dumbest association, in the movie "The Fifth Element" there is this scene where the [I forgot what the name of her role was, the female protagonist anyway] is taking in all the atrocities of humanity and gets kind of frustrated, and doesn't feel like saving them or something like that. Then Bruce Willis kisses her and it's alright. As I said, it's dumb, but still, if all we give an AI is problems, why would it come up with solutions, and not more problems?

                  > I would argue, and the science generally agrees, that those "psychological problems" however defined are the other edge on the sword of intelligence and relative autonomy.

                  Obviously a piece of rock doesn't have any of the problems or joys we do, but that doesn't just mean anything and everything just "goes with the territory".

                  > If we make the progress we want I expect heavy opposition.

                  And you're just going to steamroll over it? Fantastic. You'll go to the friendly baker next and they'll happily sell you bread because they don't know. And you can't face them one on one, so you have to "achieve significant powers" in the shadows. And you're still not seeing the woods for all the trees. No, it's their fault for not understanding what you're not even telling them.

                  • AndrewKemendo 2419 days ago
                    > Can you rephrase that?

                    Your argument is that we should collectively eliminate psychological problems. However, it doesn't state what the idealized person would look like in such a scheme. Nor do I think you could probably do so without invoking what is effectively a deus ex machina - which would effectively be a better entity for us to emulate, if only in theory. That entity, I posit, is the AGI.

                    > Why wouldn't it just calculate whether the heat death of the universe will happen, and shut off if it comes to the conclusion it will? Nothing it would do would make a difference then.

                    What a fantastic result then! To have a more concrete proof of the Absurdist reality of our universe. This would be a great outcome to have such certainty around this problem. Indeed what if it calculated some universal existential problem like heat death and then turned itself off (aka suicide)?

                    > No, it's their fault for not understanding what you're not even telling them.

                    As with most things. Few people understand, or even know of, the Haber process and that it is the basis of our entire society. Yet we exist in spite of our ignorance.

          • OtterCoder 2420 days ago
            Your translation is definitely crude, but I'm completely fascinated by it.

              I think it speaks to the part of hipsterism that people react so strongly against. Why enjoy things 'ironically', or spend the effort becoming a connoisseur and a snob, when you could really just enjoy things on a basic level? Like what you like, and rejoice in it.

            • thinkfurther 2420 days ago
              :) I wish I had come up with something better than "genital area" heh, but "womb" also didn't strike me as quite right.
              • OtterCoder 2420 days ago
                Perhaps replace it with "body"? That has strong sexual and romantic connotations in English while also being tasteful.

                Alternately, "flower" or something more metaphorical. English poets tend to avoid the genitalia directly. We often follow the Nordic tradition of calling a spade anything but a spade.

      • visarga 2420 days ago
        You know that all species have to fight for survival? Now there's one more player on the scene. Nothing has changed. We still have to adapt or die. If there's anything that is the largest danger to humanity, it is humanity itself.
        • thinkfurther 2420 days ago
          Why not set off all the nukes on the planet? All species have to "fight for survival"; it'd just be one more thing to adapt to, so no actual change.

          Why did you even respond to my comment? I mean, electrons are buzzing about all the time, this is just another configuration, nothing changed.

          > If there's anything that is the largest danger to humanity is humanity itself.

          Yeah, and thanks for not helping.

          • visarga 2419 days ago
            Oh sorry, we should burn our computers and destroy all books about AI, that would stop it from appearing. Because we can outlaw matrix multiplication.
  • nbkvjones 2420 days ago
    there was a "discussion" on nadota.com about the bot, and the semipro player that OpenAI used for testing chimed in.

    apparently the set of items the bot chose to purchase from was limited[1] and recommended by the semipro tester. As someone who knows next to nothing about AI, my question is this: the bot was announced on stage as a blank slate, dumped into Dota, and built entirely from grinding countless games against itself; is it reasonable to pitch it this way while having this item constraint from an outside source? I also wonder what else was recommended by the tester, and then constrained.

    the "discussion" is linked below and the tester is the user sammyboy. Here's a warning though: nadota is 99% trolling, hate, idiocy, and garbage.

    [1] http://nadota.com/showthread.php?41718-terrifying-1v1-mid-AI...

    • gdb 2420 days ago
      (I work at OpenAI.)

      We'll have another blog post coming in the next few days. But as a sneak peek: we use self-play to learn everything that depends on an interaction with the opponent. Didn't need to with those that don't (e.g. fixed item builds, separately learned creep block).
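
      A hypothetical sketch of that split, only to make the idea concrete (the names below are made up; this is not OpenAI's code): the item build stays a fixed script, while anything opponent-dependent is deferred to a learned policy.

          # fixed, scripted build order (not learned)
          ITEM_BUILD = ["tango", "branches", "bottle", "wand", "boots"]

          class HybridAgent:
              def __init__(self, policy):
                  self.policy = policy    # learned via self-play (stubbed here)
                  self.next_item = 0

              def step(self, state):
                  # scripted part: buy the next item as soon as it's affordable
                  if self.next_item < len(ITEM_BUILD):
                      item = ITEM_BUILD[self.next_item]
                      if state["gold"] >= state["prices"][item]:
                          self.next_item += 1
                          return ("buy", item)
                  # opponent-dependent part: ask the learned policy
                  return self.policy(state)

          agent = HybridAgent(policy=lambda s: ("move", s["lane"]))
          print(agent.step({"gold": 100, "lane": "mid",
                            "prices": {"tango": 90, "branches": 50, "bottle": 660,
                                       "wand": 450, "boots": 500}}))  # ('buy', 'tango')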

      • hayd 2420 days ago
        So it's not exclusively unsupervised self-play? This contradicts how it was portrayed during the game/afterwards.
      • nbkvjones 2420 days ago
        That's good to hear.

        There's probably a huge disconnect between what an AI engineer means by a certain word and what the layman then comprehends. At the same time, I think we (nadota) probably latched onto certain words and downplayed the ones that would make it more impressive to us. Just a result of itching for information without there being much out there.

        Really cool work, really glad you're doing it in dota, and eagerly awaiting your blog post.

      • Flair 2416 days ago
        A new blog post is already up: https://blog.openai.com/more-on-dota-2
      • RSchaeffer 2420 days ago
        Can you give us a sneak peek of OpenAI's approach to 5v5 Dota? :D
    • dennybritz 2420 days ago
      The other big "hardcoded" constraint is most likely the usage of the bot API. The API itself is complex with lots of functionality and I would assume that researchers were extremely smart about picking out just the right API calls that are needed to make it work. That's very different from and much easier than training an agent based on raw keyboard/mouse input.
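
      To make that concrete, here is a purely hypothetical sketch of such a wrapper - the GameAPI class below is a made-up stand-in, not Valve's actual bot-scripting interface. The agent only ever sees a small fixed-size observation and a handful of discrete actions, which is far more tractable than raw pixels or keyboard/mouse input.

          class GameAPI:               # stand-in for the real, much larger API
              def hero_pos(self): return (0.0, 0.0)
              def hero_hp(self): return 1.0
              def nearest_creep(self): return (1.0, 0.5, 0.3)   # x, y, hp
              def move(self, dx, dy): pass
              def attack_nearest(self): pass

          class AgentInterface:
              """Expose only the handful of calls the agent needs."""
              ACTIONS = ["move_up", "move_down", "move_left", "move_right", "attack"]

              def __init__(self, api):
                  self.api = api

              def observe(self):
                  x, y = self.api.hero_pos()
                  cx, cy, chp = self.api.nearest_creep()
                  # fixed-size feature vector instead of a screen full of pixels
                  return [x, y, self.api.hero_hp(), cx - x, cy - y, chp]

              def act(self, action_id):
                  name = self.ACTIONS[action_id]
                  if name == "attack":
                      self.api.attack_nearest()
                  else:
                      dx, dy = {"move_up": (0, 1), "move_down": (0, -1),
                                "move_left": (-1, 0), "move_right": (1, 0)}[name]
                      self.api.move(dx, dy)

          iface = AgentInterface(GameAPI())
          print(iface.observe())   # [0.0, 0.0, 1.0, 1.0, 0.5, 0.3]
          iface.act(4)             # "attack"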
    • posterboy 2419 days ago
      > is it reasonable

      you ask the wrong question. Sure it's reasonable; no one's going to sue them

  • candiodari 2420 days ago
    > Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.

    I hate that people actually see things this way. Regulation to prevent AIs from taking over the world will never happen, because nation states won't cooperate on such rules [1]. Additionally you can't catch people using AIs to determine their actions.

    BUT what regulation can do is prevent people from competing with a few of Larry Page's and Elon Musk's businesses.

    [1] https://www.rt.com/news/395375-kalashnikov-automated-neural-...

    • sweden 2420 days ago
      The "taking over the world" scenario is too narrow a view of the whole complexity of AI regulation.

      There are other, more realistic and important scenarios that need to be addressed, for example: an AI's default behaviour when faced with the possibility of a car accident on a self-driving system. Should the car try to avoid the accident by jumping off a cliff, sacrificing the lives of the driver and of the people inside the car, or should it hit the next car, making the accident worse for the other person but potentially saving the lives of the people inside the car?

      • mizzao 2420 days ago
        You must be thinking of http://moralmachine.mit.edu/

        But in practice, this is more a study of people's ethical tendencies around the world than a practical way to address AI decision-making in bad situations.

    • icebraining 2420 days ago
      There are other dangers besides "taking over the world". In fact, Elon Musk should know, since his company's AI already got people killed.
  • Funnnny 2420 days ago
    It's worth noting that, shortly after they offered a SF arcana (an extremely rare item), 50 people went and beat the bot on the spot.
    • yskchu 2420 days ago
      The strategy that was used seems to be to surprise the AI and mess up its decision tree:

      The general strategy is to win by claiming first tower. At 0:00, you aggro the enemy creep wave so that they start following you. Then you walk around in a circle around the jungle, and the enemy wave will start to form a congo line that will follow you around. You then path around the jungle so that on the next wave spawn, you can aggro the wave again and continue to walk around in circles. The AI will burn glyph when your creep wave hits the tower, and for some reason it can't really decide between chasing you or defending the tower. So after about 5 minutes of doing this, your creep waves will eventually destroy the tower and you win the 1v1.

      From https://www.reddit.com/r/DotA2/comments/6t8qvs/openai_bots_w...

      • duskwuff 2420 days ago
        This is, admittedly, a weird enough strategy that a human player would probably get flustered and respond with suboptimal choices, just like the AI did. A human would probably adapt to the situation more quickly, though.

        Heck, I wouldn't be surprised if a few people are trying this strategy in pub games now, just for shits and giggles. :)

      • KVFinn 2420 days ago
        Someone else said they did by using their own courier to bait the bot too deep. Sounded like a harder way to win than this though.
    • derimagia 2420 days ago
      Well, almost all of them used the same strategy - I'm curious, beyond the first few "strategies", how many people will be able to beat it.
  • itchyjunk 2420 days ago
    Is no one upset that they claimed `learned from self play only` when clearly it isn't? Creep blocking? Really? It learned a left-over feature from the original Warcraft DotA, in a new standalone Dota 2 client, strictly through self-play? And that distinct animation canceling the bot does? Look at amateur players and pro players (pro players do it more often than actually needed, to `warm up` the muscles, similar to extra keystrokes in SC). I wouldn't be surprised if the bot was trained on tons of pro openings. :/

    Why not be clear about what has been done? Deepmind has said they do supervised learning first and other stuff on top of that. My guess is something similar to that happened.

  • jarsin 2420 days ago
    All the bots have to do is spew tons of toxic crap in chat.

    Then they will be like any other real life DOTA player.

    • perishabledave 2420 days ago
      I think the root of that is the high reliance on teammates and the level of commitment each game takes. Unlike League, there is no early surrender, so if you're in a rough game you have to endure the full duration, which can last around an hour. If you feel like it's your teammate, who is probably randomly paired with you, that is failing, it's easy to get frustrated. This is one of the many things that make Dota either extremely frustrating or extremely satisfying.
  • oldstrangers 2420 days ago
    Surely the author knows that neither Chess nor Go has been "solved". Quotes or no quotes, it's still very inaccurate.

    I'd also argue that chess and Go are both vastly more difficult problem sets. We literally do not have the computational power to solve a game of chess, and it's projected that we won't for another 50-100 years.

  • darod 2420 days ago
    While the author is probably right and this is no huge breakthrough in AI/ML, it is yet another example of AI/ML being able to do an activity that surpasses a human's ability. I am still waiting for an example of how AI/ML will complement a human's life as opposed to demonstrating an area where a human can be replaced.
    • JSONwebtoken 2420 days ago
      How can you not see applications where replacing humans complements another human's life? I'm sure if you thought about it, you could see how self-driving cars, real-time translation, and image-to-text for the visually impaired could be of some benefit to people's lives. These applications might be automating a human out of a job, but they are also far superior at their narrow task.

      You can argue the total societal value net of the societal cost of putting some people out of jobs, but saying you can't think of any application where AI can complement people's lives is being intentionally hyperbolic and is a bad start to a discussion.

      • darod 2420 days ago
        In this application, who does this AI serve? A self-driving car doesn't help me become a better driver. It removes me from the driving equation altogether.
  • musashizak 2420 days ago
    There is a lot of hype in AI, but also in neuroscience. Actually, there is no scientific evidence that mind and consciousness are material and born from the brain. Also, emotions are really important in the logic and thinking process. So without consciousness and emotions we can't have real thinking on a machine.
  • craigsmansion 2420 days ago
    I hope someone can clarify.

    What are the definitions of AI and game complexity in this field?

    These all sound like very exciting developments. As I read about them, a lot of the time games such as Dota and Starcraft are touted as more complex than Chess or Go, but - at least with Starcraft - the AIs are limited in their number of actions to level the playing field. Isn't that like claiming humans can run faster than greyhounds, provided that the greyhounds only get to use two legs? Or maybe claiming that humans are better at chess when computers are restricted to the maximum human ply depth?

    I also noticed a claim - again, in a Starcraft-related article - that the AIs previously couldn't beat the built-in AIs (the computer players). What type of AIs are considered as challengers here? Only blank-slate self-learning AIs?

    • Tangokat 2420 days ago
      The reason why the AI is limited in number of actions is because the strategy is interesting, not so much the mechanical skill (think aimbots in FPS games). We know the AI will be better at clicking and moving units and would be capable of moving every unit optimally at all times. But is it also capable of coming up with creative strategies that humans could use as well?

      I think both situations would be interesting to be honest. Have an unrestricted AI and a restricted AI - using ML they would probably develop vastly different ways of playing.

    • lyndonjohnsonbe 2420 days ago
      I know a little about AI in the first StarCraft. Writing an AI to use the same interface as humans is difficult. To lower the bar for more people to play around with it, the AI reads data from memory directly. This creates an imbalance that they try to balance out with extra rules like rate limiting.
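
      One simple way to implement that kind of rate limiting is a token bucket that caps the bot at a human-plausible actions-per-minute. A sketch (my own, with illustrative numbers - not any framework's actual mechanism):

          import time

          class APMLimiter:
              def __init__(self, apm=300, burst=10):
                  self.rate = apm / 60.0     # tokens replenished per second
                  self.capacity = burst
                  self.tokens = burst
                  self.last = time.monotonic()

              def try_act(self):
                  now = time.monotonic()
                  self.tokens = min(self.capacity,
                                    self.tokens + (now - self.last) * self.rate)
                  self.last = now
                  if self.tokens >= 1.0:
                      self.tokens -= 1.0
                      return True            # action allowed
                  return False               # drop (or queue) the action

          limiter = APMLimiter(apm=300)
          issued = sum(limiter.try_act() for _ in range(1000))
          print(f"{issued} of 1000 back-to-back actions allowed")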
  • DSrcl 2420 days ago
    A lot of Dota's mechanics are designed with the assumption that the player is human - e.g. skills that can be programmed to be released perfectly but are hard for a human (even a pro) to execute reliably (Shadow Fiend's raze is one of them).
  • aorth 2420 days ago
    I'm still just trying to figure out what "Dota" stands for. Is it an acronym? Neither the Valve website nor Wikipedia clarifies this!
    • natural219 2420 days ago
      Funny story. The original term "DOTA" was an acronym for "Defense of the Ancients", the original Warcraft 3 mod on which the later game, "Dota 2", was based. Since the original DOTA was a community mod using assets from a commercial (i.e., trademarked) Blizzard property, it was in a sort of gray area in regards to re-commercialization of what was essentially the same gameplay and characters. To avoid giving Blizzard further standing to reclaim its trademarked properties, Valve registered the trademark "Dota" specifically as a word that doesn't refer to anything, officially eliminating the acronym from the name and somehow distancing the new engine from trouble with the trademark.
    • detaro 2420 days ago
      The original Warcraft mod was called Defense of the Ancients; the Valve title Dota 2, as far as I know, has always been just "Dota".
  • JonathanLIabc 2420 days ago
    I see Elon Musk's tweets as a warning about the potential of AI, not hype about AI or its current stage.

    The most impressive part to me is that the bots are self-taught. On the other hand, AlphaGo is supervised. They are different (not to say which one is better).

  • colordrops 2420 days ago
    Big assumptions were made by the author of this post, the biggest being that they used an API to get access to game data rather than pixels. If the AI were limited to pixels, then the achievement would be much greater.
    • dennybritz 2420 days ago
      Author here. I agree this is an assumption, but based on my experience it is very unlikely that this was trained on pixels. Training would've been orders of magnitude more expensive. If it really is trained on pixel input I would be shocked and extremely impressed, and parts of the post would not apply.
    • dvt 2420 days ago
      Not really, as the model was still overfit for highly technical Shadowfiend 1v1 mid play. With that said, it most likely used the API.
    • taneq 2420 days ago
      Didn't they say (or at least strongly imply) that it uses the standard DOTA bot interface?
    • outdraft 2420 days ago
      Fairly certain they're using the api valve released earlier. https://developer.valvesoftware.com/wiki/Dota_Bot_Scripting
    • snowmaker 2420 days ago
      Maybe the OpenAI team can clarify this point? I agree it is a key factor.
  • velobro 2420 days ago
    The bot did not have its creep blocks hard-coded. In the event, the player purposefully messed up his block to see if the bot would respond, and it did.
  • zaroth 2420 days ago
    Where do bots go to fight other bots in millions of games, and algorithms compete for superiority? I assume there must be an ongoing "marketplace" to match bots and run the simulations.
    • astrojams 2420 days ago
      They play themselves. No need to play other bots.
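
      A minimal sketch of what that loop tends to look like (an assumption on my part, not a description of OpenAI's code): the agent trains against a pool of its own past checkpoints, which gives some opponent variety without any external bots.

          import copy, random

          class Policy:
              def __init__(self, skill=0.0):
                  self.skill = skill

              def improve(self):
                  self.skill += random.uniform(0.0, 0.1)

          def beats(a, b):
              # toy Elo-style match: higher skill wins more often
              return random.random() < 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 4.0))

          agent, pool = Policy(), [Policy()]
          for generation in range(100):
              opponent = random.choice(pool)      # sample a past self
              if beats(agent, opponent):
                  agent.improve()                 # crude stand-in for a gradient step
              pool.append(copy.deepcopy(agent))   # snapshot for future opponents

          print(f"final skill: {agent.skill:.2f}, pool size: {len(pool)}")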
      • zaroth 2418 days ago
        Is that a fact? You don't ever see better performance by training in a heterogeneous environment? Or just, no one does it this way?
  • dukovni 2420 days ago
    I would like to see an AI-based search engine. So if I search for something, it gives the CORRECT results back, not SEO'd results that are almost always useless. A search engine like this could make Google search look like outdated tech.