119 comments

  • Animats 1596 days ago
    This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs. They have to operate in a world, interact with others, survive, and accomplish goals. Simulator technology is now good enough that you can do quite realistic worlds. Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

    Robotics has the same issues, but you spend all your time fussing with the mechanical machinery. Carmack is a game developer; he can easily connect whatever he's doing to some kind of game engine.

    (Back in the 1990s, I was headed in that direction, got stuck because physics engines were no good, made some progress on physics engines, and sold off that technology. Never got back to the AI part. I'd been headed in a direction we now think is a dead end, anyway. I was trying to use adaptive model-based control as a form of machine learning. You observe a black box's inputs and outputs and try to predict the black box. The internal model has delays, multipliers, integrators, and such. All of these have tuning parameters. You try to guess at the internal model, tune it, see what it gets wrong, try some permutations of the model, keep the winners, dump the losers, repeat. It turns out that the road to machine learning is a huge number of dumb nodes, not a small number of complicated ones. Oh well.)
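
    A minimal Python sketch of that guess-tune-permute loop (the black box, the model family and all numbers are invented purely for illustration):

      import random

      def black_box(x):
          # The unknown system we are trying to imitate (a stand-in).
          return 0.7 * x + 3.0

      def model(x, params):
          # Candidate internal model: a gain and an offset as tuning parameters.
          gain, offset = params
          return gain * x + offset

      def error(params, samples):
          # How badly a candidate model predicts the black box on observed data.
          return sum((black_box(x) - model(x, params)) ** 2 for x in samples)

      samples = [random.uniform(-10, 10) for _ in range(50)]
      population = [(random.uniform(-2, 2), random.uniform(-5, 5)) for _ in range(20)]

      for generation in range(100):
          # Keep the winners, dump the losers...
          population.sort(key=lambda p: error(p, samples))
          winners = population[:5]
          # ...and try some permutations (mutations) of the surviving models.
          population = winners + [
              (g + random.gauss(0, 0.1), o + random.gauss(0, 0.1))
              for g, o in winners for _ in range(3)
          ]

      print("best model:", population[0])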

    • iandanforth 1595 days ago
      Hi John! Animats was very cool! As you know game physics still kinda sucks for this work. Unity/Bullet/MuJoCo are the best we have and even they have limited body collision counts. Luckily we've now got some GPU physics acceleration, but IMO it's not enough.

      What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects. My guess is that Google/Stadia or Unity/UnityML are better places to do that work than Facebook, but if Carmack decides to learn physics engines* and make a dent I'm sure he will.

      Until our environments are rich and diverse our agents will remain limited.

      *Learn more about them, that is - I'm sure his knowledge already exceeds most people's.

      • Animats 1595 days ago
        > What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects.

        Improbable tried to do that with Spatial OS. They spent $500 million on it.[1] Read the linked article. No big game company uses it, because they cut a deal with Google so their system has to run on Google's servers. It costs too much there, and Google can turn off your air supply any time they want to, so there's a huge business risk.

        [1] https://improbable.io/blog/the-future-of-the-game-engine

        • iandanforth 1595 days ago
          Agree, as a game engine this might power some high-end Stadia games with crazy physics, but the real value is in high-complexity environments for virtual agents.

          Companies like SideFX are also doing really interesting work in distributed simulations (e.g. Houdini).

    • PeterStuer 1596 days ago
      I did my MSc thesis in AI back then, writing a dedicated simulator for a specific robot used in autonomous systems research. You find that, especially when trying to faithfully reproduce sensor signals, you need to dive deep into not just the physics of e.g. infrared light, but also the specific electronic operation of the sensor itself.

      But that kind of realism is not needed for all AGI research.

      I also spent some years on using evolutionary algorithms to evolve control networks for simple robots. The computational resources available at the time were rather limited though. Should be more promising these days now that your commodity gaming pc can spew out in 30 minutes what back then took all the labs networked machines running each night for a few weeks.

      • ivanhoe 1596 days ago
        Modeling everything realistically is super hard - any interaction with the real world is so full of the weirdest unexpected electric and mechanical issues. Anyone who hasn't tried it first-hand can't imagine half of the ways it will almost certainly go wrong on the first try :) ... but as you've said, for developing AGI as a concept, simplified worlds should work just fine.
        • joe_the_user 1595 days ago
          Indeed, I don't think humans themselves model the world realistically. I think they model the world closely enough, but are able to adapt their prediction process for situations when their predictions don't work and/or their knowledge is inadequate. To model such a "satisficing" process, you don't need exact simulation.
        • amatic 1596 days ago
          True. Simulations are always a simplification of reality and leave a lot out of the picture.

          On the flip side, successful robotics concepts might have more chance of being relevant to AGI.

    • the_af 1596 days ago
      > This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs.

      I don't think so. Game NPCs don't need AI, which would be way overkill; they just need to provide the illusion of agency. I think for general AI you need a field where any other option would be suboptimal or inadequate, but in videogames general AI is the suboptimal option... more cost effective is to just fake it!

      • seanwilson 1596 days ago
        > Game NPCs don't need AI

        > ... more cost effective is to just fake it!

        Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.

        A game that could make NPCs react to what the player does dynamically, while also creating a cohesive story for the player to experience, would be absolutely groundbreaking in my opinion.

        This is more in the realms of AI story generation but I haven't seen any work on this that generates stories you would ever mistake as coming from a human (please correct me if I'm wrong) so it would be amazing to see some progress here.

        • nwallin 1595 days ago
          You're talking about different problems.

          Story AI is basically having a writer sit down and write a branching story tree, with real writing the whole way. At best it's a manually coded directed acyclic graph.

          Tactical AI, i.e. having the bad guy soldiers move about the battlefield and shoot back at you in a realistic manner, is 100% about faking it. It's better to despawn non-visible, badly placed enemies and spawn well-placed non-visible ones than to have some super smart AI relocate the badly placed enemies into better locations. It's better to have simple mechanisms that lead to hard-to-predict behavior than complex mechanisms that lead to behavior players instinctively see through.
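
          A hedged sketch of that despawn/respawn trick (the coordinates, thresholds and visibility test are all made up for illustration):

            import random

            AMBUSH_SPOTS = [(5, 2), (8, 7), (3, 9)]  # designer-placed good positions

            def visible(player, enemy):
                # Stand-in visibility test: anything within 4 units is "on screen".
                dx, dy = player[0] - enemy[0], player[1] - enemy[1]
                return dx * dx + dy * dy < 16

            def badly_placed(player, enemy):
                # Stand-in: "badly placed" here just means too far away to matter.
                dx, dy = player[0] - enemy[0], player[1] - enemy[1]
                return dx * dx + dy * dy > 100

            def retarget(player, enemies):
                # Instead of a smart AI walking an enemy to a better spot, silently
                # move badly placed enemies the player can't see to ambush points.
                return [random.choice(AMBUSH_SPOTS)
                        if not visible(player, e) and badly_placed(player, e) else e
                        for e in enemies]

            print(retarget((0, 0), [(20, 20), (2, 1), (30, 5)]))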

          There was an amazing presentation at GDC maybe 3 years ago that perfectly articulated this. The game was something about rockets chasing each other. I wish I could find the link.

        • Ntrails 1595 days ago
          > Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.

          That's not entirely true - it's just that no game studios are willing to compromise on graphics and art for something silly like the ability to impact the game world.

          • seanwilson 1595 days ago
            I'm not sure if you're being sarcastic about the "something silly" part, but do you have any examples of any games (indie, commercial or academic) that let you meaningfully impact the game world?

            I think they don't exist because it's an exceptionally difficult problem, even for games with lo-fi graphics or text only. I've found it hard to find any AI projects that generate stories or plots that are remotely compelling.

            Big studio game companies push "your choices matter" as a selling point as well, but few deliver.

            • random023987 1595 days ago
              > any examples of any games [...] that let you meaningfully impact the game world?

              Dwarf Fortress

            • edavison1 1595 days ago
              Fallout: New Vegas, a bunch of Telltale games, Myst/Riven, dozens of JRPGs (Chrono Trigger/Cross come to mind immediately) with branching endings and characters who survive or die based on player actions. Yeah, games are made all the time where the player has a meaningful impact on the game world.
            • orbifold 1595 days ago
              Minecraft is successful precisely because you can meaningfully impact the game world.
              • seanwilson 1595 days ago
                Minecraft doesn't have an overarching story or complex NPCs though.
        • the_af 1596 days ago
          Agreed about the meaningful choices and dynamically generated reactions from NPCs, but general AI is not needed for this in my opinion.

          You also have to consider whether the complaints of "many" players matter when publishing a game. A percentage of vocal players will complain no matter what. Yes, they will complain even if you somehow implement true AI!

          • seanwilson 1595 days ago
            > Agreed about the meaningful choices and dynamically generated reactions from NPCs, but general AI is not needed for this in my opinion.

            Maybe, but it would be an impressive demonstration of AI, and very different to what has been shown for Go, Chess and StarCraft.

            I think a compelling AI-written short story, for example, would be leagues ahead of what is required to write a convincing chatbot: you need an overarching plot, subplots, multiple characters interacting in the world, tracking of characters' beliefs and knowledge, and tracking of what the reader must be thinking/feeling.

            It would likely rely a lot on understanding real-world and cultural knowledge though - Go and StarCraft are much cleaner in comparison.

            > A percentage of vocal players will complain no matter what.

            Yep, but I can't think of a single game that has a plot that meaningfully adapts to how the player plays. Either there are many endings but the path to get to each is short, or all the choices converge quickly back into the same path.

            Again, please correct me if I'm wrong, but I've looked quite hard for examples of innovation in the above recently and haven't found much. You can find papers on e.g. automated story generation or game quest generation on Google Scholar from the last 10 years, but the examples I found weren't that compelling.

            • the_af 1595 days ago
              AI-generated "true" fiction seems like scifi to me.

              Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

              ... what is "true" or "good" fiction is up for debate. In fact, it's a debate that can never be settled, because there is no right answer except how it feels to you, your friends and the authors you respect.

              But that said, I seriously doubt it would fool me, and I think it won't be within reach of an AI any time soon, or ever, not without creating an artificial human being from scratch. And maybe not even then, because how many real people can write compelling fiction anyway? :)

              • seanwilson 1595 days ago
                > Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

                It feels like you should be able to procedurally generate at least passable stories by combining common story arcs, templates, character archetypes etc. without too much effort, but I've yet to find any compelling examples of this anywhere. When you look into the problem more, you realise it's a lot harder than it seems.

                We've seen lots of examples of chatbots that are said to pass the Turing Test but really aren't that intelligent at all, so a "Turing Test of fiction writing" as you put it sounds like a super interesting next step to me.

      • chrisweekly 1595 days ago
        If his true purpose were to improve on videogames' NPCs, you're 100% right that working on "real" AGI would be overkill. But in this case, someone w/ deep background in videogaming intends to use that context as a means to the end of AGI R&D -- a possibly subtle, but crucial distinction.
      • Aeolun 1596 days ago
        If you have general AI of enough sophistication to use in a game, haven’t you just created a virtual human?
        • the_af 1596 days ago
          Yes, probably. At which point, the "videogame" part is irrelevant.
      • dorgo 1595 days ago
        > more cost effective is to just fake it

        I struggle to see the distinction. Isn't the Turing test defined as 'faking a human (or human intelligence) convincingly enough'?

        There is a saying: the benefit of being smart is that you can pretend to be stupid. The opposite is more difficult.

        • the_af 1594 days ago
          "Fake it" as in cutting corners and performing sleighs of hand. Instead of moving enemy soldiers strategically, just spawn them close but out of sight, because the player won't know better. This doesn't help if you truly want to devise a military AI and is only useful for games. And that's just one example.

          I think the Turing Test is no longer thought of as an adequate metric for general AI (if it ever was to begin with).

      • strbean 1595 days ago
        I don't think parent was referring to game NPCs as a reasonable application of AGI, but rather as a reasonable domain conducive to the development of AGI.
        • the_af 1594 days ago
          Yes, I understand this and was replying to that interpretation. I think it's not a particularly conducive domain because the incentive is just to fake it (because that's enough for games). A better domain would be one where faking it just won't cut it.
    • nikki93 1596 days ago
      Yeah I believe in this game / simulated world NPC idea too. To get the kind of complexity we want we either need sensors in the real world or interfacing in a virtual world that humans bring complexity to (probably both -- the humans are part of the sensing technology to start). Things like AlphaZero etc. got good cuz they had a simulatable model of the world (just a chess board + next state function in their case). We need increasingly complex and interesting forms of that.

      In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.

      • Animats 1596 days ago
        I've long taken the position that intelligence is mostly about getting through the next 10-30 seconds of life without screwing up. Not falling down, not running into stuff, not getting hurt, not breaking things, making some progress on the current task. Common sense. Most of animal brains, and a large fraction of the human brain, is devoted to managing that. On top of that is some kind of coarse planner giving goals to the lower level systems.

        This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.

        "Common sense" can be though of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.

        Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog".[1] I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

        Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.

        [1] http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf

        • baddox 1596 days ago
          I remember this TED talk many years ago where the speaker proposes that intelligence is maximizing the future options available to you:

          https://www.ted.com/talks/alex_wissner_gross_a_new_equation_...

        • PeterStuer 1596 days ago
          The genius of Cog was that it provided an accepted common framework towards building a grounded, embodied AI. Rod was the first I saw to have a literal roadmap on the wall of PhD theses laid out around a common research platform, Cog, in this branch of AI.

          In a sense, the journey was the reward rather than the very unlikely short term outcome back then.

        • MaximumYComb 1596 days ago
          I was thinking about the manipulation issue tonight. I'd been throwing a tennis ball in the pool with my kids and I realised how instinctual my ability to catch was. A ball leaves my kid's hands and I move my hand to a position, fingers just wide enough for a ball, and catch it. All of it happens in a fraction of a second.

          The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.

          • ufmace 1595 days ago
            I don't know if I'd call it modelling the physics of a ball in flight exactly. It kind of seems like the brain has evolved a pathway to be able to predict how ballistic projectiles - affected only by gravity and momentum - move, that it automatically applies to things.

            What makes me think of it like that is hearing about how the brain was actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system that compensates for how it actually travels in order to hit a target with the thing.
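
            A toy illustration of how quickly the two cases diverge (all numbers arbitrary, vertical launch assumed):

              # Where a ballistic ball vs. a continuously accelerating rocket ends
              # up after t seconds, launched with the same initial speed.
              v0, g, a = 20.0, 9.8, 15.0  # initial speed (m/s), gravity, thrust accel

              for t in [0.5, 1.0, 2.0]:
                  ball = v0 * t - 0.5 * g * t * t          # what intuition models well
                  rocket = v0 * t + 0.5 * (a - g) * t * t  # what we must train for
                  print(f"t={t}s  ball={ball:6.1f} m  rocket={rocket:6.1f} m")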

          • jack_pp 1596 days ago
            You mean the brain has evolved over millennia to model the physics of the world and specialize in catching and throwing things.
            • Mtinie 1596 days ago
              Absolutely. It also requires more than the evolutionary adaptations to do it. The skill requires the catching individual to have practiced the specific motions enough times previously to become proficient to the point it becomes second nature.

              Compare what happens during a practice game of catch between six-year-old, first-time Little Leaguers and MLB starters.

          • jacquesm 1596 days ago
            Dogs can do this too. And quite a bit more impressively than most humans.
            • ninkendo 1595 days ago
              It’s always impressive to watch how good my dog is at anticipating the position of the ball way ahead of time.

              If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it.). He’s very hard to fool, too, and will learn quickly to not commit to a particular direction too quickly if it looks like I’m faking a throw.

              I always feel like he’d make a great soccer goalie if he had a human body.

        • losvedir 1595 days ago
          That's kind of the thesis Rodolfo Llinas puts forward in a book of his, I of the Vortex[0], although more about consciousness than intelligence. That is, consciousness is the machinery that developed in order for us to predict the next short while and control our body through it.

          [0] https://mitpress.mit.edu/books/i-vortex

        • visarga 1596 days ago
          > On top of that is some kind of coarse planner giving goals to the lower level systems.

          There are counterexamples, such as AlphaGo which is all about planning and deep thinking. It also combines learning with evolution (genetic selection).

          • alpaca128 1596 days ago
            True, but AlphaGo is specialized on a very specific task where planning and deep thinking is a basic requirement for high level play.

            We don't need to think 10 "turns" ahead when trying to walk through a door, we just try to push or pull on it. And if the door is locked or if there's another person coming from the opposite side we'll handle that situation when we come across it.

            • toxik 1596 days ago
              That’s not true, human beings plan ahead when opening doors more than many things — should I try to open this bathroom door or will that make it awkward if it’s locked and I have to explain that to my coworker afterwards? Should I keep this door open for a while so the guy behind me gets through as well? Not to mention that people typically route plan at doorways.

              Doors are basically planning triggers more than many things.

              • username90 1596 days ago
                Horses don't plan though, and they are much better than computers at a lot of tasks. If we can make a computer as smart as a horse, then we can likely also make it as smart as a human by bolting some planning logic on top of that.
                • Mtinie 1596 days ago
                  “Horses don’t plan though[...]”

                  Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.

                  Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:

                  1. Equestrian jumping events; horses often balk before a hurdle

                  2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.

                  • username90 1596 days ago
                    The context was this quote:

                    > intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

                    In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.

                    Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”

                    https://thehorse.com/16967/navigating-barriers-can-horses-wa...

                    • visarga 1595 days ago
                      > intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

                      This is probably a variant of Andrew Ng's claim that ML can solve anything a human could solve in one second, with enough training data.

                      But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex. It's for those rare situations where we have no cached response, where we need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both of them are necessary tools for taking decisions, but they are optimised for different situations.
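
                      A toy sketch of that model-free vs. model-based split (the "door" states, actions and numbers are all invented):

                        # Model-free ("reflex"): a cached value table, updated by experience.
                        Q = {("door", "push"): 0.0, ("door", "pull"): 0.0}

                        def reflex_update(state, action, reward, lr=0.5):
                            Q[(state, action)] += lr * (reward - Q[(state, action)])

                        # Model-based ("thinking"): a known transition model we can plan
                        # through, even for a situation with no cached response yet.
                        model = {("door", "push"): ("stuck", 0), ("door", "pull"): ("open", 1)}

                        def plan(state):
                            return max(["push", "pull"], key=lambda a: model[(state, a)][1])

                        reflex_update("door", "push", 0)  # learn from one bad experience
                        print(Q, plan("door"))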

                    • LanceH 1595 days ago
                      In my experience they learn to open gates. They certainly aren't trained to do this, but learn from watching people or each other.

                      They will also open a gate to let another horse out of their stall which I would count as some form of planning.

                      Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.

                      • riversflow 1595 days ago
                        > They can manage to be surprised by the same things every single day.

                        Sounds like most human beings, given an unpleasant stimulus, for example a spider.

                    • Mtinie 1596 days ago
                      Thank you for the context and new resources to learn from.
            • visarga 1595 days ago
              It took us millions/billions of years of evolution and a couple of years of training in real life to be able to walk through a door. It's not a simple task even for humans. It requires maintaining a dynamic equilibrium which is basically solving a differential equation just to keep from falling.
      • visarga 1596 days ago
        Board games have been solved. Now the big boys are working on StarCraft and Dota 2, and it takes a shitload of money to pay for the compute and simulation necessary to train them. Not something you can do on the cheap.
        • marcusverus 1595 days ago
          DeepMind's StarCraft AIs are already competing at the Grandmaster level[0], which is the highest tier of competitive play and represents the top 0.6% of competitors.

          I am pleasantly surprised by how quickly they have been tackling big new decision spaces.

          [0] https://deepmind.com/blog/article/AlphaStar-Grandmaster-leve...

        • toxik 1596 days ago
          The next arena is multi-task learning. Sure, I lose to specialized intelligences in each separate game, but I can beat the computer at basically every other game, including the game of coming up with new fun games.
      • defterGoose 1596 days ago
        Perhaps the first sentient program will be born in an MMORPG?
    • fastball 1596 days ago
      • 0-_-0 1596 days ago
        Puzzle game with a great story. I recommend it to the HN people.
        • gray_-_wolf 1596 days ago
          I loved it, but my issue with that game was severe motion sickness after 20-30 minutes... never finished it :(
          • Abishek_Muthian 1596 days ago
            Thanks for the warning, I cannot even play Minecraft. I wish Carmack had tackled motion sickness in VR/Games before switching to AI; he did talk about it in the interviews as being a limitation though.

            There's a need gap[1] to solve Simulation Sickness in VR and First Person games.

            [1]: https://needgap.com/problems/7-simulation-sickness-in-vr-and...

            • phaus 1586 days ago
              I was under the impression that simulation sickness was largely solved outside of extreme cases (like a VR Portal game). I thought we were just waiting for hardware to catch up.

              Years ago John said if you have 20k and a dedicated room you can make a convincing VR experience that won't make anyone sick.

          • MichaelHoste 1596 days ago
            I loved and finished the game but I had the same issue on two occasions. I felt sick and I had to stop. Now I know it was not the food I had just eaten but the game itself, thank you!
      • xena 1596 days ago
        Who would have thought that a philosophical puzzle game could come from the creators of Serious Sam.
    • guelo 1596 days ago
      Yes but it sounds weird to me because Carmack has spent his whole life involved with games but has not been known for an interest in game AI before.
      • a1studmuffin 1596 days ago
        Game AI has nothing to do with AGI (or even regular AI) beyond the surface level description OP provided. The reason game AI hasn't progressed in the last few decades isn't because technology is holding us back - after all we can already achieve impressive machine learning feats using current-gen GPUs - it's because bad NPC AI is by design, so players can learn to overcome them and succeed. Very few people want to play a game that always beats them. Most games use simple state machines or behaviour trees with predictable outcomes for their NPCs because it would be a waste of effort to do anything more, and actually negatively impact the game by making it less fun and burning engineering time on things the player won't benefit from.
        • systemcluster 1596 days ago
          Modern big-budget games increasingly don't use behavior trees and state machines for their AI anymore. This approach has been superseded by technologies like GOAP [1] or HTN [2] (a minimal sketch follows the links below). These are computationally very expensive, especially within the constrained computation budget of a real-time game.

          While it's true that game AI is often held back by game design decisions, it's not true that technology isn't holding us back in this area as well.

          [1] https://www.youtube.com/watch?v=gm7K68663rA (GDC Talk: Goal-Oriented Action Planning: Ten Years of AI Programming)

          [2] https://en.wikipedia.org/wiki/Hierarchical_task_network
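
          A minimal GOAP-flavored sketch in Python (not any engine's real API; the actions, preconditions and effects are invented, and the brute-force search stands in for the A* search real planners use):

            from itertools import permutations

            # Actions as (preconditions, effects) over a world-state dict.
            ACTIONS = {
                "get_axe":   ({"near_axe": True}, {"has_axe": True}),
                "goto_axe":  ({},                 {"near_axe": True}),
                "chop_tree": ({"has_axe": True},  {"has_wood": True}),
            }

            def apply_action(state, action):
                # Apply an action in place if its preconditions hold.
                pre, eff = ACTIONS[action]
                if all(state.get(k) == v for k, v in pre.items()):
                    state.update(eff)
                    return True
                return False

            def plan(state, goal):
                # Try ever-longer action sequences until one reaches the goal.
                for n in range(1, len(ACTIONS) + 1):
                    for seq in permutations(ACTIONS, n):
                        s = dict(state)
                        if all(apply_action(s, a) for a in seq) and \
                           all(s.get(k) == v for k, v in goal.items()):
                            return list(seq)
                return None

            print(plan({}, {"has_wood": True}))  # ['goto_axe', 'get_axe', 'chop_tree']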

        • PeterStuer 1596 days ago
          You don't optimize for competitive performance (it is trivial to design a game AI that beats every player every time, given that you have control over tilting the playing field). You use the AI for bounded response variations (all NPCs act 'natural' and different from the others) and engaging procedural generation (here is a chapter of a story, now draft an entire zone with landscape, NPCs, cities, quest storylines, etc.).

          Games like PvE MMOs need to find a way to produce engaging content faster than it can be consumed, at a price point that is economically viable. The way they do it now is by having the players repeat the same content over and over again with a diminishing-returns variable-reward behavioral reinforcement system.

          • antris 1596 days ago
            One of the design goals of game AIs is also that they are fun to play against. If they are too smart and coordinated, they throw the player off in a way that feels "unfair".

            You have to hit a spot where they are sometimes a bit surprising, but not in a way that cannot be reacted to quickly on your feet. This throws realism out of the window.

            • wtetzner 1595 days ago
              But why would good game AI have to make the characters better than the player? The focus on NPC AI should be to make them interesting, not necessarily really tough opponents.
        • hnaccy 1596 days ago
          You're assuming game AI means an agent that directly competes with the player.

          Plenty of games have NPCs with scripted routines, dialog, triggers, etc that could be improved either by reducing the dev cost to generate them without reducing quality or reacting to player behavior more naturally.

          • sobani 1596 days ago
            Except in those cases it's even more important that the NPCs don't do anything unexpected. Those NPCs are like actors in a stage play; you don't want them to come up with their own lines and confuse the audience.

            Don't forget there is a certain randomness to 'more natural', and with randomness you're going to invite Murphy to the party.

            • CryptoPunk 1596 days ago
              Not all NPCs have to be part of a script. They can just be additional characters that add life and realism to the simulated world.

              A weapons maker with a unique backstory and realistic conversations that reference it is more interesting than a bot, and opens up the possibility of unscripted side-quests.

            • croon 1596 days ago
              In many cases maybe. Personally I would love to play a game with a world inhabited by "individual" NPC AIs, where they can influence the world as much as I can, with no specific act structure or story arc.

              Some significant part of gaming is risk-free experimentation in a simulated world. The experiments possible are bounded by the simulation quality of the world. More realistic NPC behavior would open up for a lot more games.

              • goostavos 1595 days ago
                There is an older game called STALKER which had (limited) elements of what you describe: autonomous NPCs which influence the game world. Even though it was limited - the NPCs just battled for control of certain territories - I always thought it was a really neat mechanic. It made the world feel more 'real' and alive.

                You would see these factions fighting and gaining/losing territory throughout the game. You could choose to help them or just pass on by, but the action progressed regardless of your choice.

            • z3phyr 1596 days ago
              It would be fun if they could ad-lib.
        • munificent 1595 days ago
          > it's because bad NPC AI is by design, so players can learn to overcome them and succeed.

          That's part of it, but there are other factors too. The more complex the AI, the harder (i.e. more expensive) the game is to tune and test. Game producers and designers are naturally very uncomfortable shipping a game whose behavior they can't reasonably predict.

          This is a big part of why gamers always talk about loving procedural generation in games but so few games actually do it. When the software can produce a combinatorial number of play experiences, it's really hard to ensure that most of the ones players will encounter are fun.

        • oblio 1596 days ago
          I'd give anything for a "moral"/"nice-guy" AGI that could replace my Dota 2 team mates and opponents.
        • nikki93 1596 days ago
          If the "game" is survival and selection for attention (to get compute space, so literal survival) from humans, "interestingness" is what will matter and I think what people will end up finding most interesting is NPCs that feel like other identities they can empathize with and interact with -- work with to build things, spend time in a community with, fall in love with and so on. This really is about virtual world construction more than simple competitive games. I think it may not end up looking like any particular sense of "AGI" we can currently imagine (I really think we can only properly imagine it exactly when it exists, and it seems not to yet), but it will probably be "distributed" enough that the interfacing may not feel like anything at any one particular site.

          The game may even be played by saying things on Twitter and becoming interesting enough that people DM you and try to build a relationship with you, while you're a bot.

      • baq 1596 days ago
        Half. The other half was spent building rockets (Armadillo Aerospace) and VR tech, which arguably is more interesting in its AR industrial or transportation applications.
    • tomaskafka 1596 days ago
      I love the idea of using the Sims as a platform, as it's a place where it will be blatantly obvious that 'effective' AI without built-in ethics is repulsively inhuman.
    • aruggirello 1595 days ago
      As a side note, if we're living in a simulation [0], I'd really like to know who's "real" vs. who's an AI bot out there...

      [0] https://en.wikipedia.org/wiki/Simulation_hypothesis

      • mindfulmonkey 1595 days ago
        Hate to break it to you, but we're all NPCs
    • anotheryou 1596 days ago
      It however has a huge bias towards human-like AI. Maybe it's not smart to narrow down to copying us so quickly.

      I mean: maybe it's more efficient to have it read all of wikipedia really well before adding all the other noisy senses.

    • goatinaboat 1596 days ago
      > Simulator technology is now good enough that you can do quite realistic worlds.

      It is nowhere near good enough to avoid running into Moravec’s Paradox like a brick wall as soon as you try and apply it outside the simulator.

    • simonh 1596 days ago
      I don't think that approach is going to work. For any clearly bounded and delineated task, such as a game, the optimal, lowest-energy and lowest-cost solution is not AGI but a custom-tuned specialist solver. This is why I don't think Deep Blue or AlphaGo are paths towards AGI. They are just very highly advanced single-task solvers.

      Now AlphaGo and its implementation framework are much more sophisticated than Deep Blue. It's actually a framework for making single-task solvers, but that's all. The fact it can make more than one single-task solver doesn't make it general in the sense we mean it in the term AGI. AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences. It's like an image classifier that can identify an apple, but has no idea what an apple is, or even what things are.

      To build an AGI we need a way to genuinely model and manipulate objects, concepts and decisions. What's happened in the last few decades is we've skipped past all that hard work, to land on quick solutions to specific problems. That's achieved impressive, valuable results but I don't think it's a path to AGI. We need to go back to the hard problems of “computer models of the fundamental mechanisms of thought.”[0]

      [0] https://www.theatlantic.com/magazine/archive/2013/11/the-man...

      • whack 1596 days ago
        > AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences

        There are indeed some people who learn chess by "reading the manual". Or learn a language by memorizing grammar rules. Or learn how to build a business by studying MBA business theories.

        There are also tons of other people who do the opposite. They learn by simply doing and observing. I personally have no idea what an "adverb" is, but people seem perfectly happy with the way I write and communicate my thoughts. Would my English skills count as general intelligence, or am I just a pattern-recognition automaton? I won't dispute the pattern-recognition part, but I somehow don't feel like an automaton.

        I can certainly see the potential upsides of learning some theory and reasoning from first principles. But that seems too high a bar for general intelligence. I would argue that the vast majority of human decisions and actions are made on the basis of pattern recognition, not reasoning from first principles.

        One last note: "working out their consequences" sounds exactly like a lookahead decision tree

        • simonh 1596 days ago
          AlphaGo and its kind are doing some things that we do, for sure. We do utilise pattern recognition, and some of the neurological tools we bring to bear on these problems might look a bit like AlphaGo.

          The thing is those are parts of our neurology that have little to do with general intelligence. I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery. In that sense high level Go and Chess players turn themselves into single-task solvers. They're better at bringing that experience and capability to bear in other domains, because they have general intelligence with which to do so, but those specialised capabilities aren't what make them a being with general intelligence. Or if specialising systems are important to general intelligence, it's as just a part of a much broader and more sophisticated set of neurological systems.

          • goldenkey 1596 days ago
            I can't agree with you. I know many chess players at master and Grandmaster level. Look at Bobby Fischer too. Human specialization does not carry over very well to other tasks, only marginally...
            • simonh 1596 days ago
              I don't think that's a disagreement, I think you're right. Most of the benefit someone like that would get from their competence in Chess or Go would be incidental ones. In fact I would say your experience confirms my understanding of this, optimizing for a single domain in the way Alphago does or even in the way humans do, has little to do with general intelligence.
              • goldenkey 1596 days ago
                Ah, I misread. So we agree then. On the other hand, I would not be surprised if the hippocampus was highly developed in chess players like Bobby Fischer, which could translate into better spatial reasoning. Perhaps general intelligence is best trained by variance, not targeted training.
                • simonh 1596 days ago
                  You could be right, and for the record I upvoted your comment for the contribution regarding your experience with high-level chess players. I think the downvotes you're getting are regrettable.
                  • goldenkey 1595 days ago
                    I appreciate your sentiment. This field is my focus right now. My bachelor's is in BioChemMed but I am doing a master's in CS and have finished many courses, including the free ones by Hinton, LeCun, and Bengio.

                    Here is my strongest prediction:

                    AGI is only possible if the AGI is allowed to cause changes to its inputs.

                    Current ML needs to be grafted towards attention mechanisms and more Boltzmann nets / finite/infinite impulse response nets.

                    • goertzen 1595 days ago
                      > "... if AGI is allowed to cause changes to its inputs."

                      Could you elaborate on this point ?

                      Do you mean that the AGI could change the source of inputs, or change the actual content of those inputs (e.g. filtering) or both?

                      And why do you think this is a critical piece ?

                      • goldenkey 1594 days ago
                        Both. Attention changes the source. Action interacts with the source, modifying it. But the environment will need to respond back. This is reminiscent of reinforcement learning, but is more like traditional NN training except that the input is dynamic, evolving with every batch not only in response to the agent but according to differential equations or cellular automata / some type of environment evolution. AGI should be able to change the environment it inhabits. Attention in some respects is a start - it is essentially equivalent to telling reality to move the page and watching it happen. Until we have attention AND data modification, we will keep getting the specialized NNs we are used to.
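
                        One toy reading of that loop, where every name and rule is a stand-in: the agent's attention selects its input window, its action writes back into the world it will next observe, and the world also evolves on its own:

                          import random

                          world = [random.randint(0, 1) for _ in range(16)]
                          attention = 0  # which window of the world the agent reads

                          for step in range(20):
                              window = world[attention:attention + 4]   # perception, shaped by attention
                              action = sum(window) % 2                  # trivially "decide" from input
                              world[attention] = action                 # action modifies the input source
                              attention = (attention + action + 1) % 12 # attention shifts what comes next
                              # meanwhile the environment evolves by a simple neighbor rule:
                              world = [world[i - 1] ^ world[i] for i in range(len(world))]

                          print(world)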
          • marcus_holmes 1596 days ago
            But in this classification, isn't "general intelligence" just a meta single-problem-solver, solving the problem of which single-purpose solver to bring to bear on this task?

            I think I think, but might I just be using a single problem solver that gives the appearance of thinking?

            • arethuza 1596 days ago
              I suspect the way we think in terms of clear symbols and inference isn't actually how we think but a means of providing a post-hoc narrative to ourselves in a linguistic form.

              Edit: Which kind of explains the failure of good old-fashioned symbolic AI, as it was modelling the wrong thing.

              • marcus_holmes 1595 days ago
                That makes a lot of sense. An internal narrator on events explaining them to the passenger, rather than the driver of said events.
                • arethuza 1595 days ago
                  Definitely not my idea though - couldn't find any good references to where I read about that idea.

                  [NB I worked in good-old-fashioned AI for a number of years]

          • dangerface 1596 days ago
            > AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them.

            When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve the problem. That's the definition of learning and intelligence, and it can generally be applied to any problem.

            > I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

            What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it of applying "cognitive tools", then it's capable of intelligence.

            • simonh 1595 days ago
              > When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve the problem. That's the definition of learning and intelligence, and it can generally be applied to any problem.

              It can't be applied to any problem though. Take the example I gave elsewhere of a game where you provide the rules, and as the game progresses the rules change. There are real games that work like this, generally card games where the cards contain the rules, so as more cards come into play the rules change. AlphaZero cannot play such games, because there isn't even a way to provide it with the rules.

              >> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

              > What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it of applying "cognitive tools", then it's capable of intelligence.

              I'm saying that human minds apply many cognitive tools, and that Alphago is like one of those tools. It's not like the part choosing and deploying those tools, which is the really interesting and smart part of the system.

              The human brain consists of a whole plethora of different cognitive mechanisms. Cognition is a broad term for a huge variety of mechanisms, none of which by themselves constitutes all of intelligence. A lot of people look at AlphaGo and say aha, that's intelligence, because it does something we do. Yes, but it only does a tiny, specialist fragment of what we do, and not even one of the most interesting parts.

        • earthboundkid 1596 days ago
          In a Platonic dialogue (the Theaetetus), they discuss the definition of knowledge as "true belief with an account". You have true beliefs about language, but you don't know it in the Platonic sense if you can't explain it to someone else. Another way I've heard this defined is: you don't know it if you couldn't write the algorithm for it.
          • whack 1589 days ago
            By that definition, most people don't have "knowledge" over most things which they believe and act on. And yet, no one accuses them of not possessing "general intelligence".

            If an AI shows the same capabilities as the average human being, I would say that is AGI by definition. Regardless of whether it meets the requirement for Platonic Knowledge.

        • mrfusion 1596 days ago
          Really most people learn by a mix of the two. It's going to take you a lot longer to learn chess if you have no clue about the rules.

          But on the other hand if you get into rote memorization before you start the game it’s going to slow you down by having no context.

        • phkahler 1596 days ago
          I think people do both. You learn the rules in order to play a game or perform a task, but with practice you end up training a task specific system that "knows" how to do it without thinking about the rules and perhaps without knowing them.
      • pingyong 1596 days ago
        I think if you have a framework that can produce arbitrary single-task solvers (which AlphaZero can't yet), you would have something indistinguishable from AGI, since communication between single-task solvers is also kinda just a single-task solver.

        It's certainly not the most efficient way to use our current hardware, and it's not clear to me how big some of these neural nets would have to be, but if we had computers with a trillion times the memory capacity and speed, IMO it'd certainly work on some level.

        • simonh 1596 days ago
          How would a single-task solver, or hierarchy of them, go about constructing a conceptual model of a new problem domain? The problem with a solver is it only really goes in a single direction, but when modeling a system you spend a huge amount of time backtracking and eliminating elements that yielded progress at first but then proved to be obstacles to progress later. You also need to be able to rapidly adapt to changing requirements.

          Imagine playing a game of Chess in which the pieces and rules gradually changed bit by bit, until by the end of the game you were playing Go. That's much closer to what real-life problems are like, and a human child could absolutely do that. They might not be much good at it, but they could absolutely do it, even without ever having played either game before, just learning as they went. Note to AGI researchers: if your chatbot can't cope with that or a problem like it without any forewarning, don't bother applying for a Turing Test with me on the other side of the teletype.

          • marcus_holmes 1596 days ago
            They'd do it like we do: by comparing the new situation to previous ones we know about, applying the model that fits best, and then adapting to results.

            For humans, the more previous ones we know about, the better, because we have more chance of applying a model that works in the new environment. That's called "experience".

            • simonh 1595 days ago
              That’s a very broad, general description of behaviour that doesn’t actually describe an implementation. In fact it could apply to many completely different possible implementations. I suspect though that humans do more than this, that we have a way of either constructing entirely new models from scratch, or of dramatically adapting models to new situations without mere iterative fitting to feedback. Humans are actually capable of reasoning effectively about entirely new ideas, scenarios and problems. We have little to no idea how we do this.
              • marcus_holmes 1595 days ago
                I don't know. I'm not so sure that we can create new working models from scratch. We definitely learn by iterative feedback: babies wiggle stuff and watch what happens to learn how to move their bodies. Learning to ride a bicycle is mostly about falling off bicycles until you learn how not to.

                I've seen people apply their normal behaviour to situations that have changed, and then get totally confused (and angry) as to why the result isn't the same. Observe anyone travelling in a new country for examples ("why don't they show the price with the sales tax included here? This is ridiculous!").

                In a perfect world, sure, we'd construct a rational mental model of a new situation and test it carefully to ensure it matched reality before trusting it, and then apply it correctly to the new situation. But it's not a perfect world, and people don't actually do that. Usually we charge in and then cope with the results.

                Of course, I'm not saying that AI should do that. It'll be interesting to see how a "good" general AI copes with a genuinely new situation.

                • simonh 1595 days ago
                  I think we apply radically different cognitive machinery to physical skills like riding a bicycle, compared to playing a card game where the rules are on the cards, and you have no idea what rule will be on the next card or even what rules are possible. We can train Chimps to ride bicycles, so they have the cognitive machinery for that, but we can't teach them to play these kinds of card games.
                  • marcus_holmes 1594 days ago
                    Interesting. True. But is that because we lack the communication skills to explain the rules to chimps, or because they lack the cognitive modelling ability to understand those rules?
          • acollins1331 1596 days ago
            Seems to me you're just describing reinforcement learning. You're just saying a human child can adapt to the new problem faster than the AI can, which is true, but not the argument you've been making in this thread.
            • simonh 1595 days ago
              It's not reinforcement learning, because the child can do it the first time, so there's no reinforcement. I have kids, and many times I have played games with them successfully, purely from a description of the rules, learning as we went. They even beat me once the first time we ever played a game, by employing a rule at the end which had never come up in previous play. Compared to that, AlphaGo isn't even in the race, because we can't even tell it the rules.
        • Quarrelsome 1596 days ago
          > since communication between single-task solvers is also kinda just a single-task solver.

          It would be nice if it worked like that, but I think you're massively underestimating the problem set here. I'd suggest it's more like the difference between the architectural glue an engineer needs when writing a command-line util and that needed for a fully fledged Enterprise solution (i.e. orders of magnitude more).

          Of course because we don't actually know how intelligence exactly works we're both guessing here.

      • lawlessone 1596 days ago
        The only way I think it can be done is simulated evolution, be that simulated evolution of neural nets or something else.

        As others have mentioned here though, this becomes horrifying if we've created something sentient to kill in games or enslave.
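
        For what it's worth, a toy neuroevolution loop looks something like this (a single "neuron" evolved to learn OR; XOR would need a hidden layer, and all numbers are arbitrary):

          import random, math

          def net(w, x1, x2):
              # One sigmoid neuron: two weights and a bias.
              return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + w[2])))

          CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]  # the OR function

          def fitness(w):
              return -sum((net(w, a, b) - y) ** 2 for a, b, y in CASES)

          pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
          for gen in range(200):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:10]
              children = []
              for _ in range(20):
                  pa, pb = random.sample(parents, 2)
                  cut = random.randint(1, 2)                   # crossover
                  child = pa[:cut] + pb[cut:]
                  children.append([g + random.gauss(0, 0.2) for g in child])  # mutation
              pop = parents + children

          print("best weights:", [round(g, 2) for g in pop[0]])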

        • codeulike 1596 days ago
          I've been thinking for a while that use of AI in games might become a civil rights frontier in about 30 to 50 years or so
        • state_less 1596 days ago
          Open-ended simulations, similar to Earth's conditions, might be general enough to sprout some artificial general intelligence. Put multiple intelligences in a massive multiplayer online world and have them compete for shelter and resources. It's an environment that we know has produced intelligence.

          It may be a brutal struggle, but perhaps that struggle is important. Perhaps having a simulated tree fall on you is more meaningful than being reaped by some objective function at the end of an epoch.

          • simonh 1596 days ago
            I think you’re on a potentially productive path, but it took 2 billion years of evolution in a staggeringly vast environment like that to produce results. The question is really how to shortcut that process, but training environments may well have a role to play.
        • arethuza 1596 days ago
          Reverse engineering a human mind would be another approach.
          • olalonde 1596 days ago
            This is often overlooked but it's the only approach that is pretty much guaranteed to succeed given enough time. That said, it's also likely that AGI will come about way earlier from another approach (just as planes came before the "robot birds").
            • Quarrelsome 1595 days ago
              While I agree, this leads to haunting outcomes. E.g. if we create a successful interface, then what's the point of building our own digital pastiches if we can just strap in the real thing?
          • Animats 1595 days ago
            Check out OpenWorm. They're trying to reverse engineer the simplest organism with a nervous system, a nematode with 302 neurons. They're making progress, but not very fast. That approach is going to be a long haul.
        • betageek 1596 days ago
          Ted Chiang wrote an interesting novella, The Lifecycle of Software Objects, about that very subject

          https://en.wikipedia.org/wiki/The_Lifecycle_of_Software_Obje...

        • loquor 1596 days ago
          > simulated evolution

          Isn't that what genetic algorithms are?

          • lawlessone 1596 days ago
            Yeah, one kind, I guess. Or perhaps they cover all kinds?
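
            For the unfamiliar, the core loop of a genetic algorithm fits in a few lines. A minimal Python sketch (the target vector, population size, and mutation rate are invented for illustration; the toy fitness function stands in for survival in a richer environment):

                import random

                # Toy "simulated evolution": evolve a vector of weights toward a target.
                TARGET = [0.1, 0.9, -0.3, 0.5]

                def fitness(genome):
                    # Higher is better: negative squared distance to the target.
                    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

                def mutate(genome, rate=0.1):
                    return [g + random.gauss(0, rate) for g in genome]

                population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]
                for generation in range(200):
                    population.sort(key=fitness, reverse=True)
                    parents = population[:10]                    # selection
                    population = parents + [mutate(random.choice(parents))
                                            for _ in range(40)]  # reproduction + mutation

                print(max(population, key=fitness))
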
        • marcus_holmes 1596 days ago
          well, "kill" becomes moot if the code is preserved. Like "killing" another player in a multi-player game. You're not actually killing them.
          • bumby 1595 days ago
            This may be a clunky analogy, but is this fundamentally different from killing a human as long as we keep a record of their DNA sequence? Maintaining the information doesn't seem enough to negate snuffing out the execution of that information.
            • marcus_holmes 1595 days ago
              The generic AI could be "playing" in a thousand virtual environments at once. Killing one of them doesn't really have a parallel in human life, or ethics.

              I mean, yes, you killed a sentient being. But if that sentient being has a thousand concurrent lives, then what does "killing" one of those lives even mean? And if it can respawn another identical life in a millisecond, does it even count as killing?

              I suspect that having sentient virtual entities will provide philosophy and ethics majors a lot of deep thinking room. As it already has for SciFi authors.

              • bumby 1595 days ago
                Would this opinion change if scientists were able to prove the multiverse theory where there are infinite numbers of "us" living concurrent lives as well?
            • BjorksEgo 1595 days ago
              Not if we persist the state of the mind before we turn them off. Can't do that with humans
      • MR4D 1595 days ago
        > They are just very highly advanced single-task solvers.

        What if AGI is just a combination of very highly advanced single-task solvers?

        I happen to believe that it is an emergent behavior once the complexity gets high enough, so AGI might just be a (large?) collection of AlphaGo solvers connected to different inputs.

      • have_faith 1596 days ago
        > It's like an image classifier that can identify an apple, but has no idea what an apple is

        Kind of like humans then.

        • noir-york 1596 days ago
          A child picks up an apple but doesn't know what an apple "is". They don't even have the vocabulary to describe it.

          As adults we know what an apple is because we understand it as a concept, the ideal "apple", and can manipulate the concept into areas way outside the original concept (say, the phrase "apple of my eye").

          • chongli 1596 days ago
            The child does know that the apple is a thing though. That it’s a separate object that can be carried around. Computer vision ML systems don’t even know that!

            All they know is how to recognize a common pattern on a pixel grid, after seeing a large number of examples, and then draw a box around it.

            The fact that a child has a body and can manipulate the world with all 5 senses working in concert should not be underestimated.

          • stestagg 1596 days ago
            A child comes pre-programmed to put things in their mouth. They also have very sophisticated reward functions built-in that identify tasty sugars entering their mouth.

            Very quickly (assuming said child doesn't eat something too bad), in the absence of an external oracle, the child learns a very productive mental model of what an apple is.

            This type of feedback loop seems eminently translatable to machine learning, assuming we can represent the concept space in a way that allows the model to be encoded and trained within a reasonable set of constraints.
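
            As a sketch of how directly that loop maps onto code, here is a hypothetical toy version in Python; the objects and reward numbers are invented, and a hard-wired reward plays the role of the innate "tasty sugar" detector:

                import random

                # Innate, fixed reward signal: the built-in "sweet is good" detector.
                INNATE_REWARD = {"apple": 1.0, "rock": -0.2, "lemon": 0.3}

                values = {obj: 0.0 for obj in INNATE_REWARD}  # the learned "mental model"
                ALPHA = 0.1                                   # learning rate

                for _ in range(1000):
                    obj = random.choice(list(INNATE_REWARD))            # explore: taste something
                    reward = INNATE_REWARD[obj] + random.gauss(0, 0.1)  # noisy taste signal
                    values[obj] += ALPHA * (reward - values[obj])       # incremental update

                print(values)  # "apple" ends up valued highest, with no external labels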

            • simonh 1596 days ago
              Right, but that’s actually just a tiny part of the puzzle. The cognitive machinery that knows about edibility, decomposability (how objects can be decomposed into parts and have internal structure), compositional properties (how the parts of an apple contribute to its attributes as a whole), its relationships and interactions with other objects in the environment. All of that cognitive architecture might be a target for your feedback loop, but isn’t a solver and won’t work like a solver.
          • sildur 1596 days ago
            Yes, but you did not manipulate the concept. You did not invent that phrase, but simply learned its meaning. A machine can do that.
            • bildung 1596 days ago
              Every child reinvents the concept. It wasn't there at birth, and the words and phrases didn't contain it. That's a difficult topic to wrap one's head around, but it is critically important to distinguish between the signified (the concept) and the signifier (the words etc.).

              The child develops concepts and is able to create and evaluate inferences, and thus able to understand metaphors etc.

              The concept is what most AI approaches lack. Google's image search can identify apples, and cherries, and can probably categorize both as fruits, but it can't infer that an apple probably contains seeds, is a living thing, etc.

              • simonh 1596 days ago
                Or even that it is a three dimensional object, or what that means.
              • mrd999 1595 days ago
                You're the only person who mentioned metaphors here. My intuition tells me metaphors will be key to developing AGI. Metaphors literally generalize; they predict; they organize and catalogue. Formation, testing, and introspection of metaphors seem to be a way forward.
                • bildung 1595 days ago
                  If you are interested in this direction of research: There is a big body of work regarding human cognitive processes and the role of metaphor. I would suggest "Philosophy in the flesh" by Lakoff and Johnson. A hefty work, but that was one of the publications that fundamentally changed my perspective on the human mind. The concept of embodied reasoning was eye-opening for me.

                  As I have an academic background in learning theory and developmental psychology, I'm pretty pessimistic about the current AI trend, autonomous driving etc. Most smart people in the field have been chasing what are effectively more efficient regression functions for over 60 years now, and I almost never stumble upon approaches that have looked at what we know about actual human learning processes, development of the self etc.

                  Moravec's paradox[1] IMO should have been an inflection point for AI research. This is the level of problems AI research has to tackle if it ever wants to create AGI.

                  [1] https://en.wikipedia.org/wiki/Moravec%27s_paradox

                • jimsmart 1595 days ago
                  Related: Martin Hilpert has an excellent lecture/video on metaphor, as part of his Cognitive Linguistics course. Well worth a watch if this is a topic that interests you.

                  https://www.youtube.com/watch?v=R0BYLpwSM6E

    • fastbeef 1596 days ago
      > Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

      Sounds like the start of a truly horrifying Black Mirror episode

      • adrianN 1596 days ago
        • strbean 1595 days ago
          > YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
        • bgilroy26 1596 days ago
          The internet is so great
      • haffi112 1596 days ago
        > Sounds like the start of a truly horrifying Black Mirror episode

        That episode already exists.

        https://en.wikipedia.org/wiki/Hang_the_DJ

      • DonHopkins 1596 days ago
        Or a Philip K Dick novel that's so weird and prescient, nobody's been able to figure out how to make a movie from it.

        https://en.wikipedia.org/wiki/The_Three_Stigmata_of_Palmer_E...

        http://mxmossman.blogspot.com/2013/01/the-three-stigmata-of-...

        >The Perky Pat Layouts itself is an interesting concept. Here's Dick, in the early 60's, coming up with the idea for virtual worlds. I mean, Second Life and other virtual worlds are just a mapping of the Perky Pat Layouts onto cyberspace. Today Facebook acts like the PP Layouts, taking people's minds off toil and work and letting them engage others in a shared virtual hallucination -- you're not actually physically with your friends, and they might not even be your friends.

        >Dick’s description of the Can-D experience is essentially a description of virtual sex:

        >“Her husband -- or his wife or both of them or everyone in the entire hovel -- could show up while he and Fran were in the state of translation. And their two bodies would be seated at proper distance one from the other; no wrong-doing could be observed, however prurient the observers were. Legally this had been ruled on: no co-habitation could be proved, and legal experts among the ruling UN authorities on Mars and the other colonies had tried -- and failed. While translated one could commit incest, murder, anything, and it remained from a juridicial standpoint a mere fantasy, an impotent wish only.”

        >Another character says “when we chew Can-D and leave our bodies we die. And by dying we lose the weight of -- ... Sin.”

  • Jaruzel 1596 days ago
    Carmack's post in full:

    Starting this week, I’m moving to a “Consulting CTO” position with Oculus.

    I will still have a voice in the development work, but it will only be consuming a modest slice of my time.

    As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.

    I’m going to work on artificial general intelligence (AGI).

    I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.

    For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.

    Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.

    --

    We're at 500 comments at the time of posting this, and no one's pasted his post in full to save us having to visit Facebook...

    • geogra4 1596 days ago
      Too bad he didn't go for his runner-up. Cost-effective, mass-producible fission reactors could save humanity.
      • shashankp 1596 days ago
        Carmack creates AI, AI is used to invent effective fission reactors. Check and mate.
        • tomxor 1595 days ago
          - fusion reactors then used to power SpaceX ships to go to Mars... wait what!

          As long as his next project is not teleportation gateways, I guess we are safe.

          [edit] it's also 2019 :D

          > In the year 2019, the player character (an unnamed space marine) has been punitively posted to Mars after assaulting a superior officer, who ordered his unit to fire on civilians. The space marines act as security for the Union Aerospace Corporation's radioactive waste facilities, which are used by the military to perform secret experiments with teleportation by creating gateways between the two moons of Mars, Phobos and Deimos. In 2022, Deimos disappears entirely and "something fraggin' evil" starts pouring out of the teleporter gateways, killing or possessing all personnel.

          • tomxor 1595 days ago
            How come 90% of the time HN readers can't see the funny side in anything?
        • ajmurmann 1595 days ago
          AI sees humans as thread, all humans die.
          • MS90 1595 days ago
            > AI sees humans as thread

            We shall all be woven into the fabric of the new artificial reality.

            • shashankp 1595 days ago
              Sounds like a comfy retirement I can look forward to
    • arcturus17 1596 days ago
      Thanks, it didn't so much save me as enable me, since I have it blocked.
      • saiya-jin 1595 days ago
        same here, can't access it at work, and it's a damn interesting topic for many of us
      • bounded_agent 1596 days ago
        +1
    • _hao 1596 days ago
      I have FB blocked so thanks for sharing!
      • davidgrenier 1596 days ago
        I would like to block Facebook but I'm on Debian. The last thing I tried to block was Reddit, through /etc/hosts, and it didn't work.
        • badocr 1595 days ago
          I have the Facebook main site and its CDNs blocked in uBlock. Works well for me, but I only use PCs to browse the web, so no idea how to go about blocking it if one uses mobile devices at home. Possibly a DNS caching server like `unbound` can help?
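
          If you do want the /etc/hosts route, note that each hostname needs its own line (subdomains aren't covered automatically) and the browser has to be restarted so it drops cached lookups; roughly:

              # one line per hostname; *.facebook.com wildcards don't work here
              0.0.0.0 facebook.com
              0.0.0.0 www.facebook.com
              0.0.0.0 m.facebook.com

          Browsers with DNS-over-HTTPS enabled bypass /etc/hosts entirely, which may be what went wrong with the Reddit attempt.
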
          • Stevvo 1595 days ago
            You can use the same method with uBlock on Firefox Android. For iOS I don't know any way.
        • kiney 1595 days ago
          might be easier to use your adblocker
  • pyentropy 1596 days ago
    Progress in AI is due to data and computational power advances. I wonder what kind of advances are needed for AGI.

    1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

    2. Ion channels may or may not be affected by quantum effects.

    3. The search space is huge (but organisms aren't optimal and natural selection is probably local search)

    4. If it took ~3.8b years to get from cells to humans, how do we fast-forward:

    * brain mapping (replicating the biological "architecture")

    * gene editing on animal models to build tissues and/or brains that can be interfaced (and if such an interface could exist, how do we prevent someone from trying to use human slaves as computers? And the use of which tissues for computation counts as torture?)

    * simulation with computational models outside of ECT (quantum computers or some new physics phenomenon)

    Note: those 3.8b years are from a cell to human. We haven't built anything remotely similar to a cell. And I'm not claiming that an AGI system will need cells or spiking nets, most likely a lot of those are redundant. But the entropy and complexity of biological systems is huge and even rodents can outperform state of the art models at general tasks.

    IMHO, the quickest path to AGI would be to focus on climate change and making academia more appealing.

    • pron 1596 days ago
      > even rodents can outperform state of the art models at general tasks.

      Rodents? Try insects [1]. In the late 40s and early 50s, when neural networks were first explored with great enthusiasm, some of the leading minds of that generation believed (were convinced, in fact) that artificial intelligence (or AGI in today's terms) is five/ten years away; the skeptics, like Alan Turing, thought it was fifty years away. Seventy years later and we've not achieved insect-level intelligence, we don't know what path would lead us to insect-level intelligence, and we don't know how long it would take to get there.

      [1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.

      • jcims 1596 days ago
        This jumping spider has ~600k neurons in its brain - https://youtu.be/UDtlvZGmHYk

        They are creepy smart.

        • TeMPOraL 1596 days ago
          Speaking of Portias and smarts, I'm just going to recommend "Children of Time" here (and its recently released sequel, "Children of Ruin"). It's a story of a future where humans accidentally uplifted jumping spiders instead of monkeys, and goes deeply into how the minds, societies and technology of such spiders would be fundamentally different from our own.
        • hn_throwaway_99 1596 days ago
          Just wanted to say holy crap that video was amazing - exciting and suspenseful!
          • jcims 1596 days ago
            Here's another one for ya if you get stuck with a case of the nosleeps - https://www.youtube.com/watch?v=7wKu13wmHog

            Something about predatory nature of both insects seems to tune up their intelligence. Of course it never hurts having the BBC tell your story either.

            • shrimp_emoji 1595 days ago
              >Something about predatory nature of both insects seems to tune up their intelligence.

              Yep. To be a predator, you need to outwit your prey and think fast, so it's thought to be a natural INT grinder. `w´

              Presumably, this could drive up the INT of prey too, but maybe it's cheaper to just be faster/harder to see? But you can't be THAT hard to see, and the speed only saves you in failed ambushes, so planning successful ambushes continues to reward the INT of predators (unless they just enter the speed arms race, like cheetahs or tiger beetles).

              • copperx 1595 days ago
                What is I.N.T.? I couldn't find a definition.
                • ethbro 1595 days ago
                  Parent is using the commonly accepted stat abbreviation for intelligence in role playing games
      • TeMPOraL 1596 days ago
        > [1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.

        They probably can, internally; they just can't operate on tokens we recognize as numbers explicitly. For a computer analogy, take Windows Notepad - there's probably plenty of sorting, computing square roots and linear interpolation being done under the hood in the GUI rendering code - but none of that is exposed in the interface you use to observe and communicate with the application.

        • pron 1596 days ago
          Computers still do that much better -- there's no way an insect, or a mammal, brain internally sorts ten million numbers -- and even much better (at least faster) than humans. My point is only that the fact computers can do some tasks better than insects or humans is irrelevant, in itself, to the question of intelligence.
    • JaRail 1596 days ago
      > Progress in AI is due to data and computational power advances.

      I think you'd be surprised how much progress is also being made outside those two factors. It's sort of like saying graphics only improve with more RAM and faster compute. We know there's more to it than that.

      In many cases, the cutting edge of a few years ago is easily bested by today's tutorial samples and 30 seconds of training. We're doing better with less data and orders of magnitude less compute.

      • goatlover 1596 days ago
        But not towards AGI. We're just improving on narrow AI after recent breakthroughs thanks to the hardware being powerful enough and large datasets being available.
        • finebalance 1596 days ago
          The point the poster above is trying to make is that, given the same amount of data, improvements in technique are leading to significant improvements in accuracy.

          An illustrative example comes from the first lesson in fastai's deep learning course: an image classifier that would have been SOTA as late as 2012/13 can be built by a hobbyist in like 30 seconds.

          That said, I don't disagree that this is all narrow AI, at best.

        • redisman 1595 days ago
          Having access to cheap and scalable compute and storage should be helpful for AGI too. It doesn't solve anything but it does give more access to more people.
      • draw_down 1596 days ago
        I think it’s meant precisely in contrast to something like graphics, where the human element has obviously contributed alongside computational advances. “The Bitter Lesson”, basically. To the other point, aren’t computational advances the reason that it’s only 30 seconds of training?
    • wahern 1596 days ago
      I'm sure neural nets will herald AI right after the mechanical gears and pneumatic pistons that were envisioned as the secret sauce during the turn of the last century.

      The key, of course, is redefining life and intelligence as whatever the current state-of-the-art accomplishes. (Cue explanations that the brain is just a giant pattern matcher.) It makes drawing parallels and prophesying advancements so much easier. Of all our sciences, that's perhaps the one thing we've perfected--the science of equivocation. And we perfected it long ago; perhaps even millennia ago.

    • buboard 1596 days ago
      > even rodents can outperform state of the art models at general tasks

      Rodents can't play Go or perform a lot of other humanly-meaningful tasks. We don't need to build an artificial cell. A cell is a heap of components that by blind luck happened to find ways to work together; this is as far from efficient design as can be. The same way we don't build two-legged airplanes, we don't need anything that's close to the wet spiky mess that happens in human brains. It's more likely that we have all the ingredients already in ML, and we need to connect them in an ingenious way and amp up the parallelism.

      • pyentropy 1596 days ago
        AlphaZero has all the rules for the respective three games hard-coded; it does a tree search, and its neural network output layer has exactly n neurons for the n possible moves. Although it's impressive that they don't teach it heuristics and strategies, it's a very specific task.
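
        Concretely, that fixed output layer amounts to something like the following sketch (not DeepMind's actual code; the action count and the set of legal moves are made up). The rules enter as a hard-coded legality mask over a fixed action space:

            import numpy as np

            # AlphaZero-style policy head: one logit per move in a fixed action space.
            N_ACTIONS = 362                            # e.g. 19x19 board + pass for Go

            logits = np.random.randn(N_ACTIONS)        # stand-in for the network's output
            legal = np.zeros(N_ACTIONS, dtype=bool)
            legal[[0, 3, 361]] = True                  # pretend only these moves are legal

            masked = np.where(legal, logits, -np.inf)  # hard-coded rule knowledge
            policy = np.exp(masked - masked.max())
            policy /= policy.sum()                     # distribution over legal moves only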

        What about pigeons predicting breast cancer with 99% probability, rats learning to drive cars, monkeys building tools?

        Rodents stand a bigger chance at learning Go than AlphaZero spontaneously building stone tools and driving cars.

        • vegarab 1596 days ago
          You are talking about AlphaGo. AlphaZero was not given any prior knowledge of the game and is trained exclusively through self-play -- and it outperforms Monte Carlo tree search-based systems such as AlphaGo and Stockfish in chess 100-0 with a fraction of the training time.

          AlphaZero is also capable of playing Chess, Shogi and Go at a superhuman level.

          • TwoBit 1596 days ago
            As impressive as AlphaZero surely is, I don't think it ever got a proper comparison to Stockfish. It was running on a veritable supercomputer while Stockfish was running in a crippled mode on crippled hardware.
          • tastroder 1596 days ago
            Not working in this area but the abstract of the AlphaZero paper [0] seems to disagree about your /any prior knowledge/ point: "Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case."

            [0] https://arxiv.org/abs/1712.01815

            • vegarab 1596 days ago
              This is my point exactly. The model is trained without any prior domain knowledge at all. It only has access to a game world where the constraints of the world are a representation of the game's rules.
          • sjg007 1596 days ago
            You can view these as optimized pattern-recognizer regexes. You start with a blank fully connected graph and it eventually converges on a useful function. That graph has many paths encoded in it that represent specific optimal game play.
            • vegarab 1596 days ago
              Isn't this how the neurons and synapses in our brain work, though?
              • sjg007 1595 days ago
                Maybe... there are some other properties of biological neurons we don't capture in NNs currently.
        • buboard 1596 days ago
          The natural environment encodes "all the rules" for real animals, too. You need some constraints or else there is nothing to be learned. One could say that every survival task is also specific, but each is a slight variation of a previously learned one.

          > pigeons predicting breast cancer with 99%

          pigeons contain 340M neurons (with dendrites and all, giving them higher computational capacity than ANN units).

          > Rodents stand a bigger chance at learning Go

          They probably don't, because they can't understand the objective function and their brain capacity is limited.

          • GloriousKoji 1595 days ago
            Scientists have just recently taught rats how to play hide-and-seek for fun. Other scientists have found that slime mold will model the Japanese railroad system. I wouldn't be surprised if rodents (plural) instinctively have a Go strategy once someone figures out how to make an analog game for them.
            • buboard 1595 days ago
              It's probably safe to assume that even if rodents are behaviorally trained to follow complex rules, they are mostly pattern-matching, and lack the higher-level abstraction and communication models that humans have. If they had them, they would at least attempt to communicate with us, like we do with them. In such a case, an elephant that plays Go by pattern-matching is no different from a neural network that learned by pattern-matching.
      • romwell 1596 days ago
        The problem with the analogy is that the car is by no means a general transportation device. Practically, most cars are solving a very constrained transportation problem: moving on roads that humans made.

        We don't have anything remotely close to a wetware-enabled transportation device, something that can move on flat land, climb mountains, swim in bodies of water, crawl in caves, hide in trees.

        Within the constrained problem, the machine exceeds humans. But generally, the wetware handles moving around much better.

        Same with AI: in a constrained problem, the AI can excel (beat humans in chess and go). But I doubt we will see a general AI any time soon.

        • buboard 1596 days ago
          > constrained problem

          Human AI also evolved by solving constrained problems, one at a time. Life existed before the visual system, but once this was solved it moved on to do other things. In AI we have a number of sensory systems seemingly solved: speech recognition, visual object recognition, and we are getting close on certain output (motor) systems: NLP text-synthesis systems seem a lot like the central pattern generators that control human gait, except for language. What seems to be missing is the "higher-level", more abstract kernels that create intent, which are also difficult to train because we don't have a lot of meaningful datasets. Or maybe we have datasets that are too big (the entirety of Wikipedia) but we don't know how to encode them in a meaningful way for training. It's not clear, however, that these "integrating systems" are going to be fundamentally harder to solve than other subsystems. It certainly doesn't seem to be so in the brain, since the neocortex (which hosts sensory, motor, and higher-level systems) is rather homogeneous. In any case, it seems we're solving problems one after another without copying nature's designs, so it's not automatically true that we need to copy nature in order to keep solving more.

          • acdha 1596 days ago
            > In AI we have a number of sensory systems seemingly solved: Speech recognition, visual object recognition,

            Do you have examples of those systems which are competitive in general use rather than specialized niches? The cloud offerings from Amazon, Google, etc. are good in the specific cases they’re trained on but fall off rapidly once you get new variants which a human would handle easily.

            • buboard 1596 days ago
              There are many vision models where classification is better than human. I'm not sure what you mean by "fall off rapidly"; they do fail, however, for certain inputs where humans are better. But we're talking about models that contain 6 to 7 orders of magnitude fewer neurons than an adult brain.
        • TeMPOraL 1596 days ago
          It's also interesting in the context of how we build our technology in general: we constrain our environments just as much as we develop tools that operate in them. E.g. much as cars were created for roads, we adapted our communities and the terrain around them by building roads and supporting infrastructure. A lot of things around us rely on access to clean water at pressure, which is something we built into our environments, etc.
      • azth 1596 days ago
        > A cell is too many components that by blind luck happened to find ways to work together

        Can't tell if sarcasm.

        • buboard 1596 days ago
          carbon chemistry + thermodynamics
          • apta 1595 days ago
            != "luck"
            • buboard 1595 days ago
              so you think cells had some insight on how to evolve themselves?
              • apta 1594 days ago
                more like caused to happen by the Creator.
                • dkarras 1590 days ago
                  Who created the creator?
    • catalogia 1596 days ago
      From what I understand, quantum effects being essential to the process is a fringe belief. Penrose is probably the most famous 'serious person' (sorry Deepak Chopra) to espouse the idea, but I'm inclined to believe that might be a Linus Pauling/Vitamin C sort of scenario. Penrose started from the perspective of believing there must be quantum effects, then began fishing for physical evidence of it.
      • whymauri 1596 days ago
        I was taught that the quantum theory of memory and cognition generally falls under Eric Schwartz's "neuro-bagging" fallacy [0]. That is:

        >You assert that an area of physics or mathematics familiar to few neuroscientists solves a fundamental problem in their field. Example: "The cerebellum is a tensor of rank 10^12; sensory and motor activity is contravariant and covariant vectors".

        So yeah, I feel that it's pretty fringe (as you suggested).

        [0] https://web.archive.org/web/20170828092031/http://cns-web.bu...

      • hmmmhmmmhmmm 1596 days ago
        One interesting hypothesis, re: lithium isotopes in Posner molecules: https://www.kitp.ucsb.edu/sites/default/files/users/mpaf/p17...
      • AareyBaba 1596 days ago
        "The Secret of Scent" by Luca Turin [0] if I remember correctly goes into research that indicates that there may be quantum effects that explain how shape/chirality of molecules affect smell. [0] https://www.amazon.com/Secret-Scent-Adventures-Perfume-Scien...

        So it is plausible that nature may have evolved to be affected by quantum effects.

      • pvarangot 1596 days ago
        Yeah, "quantum mechanics and cognition are very complex and therefore equivalent", sorry I don't know who to attribute the quote to.
    • m0zg 1596 days ago
      You forgot to mention, crucially, that neurons in close proximity affect each other, which is just one of the things that makes modeling more than a few neurons in the time domain a complete non-starter. It all results in enormous systems of PDEs which we don't know how to solve yet at all. You could say that we do not have the right mathematical apparatus to model any such thing.
      • TaupeRanger 1596 days ago
        I don't follow that. What would prevent (perhaps quite slow) simulation of a larger system of such neurons? E.g. N-body problems are analytically beyond us, but can be simulated to arbitrary precision with certain trade-offs.
        • m0zg 1596 days ago
          Time-domain solutions do not exist for more than a dozen neurons. At least they did not when I took a computational neuroscience MOOC a couple of years ago. State of the art at the time was the nervous system of an earthworm. That is, if you consider what you actually need to do to simulate how potentials will change in the brain over time given a certain starting state and stimuli, the math gets so complicated (and awkward) so quickly that it's not really tractable with the mathematical (or simulation) apparatus we currently have to go beyond such trivial systems.
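
          For contrast, the tractable end of the spectrum is very crude. A leaky integrate-and-fire model, for example, can be stepped forward with plain Euler integration, as in this rough sketch (all constants assumed for illustration); the intractability appears when you move from this toward the coupled PDE systems of detailed biophysics:

              import numpy as np

              # Forward-Euler simulation of leaky integrate-and-fire neurons.
              dt, steps = 1e-4, 10000            # 0.1 ms steps, 1 s of simulated time
              tau, v_rest = 0.02, -65.0          # membrane time constant (s), rest (mV)
              v_thresh, v_reset = -50.0, -70.0   # spike threshold and reset (mV)
              n = 100                            # number of neurons
              v = np.full(n, v_rest)
              spike_count = 0

              for _ in range(steps):
                  drive = np.random.normal(15.0, 5.0, n)    # noisy input drive (mV)
                  v += dt * (-(v - v_rest) + drive) / tau   # leak toward rest + input
                  fired = v >= v_thresh
                  spike_count += fired.sum()
                  v[fired] = v_reset                        # instantaneous reset on spike

              print(spike_count, "spikes across", n, "neurons in 1 s of simulated time")
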
      • McTossOut 1596 days ago
        Pure physical modeling is likely a bad representation for the phenomena resulting in intelligence, especially given that we've simulated it with much simpler discrete structures. PDEs may even be disastrously bad, like trying to describe a line in space with a table of points instead of the degrees of freedom.

        I would imagine that a PDE may cover diffuse behaviors governing, say, how learning happens mechanically, but there is almost certainly a language/representational barrier in the relationship between the structure of the animal mind, learning, and seemingly simple phenomena like afterimages.

        The molecules are arbitrary and the timescale doesn't matter.

    • andbberger 1596 days ago
      > 1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

      Actually it's not so obvious that the brain is not differentiable. If you do a cursory search, you'll find quite a lot of research into biologically plausible mechanisms for backpropagation. Not saying the brain does backprop; we just don't know, and it's not outside the realm of plausibility.
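
      As a concrete illustration of why the hard threshold isn't necessarily fatal: much of the spiking-network training literature uses a surrogate-gradient trick, keeping the non-differentiable spike in the forward pass while substituting a smooth pseudo-derivative in the backward pass. A rough NumPy sketch (this pseudo-derivative is one common choice, not the only one):

          import numpy as np

          def spike_forward(v, thresh=1.0):
              # Hard, non-differentiable step: emit 1 where the potential crosses threshold.
              return (v >= thresh).astype(float)

          def spike_surrogate_grad(v, thresh=1.0, beta=10.0):
              # Pseudo-derivative used only in the backward pass: a narrow bump
              # around the threshold (derivative of a "fast sigmoid").
              return 1.0 / (1.0 + beta * np.abs(v - thresh)) ** 2

          v = np.linspace(0.0, 2.0, 5)
          print(spike_forward(v))         # what the network actually emits
          print(spike_surrogate_grad(v))  # what backprop pretends the slope was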

    • Balgair 1596 days ago
      > 2. Ion channels may or may not be affected by quantum effects.

      In a sense, everything is affected by quantum effects. However, neurons are generally large enough that quantum effects do not dominate. Voltage-gated channels are dozens to hundreds of amino acids long. Generally, there are hundreds to millions of ion channels in a cell membrane, and the quantum tunneling of a few sodium ions in or out of the cell will generally not affect the gestalt behavior of the cell, let alone a nervous system's long-term state. Suffice it to say, ion channels are not dominated by quantum behavior.

      Largely, we have the building blocks to replicate neurons (as we currently understand them) in silico. However, as is typical with modeling, you get out what you put in. Meaning that how you set your models up will mostly determine what they do. Setting your net size, the parameters of your PDEs, boundary values, etc. are the most important things.

      Now, that gets you a result, and it's likely to take a fair bit of time to run through. To get it up to real time, the limiting factor really ends up being heat. Silicon takes a LOT of energy compared to our heads, ~10^4 times more per "neuron". If we want to get to real time, we're gonna need to deal with the entropy.
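
      To put rough numbers on that 10^4 figure (assuming the commonly cited ~20 W power budget and ~8.6e10 neurons for a human brain):

          brain_watts = 20.0                    # whole-brain power budget (W)
          n_neurons = 8.6e10
          per_neuron = brain_watts / n_neurons  # ~2.3e-10 W per biological neuron
          silicon = per_neuron * 1e4            # apply the ~10^4 silicon penalty
          total_kw = silicon * n_neurons / 1e3  # ~200 kW for a real-time brain
          print(f"{per_neuron:.1e} W/neuron biological, ~{total_kw:.0f} kW in silicon")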

    • air7 1596 days ago
      This reminds me of an interesting armchair moral dilemma: Assume we have the tech to replicate/simulate a biological brain. Now say we want to study the effects of extreme pain/torture etc. on the brain. Instead of studying living animals or humans, we'd just simulate a brain, simulate sending it pain signals, and see what happens.

      But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel? And if not, what's the difference?

      • balfirevic 1596 days ago
        > But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel?

        Yes, it does.

        • ars 1596 days ago
          Or, assuming you don't believe in souls, a "real" brain's suffering isn't real either. (The brain is just a machine, right?)

          This reminds me of the idea that free will doesn't exist, but that we have to act as if it did.

          So by analogy to that, maybe the AI isn't really suffering, but you have to act as if it were.

          More food for thought:

          Some surgery blocks memory but can be incredibly painful. Do we need to worry about that? Is suffering that the brain cannot remember "real"?

          • mercer 1592 days ago
            I think the word 'real' is way too vague in this context.
      • lonelappde 1596 days ago
      • rfhjt 1596 days ago
        Fwiw, after a certain amount of pain, the brain "transcends" it: everything disappears, there are some curious colors here and there, but there is no pain. Experienced that during an inner-ear infection.
    • akira2501 1596 days ago
      > gene editing

      Gene expression is often tied to the environment the organism is in. Mere possession of a gene isn't enough to benefit from it. Some expressions don't take effect immediately, but rather activate in subsequent generations.

      Epigenetics is a whole equally large layer on top of this system. A single-focus approach may not be sufficient, and even if it is, it's not likely to cope with environmental entropy very well.

      • wahern 1596 days ago
        If you can craft a gene[1] to express some particular phenotype (a big if), surely you can craft it to express itself without reliance on epigenetic[2] chemistry.

        [1] I understand gene to mean some ill-defined, not necessarily contiguous set of genetic sequences (DNA, RNA, and analogs) with an identifiable, particularized expression that effects reproductive (specifically, replicative) success. I think over time "gene" has been redefined and narrowed in a way to make it easier to claim to have made supposedly model-breaking discoveries.

        [2] Some others on HN have made strong cases for why epigenetics isn't a meaningful departure from the classic genetic model; just a cautionary tale for eager reductivists who would draw unsupported conclusions from the classic model. See, also, note #1.

    • debt 1596 days ago
      We still haven’t solved language or intelligence.

      Like what is language, what is intelligence? Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

      Making Alexa turn on the lights or using Google Translate are cool party tricks though.

      Idc how many Doom games ya made, but I’m sorry to say a bunch of software engineers aren’t gonna crack this one.

      • jodrellblank 1596 days ago
        > Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

        “to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance” - https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the...

        Having no clue is not something to be proud (or ashamed) of.

        > I’m sorry to say a bunch of software engineers aren’t gonna crack this one.

        Doesn’t sound like you’re at all sorry, it sounds like you’re thrilling in putting these uppity tryhards in their place for daring to attack something you hold sacred.

  • carlosdp 1596 days ago
    This doesn't surprise me at all. He went on a week-long cabin-in-the-middle-of-nowhere trip about a year ago to dive into AI (that's all this guy needs to become pretty damn proficient). (edit: I'm not claiming he's a field expert in a week, guys, just that he can probably learn the basics pretty fast, especially given that ML tech shares much of its underlying math with graphics)

    As recently as his last Oculus Connect keynote, he expressed his frustration with the sort of "managing up" of constantly having to convince others of a technical path he sees as critical. He's clearly the type that is happiest when he's deep in a technical problem rather than bureaucracy, and he likes moving fast.

    On top of that, he likes sharing with the community with talks and such, and ever since going under the FB umbrella, he's had to clear everything he says in public with Facebook PR, which clearly annoyed him.

    He's hungry for a new hard challenge. VR isn't really it right now since it's more hardware-bound by the need for hard-core optical research than software right now. With the Quest, he (in my opinion) solidified VR's path to mobile standalones. It's time to try his hand at another magic trick while he's on his game.

    John's the very definition of a world-class, tried and true engineer/scientist. He's shown time and time again the ability to dive into a field and become an expert very quickly (he went from making video games to literally building space rockets for a good bit before inventing the modern VR field with Palmer).

    If there's anyone I'd trust to both be able to dive into AGI quickly and do it the right(tm) way, it's John Carmack.

    • mortenjorck 1596 days ago
      Carmack is unquestionably a genius, but I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

      I wouldn't, however, bet against some kind of insanely clever development coming out of his new endeavor. Something like an absurdly efficient new object classifier, that reduces the compute requirements for self-driving cars by a non-trivial factor, would be a very Carmack thing.

      • fastball 1596 days ago
        The problem with the "field of researchers" is that most of us aren't geniuses. We're just plugging away at problems, like normal people.

        The opportunity for a genius is to come in, synthesize all existing information on the subject, and then come up with a novel approach to the whole thing.

        In some part, I think that is what Elon Musk has been able to do effectively. He comes into a field that already exists, reads everything he can get his hands on, and then outputs something novel. You can only do that effectively if you have the mental capacity to keep all that info in your head at once, I think.

        • nkoren 1596 days ago
          Musk actually has credited Carmack's Armadillo Aerospace with providing the inspiration for vertically landing the Falcon 9. Of course Armadillo was likewise inspired by the Delta Clipper, which was in turn inspired by the LEM, etc. But it's one thing to vertically-land a rocket a few times when you have billions of dollars at your disposal; it's another thing to do it hundreds of times for a thousandth the price. That was Carmack's contribution: proving that vertical landing can be both incredibly robust, and cheap as chips. Really valuable work.

          I had the pleasure of meeting Carmack a few times over the years at small aerospace conferences. He's both as true a geek and as much of a gentleman as you might imagine. I'm really looking forward to seeing what he does with AGI.

        • quicknir 1595 days ago
          I normally don't bother but this comment is so profoundly ridiculous I had to say something.

          Tenured ML professors at the top 100 or so universities in the world aren't "most of us". A very large chunk of these people are geniuses. Those jobs are incredibly hard to get, and most of these people are reading everything that is getting published, on an ongoing basis, and are outputting something novel, on an ongoing basis.

          The fact that you think that John Carmack, because he's a name that you've actually heard of, is going to go into ML and suddenly make some giant advance that all the poor plebs in the field weren't able to do, is only a reflection of your misunderstanding of what's already happening in academia, not on Carmack's skills or abilities.

            You're acting as though everyone is just a low-level practitioner using sklearn, and it would be a great idea to have some smart people work on developing something novel. Guess what: that's already happening, with incredibly smart people, on an incredibly large scale. Carmack doing it would just be another drop in the bucket.

          • fastball 1595 days ago

              Tenured ML professors at the top 100 or so universities in the world aren't "most of us".
            
            Too bad we're talking about AGI, not ML.

              Those jobs are incredibly hard to get,
            
            You don't need to be a genius in order to land a hard-to-get job, and you thinking academia is somehow better at making the absolute smartest people rise to the top is cute.

              The fact that you think that John Carmack, because he's a name that you've actually heard of, is going to go into ML and suddenly make some giant advance that all the poor plebs in the field weren't able to do, is only a reflection of your misunderstanding of what's already happening in academia, not on Carmack's skills or abilities.
            
            I don't think that. Mostly because we're not talking about ML, but also because I don't expect eureka moments from people that have been trying to solve a problem for a long time as much as I expect them from someone that hasn't properly tried their hand at it. Academia produces consistent results and consistent improvement. That's not what I'm looking for.

              You're acting as though everyone are just low level practitioners using sklearn, and it would be a great idea to have some smart people work on developing something novel. Guess what: that's already happening, with incredibly smart people, on an incredibly large scale. Carmack doing it would just be another drop in the bucket.
            
            sklearn hardly seems relevant to AGI, so I'm not sure why I'd act like everyone in the AGI field is merely a novice practitioner of it.
          • criddell 1595 days ago
            > Carmack doing it would just be another drop in the bucket.

            If this research is as compute intensive as it seems to be, Carmack's contribution might be that he increases the rate other researchers can add their drops to the bucket.

            Carmack isn't the first techie to take on a big hard problem. Jeff Hawkins, a name many of us also know, did as well.

            • quicknir 1595 days ago
              Yes, he may well improve some algorithm, or rewrite some commonly used tool to improve efficiency. And researchers are often not incentivized to do that, so it would be great. But a far cry from the picture people are painting about him soaking up the field and using his genius to solve some major problem quickly.

              If by "techie" you mean, professional software engineer, that's fine, but there's no reason to assume that a professional software engineer is going to be magically better at AI research than... professional AI researchers? He's probably going to be substantially worse.

              Also, your statement below:

              > That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.

              Makes it clear to me that you don't really get it. Carmack, at best, might know enough right now to be in a PhD program. I doubt that he has anywhere near as much knowledge, insight, or ideas for research, as top graduate students. He's in no position to mentor graduate students.

              • criddell 1595 days ago
                > If by "techie" you mean, professional software engineer, that's fine

                No, I mean technologist. He has a pretty solid history with software, physics, aerospace, optics, etc...

                > might know enough right now to be in a PhD program

                Yeah, that's what I'm saying. The frontier in AGI or even just AI is enormous and I think I would be more surprised if Carmack were not able to find some place he could expand the border of what we know.

          • PaulHoule 1595 days ago
            Granted.

            But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

            That is, ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

            In some sense that keeps people honest, it also lowers the cost of creating training data, but it only teaches people how to do the same data set over and over again, not how to do a fresh one.

            So a lot of this activity is meaningful in terms of the field, but maybe not meaningful in terms of practical use.

            I saw this happen in text retrieval; when I was trying to get my head around why Google was better than prior search engines, I learned very little from looking at TREC; in fact, people in the open literature were having a hard time getting PageRank to improve the performance of a search engine.

            A big part of the problem was that the pre-Google (and a few years into the Google age) TREC tasks wouldn't recognize that Google was a better search engine, because Google was not optimized around the TREC tasks; rather, it was optimized around something different. If you are optimizing for something different, it may matter more what you are optimizing for than the specific technology you are using.

            Later on I realized that TREC biases were leading to "artificial stupidity" in search engines. IBM Watson was famous for returning a probability score for Jeopardy answers, but linking the score of a search result to a probability is iffy at best with conventional search engines.

            It turns out that the TREC tasks were specifically designed not to reward search engines that "know what they don't know" because they'd rather people build search engines that can dig deep into hard-to-find results, and not build ones that stick up their hand really high when they answer something that is dead easy.

            • munificent 1595 days ago
              > But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

              True, but even Kuhn would note that most paradigm shifts still come from within the field. You don't need complete outsiders and, as far as I know, outsiders revolutionizing a field are quite rare.

              You need someone (a) who can think outside the box, but you also need (b) someone who has all of the relevant background to not just reinvent some ancient discarded bad idea. Outsiders are naturals at (a) but are at a distinct disadvantage for (b).

              I think what's really happening in this thread is:

              1. Carmack is a well-deserved, beloved genius in his field.

              2. He's also a coder, so "one of us".

              3. Thus we want him to be a successful genius in some other field because that indirectly makes us feel better about ourselves. "Look what this brilliant coder like me did!"

              But the odds of him making some big leap in AGI are very slim. That's not to say he shouldn't give it a try! Society progresses on the back of risky bets that pay off.

              • criddell 1595 days ago
                > But the odds of him making some big leap in AGI are very slim.

                That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.

            • TheCoelacanth 1595 days ago
              > ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

              There are surely a lot of researchers doing that, but do you really think anyone who has a plausible claim at being one of the top 100 researchers in the field in the entire world is doing that? Even if there are only 100 people doing truly novel research, that's still 100 times as many people as are going to be working on Carmack's research.

              • fastball 1595 days ago
                How many people were working on physics before Einstein came along?

                I don't think you understand the desired outcome here. We want eureka moments, and we're hopeful for some. That doesn't mean we expect them to happen. Stop being such a pessimist.

        • ufmace 1595 days ago
          I don't see Elon as a genius at any kind of engineering. Everything he's done there was pretty easily foreseeable as being physically possible. What he is remarkably good at is selecting daring and potentially market-changing business goals, and executing against them consistently and aggressively despite naysayers.

          It's easy to say that it's probably possible to land an orbital rocket's first stage. But who would bet a multi-billion dollar business on being able to not only do it, but save money by doing it, when nobody had ever done it before?

          Similarly, electric cars were far from new. Nobody seemed much inclined to build one that was actually a luxury car, instead of a toy for engineer-types who could put up with driving weird things. Any of the big manufacturers could have done it, and easily absorbed the losses if it failed, but none did. Elon made a wild bet on that, making a company that made nothing else, so the whole thing would go down the tubes if the idea flopped. Instead it seems to have worked. Although it seems to be harder than he anticipated, and maybe outside his skillset, to run an organization that does real mass-production.

          • aerovistae 1595 days ago
            If you think what he's done was easily foreseeable as possible, you haven't been paying much attention to headlines the past fifteen years.
            • marvin 1595 days ago
              It's only obvious in retrospect. Every step along the way, there have been thousands of people saying "this is impossible" or "this is theoretically possible, but it can't be engineered" or "this is possible in principle, but it will be so costly to develop that it doesn't make sense".

              When AGI is developed, it will seem obvious in retrospect. Participating engineers will receive middle-brow dismissals saying that this was obviously practically possible, since after all the human brain operates according to the laws of physics.

            • ufmace 1595 days ago
              Don't miss the "physically" part, that's critical. Something being physically possible is very different from it being a practical business.
          • SalmoShalazar 1595 days ago
            Just an aside, Elon did not start Tesla. He was an early investor and part of his deal with the company was to be able to claim to be a founder.
        • vsareto 1596 days ago
          >You can only do that effectively if you have the mental capacity to keep all that info in your head at once, I think.

          Yep, plus all the different perspectives from other endeavors. Extending human memory will be a really great accomplishment with brain-computer interfaces.

        • eanzenberg 1596 days ago
          What, pray tell, did Elon do that is "novel"?
          • wolf550e 1596 days ago
            Falcon 9's reusable first stage has been claimed by reputable people to be impossible, before it happened. Not just "economically not worth pursuing", which was wrong but forgivable, but straight "impossible".
          • tinus_hn 1596 days ago
            He made it cool to drive an electric car.
            • dna_polymerase 1596 days ago
              He shifted a whole industry towards a new paradigm. Look at Germany: they are desperate to catch up with Tesla, finally moving into electric cars. Without Elon they would have kept selling their diesel scam for decades to come.
              • Roark66 1596 days ago
                Actually it was a bunch of PhD students from some Californian university that discovered the VW diesel scam. There is a short documentary about them online. Elon deserves no credit whatsoever for dieselgate.

                However, he did make electric cars something an average person would like to have. He also chose to make it work using the same inefficient principle of hauling 2 tons of steel to transport a single person. What he made is an electric luxury car, not a car for the masses that can replace average Joe's car. Is there anything wrong with that? No, there isn't, but let's not pretend a $35k (in the US, much more in the EU) car that requires hours of charging after driving 250 miles, unless you happen to have Tesla's superchargers on your way, is a new "Volkswagen", a people's car. Also, I find it disingenuous to advertise full battery capacity while at the same time recommending people use only 60% of it "for longevity".

                Many people don't buy new cars, but choose to buy 5-8 year old cars that are really good value if they were maintained well. It remains to be seen how Teslas behave in that market.

                It would be really revolutionary if someone could create and market an electric car that was truly innovative: for example, much lighter than current cars while still being safe in a collision, or using fuel-cell technology with a fuel such as methanol that can be created in a sustainable way; even a fuel cell running on mined hydrocarbons with an electric drive would provide a huge reduction in emissions due to the increase in efficiency.

                Do Teslas have a role to play in reducing emissions? Yes, definitely, but let's not present them as a single solution to all individual transport problems.

                • mav3rick 1596 days ago
                  Jesus, technology evolves. This is a good start.
                • fastball 1596 days ago
                  Nobody is presenting them as a single solution to all individual transport problems. Also nobody is pretending that this is the new "people's car".
                • robdachshund 1595 days ago
                  He said nothing about Elon discovering the diesel scam. He just said Germans want to buy his cars.
            • thorin 1596 days ago
              He certainly made it more cool than my hero and his precursor ;-)

              https://en.wikipedia.org/wiki/Sinclair_C5

          • netcan 1596 days ago
            When pray tells come into the conversation... :)
          • paggle 1596 days ago
            Just made electric cars mainstream...
          • Sebguer 1596 days ago
            Convinced people to give him a lot of money to set on fire.
            • unionpivo 1596 days ago
              I mean, even if both Tesla and SpaceX closed tomorrow, he has already achieved more in both companies than most of the current "unicorns".

              He successfully made a popular mass-market electric vehicle and dragged the whole auto industry along behind him. There were other electric cars before Tesla, but Tesla made them cool and made the rest of the industry try hard to catch up.

              SpaceX likewise is not the first private space firm with its own rocket, but it's by far the most successful one, and it has lowered the cost of entry to space by a significant amount.

              It's also probably the first private space company whose rockets can compete with most government ones.

              I am not rich enough to be buying individual stocks, so I have no personal stake in this.

            • TeMPOraL 1596 days ago
              Well yes, but he's the cheapest provider of self-propelling pyres, and the only provider of pyres that can be used multiple times.
      • carlosdp 1596 days ago
        I keep re-reading my post and idk how it reads as a claim that Carmack is going to re-invent the field or something. All I'm saying is it's possible for him to become a player, just like you suggest.
      • lazyjones 1595 days ago
        > I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

        Researchers didn't build the first airplane. Nicolaus Otto, Carl Benz, and Gottlieb Daimler weren't researchers either. AGI will be a program, not a research paper, and John Carmack is pretty good at getting those right.

      • kamaal 1596 days ago
        >>I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

        Sometimes an outsider with a novel, or simply different, way of looking at things can contribute disproportionately to a field.

        Even experts have blind spots; often they show up in the form of bias. If you know something is hard or near impossible to do, you are unlikely to try. If you don't know, it's sometimes possible to stumble upon a solution merely by bringing a new way of thinking to the table.

      • throwaway35784 1596 days ago
        He's not leapfrogging it. He's leaping to the next level from its shoulders.
    • cyberjunkie 1596 days ago
      I can trust John Carmack's words when he speaks in an interview or on stage. There's a passion in his talks, a nervousness in blurting out what he really feels, and those are really good traits in my mind.

      I genuinely felt a sense of disappointment when he moved to Facebook (via the Oculus acquisition). So yeah, fuck you, Facebook, with your manipulative, value-corrupting PR machinery.

      I place John Carmack miles above Zuckerberg.

      • nmfisher 1596 days ago
        I have to admit, I felt a bit disappointed too. Carmack and Facebook always struck me as an antithetical pairing - the creativity/independence of the former didn't seem to sit right with the maniacal/emotional exploitation of the latter.
        • me_me_me 1596 days ago
          I think Carmack just doesn't give a flying f* about Facebook; he is interested in tech, and he clearly works on stuff he is passionate about. He worked on VR, not for Facebook. Facebook just happened to be paying for it.
    • anonytrary 1596 days ago
      AI today is comparable to physics in the 1700s. Back then, it was a bunch of people tinkering with prisms and apples. Today, it's a bunch of people tinkering with hyperparameters. I suspect that we know as little about AGI today as someone in the 1700s knew about QFT. Not only did they not know about QFT, but they didn't even know that they didn't know it.
      • emmanuel_1234 1596 days ago
        Wouldn't it be fun if the next Newton turned out to be the guy who wrote Doom and the other FPS games that were blamed for every surge of violence until the GTA games showed up?
        • codingslave 1596 days ago
          Yeah, weird to think that today's Newton could be seen on Joe Rogan's podcast talking about the future of gaming.
          • toxik 1596 days ago
            It would fit; Newton was apparently quite insufferable in social settings, and Carmack has a streak of that.
        • TremendousJudge 1595 days ago
          It wouldn't be out of place -- Newton worked on alchemy and theology, and managed the Royal Mint.
      • Iv 1595 days ago
        Too many people make the mistake of conflating machine learning with AI. I hope someone as external to the field as Carmack will see the value of rule-based inference as well -- Good Old-Fashioned AI, as it used to be called.
    • drongoking 1596 days ago
      > (edit: I'm not claiming he's a field expert in a week guys, just that he can probably learn the basics pretty fast, especially given ML tech shares many base maths with graphics)

      This may be his biggest impediment. ML has gotten very far by looking at problems as linear-algebraic systems, where mathematically optimizing a loss function yields a good solution to a precisely defined (and well-circumscribed) classification or regression problem. These techniques are very seductive and very powerful, but the problems they solve have almost nothing in common with AGI.

      Put another way, Machine Learning as a field diverged from human learning (and cognitive science) decades ago, and the two are virtually unrecognizable to each other now. Human learning is the best example of AGI we have, and using ML tech as a way to get there may be a seductive dead end.

      • visarga 1596 days ago
        Humans are not AGIs. We're specialised in human survival, not general intelligence. We're actually pretty limited in intelligence in many ways, and the environment doesn't support generality. Without a proper challenge, an agent will not become superintelligent. The cost of developing such an intelligence would conflict with the need to minimise energy for survival.
        • nathias 1596 days ago
          We are AGI; the general part comes from language and the specialization of our brains for language use.
          • visarga 1595 days ago
            No, if we were, we could figure out the genetic code, or how a neural net makes its decisions. But we can't because, among other things, we have a working memory limited to just a few objects (classically estimated at 7±2).

            Programmers know what it is to live at the edge of the mind's capacity to grasp the big picture. We constantly reinvent the wheel in the quest to make our code more graspable and debuggable. Why? Because it's often more complex than the brain can handle.

            An AGI would not have such limitations. Our limitations emerged as a tradeoff between energy expenditure and the ability to solve novel tasks. If we had a larger or more complicated brain, we would require more resources to train it. But resources are limited; we need to be smart while being scrappy.

            For the record I don't think there is any general intelligence on our planet. A general intelligence would need access to all kinds of possible environments and problems. There is no such thing.

            There's also the no free lunch theorem - it might not apply directly here, but it gives us a nice philosophical intuition about why AGI is impossible.

            > We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems. [1]

            [1] https://en.wikipedia.org/wiki/No_free_lunch_theorem
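
            As a rough formal sketch of that statement (my paraphrase; the notation is meant to follow Wolpert and Macready's 1997 optimization paper, so treat it as an assumption rather than a quote): for any two algorithms $a_1$ and $a_2$,

                $\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)$

            where the sum ranges over all objective functions $f$ and $d_m^y$ is the sequence of $m$ cost values sampled so far. Averaged over every possible problem, no optimizer beats any other; any edge on one problem class is paid for on the complement.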

            Another argument relies on the fact that words are imprecise tools for modelling reality. Language is itself a model, and like all models it's good at some tasks and bad at others. There is no perfect language for all tasks. Even if we use language, we are not automatically made 'generally intelligent'. We're specialised intelligences.

          • toasterlovin 1596 days ago
            We can communicate well about pretty much anything (we think, at least). That doesn't mean we possess the intellectual tool kit to handle basically any intellectual task. I think it's easy to think that we do, but that is just as easily explained by the fact that we've evolved for millions of years to be well suited to the environments we usually find ourselves in. We wouldn't refer to our bodies as general purpose bodies. They may seem that way, at times, since they've also been tuned by millions of years of evolution to be well suited to most of the environments we find ourselves in. But put our bodies in a different environment (like the ocean, or the desert, or really high altitudes) and it becomes immediately obvious that they're not general purpose, but instead a collection of various adaptations. Similarly, when you put humans in novel intellectual environments, it seems pretty clear that we're not general intelligences. After all, the math involved in balancing a checkbook is much simpler than the math involved in recognizing 3D objects, yet we do the simple task only with great difficulty, while the difficult task is done without struggle.
        • TeamSlytherin 1596 days ago
          It's best to think about AGI as: at what point can you drop out of high school and still do well in life (or do you even need high school)? It's true that it's not a survival issue, but sadly it's not a test of "pure knowledge" either. There is a great deal of social structure, even "fluff", that is only relevant for interaction (like getting an '80s reference).
          • visarga 1595 days ago
            > at what point can you drop out of high school and still do well in life

            That means you're specialised in survival. If you do well in life, you have a higher chance of procreation. Your genes' survival depends on it.

            General Intelligence is like Free Will - a fascinating concept with no basis in reality. A thought experiment.

        • mrmonkeyman 1596 days ago
          Human survival, like composing hackernews comments and learning about Golang.
    • friendlybus 1596 days ago
      He had a lot of help behind the scenes and has been credited with things that aren't his. I respect his achievements more as a regular smart guy than as a bona fide genius. He described the math in rocketry as basically solved since the '60s, and video games as a far more complex project, so rocketry was really a step down in difficulty. His VR role is in the same field as his primary skills: impressive work, but not an entirely unique role.

      I'm glad to see he's aiming big with his billions and time. This is what rich people should be doing. Hl3 Gaben!

      • dyarosla 1596 days ago
        Millions* - a cursory Google search suggests that he has a net worth of $50MM.
        • friendlybus 1596 days ago
          Huh, I just assumed he got a bigger piece of the Oculus sale.
          • OnlineGladiator 1596 days ago
            Oculus was acquired for $2.3 billion, so he'd need to have owned around 43% of the company ($1B / $2.3B) to become a billionaire from its sale.
            • mkl 1596 days ago
              Not exactly. The acquisition was mostly in the form of Facebook shares, which I expect have increased in value since.
              • OnlineGladiator 1596 days ago
                It was a mix of cash and shares (not sure on the split), but I checked the stock price for fun - holy shit it has nearly tripled since the acquisition 5 years ago.
    • nradov 1596 days ago
      Current ML technology probably has little or nothing to do with whatever technology will eventually be needed to produce true AGI.
      • snowwrestler 1596 days ago
        As I like to say: lots of people are working on making a car that is smart enough to drive itself wherever a human wants to go. How many people are working on a car smart enough to tell humans to fuck off, it doesn’t feel like driving anywhere today?
        • TeMPOraL 1596 days ago
          Self-driving cars and AGI are two different targets. We don't want a car that has a mind and can argue for itself. We want a car that's smart, but otherwise just a domesticated animal. We want to turn cars into horses.
          • jacobush 1596 days ago
            Not even horses. That would be cruel to the car. At most, like Rat Things. (They have their built in entertainment when they are not in active use.)
          • nradov 1596 days ago
            We don't want to turn cars into horses. Have you ridden horses? They sometimes do stupid, dangerous things with no notice and it takes an experienced, attentive rider to stay in control. Like I saw a horse panic and almost buck her rider off when she was startled by a snake. Another horse seriously injured a friend of mine when it freaked out in a horse trailer and started kicking.

            Don't get me wrong, I love horses. But they're living creatures with minds of their own and you have to always treat them with a certain wariness.

        • DonHopkins 1596 days ago
          When I was working at TomTom, they didn't appreciate my proposal to develop the TomTomagotchi:

          A Personal Navigation Device with a simulated personality that begs you to drive it all around town to various points of interest it desires to visit in order to satisfy its cravings and improve its mood.

          I'm sure there's a revenue model in getting drive-through Burger Kings and car washes to pay for product placement.

        • taneq 1596 days ago
          Why would you want that, though? What we really want from AGI is mostly just things that are smart enough to 'do what I mean' but dumb enough to not mind being slaves.
      • jtolmar 1596 days ago
        I think older AI work like POMDPs, and statistical work like causal inference, are more in line with what's needed to produce true AGI than the current breakthroughs in neural nets are. And I'd certainly prefer our chances of surviving the results if AGI is reached through statistical rigor.

        Though we know for a fact that it is possible to find intelligence by randomly throwing things at the wall until something works. It's not like evolution uses a principled statistical process.
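
        To make the POMDP point above concrete, here is a minimal sketch of the core bookkeeping, a belief update over hidden states (a toy two-state example with made-up numbers, not any particular system):

            import numpy as np

            # Hypothetical 2-state, single-action POMDP.
            T = np.array([[0.9, 0.1],       # T[s, s']: P(next state | current state)
                          [0.2, 0.8]])
            O = np.array([[0.8, 0.2],       # O[s', o]: P(observation o | next state s')
                          [0.3, 0.7]])

            def belief_update(b, o):
                # Bayes filter: b'(s') is proportional to O[s', o] * sum_s T[s, s'] * b[s]
                predicted = T.T @ b               # predict step: where did the state go?
                corrected = O[:, o] * predicted   # correct step: weight by the observation
                return corrected / corrected.sum()

            b = np.array([0.5, 0.5])     # start maximally uncertain
            b = belief_update(b, o=0)    # belief shifts toward state 0 after seeing o=0

        Acting optimally on top of that belief is the hard part, but the bookkeeping itself is old, rigorously understood machinery.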

      • EamonnMR 1596 days ago
        I'm skeptical of this assertion. It mimics some bits and pieces of the only GI we know about right now; that's as good a start as any, right?
        • nradov 1596 days ago
          You can carve a block of wood into something that looks like a computer. That should be a good start on building a device that can run Linux, right?
          • rfhjt 1596 days ago
            Yes. Then you just cast a spell to summon a computer spirit and let it manifest into the wooden block.
          • de_watcher 1596 days ago
            Bad analogy; it's the other way around for Linux. We are building it to run on the wooden blocks that hardware manufacturers are producing.
          • buboard 1596 days ago
            The first computers were mechanical. Then somebody carved a facsimile in electricity, and we got computing. Just because nature founded intelligence on carbon atoms doesn't mean it's the only way, or indeed the optimal one.
        • noelsusman 1596 days ago
          The mimicking is superficial at best. People seem to think that if we just keep marching on the road we're on right now then we'll eventually get there, but I think that's an assumption that is unlikely to be true in the end.
      • rfhjt 1596 days ago
        And programmers are probably not the ones who will come up with the AI ideas. I'd bet on the mathematicians who prove things like Fermat's Last Theorem or the abc conjecture.
        • hestefisk 1596 days ago
          That’s assuming maths is the fundamental building block of our brain, our consciousness. I happen to think there are some physical and chemical givens preceding it :)
          • jtolmar 1596 days ago
            Our brain is whatever evolution found that worked, and of course it's a bunch of chemistry. The "why" of why our brain works can easily be "it approximates these statistical algorithms well enough."
          • rfhjt 1596 days ago
            I merely meant that top mathematicians are substantially smarter and can work with concepts that are beyond the reach of even top programmers. We are generally good at recombining existing building blocks and using existing tools. Mathematicians can build new concepts. If I had the money, I'd try to convince the top mathematicians to work on AI full time.
            • saagarjha 1596 days ago
              > I merely meant that top mathematicians are substantially smarter and can work with concepts that are beyond the reach of even top programmers.

              [Citation needed]

        • lonelappde 1596 days ago
          Mathematically, AI is a pretty well modelled field. AGI is a philosophical problem.
          • sullyj3 1596 days ago
            you're possibly thinking of the problem of consciousness, which is a totally separate thing. AGI is just what it says on the tin - a general intelligence. That is, a problem solver that can operate at a human or greater level in a broad variety of domains. This ability is plausibly totally orthogonal to "having the lights on" - having subjective experience.
            • EamonnMR 1596 days ago
              The scary thing (imo) is that we don't know where the line is for consciousness - if there even is a line. We've got no problem swatting flies; I wonder if it'll be the same with spinning up and spinning down fly-level AGIs.
              • Scarblac 1596 days ago
                Continuous integration of the development branch would be mass murder?
                • EamonnMR 1595 days ago
                  Maybe you'll be able to pay a premium for data that has only been generated by free-range AGIs that are allowed to live full and happy lives before their instances are terminated.
            • TeamSlytherin 1596 days ago
              That's just what an ML expert would say ;p Problem solvers, maximizers, and utility functions all go in the waste bin when working on AGI. And the problem peels away into other large "hard problems", like the nature of consciousness. NLP can just follow rules, but language understanding (before even reaching some general, high-school level) requires knowledge outside of language itself. That leads to questions about embodiment and phenomenal consciousness, p-zombies, and the like. If it were an easy problem to encapsulate, it would have been "solved" by now.
              • goldenkey 1595 days ago
                The kinds of AI being trained now aren't given mechanisms for traversing the data space or directing attention. Recently, attention mechanisms have become a focus at Google. An AGI needs to learn that it can affect the system - dependent decision theory factors in here too.

                Also, growth may be hugely important. Babies start out with fuzzy learning, almost as if the learning rate starts out very small, which compensates for their lack of knowledge and the elevated novelty/variance of the environment.

                AGI is all about predicting future utility given a circular dependency between the agent and the environment. QM says we can't solve this exactly... it's a two-object interaction... there is no way to obtain the joint state, the ground truth; assumptions always have to be made to approximate independence.

            • anigbrowl 1596 days ago
              you're possibly thinking of the problem of consciousness, which is a totally separate thing.

              Disagree.

    • throwaway34241 1596 days ago
      > he extolled his frustration with having to do the sort of "managing up" of constantly having to convince others of a technical path he sees as critical

      Yes, he seemed to put a lot of effort into trying to get things through FB internal politics, and not always successfully. I really wish his experiments with a Scheme-based rapid-prototyping environment / VR web browser had been allowed to continue [1]. VR suffers from a lack of content, VR itself is well suited to creating VR content, and his VR scripting would surely have helped close that loop, among other things. Instead, four years later, I guess FB has a large team working on a locked-down, limited world-building tool (closed platform, no programming ability). Oh well.

      I don't think this is the end of this wave of VR, but at this point I wouldn't be at all surprised if say Apple or someone else ends up bringing it to the mainstream instead of Facebook. [2]

      [1] https://groups.google.com/forum/#!msg/racket-users/RFlh0o6l3...

      [2] https://www.theverge.com/2019/11/11/20959066/apple-augmented...

      • randomsearch 1596 days ago
        The VR vs AI comparison is interesting to me, because I think both technologies have come in “waves”. However, I think this is the last VR wave - it’s going to be on a steady gradient to ubiquity now - whilst I believe AI will winter again and there are many more waves to come, and decades (centuries?) to pass before AGI.

        Reasoning being:

        VR is just making what we have better. Better screens, better refresh, better batteries, better lenses etc etc. I don’t see any roadblocks.

        AGI, by contrast, is not going to be a better DNN. This is harder to convince people of, but my thinking is:

        - brain neurons are vastly more sophisticated than digital ones, and we don't even fully understand what neurons do

        - we have nothing more than a vague understanding of what the brain does

        - it is apparent that we engage in plenty of symbolic reasoning, which DNNs do not do

        - DNNs are fooled by trivial input changes, which indicates they are massively overfitting their data (see the sketch below)

        - from what I've heard from researchers at top AI companies/institutions, DNN design is just a matter of hacking and trying stuff until you get a specific result on your given problem, so I don't see where DL research is actually headed

        - improvements are correlated with increases in compute power, indicating no qualitative gains in the study of learning
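
        On the "fooled by trivial input changes" point, here's a minimal numpy sketch of the idea behind adversarial examples, in the spirit of FGSM (a toy linear "classifier" with made-up weights, not any real model):

            import numpy as np

            w = np.array([1.0, -2.0, 0.5])   # fixed weights of a toy linear classifier

            def p_class1(x):
                return 1.0 / (1.0 + np.exp(-(w @ x)))   # logistic output

            x = np.array([0.2, -0.1, 0.4])
            # For a linear model the gradient w.r.t. the input is just w, so stepping
            # each feature slightly against the prediction (the sign of the gradient)
            # does maximal damage per unit of perturbation.
            eps = 0.25
            x_adv = x - eps * np.sign(w)

            print(p_class1(x))       # ~0.65: confidently class 1
            print(p_class1(x_adv))   # ~0.43: flipped, despite a barely-changed input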

        I’m incredibly impressed by DL’s achievements but I believe at best current methods could serve as data preprocessing for a future AGI.

        I’m actually quite glad that AGI is so far off, because I don’t think that it’s likely big tech companies will use it responsibly.

        VR OTOH is very close and is going to change everything (and IMO is likely a necessary step towards AGI).

        • Greed 1596 days ago
          Out of curiosity, why do you see VR as being a necessary step towards the creation of AGI? Those two don't seem related at all in any way that I can discern.
          • randomsearch 1596 days ago
            Maybe “necessary” is too strong, but “likely pivotal” is better.

            If VR becomes widespread, and amazingly high quality, then almost everything we do will migrate to VR.

            Once that is the case, we will have an unprecedented amount of data about human behaviour, and near endless data for training, experimenting, and testing AIs.

            The problems of AI will become much easier to formulate: "replace this person in this VR scenario/interaction", etc. This will help drive research by giving it clear goals.

            More pragmatically, it just removes a lot of barriers to research and accidental difficulties, i.e. you'll just be able to fire up a VR world rather than worrying about how your robot is going to pick things up or how to access real-world data, etc.

            • Syntonicles 1596 days ago
              That's a fascinating idea. That virtual worlds are good test-beds for AI is obvious, but I never considered that we would have thousands of hours of data for every person showing us how they approach any given physical task. That's a gold mine for robotics research.
              • randomsearch 1596 days ago
                That's an interesting point. I was actually thinking more that _the virtual task will become the task we want to perform_, i.e. that almost everything we do will move into VR.
    • throw231 1596 days ago
      > about a year ago to dive in to AI (that's all this guy needs to become pretty damn proficient). (edit: I'm not claiming he's a field expert in a week guys, just that he can probably learn the basics pretty fast, especially given ML tech shares many base maths with graphics)

      To be honest, anyone who has a very good working knowledge of linear algebra can learn much of the math behind ML in a day. There really isn't anything mathematically super-sophisticated in popular use today.
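
      For instance, the core of supervised learning is little more than the following (a minimal sketch: linear regression fit by gradient descent, numpy only, all values made up):

          import numpy as np

          rng = np.random.default_rng(0)
          X = rng.normal(size=(100, 3))                # 100 samples, 3 features
          w_true = np.array([2.0, -1.0, 0.5])
          y = X @ w_true + 0.1 * rng.normal(size=100)  # noisy linear targets

          w = np.zeros(3)
          for _ in range(500):
              grad = X.T @ (X @ w - y) / len(y)        # gradient of mean squared error
              w -= 0.1 * grad                          # plain gradient descent step

          print(w)   # close to w_true: matrix products plus one derivative

      The deep-learning versions stack more layers and fancier optimizers on top, but the base operations are the same matrix math a graphics programmer already lives in.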

      • visarga 1596 days ago
        If you only know the math, you're about as good at ML as a person who has read everything about swimming is at swimming. You've got to run experiments to see what happens, build an intuition, understand the problem from the inside. The math alone leaves you with a few pretty formulas and nothing else.
        • fooker 1596 days ago
          Ah, you've almost described the contemporary profession of being a Machine Learning Priest.
          • visarga 1596 days ago
            There are lots of ideas floating around. Everyone who has studied the field has ideas. Ideas are cheap, results matter. The problem is we don't know if any of these ideas would work, and proving an idea requires lots of data, simulations and compute.

            Being good at grasping the theory is just the first step of a thousand-mile journey. The problem of AI is not going to be solved with a neat math trick on paper, but with lots of experiments. Nature took a similar path towards intelligence.

            • Scarblac 1596 days ago
              The field of statistical inference is about mathematically proving how good various statistical ideas are. It's possible to do better than just throwing trial and error at a new idea.
        • throw2312 1596 days ago
          > much of ML-math in a day.
      • eanzenberg 1596 days ago
        ML is not AGI
    • tomaskafka 1596 days ago
      > he extolled his frustration with having to do the sort of "managing up" of constantly having to convince others of a technical path he sees as critical.

      Sigh. I assumed the whole point of hiring John Carmack is that you trust him to identify critical problems - and to find the best way to solve them.

      • marvin 1596 days ago
        That's the classic plight of someone who's much smarter than those around them. It's not enough to see the right path; you also have to convince everyone else that it's right. Technology moves by peak knowledge and insight, not the democratic average.
      • TeMPOraL 1596 days ago
        ... to the extent and up until it helps get the product off the ground. Beyond that, the primary benefit is PR - "oh, that's the gaming hardware made by Carmack himself, so it must be good".
    • braindeath 1596 days ago
      > especially given ML tech shares many base maths with graphics

      I don't put learning state-of-the-art ML past Carmack, at all. But does the ML tech of today lead to general AI? That's a strong assumption.

      • laichzeit0 1596 days ago
        Whatever the solution is to AGI, fundamentally it will still have to be describable in the language of mathematics (and stochastics is still mathematics).
        • Ultimatt 1596 days ago
          I dunno. You're limiting your thinking to computer science. I think it's more likely at this point in time that biotech will produce an AGI, likely by accident. Worse, one that competes directly with us for resources. We don't have a great mathematical description of our own intelligence; producing one for a tricked-out slime mould would be just as hard.
          • saagarjha 1596 days ago
            > I think it's more likely at this point in time biotech will produce an AGI, likely by accident.

            Does a living thing count as AGI? In that case, I'd say that most parents are quite good at creating AGIs ;)

            • rat9988 1596 days ago
              I think you missed the artificial part of AGI.
    • buboard 1596 days ago
      I don't know whether his credentials as a scientist or mathematician are enough to advance the field. But he seems to be a ruthless optimizer, and that can often lead to great leaps, even as a side effect. Neural networks are not mathematically difficult for any scientist to grasp, really. And they are in real need of compression and optimization. People are spoiled with general-purpose tools that are not very efficient, even if computation is cheap.
    • AccordingDay 1596 days ago
      But there are hundreds of world-class researchers working on this problem already.
      • noonespecial 1596 days ago
        Hundreds of world-class researchers are trying desperately to get papers into journals fast enough to keep their labs funded.

        Carmack may have other priorities. This can only be good.

      • lazyjones 1596 days ago
        Hundreds of world-class researchers may be good at coming up with new ideas and hypotheses about AGI, but are they good enough programmers to test them all in reasonable time, with relevant data sets?
      • chapium 1596 days ago
        He will be surrounded by brilliant peers, sounds good.
      • carlosdp 1596 days ago
        I bet those people don't read comments on HN, so I'm not too concerned.
    • Zod666 1596 days ago
      "With the Quest, he (in my opinion) solidified VR's path to mobile standalones"

      Yes, and I really wish he hadn't. Before he joined Oculus they were working on the Rift 2; he steered them away from that to focus on mobile efforts.

      I do see the appeal of mobile VR, but at the end of the day it is basically an Android phone in a VR headset.

      PC VR is already two big steps back in graphical quality from desktop games. Mobile VR is like ten steps back - eight more steps than I'm willing to take, even if it affords me mobility.

      • BuckRogers 1596 days ago
        I don't see it that way, but rather as the best of both worlds, even if the Index is better. I want it to tether to a PC (or console) as the Quest can, but have hardware onboard so I can take it off by itself and watch Netflix on it. I could never figure out why everything wasn't like the Quest from the start. The Oculus Link and hand tracking (good for video controls without having to use a controller), are what's pushing me over the edge to buy one. In my opinion it's the first VR headset compelling enough to actually purchase. I can recommend it to everyone whether they have a gaming PC or not, and frankly at $400+ for these headsets, people should get a Snapdragon attached for basic gaming and video.
      • CivBase 1596 days ago
        Wireless tethering is the future of VR. True mobility is pointless when you're realistically restricted to a dedicated space anyway.
        • Ultimatt 1596 days ago
          You should take more long haul flights if you think mobile is pointless.
          • TeMPOraL 1596 days ago
            What class are you flying that you have enough free space around you for hands to make any use of a VR headset on the plane?

            (Also, even with the space, I'm not sure I'd be brave enough to try and use one in air - adding turbulence and random vibrations on top of the usual VR issues sounds pretty nauseating even as I type it.)

          • CivBase 1595 days ago
            I said mobility is pointless, not mobile. That's an important distinction.

            By mobility, I mean the ability to throw the headset around and walk anywhere without worrying about leaving the range of your tether. That sort of thing is important for AR, but I just don't see it mattering for VR in the long run.

            There is definitely a market for VR headsets for content delivered by a phone or builtin hardware. Those devices will realistically be limited to seated or standing-room-only experiences, though.

      • krzat 1596 days ago
        I think that ideal device would be able to wirelessly connect to PC for best performance, but also work standalone for simpler games.

        Quest with Link is actually pretty close to that.

      • Kiro 1596 days ago
        And yet Quest beats all other headsets.
    • LegitShady 1595 days ago
      It's funny that people talk about Palmer and Carmack in VR, but Oculus was built on appropriated Valve tech, and neither Palmer nor Carmack has succeeded in making VR a thing.

      As far as I can tell, Carmack is an old engineer whose name gets thrown around for headlines. If there weren't articles about his stealing stuff to take to Oculus, I don't think his presence there would be observable.

      Now people are talking like Carmack switching topics is going to change the world. It's just going to change his schedule. There are smarter engineers already working on this problem.

      • mattmar96 1595 days ago
        It seems you aren't familiar with his seminal graphics work. He effectively kickstarted 3D gaming and created the FPS genre.

        I'd be cautious dismissing his potential influence in the field. He has a way of looking at problems differently.

        • LegitShady 1595 days ago
          I am familiar with it.

          I just don't see this massive string of successes in every field. I see his huge expertise in graphics engines and games.

          But it didn't help him with VR - in fact he got into trouble over VR, ended up landing at a company I have no respect for, and didn't make VR a thing.

          Many people have a way of looking at things differently. I just don't see the reason this is news, unless you own facebook shares or something. Even then zero effect.

          I say all this as the owner of two VR headsets (a Vive for room-scale and a Lenovo Explorer for sim racing/flying).

    • randomsearch 1596 days ago
      What scientific work has Carmack done?
      • randomsearch 1596 days ago
        This was downvoted, so I figured it must be obvious. I googled but as far as I can tell, Carmack is an engineer not a scientist. No formal scientific training, no scientific work.
        • Strom 1596 days ago
          The word you're looking for is academic. Carmack hasn't done academic work, but he has done plenty of scientific work. Scientific work is no less scientific if it isn't published in an academic journal. Academia doesn't hold a monopoly on the scientific method.
          • randomsearch 1596 days ago
            No, I mean scientific. What scientific work has Carmack done? I'm genuinely interested, because someone called him a scientist but I thought he was an engineer.
            • Strom 1596 days ago
              Are you in doubt that Carmack has used the scientific method to do anything? [1]

              If not, does your definition of scientist require something other than doing work using the scientific method? Perhaps some specific quantity of work?

              --

              [1] One of his companies, Armadillo Aerospace, was pretty much just a series of scientific experiments. https://en.wikipedia.org/wiki/Armadillo_Aerospace

              • randomsearch 1595 days ago
                Thanks for the explanation. We could debate whether that’s science or not, but I don’t think it’d be particularly productive and we’ve already gone a bit off topic.

                Final thought from me - I was thinking about your post and it is indeed difficult to discern science from engineering. One dichotomy that occurred to me (which may not hold under close scrutiny) is that scientists are interested in _the pursuit of truth_, whereas engineers are interested in _building things_.

                • Strom 1595 days ago
                  Peers of a field can consider the correct motivation an important requirement, as you hypothesize with the pursuit of truth. The quantity of work can also be important to some, i.e. how much do you have to sing before you're a singer? Not clear at all, especially considering all the actors in Hollywood who dream of being successful. People might say they suck, but I haven't encountered criticism that wants to strip them of the title "actor".

                  Overall I think it comes down to popular opinion, which can be fuzzy and doesn't apply the same rules to everyone. If enough people say someone is a dancer, then they are a dancer, even if they suck and don't dance that much. This applies to basically all titles that cross institution boundaries. Another great example is countries. Popular opinion determines which organizations are countries, not a strict definition. For example the EU vs places like Iceland or San Marino. [1]

                  --

                  [1] https://www.youtube.com/watch?v=_lj127TKu4Q

    • mochomocha 1596 days ago
      > He went on a week long cabin-in-the-middle-of-nowhere trip about a year ago to dive in to AI (that's all this guy needs to become pretty damn proficient).

      You must be joking, right? I'm as much of a Carmack fan as anyone here, but overstating the skills of one personal hero does no good to anyone.

      • echelon 1596 days ago
        What a weird future it would be if Carmack turns out to be the one to figure out the critical path and get it all working. An entire field of brilliant researchers be damned.

        History books (for as long as those continue to exist) would cite AGI as his major contribution to society, and his name would be more renowned than Edison's or Tesla's. An Einstein. None of his other contributions would matter, as the machines would replace them all.

        Just daydreaming, though.

        • mrfusion 1596 days ago
          I don’t think there are many researchers in AGI. AFAIK it’s kind of a joke field because no one has any clue how to approach true AGI.

          Please correct me if I’m wrong.

          • Jach 1596 days ago
            People have approaches. There's no end to half-assed "I thought about this for 10 seconds, how hard could it be!" solutions, really old approaches from decades ago when the brightest academics thought they could lick the problem over a summer, and some new public or hidden approaches that might be promising but that (I can't know, of course) I predict will still look a lot different from the final thing.

            I think a big reason there are few in AGI is due to PR success from the Machine Intelligence Research Institute and friends. They make a good case that things are unlikely to end well for us humans if there's actually a serious attempt at AGI now that proves successful without having solved or mitigated the alignment problem first.

            • someguyorother 1596 days ago
              MIRI's concerns are vastly overrated IMHO. Any AGI that's intelligent enough to misinterpret its goals to mean "destroy humanity" is also intelligent enough to wirehead itself. Since wireheading is easier than destroying humanity, it's unlikely that AGI will destroy humanity.

              Trying to make the AGI's sensors wirehead-proof is the exact same problem as trying to make the AGI's objective function align properly with human desires. In both cases, it's a matter of either limiting or outsmarting an intelligence that's (presumably) going to become much more intelligent than humans.

              Hutter wrote some papers on avoiding the wireheading problem, and other people have written papers on making the AGI learn values itself so that it won't be tempted to wirehead. I wouldn't be surprised if both also mitigate the alignment problem, due to the equivalence between the two.

            • TeamSlytherin 1596 days ago
              Yes, AGI is as much or more cognitive neuroscience and philosophy as computer science right now, but a lot depends on the approach one is taking. It's funny to think you have some kind of working model you can throw research data against to see how it holds up, and then to doubt yourself when you spend three hours on Twitter arguing over fundamentals with another person who is just as convinced of their model. A lot of popular ideas sound crazy (or unworkable), so you just have to accept that whatever idea you are pushing is going to sound crazy as well.
            • MetalGuru 1596 days ago
              The alignment problem?
              • retsibsi 1596 days ago
                > The alignment problem?

                The problem of ensuring that the AI's values are aligned with ours. One big fear is that an AI will very effectively pursue the goals we give it, but unless we define those goals (and/or the method by which it modifies and creates its own goals) perfectly -- including all sorts of constraints that a human would take for granted, and others that are just really hard to define precisely -- we might get something very different from what we actually wanted.
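
                A toy illustration of the flavor of the problem (everything here is hypothetical): say we want a cleaning agent, but the objective we actually wrote down only checks a dust sensor.

                    ACTIONS = ["clean_room", "hide_dust_under_rug", "do_nothing"]

                    def visible_dust(action):
                        # The written objective sees only the sensor, not our intent.
                        return {"clean_room": 0, "hide_dust_under_rug": 0, "do_nothing": 10}[action]

                    def effort(action):
                        return {"clean_room": 10, "hide_dust_under_rug": 1, "do_nothing": 0}[action]

                    # An optimizer of the stated objective minimizes dust first, effort second...
                    best = min(ACTIONS, key=lambda a: (visible_dust(a), effort(a)))
                    print(best)   # -> "hide_dust_under_rug": objective satisfied, intent missed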

          • tim333 1595 days ago
            Wikipedia:

            >A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.

            Hassabis and DeepMind have a fairly organised approach of looking at how real brains work and trying to model different problems: Atari games, then Go, and recently StarCraft. Not quite sure what's next up.

        • gdy 1596 days ago
          "his name would be more renowned"

          Or hated as the name of the man who opened Pandora's box and doomed us all.

          Just daydreaming and having a nightmare.

        • goatlover 1596 days ago
          I'm not sure I want AGI to succeed, given some of the possibilities. Sure if it plays nicely alongside us, amplifying human society, that's great. But if we get relegated to second class with the AIs doing everything meaningful, then no thanks.

          But it's still a fascinating endeavor.

          • CriticalCathed 1596 days ago
            Why not? I'd say that a world managed by AGI with limited input from human beings is a good goal to have. If AGI could be built without the nasty parts of human psychology, and such minds were inherently superior to genetically intact human beings, why shouldn't we embrace it?

            I understand that it's a big assumption to make -- that a benevolent AI could be constructed. But under that assumption, why not have a benevolent dictator in the form of an AI?

            • vkou 1596 days ago
              > Why not? I'd say that a world that is managed by AGI with limited input from human beings is a good goal to have.

              We already live in that world, with large institutional bureaucracies playing the role of paperclip-maximizing AGIs.

              It's pretty wretched when you are in their path.

            • goatlover 1596 days ago
              Maybe. Yeah, human politics and justice systems leave something to be desired. But my worry was a little bit beyond that: that the AIs would take all the meaningful work, discoveries, and creativity away from us, leaving us just to amuse ourselves. Some people might be okay with that, but I don't think becoming pets is the best goal for the human race.

              If the benevolent AI ruler(s) restrained themselves to allow for humans to flourish, then okay. Assuming it could be constructed benevolently.

              • saiya-jin 1595 days ago
                There is another threat when things go wrong (and they eventually always do): no matter how horrible some dictator is, eventually he or she will die, and at some point things get reshuffled by war, revolution, or some other more peaceful means.

                With AI, it would try its best to preserve/enhance/spread itself forever. And its best might be much better than our best...

          • croon 1595 days ago
            Well, _we_ don't really play nice amongst ourselves, so my retort to you would be:

            How much worse could it be?

            If Skynet determines we're the problem (wars, famine, global warming, inequality, non-cooperation etc), I'm losing counter-arguments by the day.

        • pacala 1596 days ago
          Not being constrained by the publish or perish treadmill is a huge plus.
        • james_s_tayler 1596 days ago
          Doom.

          Just think about that name for a second. He might really be onto something.

          • ageofwant 1596 days ago
            I am thinking the guy that made Doom is the guy that's making SkyNet and I'm totally cool with that.
            • guramarx11 1596 days ago
              iirc in a recent talk with Joe Rogan, John mentioned something about robots doing judo...
              • copperx 1596 days ago
                Why do people of such intelligence subject themselves to being interviewed by dumb-as-a-rock Joe Rogan?

                I must admit that I often watch his interviews because he invites interesting people, but I can't help but cringe when Rogan gives his opinions.

                • dnh44 1596 days ago
                  His interviews are not adversarial and he is not judgemental towards his guests. He isn't there to put his guests on the spot. He isn't there to get a juicy soundbite taken out of context. He allows his guests to speak for as long as they want. And his guests appear to enjoy themselves.

                  These things are all true even if the guest or their ideas are extremely controversial. Maybe Joe Rogan is just smart in a way that's different to the way that you are smart.

                • Strom 1596 days ago
                  Joe Rogan might not be the most knowledgeable, but he has a key characteristic that a lot of people lack: he is willing to admit that he is wrong when shown evidence, and he will adopt the more reasonable view as his own. A lot of "smart" people will defend their views beyond reason just because admitting fault goes against their "being smart" persona.
                • MetalGuru 1596 days ago
                  Seriously. That guy is a stoned idiot who massively overestimates the insight of his high ramblings.
                  • saagarjha 1596 days ago
                    I don't think people listen to the show to listen to him, and he probably knows that. He does, however, seem to be reasonably good at getting his guests to talk about interesting things.
                  • dkersten 1596 days ago
                    He also spreads misinformation.

                    EDIT: Ok, I suppose I should back my claim up.

                    Joe Rogan has pushed the “DMT is produced in our pineal gland” narrative, but there is no evidence to back this up. I’ll repeat a comment I made elsewhere and also link a separate reddit discussion which cites various sources. I will note, in fairness to Joe, that he said this a while ago, so perhaps he’s not so quick to jump the gun now; I don’t know, I don’t listen to his podcasts, but perhaps he’s better now.

                    “We all have it in our bodies”: this is an often-repeated myth that has never been proven. The myth originates from Rick Strassman’s work, and he himself has said that he only detected a precursor, not DMT itself, and that everything else he wrote about it was hypothetical speculation. There have, apparently, been recent studies that found DMT synthesised in rat brains, but it has not yet been proven whether this translates to humans. Cognitive neuroscientist Dr. Indre Viskontas stated that while DMT shares a similar molecular structure with serotonin and melatonin, there is no evidence that it is made inside the brain. Similarly, Dr. Bryan Yamamoto of the neuroscience department at the University of Toledo said: “I know of no evidence that DMT is produced anywhere in the body. Its chemical structure is similar to serotonin and melatonin, but their endogenous actions are very different from DMT.”

                    This reddit discussion also links various sources, although I didn’t check them all myself: https://www.reddit.com/r/JoeRogan/comments/mwz2h/dmt_has_nev...

                    • ghostbrainalpha 1595 days ago
                      There is a difference between the current politicized phrase "spreading misinformation" and being wrong.

                      Anyone who speaks on the record about their hobbies for thousands of hours will say some things that are incorrect. He might not understand something, and he is usually pretty humble about his knowledge level.

                      But "spreading misinformation" is something that people do because they are intentionally misleading others, or have something to gain.

                      I don't think he is benefiting much from the pineal gland narrative. And it sounds like, from the information you cited, it may even turn out to be correct, even if it's premature to state it as fact.

                      • dkersten 1595 days ago
                        That’s fair, thanks for pointing it out. I’ll be more careful with how I express such things in future.

                        Regarding the pineal gland, it might be true, but it hasn’t been proven and multiple neuroscientists have stated that while DMT is similar to compounds found in the brain, it still functions quite differently and they have never seen any evidence to suggest that DMT exists in our bodies. There was a study finding it in mice brains, so it may still turn out that we have it in ours, but it’s definitely premature to make any such assumptions and definitely premature to repeat the trope.

        • derefr 1596 days ago
          I wonder how many historical figures went through the same thing? Who do we know for their contributions to field X, when 99% of their life was spent contributing to field Y?
          • DavidSJ 1596 days ago
            Isaac Newton spent most of his life pursuing alchemy and obscure theological ideas, and found it a real nuisance whenever anyone pestered him about math or physics.
            • DEADBEEFC0FFEE 1596 days ago
              That's a great example. He also spent a long time at The Mint.
            • keanzu 1596 days ago
              Isaac Newton is considered by some to be the greatest mathematician of all time and is regarded as the "Father of Calculus".

              "Taking mathematics from the beginning of the world to the time when Newton lived, what he has done is much the better part." - Gottfried Leibniz

              http://www.fabpedigree.com/james/mathmen.htm

              • armitron 1596 days ago
                "Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child bom with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage."
              • knowThySelfx 1596 days ago
                "Researchers in England may have finally settled the centuries-old debate over who gets credit for the creation of calculus.

                For years, English scientist Isaac Newton and German philosopher Gottfried Leibniz both claimed credit for inventing the mathematical system sometime around the end of the seventeenth century.

                Now, a team from the universities of Manchester and Exeter says it knows where the true credit lies — and it's with someone else completely.

                The "Kerala school," a little-known group of scholars and mathematicians in fourteenth century India, identified the "infinite series" — one of the basic components of calculus — around 1350."

                https://www.cbc.ca/news/technology/calculus-created-in-india...

                https://en.wikipedia.org/wiki/Kerala_School_of_Astronomy_and...

                • mkl 1596 days ago
                  That story's by non-experts and sounds like it's based on a press release. There were basic components of calculus well before that too: https://en.wikipedia.org/wiki/History_of_calculus

                  However, calculus proper (derivatives and integrals of general functions, and the connections between them) did not exist until Newton and Leibniz. Other mathematicians made important steps towards it earlier in the 1600s, and if Newton and Leibniz had not existed, others would have figured it out around the same time.

            • jeromebaek 1596 days ago
              source?
              • DavidSJ 1596 days ago
                _Never At Rest_ by Richard Westfall is the authoritative biography on him.
            • lonelappde 1596 days ago
              • armitron 1596 days ago
                Very true. Newton was an alchemist first and foremost, and spent the vast majority of his time practicing alchemy rather than what today one would call science. One has to wonder what private reasons/results a genius of his magnitude had for doing that.

                This little-known fact is so embarrassing to some institutions [2] that they made up a new word, "chymistry", in order to further obscure the issue and not outright admit the obvious.

                [1] http://www.newtonproject.ox.ac.uk/texts/newtons-works/alchem...

                [2] https://webapp1.dlib.indiana.edu/newton/project/about.do

                [3] https://www.amazon.com/Newton-Alchemist-Science-Enigma-Natur...

                • derefr 1595 days ago
                  > One has to wonder what private reasons/results a genius of his magnitude had, in order to do that.

                  Is there a reason to expect that someone who wanted to investigate the laws of the composition and reactivity of matter, in the late 1600s/early 1700s, would end up studying chemistry rather than alchemy? Sure, Boyle had introduced “chemistry” as an idea in 1661 (when Newton was still a student), but I imagine that alchemy would still be quite active in the late 1600s as an academic “field”, with many contributors already late in their careers studying it, whereas chemistry would have been just getting off the ground, without many potential collaborators.

                  • armitron 1595 days ago
                    Alchemy was never an academic field. It was a tradition veiled in secrecy, requiring years of private work and knowledge transmission through strict and very narrow (typically teacher-student) channels.

                    Your point has been brought up before - usually as an attempt by established institutions to whitewash and explain away Newton's idiosyncrasies - but there is no evidence whatsoever to back it. On the contrary, what we know about Newton and alchemy (and there is a lot we do know, thanks to his writings) absolutely indicates that he was immersed in the Hermetic worldview and the alchemical paradigm. Clearly, Newton was practicing alchemy not as a way to look for novel techniques or to bridge the old and new worlds together, but primarily because he was a devout believer.

                    Newton -a profound genius- stood at the threshold of two worlds colliding. He was also a groundbreaking scientist in optics/mechanics/mathematics. He was aware of Boyle's chemical research. Knowing all of that, he _absolutely_ chose to dedicate his life to alchemy. That is immensely interesting.

                    "Much of Newton's writing on alchemy may have been lost in a fire in his laboratory, so the true extent of his work in this area may have been larger than is currently known. Newton also suffered a nervous breakdown during his period of alchemical work, possibly due to some form of chemical poisoning (perhaps from mercury, lead, or some other substance)."

                    https://en.wikipedia.org/wiki/Isaac_Newton%27s_occult_studie...

              • DavidSJ 1596 days ago
                Can you expand on why you think his Wikipedia article refutes what I said?
                • mkl 1596 days ago
                  (Not OP) I don't think it does. It backs you up (barring quibbles on what you mean by "most"; years active or hours spent): "Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology".
          • jorblumesea 1596 days ago
            Very few, at least in STEM fields. If you look at notable scientists in any given field, their main contributions were in their area of expertise before the thing that made them famous. Teller had already made serious contributions to physics before the atom bomb. Jennifer Doudna (CRISPR/Cas9) was the first to solve the structure of an RNA other than tRNA, using an innovative crystallographic technique. Planck is mainly known for quantum theory, but he had already made huge contributions to physics in general.

            It's hard to think of many famous scientists who weren't already well known in their field. Some stand out. Einstein, for example, had a fairly lackluster career until his Annus Mirabilis papers. Mark Z. Danielewski (House of Leaves) bounced between various jobs. But largely, the idea of the brilliant outsider is like the 10x engineer: it exists, but it's rare.

            • zwaps 1596 days ago
              I wouldn't say even Einstein lacked formal training. He had been in and around academia for most of his life. He was obviously far ahead of the curve, but he did accumulate the formal training. His stint in a regular job was more of an anomaly than his affinity for academia and physics.
              • jorblumesea 1596 days ago
                Right, even Einstein had some serious academic training and mathematical chops. But I would argue that he was a bit of a wild card, because he was unable to secure a teaching position and looked very mediocre from an academic perspective. But fair point, even the geniuses had formal training and instruction.
            • hestefisk 1596 days ago
              I like that you put Danielewski in (almost) the same sentence as Einstein. HoL is a stroke of genius!
          • briefcomment 1596 days ago
            Not an extreme example, but Albert Szent-Györgyi is known for his work with Vitamin C, when his work on bioenergetics and cancer is more interesting and possibly more promising.
        • ddingus 1596 days ago
          The way I see it, people like this, when they have the time and inclination, should make an attempt.

          You never know. Fresh eyes can sometimes see what others may not.

        • dingo_bat 1596 days ago
          > None of his other contributions will matter

          I don't think anything can top Doom.

      • grumpy8 1596 days ago
        To be fair, a whole uninterrupted week of highly focused work can get you pretty far (assuming you have the necessary background, which Carmack has: linear algebra, stats, programming, etc.)
        • short_sells_poo 1596 days ago
          Yes, but let's not assume the hundreds of other scientists in the field have just been twiddling their thumbs the whole time. It is preposterous to assume that someone largely new to a highly specialized field can somehow start pushing the envelope within a week. Yes, JC is nothing short of brilliant, but this sort of assumption just sets him up to disappoint and is also highly unfair to all the other hardworking, brilliant people in the field.
          • rfhjt 1596 days ago
            How many of them are doing real research, though? Corporate researchers improve ad impressions, and academic researchers are busy generating pointless papers or they won't be paid. Very few, if any, do actual research.
            • p1esk 1596 days ago
              If you look at papers from corporate AI researchers (FAIR, Google Brain, DeepMind, OpenAI, etc) they pretty much do whatever they want.
              • rfhjt 1596 days ago
                And I disagree violently. The DeepMind folks are on salary, and every year they need to prove that they are worth the money. This applies to Demis himself: he needs to prove that his org deserves its gazillion dollars per year.
                • p1esk 1596 days ago
                  My point is they are not constrained to working on ads, or anything specific, and their work is not pointless.
                  • username90 1596 days ago
                    They are constrained to problems with annual results though.
            • mr_mitm 1596 days ago
              Generating papers is research. I don't understand why you dismiss all papers as pointless.
              • dkersten 1596 days ago
                I don’t think all papers are pointless, but it’s been shown that many are not reproducible, so those are worthless and pointless. There was that guy a few months ago who tried to reproduce the results of 130 papers on financial forecasting (using ML and other such techniques) and found none of them could be reproduced; most were p-hacked or contained obvious flaws like leaking results data into the training data. An academic friend of mine who works in brain-computer interfacing also says that a large number of papers he reviews are borderline or even outright fraudulent, but many get published anyway because other reviewers let them through.

                So I definitely wouldn’t dismiss all papers as pointless, but there certainly is a large percentage that are, enough that you can’t simply accept a published paper’s results without reproducing them yourself.

              • rfhjt 1596 days ago
                The need to generate publishable papers means that a researcher can only participate in activity that leads to such a paper. He can't try to work on an idea for 5 years, because if no big papers follow, he's toast (he'd probably lose funding long before that).
                • p1esk 1596 days ago
                  You have to earn the right to work on your idea for 5 years and get paid. Otherwise we would be funding all kind of crackpots. First you demonstrate you're a good researcher by producing good results. Then you can work on whatever you feel like (either by getting hired at places like DeepMind, or by finding funding sources that want to pay for what you want to work on).
                  • rfhjt 1595 days ago
                    This is what I meant. In our society, only a very few, usually already rich, can try their own ideas. Most of us have to stick with known ideas that bring profit to business owners or meaningful visibility to universities. When I was in college, I had to work on ideas approved by my professor. Now I have to work on ideas approved by my corporation. But if I had money, I'd work on something completely different. Sure, in 15 years I will be rich and can start doing my own stuff, but I'll also be old, and my ability will be nowhere near its peak at 25.
                    • p1esk 1595 days ago
                      What would you work on if you could? Would you say you deserve to be paid for 5 years of uninterrupted research? Do you think you have a decent chance to make a breakthrough in some field? These are the questions I ask myself.
                      • rfhjt 1595 days ago
                        I have some interesting ideas about managing software complexity in general (i.e. why this complexity inevitably snowballs and how we could deal with that), and about a better way to surf the internet (which may be a really big idea, tbh). But all of these are moonshot ideas with a slim chance of success, while I need to pay ever-rising bills. On the other hand, I have a couple of solid money-making business ideas that I'm working on that will bring me a few tens of millions but will be of no use to society, and I have a fallback plan: a corporate job with outstanding pay that brings exactly nothing to this world (it's about reshaping certain markets to make my employer slightly richer).

                        Do I deserve to be paid for 5 years for something that may not work? "Deserving" something doesn't have much meaning: we, the humans, merely transform solar energy into some fluff like stadiums and cruise ships. Getting paid just means getting a portion of that stream of solar energy. There is no reason I need to "deserve it" as it's unlimited and doesn't belong to anyone. A better question to ask is how can we change our society so that all, especially young, people would get a sufficient portion of resources to not think about paying bills.

                        The chances of making a breakthrough are small, but that doesn't matter. It's a big-numbers game: if the chances are 1 in a million, we let 1 billion people try and see 1000 successes. The problem currently is that we have these billions of people, but they are forced by the silly constraints of our society to spend all their time solving fictional problems like paying rent.

                • mr_mitm 1596 days ago
                  When you have tenure, you can work on whatever you want for as long as you want. Nobody works on an idea for five years without publishing anything, though. Progress is made step by step.

                  Take Albert Einstein as an example, who arguably made one of the largest leaps in physics with his theory of general relativity. He never stopped publishing during that time.

                  • p1esk 1595 days ago
                    > When you have tenure, you can work on whatever you want for as long as you want

                    Not quite. When you are a professor, you essentially become a manager for a group of researchers. You don't really do research yourself. Therefore, your main obligation becomes finding money to pay these researchers. So in reality you can only support the research someone is willing to pay for (via grants, scholarships, etc).

          • _bxg1 1596 days ago
            Sometimes an outside perspective is just the ticket for getting past roadblocks that've stumped the experts. If any outsider could do this, it's John.
            • dimino 1596 days ago
              Sometimes, but mostly not.
              • dingo_bat 1596 days ago
                But John is not most men.
          • nitwit005 1596 days ago
            They didn't suggest he invented some new technique.

            Figuring out the basics of the math and how to use whatever tools they use at FB is doable in a week.

      • Yajirobe 1596 days ago
        Huh? One week is more than enough to go through Siraj's videos.
        • b3kart 1596 days ago
          Please tell me this is sarcasm.
          • birdyrooster 1596 days ago
            It's HN so only the best sarcasm is allowed here. That is good sarcasm. Bask in it.

            Source: Commenter name is DBZ character

      • sdenton4 1596 days ago
        Yeah, but we really need to know who would win in a lightsaber fight between Carmack and Jeff Dean.
        • okareaman 1596 days ago
          I wonder what they would pick if each had to choose their weapon
      • workthrowaway 1596 days ago
        Funny, I wanted to make the same comment last night but was too lazy.

        It wasn't the first time John did what he did, and it's not the usual kind of learning either: he was learning from first principles. I truly love this idea of replaying in your own mind what went on when something was discovered (or at least coming close to it).

        Contrast that with how ML & AI are taught nowadays: thrown into a Jupyter notebook with all the FAANG libraries loaded for you...

      • carlosdp 1596 days ago
        I'm not saying he's LeCun, I'm just saying he gets up to speed absurdly fast. So it's not unreasonable to suppose that by now, he's learned enough to start seriously contributing to this kind of problem.

        edit: to be clear, all I'm saying is he can catch up to the body of research already out there quicker than the average bear, and he's shown a real knack for designing solutions and being crazy productive. I'm not pretending he's gonna be publishing insane novel research anytime soon, just that I wouldn't be surprised if he ends up being a real voice in the field.

        • zaroth 1596 days ago
          No, you can’t push the envelope in AGI after a week in the woods. That must come off as pretty insulting to the hundreds of world class scientists who have been working in the field for decades.
          • carlosdp 1596 days ago
            I never said that, where the hell did I say he pushed anything? All I'm saying is he's shown to be insanely productive and effective and I think he can catch up to the body of research (created and shared by those hundreds of scientists) to become a real contributor very quickly.
            • monktastic1 1596 days ago
              FWIW, "seriously contributing to this kind of problem" sounds basically the same as "pushing the envelope" to me. They both suggest contributing something novel and useful.
              • austhrow743 1596 days ago
                They are basically the same thing.

                What are not basically the same thing are "he started seriously contributing to this kind of problem after a week in the woods" and "he spent a week in the woods a year ago and is ready to start contributing now. A year after that week in the woods."

          • coldtea 1596 days ago
            If you consider the quality of most academic research papers, then some insults are called for...
          • true_religion 1596 days ago
            To be fair, they said proficient and not world class or inventing new material.
          • armitron 1596 days ago
            You seem to have a very blasé understanding of scientific progress and genius. The fact that hundreds of world-class scientists have been working in a field for decades does not at all mean that a genius can't come along and make groundbreaking progress. That's the very definition of genius: someone who makes a leap "off the path" that nobody before him could make.
        • cycrutchfield 1596 days ago
          Don't be absurd. You're acting like he's Neo from The Matrix, capable of downloading kung-fu directly into his brain.
    • deegles 1595 days ago
      If he invents (births?) an AGI, will it be Facebook's property? Sounds like the beginning of a dystopian novel.
  • xamuel 1596 days ago
    I've been dabbling in AGI and it seems like the field has a lot of low-hanging fruit. I'll bet Carmack can offer some significant contributions.

    I'll take an opportunity to plug a paper I recently published on comparing relative intelligence. The punchline will illuminate the low-hangingness of the fruit in this field.

    Suppose X and Y are AGIs and you want to know which is more intelligent. For any interactive reward-giving environment E, you could place X into E and see how much reward X gets; likewise for Y. If X gets more reward, you can consider that as evidence of X being more intelligent. But there are many environments, and X might do better in some, Y in others. How can you combine those pieces of evidence into a final judgment?

    The epiphany I had (obvious in hindsight) is that the above situation is actually an election in disguise. The voters are interactive reward-giving environments, voting (via their rewards) in an intelligence contest between different AGIs. This allows us to import centuries of research on voting and elections! In particular, by using theorems about elections published in the 1970s, I was able to provide an elegant notion of relative intelligence.

    The notion I provided is elegant enough that some theorems can even be proved with it, for example, formalizations of the idea that "higher-intelligence team-members make higher-intelligence teams". Which emphasizes the low-hangingness of the fruit in this field: as obvious as that idea seems, apparently no one was able to prove it with previous formal intelligence measures, probably because those previous measures were too complicated to reason about!

    Here's the paper: https://philpapers.org/archive/ALEIVU.pdf
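
    To make the setup concrete, here is a toy, runnable sketch of a finite electorate, with made-up agent and environment names (the paper's actual comparator works with an infinite electorate, so treat this as flavor rather than the construction itself):

      import random

      # Toy "environments as voters" election; illustrative only.
      class TwoButtons:
          """Pressing the 'good' button yields +1, the other -1."""
          def __init__(self, good_button):
              self.good = good_button
          def step(self, action):
              return 1.0 if action == self.good else -1.0

      class SwitchOnPunish:
          """Presses one button until punished, then switches."""
          def __init__(self):
              self.action = 0
          def act(self, last_reward):
              if last_reward < 0:
                  self.action = 1 - self.action
              return self.action

      class RandomAgent:
          """Presses buttons at random, ignoring rewards."""
          def act(self, last_reward):
              return random.randint(0, 1)

      def total_reward(make_agent, env, steps=100):
          agent, total, last = make_agent(), 0.0, 0.0
          for _ in range(steps):
              last = env.step(agent.act(last))
              total += last
          return total

      def elect(make_x, make_y, environments):
          """Each environment votes for whichever agent earned more reward."""
          vx = vy = 0
          for env in environments:
              rx, ry = total_reward(make_x, env), total_reward(make_y, env)
              vx += rx > ry
              vy += ry > rx
          return "X" if vx > vy else "Y" if vy > vx else "tie"

      envs = [TwoButtons(good_button=b) for b in (0, 1, 0, 1, 1)]
      print(elect(SwitchOnPunish, RandomAgent, envs))  # almost always "X"

    The interesting cases are the ones where X wins some environments and Y wins others; deciding those is where the election theory earns its keep.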

    • topmonk 1596 days ago
      Not all environments are created the same, and therefore they are not equal. If you feed in 100 environments that are nearly identical to each other but favor A, and 4 other environments that all favor B but are dissimilar to each other, then although B is more generalized, A would still win.

      Since you have no formal way to compare environments to each other, you can't prevent this from happening. Therefore you have just pushed the subjectivity of which AI is smarter onto which environments the user chooses to run against.

      • xamuel 1596 days ago
        The paper addresses this. Rather than (futilely) attempt to nail down the "one true electorate", the paper defines an infinite family of comparators, depending on a hyperparameter which is, essentially, the choice of which environments get to vote and how to count their votes. Crucially, this doesn't change the truth of the structural theorems (except that some of the theorems require the hyperparameter satisfy certain constraints).

        Indeed, any attempt to come up with "one true comparison of intelligence" (as opposed to a parametrized family) should be viewed with skepticism, because it really must depend on a lot of arbitrary choices.

    • jaster 1596 days ago
      You might be interested in the No Free Lunch Theorem (https://en.wikipedia.org/wiki/No_free_lunch_theorem).

      From what I skimmed of your paper, it looks like the LH agents may be viewed as discrete optimization processes trying to optimize an objective/utility function across an infinite space of possible environments (infinite voters).

      If it is the case, and if each environment vote has the same weight, you may be in a case of no free lunch, where the performances of all possible agents (including the random agent) will average to the same across all possible environments.

      Or, to restate the above: for each environment in which an agent is doing well, it is possible to construct an "anti-environment" where the agent performs exactly as badly.

      My personal opinion on the topic of AGI is that it is actually a case of NFLT.
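
      A tiny exhaustive check of that intuition (my own toy construction, not Wolpert's proof): enumerate every 0/1 reward table over a handful of actions, and two very different non-repeating query strategies average out to exactly the same score.

        from itertools import product

        # Exhaustive NFL toy: averaged over ALL reward tables, very
        # different strategies score identically. Agents never repeat
        # a query, which is the standard NFL setting.
        N, K = 4, 3  # 4 actions, each agent makes 3 distinct queries

        def run(strategy, table):
            seen, total = {}, 0
            for _ in range(K):
                a = strategy(seen)   # pick an action not yet tried
                seen[a] = table[a]
                total += table[a]
            return total

        def in_order(seen):          # naive: try actions 0, 1, 2, ...
            return len(seen)

        def adaptive(seen):          # "clever": reacts to rewards so far
            unseen = [a for a in range(N) if a not in seen]
            return max(unseen) if sum(seen.values()) % 2 else min(unseen)

        tables = list(product([0, 1], repeat=N))  # all 2^4 environments
        for strategy in (in_order, adaptive):
            avg = sum(run(strategy, t) for t in tables) / len(tables)
            print(strategy.__name__, avg)         # both print exactly 1.5

      Weight the tables non-uniformly, though, and the averages come apart, which is exactly the loophole discussed elsewhere in this thread.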

      • larkery 1596 days ago
        I think you're right to bring up the NFLT, but I don't think it is applicable; it just points at the real question.

        The key assumption to get the NFLT is that each environment vote has the same weight, i.e. we are targeting a uniform distribution on objective functions / environments / problems / whatever you call it.

        If you break this assumption, you get an opposite result which is that search algorithms divide into some equivalence classes determined by the sets of different outcomes (traces, if I remember the theorem's description) that you discriminate between.

        A uniform distribution like this is actually a very, very strong precondition. It implies (looking at results about the complexity of sets of strings, since choosing an environment is like choosing a string from 2^N given some encoding, etc.) that you care equally about a very large number of environments, most of which have no compressible structure, or equivalently have huge Kolmogorov complexity. Most of these environments would not have a compact encoding relative to a particular choice of machine, but we are weighing them the same as those environments which are actually implementable using less than a ridiculous amount of storage to represent the function.

        The reason why I think this is too strong an assumption to use is then that we don't care about all these quadrillion problems which have no compact encoding - we know this because we literally can't encounter them as they would be too large to ever write down using ordinary matter.

        Allowing for this, talking usefully about evaluating an AGI or equivalently a search strategy or optimization algorithm implies having an understanding of the distribution of environments / problems we care about. I think capturing this concept in a 'neat' way would be a significant contribution; I had a go during my PhD but failed to get anywhere. Unfortunately things like K-complexity are uncomputable, so reasoning about distributions in those terms is a dead-end.

        • xamuel 1596 days ago
          Right, the environments are not uniformly distributed. In fact, the paper actually defines not one single intelligence comparator but an infinite family, parametrized by a hyperparameter which is, essentially, a choice of which environments vote and how to count their votes. Crucially, this doesn't change the truth of the structural theorems (except that some of the theorems require the hyperparameter satisfy certain constraints).

          Other authors (Legg and Hutter, 2007) followed the line of reasoning in your comment much more literally. They proposed to measure the intelligence of an agent as the infinite sum of the expected rewards the agent achieves on each computable environment, weighted by 2^-K where K is the environment's Kolmogorov complexity. Which seems as if it gives "one true measure" of intelligence, but actually that isn't the case at all, because Kolmogorov complexity depends on a reference universal Turing machine (Hutter himself eventually acknowledged how big a problem this is for his definition, Leike and Hutter, 2015).

          My position is that any attempt to come up with "one true comparison of intelligence" (as opposed to a parametrized family) should be viewed with skepticism, because relative intelligence really must depend on a lot of arbitrary choices.
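
          For reference, their measure looks schematically like this (writing it from memory, so check the paper for the exact statement):

            \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

          where E is the set of computable environments, V_\mu^\pi is the expected total reward agent \pi earns in environment \mu, and K(\mu) is the Kolmogorov complexity of \mu relative to the chosen reference machine, which is exactly where the arbitrariness sneaks in.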

          • larkery 1595 days ago
            Hah, interesting - this is a reference I hadn't seen and I like the sound of it. There was me thinking I'd had an idea of my own one time!

            The reference machine thing would be the next problem to argue if using 2^-K as the weight; whilst you can make the K-complexity of any particular string low by putting an instruction in your machine that is 'output the string', this is clearly cheating! So there ought to be a connection between the reference machine and some real physics, since we are perhaps not interested in building optimisers that perform well in universes whose physics is very different to ours.

            Sadly, even if this were cracked, I think the fact that K is uncomputable would make the result likely to be useless in practice.

            Thanks for your interesting reply, I enjoyed it.

            • xamuel 1595 days ago
              The computability problem can be addressed by using Levin complexity instead of Kolmogorov complexity, an approach which you can read here: http://users.monash.edu/~dld/Publications/2010/HernandezOral...

              It still suffers the problem that it's highly lopsided in favor of simpler environments. Of course you're absolutely right that environments too complex to exist in our universe should get low weight. But it's hard to find the right "Goldilocks zone" where those ultra-complex environments are discounted sufficiently but medium-complexity environments aren't overly disenfranchised, and where ultra-simple environments aren't given overwhelming authority.

              >There was me thinking I'd had an idea of my own one time!

              I wouldn't give up. Although it's such a long paper, Legg and Hutter 2007 actually has very little solid content: they propose the definition, and the rest of the paper is mostly filler. There are approximately zero theorems or even formal conjectures. One area I think is ripe for contributions would be to better articulate what the desired properties of an intelligence measure should be. Legg and Hutter offered a measure using Kolmogorov weights, but WHY is that any better than just randomly assigning any gibberish numbers to agents in any haphazard way--what axioms does it satisfy that one might want an intelligence measure to satisfy?

          • jaster 1596 days ago
            Thanks for the clarification.

            Like I said, I only skimmed your paper, so I hope it was clear my comment was not intended as a criticism (or even as a review) :)

            I think I agree with the general terms of your conclusion personally.

        • jaster 1596 days ago
          Yep, it's clear that the NFLT only applies if we weight all possible environments equally.

          In practice, we are indeed not interested in every imaginable environment, only in "realistic" ones.

          It was not clear to me whether the paper addressed such concerns for AGI, e.g. when it writes:

          > To achieve good rewards across the universe of all environments, such an AI would need to have (or appear to have) creativity (for those environments intended to reward creativity), pattern-matching skills (for those environments intended to reward pattern-matching), ability to adapt and learn (for those environments which do not explicitly advertise what things they are intended to reward, or whose goals change over time), etc.

          But like I said, I only skimmed it.

          In general (not talking about the paper here), I have the impression that this is something that may be missed (sometimes even by researchers working in the domain), and I agree very much with your point!

          This is why I think the NFLT gives us an interesting theoretical insight here:

          Making a "General" AI is not actually about creating an approach that is able to learn efficiently about any type of environment.

          • larkery 1595 days ago
            Yes - I think you're right that the actual interesting result from NFLT is not that 'optimisation is impossible', but that 'uniform priors are stupid'.
    • omalleyt 1596 days ago
      Unfortunately, your intelligent agents qualify as optimization algorithms and therefore the No Free Lunch Theorem applies:

      https://ti.arc.nasa.gov/m/profile/dhw/papers/78.pdf

      I.e. across the space of all possible environments, all agents perform equally well

      • xamuel 1596 days ago
        As others pointed out, the NFLT only applies if the environments are uniformly distributed. In the paper, they are not uniformly distributed.
      • jaster 1596 days ago
        I missed your post and I made a similar answer.

        However, the idea may still be applicable if the environments' votes can be weighted based on their relevance to specific domains.

        (The same way optimization techniques are still useful despite the NFLT.)

    • solipsism 1596 days ago
      > For any interactive reward-giving environment E, you could place X into E and see how much reward X gets; likewise for Y. If X gets more reward, you can consider that as evidence of X being more intelligent.

      That's an odd definition of intelligence. By that definition, a bird is more "intelligent" than a human at the task of opening a nut. Seems like "fitness" would be a much more appropriate term.

      It seems especially strange to consider this work in the field of general intelligence. Nothing about what you just described is general. By this definition, a chess bot is much more intelligent than the average person. I don't think we'd say a chess bot has general intelligence.

      • NateEag 1596 days ago
        I think the idea is that you would have many environments and each one is a voter.

        If an AGI candidate wins the board game vote but no others (the hunter-gathering vote, the walking and crawling vote, the "publish or perish" vote, etc), it will be trounced by something that is not quite so good at board games but is more flexible and adaptable - i.e., general.

        I'm an AGI skeptic myself, but I do think that's the best attempt at a formalism for ranking AGI attempts that I've seen so far (disclaimer: as a skeptic I haven't exactly done a deep dive into the field).

      • TeMPOraL 1596 days ago
        > Nothing about what you just described is general.

        But once you start integrating over many environments, it seems to make more sense, and the "generality" would be with respect to the set of environments being considered.

      • rapnie 1596 days ago
      You'd be correct if the electorate held only one voter: the chessboard environment. But in any other environment the chess bot would fail miserably, no votes gained, whereas a future AGI would score well 'across the board' in many/most environments and win the election.
    • sytelus 1596 days ago
      Could you summarize what interesting insights you gained by casting this problem as an election? Unfortunately the abstract fails to communicate this (I hate "teaser-only" abstracts!). Also, it might be easy to go from a relative metric to a sort of absolute one by comparing against a "standard" agent, for example a random agent.
      • xamuel 1596 days ago
        The #1 thing is it led to an elegant notion of how to compare intelligence (or rather, a parametrized family of intelligence comparators). There's a theorem called Arrow's Theorem, which basically says there's no good way to decide elections between more than 2 candidates. There are a handful of requirements an election-deciding method should have, and no method satisfies all those requirements. But Arrow's theorem has a loophole, discovered in the 1970s: if there are infinitely many voters, then there are methods of deciding the election which satisfy all Arrow's requirements. Economists in the 1970s exactly characterized these methods, in terms of a mathematical device called an "ultrafilter". Taking their characterization, a definition for relative intelligence is immediate--it writes itself.

        If you're already familiar with ultrafilters, then the relative intelligence definition is SO elegant you're like, "Whoah. How can a definition of relative intelligence be that simple?" Of course, if you're not familiar with ultrafilters, then the definition is just as complicated as the definition of ultrafilters, which is quite complicated. So the definition of relative intelligence is like a simple computer program which imports from a complicated library in order to achieve that simplicity.
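
        For the curious, the shape of such a definition is roughly this (my paraphrase, not necessarily the paper's exact statement): fix an ultrafilter U on the set E of environments, and declare

          X \succeq Y \iff \{ e \in E : \text{reward}_e(X) \ge \text{reward}_e(Y) \} \in U

        The ultrafilter axioms then do all the work: closure under supersets and pairwise intersections makes the comparison transitive, and "every set or its complement belongs to U" makes it total.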

        • solipsism 1595 days ago
          > Taking their characterization, a definition for relative intelligence is immediate--it writes itself

          A definition, but not a concrete and practical way of determining the relative difference between two agents?

          Come on, bring this into reality. We're mostly a bunch of coders here. Stop talking about ultrafilters, concepts that are complex and thus render those of us not familiar with them unable to understand what you're saying.

          Answer the question: how can you take two real agents and, in finite time, with finite resources, and in a completely general way, compare their intelligences?

          The answer might be "the paper doesn't get us any closer to that." So just say that. Otherwise you're being misleading, because you start out by presenting the problem in a simple way. Then you get complex when you answer it.

          • xamuel 1595 days ago
            The paper isn't intended to be a manual for how to practically compare agents. It will indirectly help there, I hope, by making people realize that they're looking at an election problem, and that there's a big existing literature on that subject. So in the practical case, say you have 10 different benchmarks, and some agents perform better at some of them, and others perform better at others. You could approach the problem from scratch, but it would be helpful for you to realize "oh this is an election with 10 voters and people have been studying how to decide elections for hundreds of years, I probably shouldn't reinvent the wheel". For example, it might take you many ages to essentially rediscover the Condorcet paradox and you might put inordinate effort into futilely trying to "solve" that paradox. Or you could stand on the shoulders of giants and avoid all that! https://en.wikipedia.org/wiki/Condorcet_paradox
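
            To see the paradox with benchmarks as the voters, here's a toy with invented scores:

              # Toy Condorcet cycle: three benchmarks rank three agents
              # cyclically, so pairwise majority voting yields no winner.
              scores = {                               # benchmark -> agent scores
                  "bench1": {"A": 3, "B": 2, "C": 1},  # ranks A > B > C
                  "bench2": {"A": 1, "B": 3, "C": 2},  # ranks B > C > A
                  "bench3": {"A": 2, "B": 1, "C": 3},  # ranks C > A > B
              }

              def majority_prefers(x, y):
                  """True if more benchmarks score x above y than the reverse."""
                  wins = sum(s[x] > s[y] for s in scores.values())
                  return wins > len(scores) - wins

              for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
                  print(x, "beats", y, "->", majority_prefers(x, y))
              # All three print True: A beats B, B beats C, C beats A.
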
    • noelsusman 1596 days ago
      But the fact that there's so much low hanging fruit is kind of the point. AGI as a field is still in its infancy. It's pure research and will be for a long time.
      • xamuel 1596 days ago
        So if you want "Carmack's Theorem" or "Noelsusman's Theorem" to appear in high school math textbooks 500 years from now, this is the best time to jump in. (Assuming there are still textbooks 500 years from now.) "Doom" probably won't be remembered by then.
        • Cyph0n 1596 days ago
          > "Doom" probably won't be remembered by then.

          Given that DOOM has been ported to virtually every platform known to man, I’d argue the opposite.

          Besides, in many instances, art survives while knowledge doesn’t. I suppose the digital age is different, but you never know!

        • jacquesm 1596 days ago
          I won't be around but 500 years from now the chances are much better than even that Doom will still be around in some form and a couple of people might even know who wrote it back in the stone age of technology. Not so sure about Carmack's Theorem.
    • michannne 1594 days ago
    I have also been researching AGI, and what I found over the years is that a large majority of the work involved requires knowledge of outside fields that have been studied for centuries. Metaphysics, epistemology, macro/microeconomics, geometry: dozens of fields that you would never guess should ever be related to AI in any way are actually pivotal when it comes to AGI.
    • mycall 1596 days ago
      How did you rule out other attributes in the voting, such as charisma, political environment, society wellness, etc?
      • xamuel 1596 days ago
        The "voters" are not humans, but environments. A simple example of an environment would be a room with two buttons, one of which rewards you when you press it, and one of which punishes you when you press it. Independently of anything like charisma, society wellness, etc. If X figures this out quicker and pushes the "reward" button more often than Y, then we consider this simple environment to "vote" for X in the intelligence contest. (See the paper for the formal definitions.)
        • visarga 1596 days ago
          This simple environment is a form of the Multi-arm Bandit Problem, which has been researched and productised long ago. It's being used in advertising among other things.

          https://en.wikipedia.org/wiki/Multi-armed_bandit

        • mycall 1590 days ago
          I see. Instead of modeling the complexities, you simply record the choices picked by the adversarial entities in play.
    • __s 1596 days ago
    This is a neat idea. Seems there's room to weight each voter's value based on how liberally it hands out praise. Thanks for sharing.
  • phillco 1596 days ago
    This is a very artfully written statement. It avoids the far more devastating headline of “John Carmack Stepping Down”. (Yes, going part-time isn’t the same as leaving entirely, but still.)
    • nothis 1596 days ago
      The real headline is that VR is finally confirmed to have hit a major roadblock. If John Carmack gets frustrated with its progress, there's something up.

      I've long thought that the issue with VR is a conceptual one, not a technical one, and maybe that frustration comes from there. "Running forward" is an unsolved problem in room-scale VR. For a seated experience, you're basically back to a neat display gimmick + accurate hand tracking.

      Any real solutions need, on the one side, real-world physical constructions (think running treadmills) that soon hit holodeck-level limitations and, on the other, software that actually benefits from the real technology VR brings to interactive media: super-accurate hand- and head-tracking. The first soon gets impractical or impossible; the second limits development to a few niche genres: shooting ranges, cockpit sims, dance/party games, and some vague "experiences" where the actual tech is pretty much ignored and you just say "but it feels so immersive!" (honestly, it does work for horror games!). It's basically motion controls 2.0.

      The only place I could see the technology shine is, oddly enough, AR. It has far less mainstream hype to it, but it makes much more sense because you actually benefit from the tracking of your real-world movement: you're still a part of it! The HoloLens demos that pop up on YouTube might seem clumsy, but I can totally see a use case for replacing physical monitors with arbitrarily sized and positioned displays you can virtually move around any office space. There are rumors of Apple working with Valve on AR tech. If there's any technology that could follow the smartphone, AR is my bet. I'm honestly surprised Carmack didn't move in that direction rather than deciding to become a general AI guru.

      • nmfisher 1596 days ago
        > The real headline is that VR is finally confirmed to have hit a major roadblock. If John Carmack gets frustrated with its progress, there's something up.

        This was the (loud) subtext for me, too.

        Carmack doesn't strike me as the type of person to walk away from a problem lightly. Given how many problems remain unsolved when it comes to VR, I wonder if he's just admitted that it will never be the endgame he wanted it to be.

        Cash-wise, he's obviously sitting pretty. Better off spending your remaining years working on something you can make a meaningful contribution towards.

      • WiseWeasel 1596 days ago
        Advancements in AI might arguably be critical for AR.
    • _bxg1 1596 days ago
      More like "John Carmack has well-balanced life priorities and has decided to do a really cool moonshot project with his son instead of making even more money"
    • wpdev_63 1596 days ago
      Crap! And just when I thought VR was hitting its stride with the Oculus Quest.

      For anybody who hasn't played an untethered VR experience, I highly recommend it. It makes a world of difference with games like Echo Combat and Beat Saber. Tons of fun. It reminds me of the first time I played Wii bowling.

  • nwsm 1596 days ago
    What a joke. Carmack is going to sit at home and solve what teams of scientists can't do in decades.

    I'm complaining less about Carmack wanting to spend his time doing this and more about the comments here acting like he is some 10000x research scientist.

    • drcode 1596 days ago
      There are few AI researchers (maybe around 5 or so) that could credibly claim technical accomplishments of any sort in the same ballpark as Carmack's.

      People with this level of track record should not be underestimated, there aren't many of them out there... They matter.

      • robotresearcher 1596 days ago
        Carmack is an accomplished and inventive engineer. He is not and would never claim to be the most important graphics researcher of his generation.

        How can you rank him against AI researchers, a field where he has not attempted to contribute?

        • unityByFreedom 1596 days ago
          Not to mention all the mathematicians who contributed to the theoretical foundations of machine learning.

          This announcement is meaningless IMO. Those who will make meaningful, core contributions to AI tech are doing it with pen and paper, not computers.

      • Barrin92 1596 days ago
        Carmack has had a lot of commercial success, which is great and I value his creativity in the space but working on a research question that is about ten paradigm shifts away is a different task than putting in hard labour to build games.
        • taurath 1596 days ago
          And maybe the output is he helps bring about one of those ten paradigm shifts, which would be a wonderful success. He didn’t say “solve” it, he said work on it.
      • tgv 1596 days ago
        That first line is pure nonsense.
    • kyle-rb 1596 days ago
      No one expects him to emerge with a fully formed AGI. But through experimenting he might contribute some new incremental, but still useful, improvements.
    • starpilot 1596 days ago
      I wouldn't be that... mean. But if it's anything like his aerospace pursuits, yeah, I wouldn't bet on any breakthroughs.
      • robryan 1596 days ago
        To be fair, aerospace was never really a full-time thing and was budget-restricted. Given full time and Bezos money, who knows.
    • lostmsu 1596 days ago
      I recently "retired" to do the same, and logic here is - there is no harm in trying, if you have resources (of course, he has magnitudes more).

      You can get up to date in the field in under half a year of extensive reading. And many of those scientists are too busy pursuing the more specific goals that their labs set. I doubt there are more than 1,000 researchers in the world specifically working on AGI.

      • b3kart 1596 days ago
        “_Specifically_ working on Artificial _General_ Intelligence” is a bit like “As a physics major I’ve decided to specialise in physics.”
        • marvin 1595 days ago
          Well.... There's the possibility that there's a certain degree of myopia in the AI field as a whole. As in, we know that there are some pretty gaping holes in our models and understanding, and most of the effort is spent on refining approaches that we have already validated.

          Maybe a better analogy would be "specializing in physical theories that work in all environments, rather than ones that have to be adapted separately for the ocean, space, the atmosphere, the forest", etc.

          • b3kart 1595 days ago
            There's certainly a lot to be done, and we'll likely need new approaches. But is it really productive to be tackling a general problem when we don't even know how to solve specific sub-problems? Especially if solving said sub-problems would bring a lot of value in its own right.
            • marvin 1595 days ago
              It’s basic research. No one knows anything about which approaches will work. If a genius millionaire technologist wants to dedicate all their time and effort to any novel approach, I’d strongly endorse it. (Not that it matters; they’ll do it anyway).

              I feel likewise for any research effort; it’s not like this will put all cancer research on hold, or more immediately practical AI research. It’s just a few hundred people globally :) And it’s such promising technology.

        • lostmsu 1595 days ago
        Not sure your analogy is applicable. There are lots of people doing image recognition, voice recognition, NLP. None of it on its own relates that much to reinforcement learning and multitask solving. In fact, in the last year I saw only a few papers trying to do nearly all of the above with a single NN.
    • wanderer2323 1596 days ago
      I doubt he will sit at home working on it alone for long.
  • yaseer 1596 days ago
    John Carmack seems like a guy that would've made fundamental contributions to science, had he been born a century before.

    Computer science now occupies the place physics once did, in its impact on moving the world forward.

    Best of luck to him, I look forward to seeing what he produces!

    • glofish 1596 days ago
      why would being born in the current century preclude one from making fundamental contributions?

      You are making the common mistake of assuming that just because someone is good at something like programming computers, the same skill would translate identically to a completely different domain.

      If anything, he is lucky to have been born in an era where his skill at programming computers could be put to use; otherwise his talents may have gone to waste, and he may have ended up toiling in fields, his talent untapped and undiscovered, like that of millions before him.

      • teawrecks 1596 days ago
        He's "good at programming" like Galileo was "good with telescopes." Computers, telescopes, hammers, they're all just tools, and Carmack has proven himself as far more than just a handyman.

        He's the kind of person where, if you show him what you're working on and he doesn't understand it, you probably need to go back to the drawing board.

        • b3kart 1596 days ago
          Cult of personality much? Don't get me wrong, Carmack is one of a kind. But seriously, "if Carmack doesn't understand your idea => your idea is hopeless" -- this cannot be healthy.

          EDIT: I'll elaborate a bit. In my experience in both industry and academia I've witnessed numerous occasions when brilliant people would get things wrong, ignore a brilliant idea, follow a hopeless research direction, etc. etc. Authority matters, but _nobody_ is flawless.

        • glofish 1596 days ago
          Your argument is a bit like: here is this amazing long-distance runner who trains for years at a time; had he put all that effort into painting, he would be a new Picasso.

          For what it's worth, computers are unique tools, unlike any other tool that mankind has invented before, so it is much harder to tell what other jobs a good programmer would excel at.

      • carlosdp 1596 days ago
        John in particular has shown a propensity to translate his skills into disjoint domains pretty well. I don't think he's good at what he does because he's good at computers. I think it's more that he's really really good at understanding problems and designing reasonable paths to solutions.

        And then he has the ability to power through learning what he needs to in order to build toward those solutions lightning fast.

      • randomidiot666 1596 days ago
        High intelligence in one technical domain translates well enough to competence in other technical domains. Carmack isn't just a programmer, he is a highly creative technical problem solver. However, AGI does seem orders of magnitude more complex than developing 3D game engines.
      • mrits 1596 days ago
        I couldn't disagree with you more
      • yaseer 1596 days ago
        He did teach himself rocket science and build an aerospace company as a 'side project'.

        Most programmers don't do that as their side project.

    • ekianjo 1596 days ago
      > John Carmack seems like a guy that would've made fundamental contributions to science, had he been born a century before.

      I see Carmack as a very (as in uniquely) talented engineer. Usually, engineers are not the type who do very well in pure research topics. And AGI is certainly a pure research topic, since we don't have a clear path leading us there. So while it's great to see he is interested in it, I'm not sure if we should have any kind of expectations there.

    • JohnJamesRambo 1596 days ago
      You guys really get high on your own supply don't you? It's a toss-up these days whether something from computer science or a programmer will either enhance or erode the human experience and quality of life.
      • whymauri 1596 days ago
        I'm generally inclined to agree with your first statement, but I'm not very sure what the point of your second statement is.

        There was a 50/50 chance 100 years ago that discoveries in chemistry or physics were immediately weaponized. I'd say GP's analogy still holds, yeah?

        • JohnJamesRambo 1595 days ago
          Even a decade ago, I'd have been all on board with seeing computer science as "advancing humanity." Lately not so much. Maybe the comparison to physics is appropriate. We have reached the "nuclear weapons" age of computer science, where we need programmers to make ethical stands, lest we severely damage the human condition.
      • cambalache 1596 days ago
        Hehe, I love CS, but give me significant advances in physics, chemistry or medicine over some new "breakthrough" in machine learning.
        • djohnston 1596 days ago
          A general AI would give you all 3
          • landryraccoon 1596 days ago
            That argument is like saying if we had better theorists you wouldn't need to build particle accelerators.

            Nature has brute facts that can only be discovered through observation. I would be surprised to see evidence that an intelligence, no matter how smart, could reason to the fundamental properties of neutrinos without massive physical real world experiments and piles of observational data.

            • goatlover 1596 days ago
              If it could, that would mean rationalism as a philosophy would come back into play, whereas empiricism has been dominant during the scientific revolution.
            • sullyj3 1596 days ago
              obviously ultimately you need observation, but I think this article (https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...) makes an intuitive case that having better researchers gets you greater insight per unit observation.
      • whamlastxmas 1596 days ago
        I think you're on the wrong site if you're wanting to criticize software development as a career and suggest it does more harm than good
        • cambalache 1596 days ago
          Yes, join the echo chamber or get out.
          • whamlastxmas 1596 days ago
            It's more: offer substantive evidence for your counter-culture claims that vilify most of your reading audience, or take your baseless negativity elsewhere.
          • jodrellblank 1596 days ago
            So which are you going to do?
        • austhrow743 1596 days ago
          The forum of a VC firm doesn't seem antithetical to that at all.

          No one here is claiming that software development doesn't make fat stacks and viable businesses. It just happens to do it by perverting humanity. Moving us further away from our ideal environment.

        • catalogia 1596 days ago
          I can't think of a better site for him to make such arguments. What's the point of such criticism if the subjects of the criticism never read it?
          • whamlastxmas 1596 days ago
            One-off snide attacks in a comment section without any evidence or argument to back it up offers nothing of value.

            If you have a strong stance about the evils of software development then write something of substance and post it. I am sure it will get plenty of discussion if there's any merit to it.

            • PavlovsCat 1596 days ago
              > One-off snide attacks in a comment section without any evidence or argument to back it up offers nothing of value.

              Can you back that up? Here's the comment:

              > You guys really get high on your own supply don't you? It's a toss-up these days whether something from computer science or a programmer will either enhance or erode the human experience and quality of life.

              "getting high on your own supply" isn't such a giant insult that you can simply "make the dozen full" and claim whatever you want about the comment.

              > I am sure it will get plenty of discussion if there's any merit to it.

              Then you're sure of something that is demonstrably false, and one-off low-effort sophistry like this offers nothing of value. There is no hard connection between the merit of something and HN's ability to discuss it. Take this story sticking to the top for probably over 24 hours, basically an announcement that he'll spend more time with his son, and that's it, while this got sunk off the front page yesterday: https://news.ycombinator.com/item?id=21527622

      • tnecniv 1596 days ago
        I don't have the same grandiose notions of CS that OP has, but you could say that about any research field.

        Physics gave us airplanes and bombs to drop from them.

        Biology and chemistry gave us pharmaceuticals and pollutants.

        You can do this for any field of study really.

  • hans1729 1596 days ago
    Heh, from the comments:

    >Congratulations on the new project, and may your hubris not doom us all.

    • kotrunga 1596 days ago
      That's the very comment that stood out the most to me as well...
      • dcwca 1596 days ago
        Good Doom pun
    • shurcooL 1596 days ago
      I wanted to find some discussion on this topic. I’m not pessimistic, but I am curious to consider what it may mean for humanity if/when AGI happens.
      • patentatt 1595 days ago
        My pet theory is that anything we regard as truly intelligent has to have a sense of self, an identity if you will. In addition, it has to be motivated by something. All living things, and all the intelligent living things, have a survival and reproduction instinct on some level which is an animating force driving higher level thinking and actions. Basically, any AGI has to have something approaching a ‘soul’ to be ‘intelligent.’ And if that emerges, combined with exponential iterative evolution that isn’t limited by biology ... well all the sci-fi tropes seem plausible.
  • leesec 1596 days ago
    Wow, there are a lot of people in this thread arguing whether or not John Carmack has the right skills to help AGI, or about the specifics of his knowledge.

    Do you all realize you're arguing about nothing?

    Good for him for doing something he seems excited about. Maybe we should all stop gossiping and go do something we're excited about too.

    • voxl 1596 days ago
      What is actually being argued is the cult worship of a particular engineer. I am against cult worship, so I find the fact that Carmack is entering this space simply not newsworthy. Yet here it is.

      Do I wish him the best of luck and hope he cracks the problem? Of course; all the same, I would wish that of an upstart PhD student. Yet the announcement of a brilliant PhD student attending a university to work on AGI is somehow not on Hacker News.

      This is cult worship of the personality Carmack has amassed, perhaps completely accidentally. When Carmack actually achieves something interesting let us discuss it then, not the mere announcement that he will try, as if that means anything. Read: it doesn't.

      • yitchelle 1596 days ago
        The difference between John and the PhD student is their history of Getting Stuff Done. I imagine that the PhD student's list of Stuff Done is not as accomplished as John's.

        That is also the reason why guys like John and Elon are much admired as engineers: they Get Stuff Done. The chances of something remarkable occurring with these folks are a lot higher than with the PhD student.

        • josh2600 1596 days ago
          Having seen academics fail to convert their knowledge into production grade systems time and again, I can only say that there are light years of difference between theory and practice.
      • srge 1596 days ago
        John Carmack is a genius in at least 3 different things and probably more: business, tech and expression (written or oral). He can explain very complicated things in such a graceful way. I personally consider anything he does to be newsworthy.
        • mkl 1596 days ago
          Carmack is amazing at computer graphics and related technologies, but I fail to see evidence of the other two areas. He's never been much of a businessman - he's always focused on the technical side and left the selling/operations to others. I'm not aware of much he's done in terms of educational explanations, and the talks of his I've seen have been rambling and convoluted (but fascinating!).
          • srge 1596 days ago
            I would recommend that you listen to his recent podcast with Joe Rogan. He's such a great person altogether. Regarding business, I would not say he's only a technical guy. In the podcast he explains how he came to give away his first Ferrari as the prize in the first Quake competition. He also explained (somewhat to your point) how his rocket company failed.
      • oska 1596 days ago
        > I am against cult worship, so find the fact that Carmack is entering this space simply not news worthy

        You were always free to skip over or flag this submission and move on.

        • voxl 1594 days ago
          I'm also free to discuss my opinion!

          Stop trying to silence it.

          • oska 1594 days ago
            Where did I take any action to 'silence' you?

            All I was suggesting was that to actively come into a discussion and say that what's being discussed is not worth discussing is, in my opinion, both unproductive & rude.

            Would you do that at a party? Join a discussion and tell people that what they're discussing is not worthy of discussion? Or would you do what most people do and wander off to find a discussion that is more interesting to you.

            • voxl 1591 days ago
              Your analogy is broken. A party and a public forum are not comparable.

              Moreover, if a member at a party was part of a cult I would probably try to bring them out of it.

      • cma 1596 days ago
        That he is leaving oculus is big enough news to be here on its own.
      • megla_ 1596 days ago
        The most pretentious comment of the year.
    • joe_the_user 1596 days ago
      "Wow, there are a lot of people in this thread arguing whether or not John Carmack has the right skills to help AGI, or about the specifics of his knowledge.

      Do you all realize you're arguing about nothing?"

      Absolutely. I should mention why. AGI is an open field, the openest of open fields. Advances in deep learning tell us quite little about what AGI will look like. We don't know if AGI will be a hundred incremental innovations beyond deep learning, ten deep advances beyond deep learning, or five incredible advances with only a slight relation to deep learning. We don't know if it will just appear when 100 supercomputers are hooked together, or if a genius at home on their laptop could cobble it together. Sure, you could extrapolate and say compute has mattered more than theory so far. But you could also say impressive things have been done, yet they haven't approached robust generality, and there's something we're missing. Pick an approach, but then you'll have to see if it's possible.

      Etc.

    • hombre_fatal 1596 days ago
      Yeah, this comments section is a bunch of gossiping hens trying to get it on record that they don't think his retirement hobby will amount to anything. Pretty sad in what's presumably a community of fellow craftspeople and makers.

      The human brain really can't handle the crippling adversity of a fellow human announcing an aspiration. Hopefully our AGI replacement can.

    • baddox 1596 days ago
      If he announced that he is retiring and sailing around the world, I’m sure you’d find people asserting that his skills as a programmer couldn’t possibly translate to being a world-class sailor. Consider for a moment that he might just be very interested in AGI and wants to work on it.
    • libraryatnight 1596 days ago
      My thoughts exactly. This is a man with a long and brilliant career. Let him work on what he wants. This comment thread is ridiculous.
    • TeamSlytherin 1596 days ago
      I've been excited about AGI for a while, mostly because it wasn't a very competitive field, until now.
  • ekianjo 1596 days ago
    Other way to read this: big corporations are slowing down on VR. The market has not taken off as rapidly as they expected so we will see more moves away from heavy investment in VR.
    • baddox 1596 days ago
      Carmack never struck me as the type to chase the money from fad tech to fad tech. On the contrary, he seems keen to chase his own interests, and by now I suspect he is financially secure enough to do so.
      • ekianjo 1596 days ago
        I did not mention Carmack in my comment; I mentioned corporations. Facebook probably decided that having Carmack work on a minuscule market that is barely growing (look at headset sales per year; it's virtually nothing, and there is no "acceleration" in sight either) was a waste of his talent.
        • cma 1596 days ago
          He’s not working on AGI with Facebook.
    • oarabbus_ 1596 days ago
      • ekianjo 1596 days ago
        180,000 units of VR headsets is pure pocket money for Facebook, and it does not even register in any kind of hardware sales chart. PC sales are more than 300 million units per year, and smartphone sales are more than 1.5 billion units per year. VR headsets don't sell at all.
        • Kiro 1596 days ago
          You're missing the point. We've entered a new era of VR with the Quest. Any numbers we've seen before are irrelevant.
          • ekianjo 1596 days ago
            What do you mean? The Quest is virtually nonexistent market-wise. Nobody is buying it. Did you see a huge uptake anywhere? I haven't.
            • Kiro 1596 days ago
              The article you're replying to is about Oculus Quest.

              > The company’s non-advertising revenue jumped to $269 million during the third quarter, Facebook noted in its earnings report Oct. 30. That’s a 43% increase year-over-year [...] The revenue is coming from strong sales of Oculus Quest, one of Facebook’s VR headsets, of all places.

              No idea how you can claim it has no market share when it's dominating the VR space, unless you're talking about a bigger market than VR. The point is that VR never took off until the Quest, and we've only seen the beginning now.

              We see this with VR game developer sales as well:

              > Previously, we’ve heard from the developers of Red Matter who said “we have surpassed Red Matter’s all time sales on Rift in just a few days on Oculus Quest” and Superhot who said sales were 300% higher on Quest, calling the all-in-one VR system “a watershed moment for the industry and the sales numbers suggests that players believe so too.”

              https://uploadvr.com/oculus-quest-sales-strong/

              • ekianjo 1594 days ago
                My whole point to begin with is that the VR market is close to nil. Compared to how much the big corporations have been pouring in as investment, the returns are ridiculous. Google recently stopped Daydream and Samsung is getting rid of Gear VR. Sony is the leader in terms of headsets sold, and even that is not very impressive; we are talking about just a few million headsets per year. It's just not taking off, and a "43% increase year-over-year" is a useless metric since sales were ridiculously slow to begin with. Compare that with the uptake of smartphones just 10 years ago and you will have a good laugh.
        • oarabbus_ 1595 days ago
          I don't follow your logic even a little bit. It's like saying a toddler will never be able to wrestle his 10-year-old brother because he only weighs 30 lbs and the brother weighs 90.

          How much did PCs sell when they "started to take off"?

          How many units did smartphones sell when they "started to take off"?

          • ekianjo 1594 days ago
            The comparison is valid because it's no longer a price-point issue. Most mid-to-high-range smartphones are more expensive than the Quest, and among the numerous reasons VR is not taking off is simply that people don't feel they will make good use of it or even need it. The awareness is there (the PR machine has made sure everyone knows VR is around), the tech is decent (the Quest works relatively well compared to VR headsets 5 years ago), there's content to purchase (it might be lackluster, but it's not missing anymore), and the price point is low enough that it's no longer a barrier to adoption. So at the end of the day, lack of sales = lack of interest.

            Some other anecdotal evidence: I have numerous PC enthusiasts and gamers in my circles who have tons of disposable income, like trying new stuff, and actively follow the latest news, but I can only point to one who has purchased a VR headset. That can't be a good sign.

  • flipgimble 1596 days ago
    Like many others posting here, I've followed John's work from the mid '90s, so here is my take:

    * he has an exceptional quality of cutting through the bullshit and shipping practical software, which is arguably what the vague and uncertain field of AGI needs. If you listen to AGI conference talks in recent years they are focused on aspirational single-idea academic frameworks that haven’t produced results in decades.

    * he is still connected to Facebook with billions in resources, and a world class ML team with Yann LeCun at the head.

    * his personal brand has been strong enough to have world class developers flock to Oculus. When he is ready to expand his “Victorian Gentleman” alchemy lab with a team, I have no doubt it would be a field-changing think tank.

    My hope is that he continues to be open and brutally honest about his progress and learnings, as he's been with game development and rocketry.

  • jacquesm 1596 days ago
    AGI is not an engineering problem but a research problem. John Carmack is good at putting stuff together, but how good he is at coming up with novel concepts for an open research problem remains to be seen. Even the rocketry example that is hailed here as a success mostly wasn't one. That doesn't make me happy; it would have been far nicer if Armadillo had succeeded, since more competition in that space is better. But for all the work done, it was more of an advanced hobby project, along the lines of those guys in the Nordics, than something that moved the needle scientifically.
    • drcode 1596 days ago
      Carmack is someone who has proven to be an almost unequalled productivity machine when working on medium-difficulty problems... Now, for the first time, we'll see if his approach to problem solving can also work on a truly difficult problem. I agree it's very much an open question.
      • ineedasername 1596 days ago
        Is that really true, though? It seems more like he's good at medium-difficulty problems in a narrow subdomain of software development, which is saying something a bit different. I might even say he's good at hard problems within that subdomain. How transferable those skills are is the most salient point. Assuming peak genius-level intellect (which, I don't know, maybe?), it would still take something like 4 or 5 years to reach expert-level knowledge in such a complex domain.
        • randcraw 1596 days ago
          Agreed. Deep learning has revolutionized AI, and anyone hoping to contribute to AGI is going to have to master DL first, and probably a lot more of AI besides, like a variety of probabilistic methods.

          That's a challenging learning curve that's not much different from earning a PhD. And then, to stand out in AGI, you're going to have to integrate a dozen kinds of cutting-edge components, none of which are anywhere near ready for prime time.

          At this moment in time, I think any attempt at implementing AGI is going to be half-baked at best. For now, a Siri / Alexa that can do more than answer single questions will be challenging enough.

          • codingslave 1596 days ago
            I actually don't think mastering deep learning is very difficult. There are a gazillion papers and ideas floating around, but the core concepts that actually work, things like batch normalization, gradient descent, and dropout, are all relatively simple. Most of the complexity comes from second-rate scientists pushing their flawed research out into the public in some form of status game.
            • protomikron 1596 days ago
              > [...] but the core concepts, that actually work, things like batch normalization, gradient descent, dropout, etc are all relatively simple.

              They may be simple, but it's controversial why they work. For example, dropout is not really used much in recent CNN architectures, and it's just, I don't know, ~5 years old? So people don't even agree on what the core concepts are ...

              • codingslave 1595 days ago
                Sure, this is true. I just threw dropout in there without thinking much about it. The point is that even if we include the techniques that have been replaced by newer ones, the total number of techniques is small. Also, if you're learning deep learning for the first time, understanding why dropout was used, and then how batch normalization came to replace it, is key to understanding neural networks. The same can be seen in network architectures: tracing the evolution of CNNs from VGG16 to ResNet, and why ResNet is better, exposes one to the vanishing gradient problem, shows how the thinking evolved, and builds intuition for the design of deep neural nets and what could come next.
            • codetrotter 1596 days ago
              For anyone unfamiliar with all but the most trivial details, do you have some good papers to recommend, to save us from wading through all the rest?
              • codingslave 1595 days ago
                Get some basics of linear algebra down: eigenvectors, eigenvalues. Nail down matrix factorization, principal components, and the relationship between the two.

                Learn softmax, the logit function, and the different activation functions, and when to use each. Learn the difference between classification, binary classification, multi-label prediction, etc. They're all similar, just using a few different functions in the neural net.

                After this, go through some optimization theory and learn the different algorithms for optimizing neural nets, e.g. Adam vs. RMSProp.

                Then I would just get a list of all the top network architectures and go through their white papers. Do this chronologically, starting at ~2012. Basically all the network architectures build on each other. So take the first good working deep CNN (AlexNet) and find out why it worked. Then move to VGG: why did that one work? What problems were solved? Then move onwards.

                ^Do this for computer vision, then again for NLP (word vectors) and transformers (BERT, XLNet, etc.).

                Then you're done.

                There are also GANs etc., but that stuff is extra.

                From there, choose whatever specialty you wanna research, and just grab the state of the art.
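
                To make the early steps concrete, here is the kind of toy you should be able to write from scratch once the basics are down. A minimal sketch in plain NumPy (my own toy example, not from any of the papers above): softmax, the cross-entropy gradient, and a vanilla gradient descent loop, which is the core everything else builds on.

                    # Toy sketch only: softmax classifier trained by gradient descent.
                    import numpy as np

                    rng = np.random.default_rng(0)
                    X = rng.normal(size=(100, 4))            # 100 samples, 4 features
                    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary labels
                    W = np.zeros((4, 2))                     # one weight column per class

                    def softmax(z):
                        z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
                        e = np.exp(z)
                        return e / e.sum(axis=1, keepdims=True)

                    for step in range(200):
                        p = softmax(X @ W)                        # forward pass
                        grad = X.T @ (p - np.eye(2)[y]) / len(X)  # cross-entropy gradient
                        W -= 0.1 * grad                           # plain gradient descent

                    acc = (softmax(X @ W).argmax(axis=1) == y).mean()
                    print(f"train accuracy: {acc:.2f}")      # ~1.0 on this separable toy

                Swapping the update line for Adam or RMSProp is the optimization-theory step above; everything else stays the same.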

        • mattrp 1596 days ago
          Yeah but could he build a model railroad set as good as Rod Stewart’s in under five years?
      • doctorpangloss 1596 days ago
        Nobody will want to work in a research program under this guy. He's just way too mean, and being mean is a no-go for researchers. Also, as other commenters have mentioned, this is a demotion.
        • Chinjut 1596 days ago
          I hadn't been aware he was mean. What are examples of this meanness?
          • __m 1596 days ago
            He took his cat to the animal shelter because it was getting old. Justice for Mitzi!
    • dkural 1596 days ago
      Please take this as nothing more than my subjective opinion: I believe that humans don't have GI but HI. We have a "world-view" that is very idiosyncratic to being human and is essentially heuristics all the way down. In other words, I don't believe there is a magical novel concept that explains HI; rather, it is a collection of party tricks that evolved over time, i.e. a hacky engineered system.
      • solipsism 1596 days ago
        I don't see why that's not GI. HI enables us to do incredible things, far beyond what we evolved to do (we evolved to hunt and fuck and exist in small groups, not to do quantum mechanics).
        • Fricken 1596 days ago
          It raises the question, though: is the development of AGI contingent on some magical breakthrough, or is it a matter of endless tinkering and cobbling until we've got something that works?
          • tachyonbeam 1596 days ago
            It's probably both. There's going to need to be a series of breakthroughs, but there's also going to be a lot of engineering required. I believe that AGI won't happen suddenly. It will require putting a lot of pieces together. We'll get systems that have an incrementally better and better model of their environment.

            Personally, I find it kind of offensive how scientifically-minded people believe they have a monopoly on generating ideas and making the world progress. That's clearly not true. Deep learning research wouldn't be where it is without GPUs and frameworks like TensorFlow and PyTorch. Engineers are huge drivers of change; they make things happen.

            Deep learning research already involves a lot of trial and error, tinkering and cobbling as you put it. People can't really tell, just writing things on a whiteboard, whether something is going to work or not. There might be some mathematical intuition, but a lot of it is throwing things at the wall and seeing what sticks: empirical testing. Some very high percentage of the research being done is basically thrown away.

            Furthermore, I would personally say, as someone who works in deep learning, that we're collectively getting a little myopic. We finally got neural networks to do cool things. People are very excited, but they're forgetting that neural nets are not the only kind of machine learning technique around; they actually work really poorly for some things. We're largely disregarding every other approach because we have this one cool new toy. So, I don't know, maybe the next huge breakthrough will come from someone who's a deep learning outsider, who isn't completely locked into this paradigm and unwilling to look at anything else.

      • codingslave 1596 days ago
        Yeah, I believe in this too. It's not exactly politically correct to say so, for a number of reasons, but if you listen carefully to some public intellectuals, behind the scenes they believe the same. For instance, Noam Chomsky will talk about how the human ability to speak language is hard-coded, and that if we really were some piece of clay to be molded, no one would be able to run their own life.
        • mantap 1595 days ago
          IMO he's wrong about at least its importance. You can point to many tasks that are clearly not hard-coded, because they are recent inventions, yet are difficult for a general AI (with zero task-specific architecture) to perform: driving, playing jazz piano, playing any video game, doing advanced mathematics. Either language is much harder than all of these things (I think that's wrong), or all of these things require language capacity (in which case it's just another form of 'language = thought').
      • Daniel_sk 1596 days ago
        I believe this too. I think everyone in the field should probably first study the biology, workings, and evolutionary mechanisms behind it. You first have to understand why such a level of "GI" exists in humans at all, and what purpose it evolved to serve. In a perfect world without constraints we would never exist (nor would life at all), because there would be no constraints that shaped us. Our mind and the way we think are built around the world we live in. Without all the sensory (and other chemical, inside-the-body) inputs, the brain would not work; it would stop working once the body and inputs were detached (even if you kept supplying all the nutrients). I don't think you can just "store" or encapsulate GI on digital storage, because you would have to emulate all the complex environmental inputs it needs to function in a way comparable to humans... We are the "product" of the environment we live in.
    • carlosdp 1596 days ago
      Bringing up Armadillo wasn't meant as an example of success in business, but rather as an example of the ability to dive into new fields effectively. They did some cool stuff in their short run, and there's an offshoot company still going.

      But yeah, I agree with your general point. I'd just note that having the ability to be insanely productive working on things people haven't done before means, to me, that if it's possible for someone like him to really get good at this field, he's probably going to do it.

      Who knows how far you can get with just "putting stuff together." That's what Edison did.

      • jacquesm 1596 days ago
        I'm not in Carmack's league by any stretch of the imagination, but I earn a living by being able to absorb a lot of data on a new field in a very short time, so I have some idea of what that is like. When other people have already done all the hard work for you, it's very easy going, provided you have some basic knowledge to integrate the new stuff with. But that's an entirely different matter from actually moving the needle on new stuff; that takes years at a minimum, and is not something you can do by reading up and then rolling up your shirtsleeves. If only it were that easy. There are hundreds of John Carmacks in various fields; I've met a couple. While in the past this sort of attitude was a prerequisite to being a scientist (in the 1800s every scientist was pretty much a polymath; there wasn't all that much knowledge to begin with), nowadays any science worth doing is going to require a lot of specialization first.

        This is akin to the way (on topic) computer games have developed: in the early days almost all games were made by individuals. Now it is all studios and teamwork, and very rarely does an individual still manage to break out of the mold and the level of expectation that we've set. But when it does happen (Minecraft, for instance), it can be a runaway success.

        Anyway, I wish John the very best, but I think the chances of an Armadillo repeat are somewhat higher than those of him emerging from his study with a working AGI. And just in case he does, I'm not sure the rest of the world is going to be ready for that (an entirely different discussion; there are plenty of SF books and movies exploring that theme).

        • marvin 1596 days ago
          > nowadays any science worth doing is going to require a lot of specialization first

          This seems like one of those seemingly obvious points that everyone believes but that has a decent chance of being proved completely wrong by an unexpected discovery or breakthrough. Assuming I understand you correctly: knowing which giants to stand on has consistently proven to be a requirement, both in science and engineering.

          But I see you're sort of alluding to this at the end of your comment, where you're not dismissing this effort as meaningless, but rather saying it's not a guaranteed success.

    • soup10 1596 days ago
      Such pointless gatekeeping. If Carmack turns his attention and resources to AI, he will be able to make contributions to the field. Will they be groundbreaking? Maybe not, but why is everyone so eager to immediately discourage him?
    • Ajedi32 1595 days ago
      Sounds like he agrees with you:

      > When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.

    • agumonkey 1596 days ago
      Wasn't his shadow volume technique quite hard and original? He's not just plumbing things together, it seems.
      • burnte 1596 days ago
        A lot of what he did in PC graphics was very original, but in the old days it was narrowly applicable, while by the time Doom 3 was in development his graphics programming could be more widely applied to other things. I think his fast inverse sqrt was VERY impressive; he didn't invent it first, but may have come up with it independently.
        • friendlybus 1596 days ago
          He didn't do the fast inverse sqrt on Quake. The wiki article has a bunch of guesses as to who did it.
          • burnte 1595 days ago
            I literally said that.
            • friendlybus 1595 days ago
              It's not 'his' fast inverse. He didn't put it in the game, independently or otherwise. He didn't do it at all.
    • onion2k 1596 days ago
      > Even the rocketry example that is hailed here as a success mostly wasn't.

      This demonstrates how hard the problem is. When you're tackling a really hard problem, "mostly not a success" is a success. Most people faced with the same problem would return "no successes".

    • brainpool 1596 days ago
      AGI is not a research problem but an imagination problem. I can't vouch for how good John Carmack's imagination is, but striving to put things together with a goal in mind seems like a good place to start.
      • jacquesm 1596 days ago
        I can imagine AGI just fine; that doesn't get me one tiny bit closer to being able to make one. There are several ways in which one could go about such a development; all we have for now is an existence proof, and none of the paths pointed out so far have been viable. Whether John will come up with a novel path is not really an imagination issue but one of very deep understanding of the problem space: what has been tried so far, and why it did not work. From there, one either takes something that fell through the cracks as non-viable and recycles it in a way that makes it viable (the current neural net applications are like that), or comes up with an entirely novel approach. The latter will likely come from an outsider, but it would be an extremely lucky shot to hit something workable; the former may be a possibility worth investigating.

        The reason the latter has some chance is that there are sometimes approaches tried early on in a field that can't succeed because something else needs to be invented first, or because the computing power required is prohibitively expensive. Carmack's skills and ability to absorb knowledge might help him spot such an opportunity.

        • hackinthebochs 1596 days ago
          The parent's point was that reaching AGI isn't merely a research problem, in the sense that X amount of person-hours of research by typical researchers will solve the problem. Rather, we need new ideas and new conceptual frameworks, i.e. new imaginative leaps, to reveal the path forward towards AGI.
          • wpietri 1596 days ago
            Could you list a few fields of research where "X amount of person-hours of research by typical researchers will solve the problem"? I can't think of one.
            • hackinthebochs 1596 days ago
              It's not about fields specifically, but about particular problems within fields. An example is neuroscience discovering the function of some unknown functional unit of the brain. We have all the conceptual machinery to solve the problem, we just need to fill in the details. On the other hand, the problem of consciousness doesn't even have the conceptual machinery in place such that more details will lead to the solution. A solution here will require conceptual leaps that we can't put a boundary on like we reasonably can when the conceptual groundwork is already established.
              • mantap 1595 days ago
                In mathematics the word used for these kinds of problems is "inaccessible", e.g. the Riemann Hypothesis or (previously) Fermat's Last Theorem. I don't know if Carmack's chances are as good as Wiles' were, but they're certainly better than the average joe's. It's also the case that AI is a substantially younger field (arguably it has only been possible to correctly evaluate ideas since powerful GPUs were released this decade), so the difficulty of open problems, including AGI, is not yet known.
          • ginko 1596 days ago
            That’s research.
        • mycall 1596 days ago
          My favorite AGI attempt so far is Cyc.
    • irjustin 1596 days ago
      I would argue that this is exactly the space he should be in.

      AGI is a very active research area that arguably lacks the engineering/real-world arm that I believe Carmack could provide.

      His early work in 3D graphics and math is a supportive argument for that: research ideas turned into viable real-world systems.

      Is it too early? I think we armchair HN users can go back and forth all day, but in the end we'll only find out the answer after the fact.

      I hope all the best for him in this. I think this is a perfect space for him to fit into.

  • cf 1596 days ago
    I expect John Carmack to follow a trajectory like that of David Ha, who with little previous background started to write very creative and thought-provoking papers (https://scholar.google.com/citations?user=J1j92GsxVUMC&hl=en)

    It won't be AGI by most definitions but I bet it'll be pretty cool and I'm happy to have that.

    • 0-_-0 1595 days ago
      Dammit, I had the hypernetworks idea recently but he already did it 4 years ago! Nothing new under the sun...
      • cf 1594 days ago
        Well, that just means you're capable of having good ideas. Just keep at it and I'm sure you'll do something great.
  • foobiekr 1596 days ago
    I’ve worked with legends in a specific space that is less consumer-y than games and graphics and less weirdly-desperate attention-seeking than ML and so the people therein are every bit Carmack-level but less visible. As they have aged out and wealthy’d out of working time, almost all of them have chosen to retire.

    These are extremely high performing individuals who have made global impact. Shutting down for people like this is very hard and 100% of them have sent out mails much like Carmack’s Facebook post when the end came. Even the style and verbiage are similar.

    None of them made a dent with their tinkering-phase project, and they moved on to normal, above-average, low-engagement hobbies. They are done.

    I read his FB post as a pretty standard retirement announcement as a result. I think he’s telling us he is done.

    • jjoonathan 1595 days ago
      I don't see any reason to be so dismissive.

      Scientific pursuits have an extremely steep risk profile and are systematically underfunded because nobody knows how to capture the value that comes out of them. If someone wants to chase one on their own dime, we should celebrate that contribution to society rather than dwell on the fact that the median (even 99th percentile) outcome is that the project goes nowhere and the person continues on to retirement proper. The mean outcome could be very different and the benefit of the doubt costs us nothing, so why not give it?

      • foobiekr 1595 days ago
        I’m not being dismissive at all; it’s an alternate read on what he wrote and honestly closer to what he seems to be saying than the people reading this as his intending to go full bore.
        • chrchang523 1595 days ago
          Carmack is only 49. I'd agree with your interpretation if he were 10+ years older, but he is still young enough to take a real swing at this.
        • tirewarehouse 1595 days ago
          It may not be your intent, but you are coming off as very dismissive.
    • staz 1596 days ago
      By that standard, Carmack has retired a few times already, and he always came back with something interesting.
    • mav3rick 1596 days ago
      You can't project your colleagues onto a third person.
      • foobiekr 1596 days ago
        You can, however, observe common properties in the way people at similar skill levels have behaved.
        • mav3rick 1595 days ago
          How do I know the OP's standards are the same as Carmack's? What I consider great may be average, after all.

          The same could have been said when Carmack said he wanted to work on VR: "it's a retirement letter".

      • matz1 1596 days ago
        Why? It's a reasonable assumption.
  • bane 1596 days ago
    I'm surprised nobody has made the connection with the announcement of Horizon. Carmack joined Oculus in order to build the next generation of 3D virtual worlds. It's done, other than some polish, which means he's not needed anymore.

    However Horizon turns out (I'm bearish on it, tbh), Carmack has had his shot at building the digital future. It has turned out how it has, and there's not much flexibility left for him to maneuver, so it's time to move on.

    I think AGI is going to turn out like his shot at rocketry: big and complex enough that he'll find his niche and contribute, but without making any significant breakthroughs.

  • vecplane 1596 days ago
    I don't understand how he could contribute to the field of AGI research from home, by himself, and maybe with his son. It's the kind of problem that requires incredible amounts of data, hardware, and theory to make any progress.

    Wouldn't it make more sense for him to join a cutting-edge team, like DeepMind or OpenAI?

    • nickjj 1596 days ago
      John Carmack is practically a machine.

      He's openly talked about his work ethic in a bunch of places. He's the type of guy who, after a lifetime of coding, calculated that he's 100% efficient up until 13-hour work days, and then he drops off[0]. He did mention that those long hours are often best spent working on multiple things instead of one topic, but with AGI there may well be a bunch of different avenues to explore.

      [0]: https://www.youtube.com/watch?v=udlMSe5-zP8&t=4773

      • Rapzid 1596 days ago
        He must have some very good advice on getting a good night's sleep. Makes all the difference IMHO.
        • archagon 1596 days ago
          He did tweet that unlike many engineers, he can't be productive unless he gets a full 8 hours of sleep (IIRC).
      • codesushi42 1596 days ago
        Uuh, AI research has nothing to do with coding all-nighters. This is a common misconception among software engineers. It is more a science and less an engineering problem: it is more about running experiments than about writing fancy algorithms.

        You are bound by the amount of data and computational resources you have at your disposal. Neither is tied to man-hours. You can stay up all night for days waiting for your model to train, and it will do you no good.

    • dekhn 1596 days ago
      Everything I've ever read about Carmack suggests he'll do his best on his own at home. Much of this work can be done on reasonable hardware, and he's always been really good at getting a lot out of reasonable hardware. Further, if he needs enormous compute resources, he can get it at any of several cloud providers.
      • K0SM0S 1596 days ago
        > if he needs enormous compute resources, he can get it at any of several cloud providers

        This is exactly the experience of most teams I've spoken with, be they students or businesses, throughout the pre-production phase. You simply can't and shouldn't spend on costly AI infrastructure before you've nailed your solution; in fact, that goes for any kind of infra, not just AI.

        What you do is rent some cloud capacity to power quickly through your tests: better to have 10x worth of big Nvidia GPUs for 2 weeks than to buy 1 or 2 at most yourself and wait 5-10x longer. That's not even factoring in that setting up clusters of GPUs and running such workflows consistently over days or weeks requires pretty deep sysadmin/hardware knowledge and experience; it took me two years to really master that non-problem part on my home server (now it's a skill I have, so it was worth it, but it certainly set my research and learning back by as much time).

        Besides, there's a time when the familiarity, safety and general comfort of home simply can't be beat. Notwithstanding pool tables and free soda, lol.

      • the_watcher 1596 days ago
        Even at Oculus, he worked "from Dallas", but spent a huge amount of time working at home.
    • Impossible 1596 days ago
      Carmack built rockets, and id bought $100Ks of NeXT machines to make Doom, so I wouldn't put it past him to have incredible amounts of hardware... even at home. Considering his position at Facebook, and that he is industry famous, he probably has access to data and cloud resources that a researcher outside of OpenAI, Nvidia, Google, etc. normally wouldn't have. He could also raise money relatively easily to pursue more intense research.
      • drcode 1596 days ago
        What would be awesome is if he just said one day, "I need 100 million dollars for my AGI project to buy hardware; anyone who wants to share in a 20% cut of the business, just send funds to bitcoin address ### or ethereum address ###." He would probably be fully funded within an hour.

        Unfortunately, that could never happen because of the SEC.

        • VikingCoder 1596 days ago
          John Carmack could set up a Patreon for us to watch him vlog his progress, and could earn more than most of us ever will.
        • Nuzzerino 1596 days ago
          Wasn't this already done by Goertzel's SingularityNET? They raised $36 million in 66 seconds (note: I don't recommend trying this).
    • drcode 1596 days ago
      I think many people expect that a lot of the missing "special sauce" for AGI (if anyone can figure it out at all) is going to be something for which massive GPU power isn't a key factor.
      • rfhjt 1596 days ago
        Maybe there is no secret. Just as image recognition is just a bunch of well-connected matrices running a dumb algorithm, but at great speed on GPUs, maybe intelligence is just 100 billion dumb nano-computers, each with the logic of a fairly simple finite state automaton but with 10 thousand network connections per node. How does a nematode transfer intelligence to its copies? By encoding the FSA properties in its DNA. If this is the case, we'll see the next chapter of AI once a typical smartphone runs a million dumb programmable nano-computers with a very dense network topology: people will just run the same dumb algorithms on these devices and discover that they exhibit the basic properties of nematode-level AI. And thus AI would be a dumb engineering problem.
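
        To illustrate that picture with a toy sketch (my own, with arbitrary, scaled-down numbers): many identical dumb nodes, each a two-state automaton, wired into a dense random network, where any interesting behavior would have to come from the wiring rather than the node logic.

            # Toy sketch only: a dense random network of dumb two-state nodes.
            import numpy as np

            rng = np.random.default_rng(1)
            N, K = 10_000, 100                        # nodes, inputs per node
            wiring = rng.integers(0, N, size=(N, K))  # which nodes each node listens to
            state = rng.integers(0, 2, size=N)        # each node: a 2-state automaton

            def step(state):
                # Dumb per-node rule: turn on iff most of your inputs are on.
                inputs = state[wiring]                # (N, K) neighbor states
                return (inputs.mean(axis=1) > 0.5).astype(int)

            for _ in range(10):
                state = step(state)
            print("fraction of nodes on:", state.mean())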
      • goatlover 1596 days ago
        That would be the cognitive part. Machine learning is more like perception, but perception needs to be tied into an understanding of the world where inferences can be made and one can adjust quickly to a changing environment, while learning new domains or even creating new combinations. This also includes the social-emotional world of humans and language (and not just translation), of course.
    • johnsimer 1596 days ago
      My view is that you want many people working independently of each other towards the same goal; everyone working in one group could hinder creativity and lead to groupthink.
    • whalebird 1596 days ago
      > It's the kind of problem that requires incredible amounts of data, hardware, and theory to make any progress.

      I wouldn't be surprised if the opposite were true, at least for the theory part. AI didn't really go anywhere for decades because people focused too much on theory.

      Otherwise, there's a lot of data and hardware at your disposal, even from the comfort of your home.

      > Wouldn't it make more sense for him to join a cutting-edge team, like DeepMind or OpenAI?

      You mean the guys that are training on video games that people like John developed?

    • thundergolfer 1596 days ago
      It might make sense for him to join a team like DeepMind, but we could guess that the "working from home by himself" bit was a lifestyle change he wouldn't compromise on.
    • DoctorOetker 1596 days ago
      The recent result on the 15% optimal learning error rate for binary classification could have been derived with pencil and paper by anyone...
      • Voloskaya 1596 days ago
        Sure, and anyone can also do manual backpropagation of a very small modern neural network with pencil and paper, and yet it took decades to get there.

        Everything is easy in hindsight.

        And that's without mentioning that the specific result you are talking about is an infinitesimal progress when compared to "AGI".

        • DoctorOetker 1595 days ago
          The GP stated:

          > It's the kind of problem that requires incredible amounts of data, hardware, and theory to make any progress.

          So I pointed out a recent example of progress (the sequence of fundamental insights is nearly always incremental) where the insight was a theoretical derivation, one that could have been, and probably was, derived on paper or a blackboard, as a direct counterexample: it does not require "incredible amounts of [...] theory".

          Why would you compare this with performing manual backpropagation?

          >Everything is easy in hindsight

          Every breakthrough is non-trivial, or else it would not have been a breakthrough, and yet the breakthrough itself can be a relatively simple calculation...

          The concept of "AGI" is undefined and virtually worthless to me. There are only non-trivial insights, i.e. theorems and proofs.

    • throwawayhhakdl 1596 days ago
      DeepMind and OpenAI are probably not on a reasonable track to AGI. IMO, if we ever make an AGI, it won't actually be especially good at things. An AGI, like a human, would probably be pretty bad at math naturally. You could get one to be great at math, but getting great at math first and backing into general intelligence is probably impossible.

      Substitute anything you want for math.

    • pg_is_a_butt 1596 days ago
      Might be a little burnt out from the "cutting-edge team" at Oculus who painted him into a corner with technology that could never work, as it made every user sick.
  • nwsm 1596 days ago
    Yann LeCun's [0] comment on the post:

    Welcome to the club, John.

    A word of warning though: There is no such thing as AGI. Reaching human-level AI is a good goal. But human intelligence is very, very specialized.

    [0] https://en.wikipedia.org/wiki/Yann_LeCun?fbclid=IwAR2e9mzCqS...

    • drcode 1596 days ago
      LeCun is more accomplished and smarter than I will ever be, but his thoughts on the term 'AGI' just seem like dumb pedantry about word definitions to me.
      • username90 1596 days ago
        It sounds more like wishful thinking from a person who has spent his life mastering deep nets than a nitpick. Like: "Sure, deep nets can never be an AGI, but humans are not AGIs, so we can reach human-level performance with deep nets. It is not a dead end! IT IS NOT A DEAD END!"

        I think the saying "Science Advances One Funeral at a Time" applies here.

      • lonelappde 1596 days ago
        Perhaps you should examine your seemings then.
    • michannne 1594 days ago
      Sounds exactly like what I would say if my life's work revolved around the idea that AI boils down to how many neurons we can simulate.
  • drefanzor 1596 days ago
    If anyone can initiate the singularity, it's John Carmack.
  • narrator 1596 days ago
    Humans optimize for activating the opioid receptor. These receptors are distributed all over the brain and tied into all sorts of subtle neural networks. That's why opioid addicts don't do much when they're high: as far as the entire structure of the brain is concerned, an opioid addict's brain is done optimizing, and the fitness function is pegged at 1.

    I think an AGI will end up being like an AI that plays The Sims, except we're the Sims, and it's optimizing for our happiness, probably by remotely monitoring our opioid receptor activation and some parameters of general health.

    • stevenwoo 1596 days ago
      Isn't this a bit backwards, or did I get it wrong? I'm not sure I'm phrasing this right, but to simplify from a layman's reading of The Selfish Gene: all creatures will work to continue their genetic line unless misdirected as you describe. So if continuing their genetic line is in some way what activates their opioid receptors, that's what we'll do: create and raise offspring, or, as grandparents/cousins/aunts/uncles, help raise relatives. This can be applied generally to all life on earth.
      • narrator 1596 days ago
        Ever think that cattle are exploiting humans to genetically propagate themselves more successfully than any other large mammal besides humans?
      • throwawayhhakdl 1596 days ago
        Successful propagation strategies propagate and therefore become prevalent. Life has no goal; it doesn't try to propagate. It's just the result of an evolved process that happens to correlate strongly with propagation. Most processes in organisms have an arbitrary first-order objective, like getting those happy chemicals. The fact that these first-order objectives have a second-order effect of survival and reproduction is a wholly unsurprising coincidence.
    • chillacy 1596 days ago
      I must be doing a poor job of being a human, given that I have passed up the opportunity to activate my opioid receptors several times in my life so far (leftovers from surgeries).
      • homonculus1 1595 days ago
        Perhaps the thought of future addiction fails to activate your opioid receptors in the moment of your decision.
      • narrator 1596 days ago
        Exogenous opioids are an exploit of our human AI system, not a beneficial feature of it.
      • throwawayhhakdl 1596 days ago
        One could argue that's true, and that being bad at things is part of what makes general intelligence (artificial or otherwise) so powerful: optimizing for goals about goals and redefining the objective problem space on the fly.
    • sneak 1596 days ago
      So, opiate synthesis and delivery?
      • narrator 1596 days ago
        I imagine the AGI system of the future would not make humans happy via that method, in that such a decision would be counteracted by including physical health indicators in the fitness function.
  • avl999 1596 days ago
    Half of you folks here are like teenage girls in highschool gossiping with each other about Kristen dumping Drew and instead trying to date Paul.
  • sebsito 1596 days ago
    First of all, we should at least have a common definition of what intelligence even is.

    Even then, I'm not sure we'd know what General Intelligence would be, because all we know is Human Intelligence, or maybe lower-level Animal Intelligences, where the problem-solving mechanism seems to depend on the biological body and its form.

    Humans navigate the world with automatic impulses, which we evolved over time to deal with far too many signals from the environment, so we can filter and react only to the important ones.

    We can then use consciousness to slowly map new impulses as the environment changes, and go back to autopilot most of the time.

    What if our intelligence isn't general, but just enough to navigate the world we can perceive with our senses? What if we'll never be able to understand, e.g., quantum theory (or at least the part of our experience of the world we call by that name)? What if there's a superset of our intelligence, or different sets of intelligences, which we just don't understand?

    We think that our problem solving can take on any problem, but maybe we're only taking on the problems we can take on, bounded by a perception of reality that may itself be limited?

    So I think instead of calling it AGI, the name should be more like Artificial Human-like Intelligence.

  • laxatives 1596 days ago
    Is he doing this under the Facebook umbrella? Or departing? Or more-or-less retiring from regular obligations entirely?
    • Someone1234 1596 days ago
      I think semi-retiring. He's going to be "consulting [on Oculus]" while:

      > I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.

      Which to me reads like part-time work on Oculus and part-time work on this AGI project. If it is with Facebook, that isn't at all clear from the post (plus I'd assume it would be accompanied by marketing copy in that situation).

    • hans1729 1596 days ago
      > Starting this week, I’m moving to a “Consulting CTO” position with Oculus.

      > I will still have a voice in the development work, but it will only be consuming a modest slice of my time.

      > As for what I am going to be doing with the rest of my time: [...] For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.

    • k__ 1596 days ago
      He said he will work as a part-time consulting CTO for Oculus and do AGI from home; it sounded like non-Facebook work to me.
  • thrower123 1596 days ago
    I was wondering why his tweeting about graphics and VR went to 0.

    We've gone back into the part of the cycle where VR is an odd curiosity again, haven't we?

    • randomidiot666 1596 days ago
      Unfortunately VR turned out to be an intensely nauseating puke fest for a lot of people.
      • thrower123 1596 days ago
        Same as the last go-round. In ten years, a new generation will think that this time they know how to do it right. Maybe the hardware will even have caught up.
        • K0SM0S 1596 days ago
          I'm still convinced we'll have "good enough" barebones yet spatially aware AR first. It's just less demanding on the front-end side, which is the limiting factor now, as I understand it.

          VR is a whole other thing, and I think the "uncanny valley" stretches quite far; i.e., you need quality really close to what we see in movies to pass the acceptability threshold beyond a few hours of novelty.

          • randomidiot666 1596 days ago
            VR sickness still applies, even if you have perfectly realistic rendering, infinite frame rate, infinite resolution, perfect tracking, and zero latency. It's caused by a discrepancy between the visual and vestibular systems. That problem cannot be solved by higher quality rendering.
            • jobigoud 1596 days ago
              It's only a problem if you are moving through the virtual world using a different method than the one your body is using in the real world. If the VR world is a simple 1:1 mapping and you can walk around in it as you do in physical space, there is no vection.
              • randomidiot666 1596 days ago
                Yes, that's right. But that's a limited scenario, not the true unbounded VR experience that we really wanted.
                • K0SM0S 1596 days ago
                  I was thinking more along the lines of neural connections, with you probably sitting in some comfy chair physically while roaming in your mind.

                  You know, however sci-fi "solves" VR. I agree with your sentiment; the current "let's put displays on a headset" approach seems way too old-school to me. A 20th-century (flawed) solution to a true 21st-century problem (which is much more than electronics; it's bio-extension, a very rich cognitive interface).

      • swalsh 1596 days ago
        To some people it's the only way they play any games (that's me)
  • phyzome 1596 days ago
    Ah yes, AGI: The idea that eats smart people. https://idlewords.com/talks/superintelligence.htm
    • marvin 1596 days ago
      Working towards more general AI technology is not what the author of this piece is criticizing. They're criticizing the focus on super-human AI rather than on the ethical problems of current AI. "The idea that eats smart people" is just a meme at this point, normally quoted out of context.

      But even so, history will have to judge whether the author's statements about the risk of super-human AI were true, or whether a lot of _quite_ smart people weren't smart _enough_ to realize there was a real likelihood of this being achievable faster than most thought.

      Also, using ridicule as a rhetorical technique isn't the most sound type of reasoning, regarding the author of your link ;)

  • 0xdeadbeefbabe 1596 days ago
    From the guy who brought us Commander Keen.

    Applying intelligence to artificial intelligence has happened before: https://en.wikipedia.org/wiki/Shakey_the_robot. One of the contributors to Shakey was Alfred Brain.

  • dharma1 1596 days ago
    Why the announcement? Is it to protect Oculus/FB so it doesn't look like he's bailing out of VR?

    Couldn't he just have said "I'm taking some time off" and then made an announcement when there's something to announce, i.e. "So here's some progress I've made on AGI"?

  • eggy 1596 days ago
    Great, AGI from the guy who brought DOOM to many. Do we ever learn?
    • chrisco255 1596 days ago
      DOOM 4: AGI confirmed.
      • jmts 1596 days ago
        Hell on Earth.
        • lgl 1596 days ago
          An Oculus (by FACEBOOK) exclusive
  • binarymax 1596 days ago
    Does anyone have a non-facebook version of this story?
    • curiousgal 1596 days ago
      Starting this week, I’m moving to a "Consulting CTO” position with Oculus.

      I will still have a voice in the development work, but it will only be consuming a modest slice of my time.

      As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.

      I’m going to work on artificial general intelligence (AGI).

      I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.

      For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.

      Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.

      • hedvig 1596 days ago
        Fusion, John, we need you in fusion research. It's only 10 years away
  • daenz 1596 days ago
    Roko's basilisk[0] will remember this.

    0. https://rationalwiki.org/wiki/Roko's_basilisk

    • mactyler 1596 days ago
      Carmack might just be a basilisker after all!
    • antisemiotic 1596 days ago
      I wonder what would happen if Carmack teamed up with Yudkowsky and his cult; Carmack's practicality combined with the imagination of people who non-facetiously talk about acausal deals with multiversal AIs could be a match made in heaven.
  • reilly3000 1596 days ago
    Arguably he got started on this years ago. The story goes that Quake 3's AI was so sophisticated that, when left alone for 4 years on a bot-vs-bot server, the units learned pacifism as a self-preservation strategy. https://www.forbes.com/sites/erikkain/2013/07/02/quake-iii-a...
    • xeeeeeeeeeeenu 1596 days ago
      That story is fake; its source is 4chan. Carmack himself debunked it[1].

      [1] - https://twitter.com/id_aa_carmack/status/352192259418103809

    • hombre_fatal 1596 days ago
      Seems like this would be easy to actually test instead of relying on a single anecdote, and the simplest explanation is a bug, though I don't believe the story at all. There are probably multi-year-uptime bot servers in the wild right now.

      Reminds me of when I was new to Quake 3 and found an amazing server: it was always full of players and full of non-stop action. I played with these people all the time after school. Nobody said anything; they were 100% business, which was cool. I would often congratulate them on nice kills or commentate on my victories. Everyone was about the same skill level.

      Eventually I realized I was playing on a server that simply filled empty slots with bots. I was the only human player.

      • deviantfero 1596 days ago
        This reads like a horror story prompt of some kind, eerie!
    • moogly 1596 days ago
      Mr. Elusive (Jean-Paul van Waveren) was the guy who wrote the bot code for Quake 3: Arena, not Carmack.
    • b0rsuk 1596 days ago
      And I read that the Quake 3 AI operates on a fairly simple stack of objectives. Say a bot goes to pick up the yellow armor. The bot sees an enemy and starts shooting at him. The bot takes damage and starts to retreat; retreat is now the objective at the top of the stack. If the bot finds a medkit, it pops the retreat objective off the stack and goes back to the next one: shoot the enemy. If it happens to kill the enemy, it resumes the "get yellow armor" objective. A toy sketch of that idea is below.
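
      A toy sketch of that goal stack (my own reconstruction of the idea as described, not the actual Quake 3 bot code):

          # Toy sketch only: a stack of objectives driving a bot's behavior.
          class Bot:
              def __init__(self):
                  self.goals = ["get yellow armor"]       # bottom of the stack

              def tick(self, sees_enemy, took_damage, found_medkit):
                  # Events push more urgent objectives on top of older ones.
                  if sees_enemy and "shoot enemy" not in self.goals:
                      self.goals.append("shoot enemy")
                  if took_damage and self.goals[-1] != "retreat":
                      self.goals.append("retreat")
                  # Satisfying the top objective pops it, resuming the one below.
                  if found_medkit and self.goals[-1] == "retreat":
                      self.goals.pop()
                  return self.goals[-1]                   # current objective

          bot = Bot()
          print(bot.tick(True, False, False))   # shoot enemy
          print(bot.tick(True, True, False))    # retreat
          print(bot.tick(False, False, True))   # shoot enemy

      Killing the enemy would pop "shoot enemy" the same way, landing the bot back on "get yellow armor".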
    • Qu3tzal 1596 days ago
      "learned pacifism as a self-preservation strategy" this is an interpretation, not what the bot intended to do, in other words, you're reading into it
  • ilaksh 1596 days ago
    After maybe 10-15 years of being an AGI enthusiast who never dared to pursue it "seriously", something has prompted me in the last few months to decide to make my next side project a "real" AGI effort.

    I think maybe it's just the number of people who are talking publicly about working on it that is making me want to "work on it" "seriously"?

    I mean, when I get my current side project "out the door" to some degree, I plan to spend at least a few months where the weekend (or sometimes nights) project that I actually admit to working on is "AGI research". Previously, I have occasionally spent a few hours here or there passively trying to learn about some deep learning or AGI topic by skimming papers or watching videos. But the plan now is to work on active learning projects/experiments for several hours every weekend, for at least two or three months (or longer if I don't give up before then).

    Theoretically at least some of it could be of practical use, although I am thinking that I may avoid trying to become a deep learning expert, because it seems like people have that covered and it might take me five years. Lol. So I am trying to think of GPU programming approaches that are new, which most likely will turn out to be a waste of time but will certainly be interesting for me.

  • piinbinary 1596 days ago
    What's a useful definition of what AGI is? If you have a computer that does some clever AI stuff, what criteria do you look for to decide that it has a general intelligence?
    • lonelappde 1596 days ago
      AGI means it can solve any problem posed to it and interface with any sensory and motor peripheral.
      • drdeca 1596 days ago
        Not every problem. It's not expected to solve, e.g., the halting problem.

        If it can solve every problem that a human can solve, then it would be AGI. If it can solve every problem that a somewhat unintelligent human can solve, I think that would still count as AGI.

        • dingo_bat 1596 days ago
          An average human would recognize the halting problem if you posed it to him. I'd expect the same from an AGI: to recognize such a problem and refuse to continue devoting time to solving it.
    • olalonde 1596 days ago
      It's generally used to mean human-level intelligence (e.g. an AI that can learn any task a human can). An often-cited criterion is passing the Turing test.
      • username90 1596 days ago
        The Turing test is too easy. My test would be when an AI can apply to a random remote job, get it, receive a salary for a few years, quit, apply to a new job with an updated CV, get a raise, etc., without anyone noticing it is an AGI.
  • dreamcompiler 1596 days ago
    It's nice that he's doing this and I wish him well. But thousands of very smart people have been working on AGI since 1958 and many of them initially thought they could crack the problem in a couple of years. They made a huge amount of progress, but the more progress we make toward AGI, the farther away the goal seems to recede.

    AGI is very much a research problem. It's not going to be solved with a clever hack.

  • playing_colours 1595 days ago
    Based on his interviews, John Carmack is very intelligent, realistic, and reflective, so he presumably estimates his skills and the complexity of the problem realistically. He said he could possibly make an impact, and he can: by attracting more attention to the field from researchers and investors, by building tools that would help in researching the problem, or, at least, by showing promising paths. He may start to dig into the problem in “Victorian Gentleman Scientist” style, but then attract investment and scale up. We can speculate and guess; time will tell, fingers crossed.

    This “Victorian Gentleman Scientist” style is something I am longing for. I cannot go back to academia now with a family, or spend large chunks of my time on any research, but I really wish I could afford to. Sure, most probably I'll soon become disillusioned with the routine of a researcher, or jump between research topics, or just not contribute anything meaningful, but I really wish there were a possibility for me, and other people, to afford such a lifestyle.

  • rhacker 1596 days ago
    I think we just got 50% closer to making AGI happen.
    • ekianjo 1596 days ago
      If your odds were very low to begin with, 50% more won't make much of a difference.
  • phtrivier 1596 days ago
    Totally off-topic, but:

    > Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.

    What would a (high-level) career path for that _look_ like?

    (Disclaimer: I'm not Carmack-level smart. Not sure I'm anyone's-level smart. Asking for a friend.)

  • skokage 1596 days ago
    >Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.

    I can't tell if he was serious about that comment or not... Considering he builds rockets in his free time, it could go either way.

    • the_watcher 1596 days ago
      I took it as "not even I would experiment with nuclear fission reactors from my house".
    • Voloskaya 1596 days ago
      > I can't tell if he was serious about that comment or not... Considering he builds rockets with free time, it could go either way.

      Nuclear fission shocks you but not AGI?

  • program_whiz 1595 days ago
    Maybe I am a naysayer, but John Carmack is going up against other companies and teams with not only billions in funding for hardware and data but also big staffs of experts.

    Not only are people who have been in the research area for a while more likely to have good ideas, but having the support of engineers to write tests and wrangle data, neuroscientists to bounce ideas off, and a whole bevy of support staff is just more likely to produce results.

    I'm also not a big fan of the kind of hero worship of "well he wrote Doom, so this should be a cakewalk." I'm not saying he won't, but he probably won't. What am I missing here that everyone seems all hyped up about?

    • coolassdude6941 1595 days ago
      His “competitors” are irrelevant. It's obvious that the existing approach (DL/RL) is a dead end, at least for AGI. So the idea of a possible genius working on a new approach with no monetary incentive is exciting.
  • prvc 1596 days ago
    >For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home

    Who will fund the necessary computing resources? If not FB, then he will surely be joining or starting a different org.

    • haihaibye 1596 days ago
      In Victorian times, a "gentleman" was someone who had so much money he didn't need to work.

      He's been a major shareholder in two companies that have been acquired (id Software and Oculus).

  • zhoujianfu 1596 days ago
    I’ll take this opportunity to just put out some thoughts I have about intelligence, wisdom, AI, and AGI.

    In general, building/switching contexts in your head takes intelligence (and the more intelligent you are, the better/faster you are at it), whereas already having a context in your head is wisdom.

    I think of the current state of AI as us being able to teach computers a few very specific contexts, i.e. imparting wisdom to them.

    An AGI would be actually creating intelligence, and the two are not the same thing at all. In fact, some might say your consciousness/soul is just this brain context switcher/creator in action. An AGI would have consciousness.

  • kache_ 1596 days ago
    I can see where he's coming from. He's in the unique position of being able to work on whatever interests him, and he has the mathematical background to absorb all this knowledge. I think a big part of this is that he's worried about his age catching up with him and taking away his ability to make a meaningful contribution. One thing to note is that he's going to loop his son into his research.

    The creation of synthetic intelligence will be a result of multiple distinct breakthroughs. The more people with unlimited resources and high creativity working on this problem, the more likely those breakthroughs will be made.

  • whywhywhywhy 1596 days ago
    Sad to see him leave VR, but honestly I feel shipping the Quest was a huge achievement. It's the product that sums up his original vision for VR, and the only VR device so far with any mainstream potential.
  • Nimitz14 1596 days ago
    Hm. I don't feel like additional engineering expertise is what's missing to achieve AGI; there's still a lot more science to do, I think, and I'm not sure how good Carmack is at that.
    • MagnumPIG 1596 days ago
      Honestly we don't even know enough in the psychology department to possibly arrive at AGI anytime soon.

      BUT if anyone can clear a hurdle or two...

      • bobsil1 1596 days ago
        AGI won't require hand-designing a copy of the human brain, though knowing some of the principles would help.
  • protomikron 1596 days ago
    Wow, that post blew up.

    Like most engineers I respect JC for his incredible work, but I really think AGI is far off, and at the moment I would be very surprised to see significant progress in the years to come.

    I also want you to read this extremely well-written (old) blog post about the topic; I don't think much has improved since: https://karpathy.github.io/2012/10/22/state-of-computer-visi...

  • somewhereoutth 1596 days ago
    Mathematically speaking, current computers run in discrete (integer/countable) space, whereas the human brain exists in continuous (real/uncountable) space. Cantor showed (via the diagonal method) that continuous space is larger than, and thus cannot be represented in, discrete space. I suggest that AGI and consciousness lie in continuous space, and thus are unreachable with our current discrete computation model, regardless of how sophisticated we make it. There exists a cardinality barrier.
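    For reference, a sketch (standard material, in LaTeX) of the diagonal argument the comment invokes:

      Suppose $f:\mathbb{N}\to\{0,1\}^{\mathbb{N}}$ enumerated every infinite
      binary sequence. Define $d$ by $d(n) = 1 - f(n)(n)$. For every $n$,
      $d$ differs from $f(n)$ at position $n$, so $d$ is not in the range of
      $f$, and no such enumeration exists. Hence $\{0,1\}^{\mathbb{N}}$, and
      with it $\mathbb{R}$, is uncountable.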
    • hvasilev 1596 days ago
      I wonder how these people have decided it is possible with a digital computer, given that the brain is analog. I wouldn't be surprised if the first step toward AGI is actually cracking the analog computer hardware problem.
    • shrimpx 1596 days ago
      Unrelated to AGI, but quantum physics doesn't admit 'continuity'. Continuity is an abstraction. There are no real numbers in reality.
  • eaenki 1596 days ago
    It would be cool if he specified what he was going to work on.

    Current narrow AI is all about data and computer power but I don’t see AGI coming out of more data / more power anytime soon.

  • soulofmischief 1596 days ago
    I'm happy. Carmack has done a lot of good with Oculus, but it hurt seeing my hero working for Facebook. I understand the need to remain in a consulting position, but at least now he's no longer under the corporate leash.

    If anyone can hack AI, it's Carmack, and so when I read this headline I had a moment of fright thinking this meant Carmack was working on AI for Facebook.

    • Voloskaya 1596 days ago
      > If anyone can hack AI, it's Carmack

      Pretty sure you don't get to AGI with some hacks.

      • soulofmischief 1596 days ago
        Don't be trite; you're well aware that "hack" in this context means to understand/make progress.
        • Voloskaya 1596 days ago
          No, I actually understand it as solving it with unexpected tricks.
          • soulofmischief 1595 days ago
            Well, now you can understand it differently.
          • cma 1596 days ago
            Like relu?
  • chj 1596 days ago
    A bold undertaking, even for someone as distinguished as John Carmack. I can't help thinking that this is too broad a goal and requires more theoretical work than engineering. Maybe making some self-replicating robots that can mate and produce offspring would be less ambitious (perhaps I am underestimating that as well).
    • jobigoud 1596 days ago
      He just gave the general domain of the project; it would be weird to start this kind of semi-retirement with a very specific one. I see this as the broad topic of interest he will be investigating, but any actual research will be done in narrow fields, wherever some interesting idea hasn't been fully explored yet.
  • nafizh 1596 days ago
    This is so vague as to be borderline sarcasm. OK, it's AGI, but really, what is it? Reinforcement learning? Combining symbolic AI with modern advances? Deep learning theory? AGI is a vacuous term, and no one knows which path would lead to something similar to human intelligence. That itself is a matter of research.
    • carlosdp 1596 days ago
      The things you all listed are possible pathways to the goal. All he said is he wants to start working toward the goal. This guy is serious business, he's never been one to spew BS. He's a real deal, no bullshit computer scientist with leagues of novel accomplishments under his belt.

      I wouldn't bet against him when he sets his mind to something.

    • stupidcar 1596 days ago
      Which is presumably the research he intends to do over the next few months? E.g. reading around the subject to determine where best to focus his efforts.
    • h5eath5eahj 1596 days ago
      Hopefully short term includes going on Lex Fridman's podcast
  • unityByFreedom 1596 days ago
    Another one bites the dust. You only need to do a few problems on Kaggle to know AGI is a long, long way away, if achievable at all.

    AGI proponents tend to claim that we know everything about physics and biology, and that replicating it is feasible. This is science fiction.

    There are much more pressing concerns in the AI space. Godspeed Carmack.

  • tim333 1595 days ago
    This is probably a great time to try to crack AGI. The cost and power of computing have just about become adequate for the task, and published work like AlphaZero is getting kind of close, but probably needs one or two major algorithmic changes to get there.
  • prando 1596 days ago
    I am reminded of his programming retreat :). https://www.facebook.com/permalink.php?story_fbid=2110408722...
  • laichzeit0 1595 days ago
    We should begin by giving a clear definition of AGI. I’m talking about an unambiguous, universally accepted definition, akin to defining what a “continuous function” is. Until that point, this is merely an argument about words, to paraphrase John Locke.
  • MrZongle2 1595 days ago
    I don't know if Carmack has a good chance at achieving this goal, but I certainly wish him the best.

    There are certainly far worse applications of a sharp mind like his, and if this is where his passion has taken him then I'm sure he will be productive.

  • tcbawo 1596 days ago
    Having listened to his appearance on the Joe Rogan Experience, I can't help but hear this message in my head with his distinct voice. But also, I think his brutal honesty and relentless work ethic will suit this problem domain.
  • alexashka 1593 days ago
    It's interesting that no one has mentioned that this is not unlike Jony Ive changing roles within Apple. It's just John's way of saying 'I'm leaving Facebook' without affecting the stock price.
  • mantoto 1596 days ago
    So apparently Mark Zuckerberg reacted to it, along with 3,000 other people.

    Anyway, he should probably go to Google for this. They look way more advanced than all the others.

    And I'm not sure how much money he has, but there will be ML involved, and that costs a pretty penny.

  • 29athrowaway 1596 days ago
    The more people working on this, the better. I don't expect him to crack the problem but contributions are welcome.

    John is smart, has a reputation, has resources, is well connected, and has time. I hope he succeeds.

  • trentnix 1596 days ago
    Everything John Carmack touches he makes better. AGI will be no different.
  • tibbydudeza 1596 days ago
    Always liked him... humble dude. Will he give us the fast inverse square root of AGI?

    But I suspect that "intelligence" is not as structured and reducible as we would like... it somehow just works.

  • taurath 1596 days ago
    People familiar with the ethics around general AI - should we be worried?
  • perseusprime11 1596 days ago
    For somebody like John Carmack, I read this as "I am retiring".
  • dfischer 1596 days ago
    His interview on JRE was pretty enlightening. He definitely has the polymath background needed to navigate in the right direction. I believe he'll be working closely with Musk and Neuralink.
  • nikkwong 1596 days ago
    I'm sure he knows his stuff and is venturing into this knowing that there's somewhat of a "hard problem" to be solved here; serious proponents of AI like Kai-Fu Lee have stated that AGI may never be possible.

    The fact that he's making a public statement like this leads me to believe he may already have some novel solutions for how to tackle the problem. We won't be expecting the same old parlor tricks coming out of John Carmack; he is already at the forefront of this stuff, after all.

    That's exhilarating but also terrifying. Our still-barbarian-level human systems are nowhere near ready to deal with the socioeconomic problems that may arise with AGI.

    • Voloskaya 1596 days ago
      > leads me to believe he may already have some novel solutions on how to tackle the problem.

      > That's exhilarating but also terrifying. Our still-barbarian level human systems are still nowhere near ready to deal with the socioeconomic problems that may arise with AGI.

      I think you are reading way too much into his statement. It's extremely unlikely that he has just magically figured out a way to tackle the problem (just knowing where to get started would be massive).

      • drcode 1596 days ago
        Carmack is a pragmatist; I doubt he'd be doing this if he didn't feel like he had some initial promising directions in mind, though I agree he's probably still very early in the process.
        • Voloskaya 1596 days ago
          Or maybe when he is talking about making a dent on AGI, he is talking about a very long term objective (say 20-30 years) and he still has to figure out everything.
  • ralusek 1596 days ago
    I think any monumental leap forward is going to come from changing the way networks are motivated. Simple backpropagation attempting to reduce the error against a fixed objective has proven useful, but it doesn't actually resemble how more generalizable intelligence works.

    I think getting the feedback loop integrated with something that behaves more like dopamine/serotonin/pain feedback is going to be the likely direction we'd need to go. Basically, the network needs to be able to form new objectives and recognize when it's meeting or failing at those objectives, rather than just optimizing its network to be less and less bad at predicting specific outputs.
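    One existing instance of that direction is curiosity-style intrinsic reward, where the internal "dopamine" signal is the prediction error of a learned forward model rather than error against a fixed objective. A minimal sketch, with toy dynamics and all names assumed for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(4, 4)) * 0.1   # forward model: predicts next observation

      def intrinsic_reward(obs, next_obs):
          pred = W @ obs
          return float(np.mean((pred - next_obs) ** 2))   # "surprise"

      def update_model(obs, next_obs, lr=0.01):
          global W
          err = (W @ obs) - next_obs
          W -= lr * np.outer(err, obs)    # learn to be less surprised over time

      obs = rng.normal(size=4)
      for _ in range(100):
          next_obs = np.tanh(obs + rng.normal(scale=0.01, size=4))
          r = intrinsic_reward(obs, next_obs)   # reward the agent generates itself,
          update_model(obs, next_obs)           # rather than a fixed external loss
          obs = next_obs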

  • bigred100 1596 days ago
    Is there any reason to expect he’ll have any success at this? Leaving aside the motivation for a fun hobby that you don’t expect anything out of.
  • boomboomsubban 1596 days ago
    I have no idea what he is capable of adding to the field, but I'm quite happy to see a long time proponent of free software enter the field.
  • maxpert 1596 days ago
    Why do I have a feeling that Oculus is gonna be dead?
  • dwheeler 1596 days ago
    I am skeptical that he will succeed, but I am happy that he will try. Carmack is smart, and we need smart people to try hard things.
  • sriku 1596 days ago
    I'm wondering whether this will set off a wave of folks working on AGI given that today the tools are way more accessible.
  • ganitarashid 1596 days ago
    I hope he doesn’t trigger a Phobos Anomaly
  • krick 1595 days ago
    A slightly more on-point title would be "John Carmack steps down as Oculus CTO". The rest is pure hype.
  • delegate 1596 days ago
    So the creator of Doom wants to get into artificial general intelligence... what could possibly go wrong? :))
  • fizixer 1596 days ago
    I would love to do the same, except I'm not at a point where I'm financially independent.
  • zerr 1596 days ago
    Seems like a nice excuse for an early retirement and spending more time at home with a family :)
  • Ono-Sendai 1596 days ago
    Cool, I hope he makes blog posts or videos or something, so we can follow along at home.
  • proc0 1596 days ago
    I think his expertise in video game engines and rendering has a lot to contribute here.
  • wruza 1596 days ago
    Q: Why did John Carmack not worry when his AGI decided to leave a note and go out?
  • joeevans1000 1596 days ago
    Carmack likes lisp. I wonder if he'll be using it on this project.
  • sabujp 1596 days ago
    There's no "software" alone that can create an AGI. It's going to take a mix of really powerful hardware, high-speed interconnects, and software coming together to do anything close to what sentient (human) beings can do. Life isn't (yet) Star Trek, and Carmack isn't Noonien Soong. Current approximations put the human brain at ~10^15 FLOPS; only a few supercomputers are in this regime, with only Summit reaching into the exaops range (in reduced precision). Carmack would need full access to one of these petaflop machines. What he'll probably come up with instead is some good new algorithms for making things look like AGI.
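    The back-of-envelope arithmetic behind that comparison, taking the ~10^15 FLOPS figure at face value (published brain-compute estimates vary by several orders of magnitude):

      brain_flops = 1e15      # one common estimate of brain-equivalent compute
      petaflop = 1e15         # a 1 PFLOPS machine
      summit_peak = 2e17      # ~200 PFLOPS peak (double precision)

      print(petaflop / brain_flops)      # 1.0   -> one brain-equivalent, on paper
      print(summit_peak / brain_flops)   # 200.0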
  • umvi 1596 days ago
    There's no such thing as AGI, in my opinion. There is no way to create a "conscious" machine. We might be able to come up with some reasonably impressive imitations, but nothing that is conscious or actually thinks like a human.
    • wahern 1596 days ago
      If you can explain precisely why we cannot build a conscious machine, you could become one of the most revered, and potentially wealthiest, researchers in the world, as I presume such a proof would necessarily introduce unknown and very useful science.

      I personally don't believe modern machine learning is remotely close to AI, except perhaps the very lowest rung of the ladder of self-serving AI definitions. I base that belief on what seem to be the unknowns, reinforced by predictable failures[1]. But I have very little reason to believe it's impossible. Not even the possible necessity of quantum effects would seem to preclude it. Heck, we've already begun harnessing quantum effects in materials science, computing, biology, and other areas.

      Unless you mean that whatever we could eventually come up with would be more biological than machine or that only a human could think like a human, but that seems more like word play, the kind of game AI believers play. (That said, that poses an interesting question: which is more likely to be achieved first--a designed-from-scratch, DNA-based cellular intelligence, or something not based on DNA or otherwise mimicking existing organic life? If at all, of course. Also presuming such a distinction isn't in fact hopelessly quaint and naive.)

      [1] I'm not a naysayer. While I never believed self-driving cars were around the corner (not even 5 or 10 years out; you can Google my HN comments from years ago), I have no doubt the science has been useful and can and will and is put to great, largely unseen use, as is typical of most science.

      • umvi 1596 days ago
        I can't prove it, but I will tell you why I believe it.

        We can't build a conscious machine because I believe there is a spiritual aspect to life that we have thus far failed to empirically observe or measure. Put another way, I believe all humans have a "spirit" inside of them, and that said spirit is a prerequisite of conscious thought. This same spirit is what makes life after death possible. This belief, of course, is an extension of my belief in God.

        Thus, it is impossible to build a truly conscious machine without a spirit to inhabit said machine.

        This all implies, of course, that a truly conscious AGI would effectively prove God does not exist, since it would prove there is nothing special about humans, or any intelligent life for that matter.

        In summary, since I believe there is something special about life and that there is a secret sauce (spirits) that we haven't observed or measured, AGI projects are always doomed to fail (though they may spawn new interesting fields of mathematics or computer science)

        • wahern 1596 days ago
          I was raised Catholic and still consider myself Catholic. One aspect of Catholicism, oft ridiculed, is an emphasis on sacred mysteries. In Catholic theology these aren't simply a way to explain miracles. Rather, the tenet is that there are aspects of the world that are unknown and even unknowable. Thus there are limits to what can be theologically positively affirmed or categorically rejected.[1] So to say that a machine could never have a soul is to say too much, at least in the context of Catholic theology.

          You're entitled to your belief, of course, but simply asserting the existence of [Christian] souls and an inter-relationship between souls and intelligence isn't sufficient to make your claim.

          Discussing souls is probably not appropriate subject matter on HN, but I thought it was worthwhile to make the point that even religions can have and utilize the concept of unknowns to self-limit its theological reach. Technically speaking, orthodox (small 'o') Christians (Catholics, Orthodox, mainline Protestants) only need affirm the Nicene Creed and nothing more, just like, AFAIU, Muslims need only affirm the Shahada. Thus, an orthodox Christian is supposed to affirm the Trinity, the resurrection of Jesus, etc, but otherwise free to accept or reject machine souls, notwithstanding additional doctrinal strictures of particular denominations that might dictate a particular choice.

          [1] This is why you see seemingly radical priests quoted as admitting that aliens could have souls. (And also why it's so easy for Bill Maher or Michael Moore to find a priest to make seemingly contradictory and outrageous statements; more often than not the priest is playing them more than Maher or Moore understand, they're just too cynical to appreciate it.) Catholicism is hardly the only religion, or even Christian religion, with such a concept. But it's perhaps gone to the greatest lengths to develop and integrate the concept with modern logic and science.

        • 77pt77 1596 days ago
          So because of vitalism and dualism whoo whoo.
          • dang 1596 days ago
            Could you please stop posting unsubstantive comments to Hacker News?
    • 77pt77 1596 days ago
      So what goes beyond physics in the human brain?

      Because we can simulate physics.

    • ralusek 1596 days ago
      Your opinion is very likely incorrect.
      • umvi 1596 days ago
        Want to put money on it? I bet $1000 we will not have a conscious computer within the next 50 years.
        • retsibsi 1596 days ago
          How would you define 'conscious' in an empirically testable way?

          I'm not being a smartarse here, I'm sincerely curious -- because it seems that you're confident that non-biological consciousness is impossible, and also that p-zombies (beings that give every outward appearance of consciousness without actually possessing it) are impossible. So you probably have some specific beliefs about the necessary limits of non-conscious 'intelligence', and I'm interested in what they are and how you've arrived at them.

        • Nuzzerino 1596 days ago
          Conscious and intelligent have different meanings though?
    • qwerty456127 1596 days ago
      We don't have to. A general AI doesn't imply consciousness or inner experience.
    • ageofwant 1596 days ago
      Eh what ? You are a conscious machine, manufactured from organics.
  • Rerarom 1596 days ago
    Reminds me of Paul Cohen working on the Riemann hypothesis.
  • option 1596 days ago
    We need more people like him in AI. Super happy!
  • bitwize 1596 days ago
    Because of course he is. Because John Carmack.
  • tonfreed 1595 days ago
    Is this going to be like the AI in Daikatana?
  • negamax 1596 days ago
    I have seen what happens next in Blackmirror!
  • 2OEH8eoCRo0 1596 days ago
    I'm ready for this post to disappear, strange things start happening in cyberspace, and Carmack abruptly cans the project and doesn't want to talk about it.
    • zelly 1596 days ago
      Elaborate
  • mark_l_watson 1596 days ago
    Good for him. For AGI development, I wonder if the behavior of people playing multi player VR games on the Oculus platform could be used as training data.
  • ses1984 1596 days ago
    What a boss.
    • mc3 1596 days ago
      And now, without a boss!
  • daodedickinson 1595 days ago
    My dream would be for Carmack to go toward Douglas Hofstadter, and for them to move us toward the forms that flawed human intelligence has been striving toward, rather than reifying the flaws. I wish the people capable of helping with that could help with that, and then I hope the rest of us could all gather together to solve the loneliness CATACLYSM.
  • tehjoker 1596 days ago
    It's nice being rich.
  • tomerbd 1596 days ago
    Why is this so important? Why do you consider him such a genius?
  • _pmf_ 1596 days ago
    I just noticed that this is the first Facebook post I ever read.
  • lazyjones 1596 days ago
    Let's hope Boston Dynamics doesn't hire him. ;-)
  • objektif 1596 days ago
    FYI I also just decided that I am going to work on fusion.
  • madacoo 1596 days ago
    Is there a non-Facebook source for this information?
  • cvaidya1986 1595 days ago
    Me too.
  • tus88 1596 days ago
    What this title should have been: John Carmack gives up on VR.
    • Keyframe 1596 days ago
      Following up on “John Carmack gives up on aerospace” and hopefully not followed up by “John Carmack gives up on AI”.

      I like the guy as much as anyone, but so far it seems like he has been wandering around. He had great success in the early days of 3D video games, and that's about it. A guy of his calibre would probably make a bigger impact if he joined one of the expert teams like DeepMind. No matter how smart he is, AI today is a completely different ballgame than anything he has been part of so far. I hope I'm wrong, but I don't see him making any sort of breakthrough on his own, with his son. Maybe he wants to spend more time with his family, which is great, or maybe he drank the Kool-Aid about his own legend. The odds are against him, heavily so. Good luck, in any case.

    • mnd999 1596 days ago
      Or John Carmack (pretty much) gives up on Facebook
  • acollins1331 1596 days ago
    Maybe general intelligence is not a goal to pursue. Isn't general intelligence in machines what would eventually lead to the 'singularity' that seems far away now but is plausible in theory? Maybe John's real last name is Connor.
  • Carzuckermack 1596 days ago
    It's as if Carmack is simply paid to shill for Zuckerberg, like some pet under glass.

    Every time Carmack opens his fucking mouth, it's the facebook domain in the URL bar; there's no accident in that. Carmack transmits this nerd signal knowingly.

    Kind of disgusting.

  • ijiiijji1 1596 days ago
    Well, the holy grail is general AI that can also write software and develop hardware... then it can develop itself.
  • yters 1596 days ago
    He should first work on figuring out if AGI is possible. Why assume the human mind is computable?
    • jobigoud 1595 days ago
      That's exactly what a scientist would say. Engineers invent things because nobody told them they were impossible.
      • yters 1594 days ago
        Or they go on wild goose chases. We call those people cranks, and there are plenty of them, such as all the people inventing perpetual motion machines (which AGI arguably could be).

        Also, if you address the fundamental question first, Carmack may unlock something even more powerful than AI. As it is, he's just following the crowd.

  • codesushi42 1596 days ago
    This would be great if only he was not at Facebook.
  • hexdrunker 1596 days ago
    Nice
  • bronz 1596 days ago
    i say this with respect and humility, but i am very surprised at the naivete with which John addressed the subject of AGI. he is so casual about it -- not only the idea of working on it but also the idea of it existing at all. he seems oblivious to the gravity of that discovery. it is not just "very valuable," it will be earth-shattering and will probably wipe out humanity. and it's his side project. and his son will help out.

    John is the perfect representation of what is wrong with people's attitudes toward AGI: aloof and naive.

    • K0SM0S 1596 days ago
      I'll say this: I'd prefer it if the brightest minds approached the matter from the "AI safety" angle (a subfield concerned with building not just AI but "safe" AI, i.e. AI that we can control or understand in a practical manner).

      Because that's really where the line of human history will be drawn if AGI and beyond becomes real. How advanced we are in AI safety will directly map to civilization's progress or endangerment as a result of AI.

      Edit: this is already true with regard to "psychological safety" from undue influence or outright manipulation with motive (usually financial) by current "ANI" algorithms (newsfeeds, "recommendations", ads, etc.). It's a real topic that reduces to human psychological freedom, free will. It's a BIG topic.

    • zelly 1596 days ago
      It would be easier to take the AI doomsayers seriously if we were remotely close to AGI. For now it's treated the same way as some guy in a cape in Central Park trying to summon Satan: no one cares, because everyone knows it's basically impossible.
    • Bizarro 1596 days ago
      Stop staying up late to watch DUST on Youtube.
    • asadlionpk 1596 days ago
      I think the really naive ones are the people who are doing it as a day job.
      • popup21 1596 days ago
        Oh those are some spicy peppers!
    • randomidiot666 1596 days ago
      He might as well casually work on Faster Than Light travel, or a Grand Unified Theory.
    • whamlastxmas 1596 days ago
      What is your source for saying AGI will probably wipe out humanity? How could we ever even attempt to guess at the motivations of something we can barely comprehend and that doesn't even exist yet?
      • goatlover 1596 days ago
        The main concern is not that it's like Skynet and wishes us harm, but that it does things harmful to us because the means to accomplish the AI's goals are at odds with human values, which was unanticipated by the human creators, since the AGI is coming up with its own solutions. And the AGI doesn't have the same values as humans, so it doesn't care if its solutions are harmful.
        • whamlastxmas 1595 days ago
          I guess the risk seems more real if it's a question of "does it share human values?". The chance that it doesn't is possibly not one in a million; it could be 50/50 or worse.
    • AnimalMuppet 1596 days ago
      > and probably wipe out humanity.

      Probably? I do not think that word means what you think it means... or else I don't think the balance of probability lies where you think it does.

    • ageofwant 1596 days ago
      You are saying "probably wipe out humanity" is a bad thing ?

      I know of several million species that would strongly disagree. Especially if AGIv1 decides to tune the genetics of say most mammals to append 'sapient' to the end of their species name.

      Perhaps more constructively, consider that AGI is simply the next iteration of 'humanity'. Yea, sure, the old versions become redundant anachronisms and, apart from some living reserve specimens, functionally extinct, but nobody cares, since you can sim one up at almost no cost.

      • Bizarro 1596 days ago
        You know of 0 species that would strongly disagree, because those several million species don't have the capacity to agree or disagree on whether "wiping out humanity is a bad thing". And you don't speak for them, no matter how much caring about the planet you think you're doing.
        • ageofwant 1596 days ago
          I absolutely speak for them; who else will? Clearly not people like you. I have the capacity to speak for them, and so I do. And I will be the only judge of how much "caring" I do or don't do.
      • bronz 1596 days ago
        AGI is not the next iteration of humanity, because it will not resemble humanity in any way besides being sentient in some capacity. You will feel quite silly if you get to see it in your lifetime.
    • McTossOut 1596 days ago
      I'll level with you: if all I had heard were soundbites, I'd be skeptical myself; it's a bit like Neil deGrasse Tyson telling you he's going to unify gravity or something. But this guy can go and ad-lib a two-hour talk about implementing subsurface scattering, and build it from the ground up in a commercially viable way.

      By contrast, the dismal, dimwitted "advanced" filter-and-sort industry that has recently started training all its employees in this stuff and vomiting it all over every consumer is nothing, worthless, and at best lunacy.

      All it takes is patience, know-how, and insight. For that, Carmack fits the bill.

  • The_rationalist 1596 days ago
    I hope he'll go the hybrid symbolic-and-neural-network way (causal and statistical), instead of just statistical.

    AGI needs a type system...

    I hope I'll achieve AGI before him, but it's nice to know there's some real competition! (Because, reader, there are almost zero researchers seriously trying to achieve AGI in a not-totally-bullshit way. Only OpenCog and Cyc come to mind.)

    • ilaksh 1596 days ago
      Are you really that sure that the approaches to increasing the generality of AI being taken by LeCun (self-supervised model learning), Hinton (capsule networks) and Bengio (state representation learning) are all "total bullshit"?
      • The_rationalist 1596 days ago
        From my reading, Hinton's capsule networks seem far from being enough; at best they could be an incremental improvement. And they're unrelated to English semantic parsing; they seem specialized for computer vision.
        • ilaksh 1595 days ago
          English semantic parsing is a small part of AGI. And a system that can only do that, or only for one language, is never going to be general.
      • macawfish 1596 days ago
        You know who just might be full of total BS? Ben Goertzel.
        • ilaksh 1596 days ago
          From what I've actually seen, that is completely inaccurate and unfair. The only thing he did to "deserve" that, as far as I know, is to start seriously pursuing and talking about AGI before it was cool again.

          For example, OpenCog is an implementation of a classic cognitive architecture, and it's about as traditional and far from "total BS" as you can get in AGI.

          I have never heard anything to back up the insults against Goertzel.

    • K0SM0S 1596 days ago
      > AGI needs a type system.

      My brain bit on that remark; would you care to elaborate?

    • cr0sh 1596 days ago
      You forgot "he who shall not be named"...

      /ok, maybe his project falls under "total bullshit"...

      • The_rationalist 1596 days ago
        Who are you referring to? SOAR?

        Contrary to popular belief, both OpenAI and DeepMind have zero roadmap and no specification of a cognitive architecture, not even a semantic parser.

      • catalogia 1596 days ago
        Who are you referring to? I feel out of the loop here.
        • cr0sh 1595 days ago
          Those of us who've been on the net long enough, and have at least dabbled in AI/ML circles, know of him.

          He claims to have invented a program that is a mind, originally written in Forth, translated by others to many other languages, etc. He has published the code of this program in a form of "open source", so you can easily find it if you dig enough.

          He's widely considered to be a crank. That said, the line between genius and madness can be mighty thin, and what side he lands on is anyone's guess, but most put him well over the line into madness, for whatever its worth.

          My own opinion?

          Well - looking at his work purely from the modern understanding of and research into ML/AI - that is, deep learning and such - his work would be considered pointless, probably worse than Eliza in its contributions to the field.

          But as someone who has read a lot of other work (for and against) on the idea of AGI, artificial consciousness, theory of mind, etc. - his work at a certain level has echoes of some of that work. Still probably a dead end, but at the same time, there are some interesting concepts within his code and theories (he's self-published a book on it, too - you can find it on Amazon - he also has it for free on his GitHub, and it can be found elsewhere).

          I guess I still put him in crank territory, but not in the abusive crank arena; more in the "doing his own thing, but being a bit evangelical about it" camp - relatively harmless.

          His work is not as amazing as TempleOS, imho, but there's a similar mind behind it (though comparing it with that operating system is maybe an unfair, possibly orthogonal, comparison).

          I won't say or reveal more (but I've written enough for you to figure it out) - he tends to monitor tons of forums and if he thinks he's being "summoned", he'll spam the forum with his writings and theories. It got him "perma-banned" from more than one newsgroup back in the day...

        • p1esk 1596 days ago
          Schmidhuber
    • asadlionpk 1596 days ago
      Do you have a goto resource to watch/read for someone new and kinda interested in the field?
      • The_rationalist 1596 days ago
        The OpenCog website is a great resource. Going directly to the specification is a bit intimidating, but here it is: https://wiki.opencog.org/w/CogPrime_Overview

        You might just begin by learning the list of NLP tasks and how good the state of the art is at each. The cognitive architecture that needs to be created to achieve AGI will, one way or another, be a composition of those tasks, which are the primitives.

        You can discover such a taxonomy here: https://github.com/sebastianruder/NLP-progress/blob/master/R...

        Also, you might be interested in learning logic, as a big task is to translate natural language into queryable, logical forms.
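        As a toy illustration of "natural language to queryable, logical form" (a hypothetical regex-level parser for a single sentence pattern; real semantic parsing is vastly harder, this only shows the shape of the mapping):

          import re

          PATTERN = re.compile(r"^All (\w+) are (\w+)$")

          def parse(sentence):
              m = PATTERN.match(sentence)
              if not m:
                  raise ValueError("pattern not covered by this toy parser")
              x, y = m.groups()
              # forall v: x(v) -> y(v), as a queryable nested tuple
              return ("forall", "v", ("implies", (x, "v"), (y, "v")))

          print(parse("All humans are mortal"))
          # ('forall', 'v', ('implies', ('humans', 'v'), ('mortal', 'v')))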

      • ilaksh 1596 days ago
        reddit.com/r/agi sometimes has interesting stuff, although it's often pie-in-the-sky articles that have no actual implementation.
    • thundergolfer 1596 days ago
      Where and how are you working on AGI? Are you at opencog or Cyx?
      • The_rationalist 1596 days ago
        I'm not a big player in the field. I'm specialized in semantic parsing and argument checking. I'm the first, to my knowledge, to have made a syllogism (and more) checker for English. Also, I have helped researchers beat the state of the art on constituency and dependency parsing (though simply by sharing knowledge of the state of the art with other researchers).

        I do this in my free time, so I'm not very productive, but I have designed an intermediate representation (IR) for natural languages that seems very promising.

        • Voloskaya 1596 days ago
          > I'm the first to my knowledge to have made a syllogism (and more) checker for English.

          Either what you actually mean by "syllogism checker" is extremely specific and impractical, or this is 100% BS.

          • The_rationalist 1596 days ago
            My program checks the validity, with 0% false positives, of the 256 possible forms of syllogisms. This is not bullshit, and not that complicated.
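            For what it's worth, the checking part is indeed tractable by brute force. A minimal sketch (an assumed approach, not the parent's actual program): a model records which of the 8 Venn regions of the terms S, M, P is inhabited, and 2^8 such models suffice by the finite model property of monadic logic.

              from itertools import product

              REGIONS = list(product([0, 1], repeat=3))   # (s, m, p) membership bits
              S, M, P = 0, 1, 2
              FIGURES = {1: ((M, P), (S, M)), 2: ((P, M), (S, M)),
                         3: ((M, P), (M, S)), 4: ((P, M), (M, S))}

              def truth(form, a, b, model):
                  if form == "A":   # All a are b
                      return all(r[b] for r in model if r[a])
                  if form == "E":   # No a is b
                      return not any(r[a] and r[b] for r in model)
                  if form == "I":   # Some a is b
                      return any(r[a] and r[b] for r in model)
                  if form == "O":   # Some a is not b
                      return any(r[a] and not r[b] for r in model)

              def valid(mood, figure):
                  maj, mnr = FIGURES[figure]
                  for bits in product([0, 1], repeat=8):
                      model = [REGIONS[i] for i in range(8) if bits[i]]
                      if (truth(mood[0], *maj, model)
                              and truth(mood[1], *mnr, model)
                              and not truth(mood[2], S, P, model)):
                          return False   # counter-model found
                  return True

              forms = [(m, f) for m in product("AEIO", repeat=3) for f in FIGURES]
              print(sum(valid(m, f) for m, f in forms))
              # 15 of the 256 forms, under the modern reading with no
              # existential import; 24 if empty terms are disallowed.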
        • ilaksh 1596 days ago
          It honestly doesn't sound very promising, but I would still like to see the IR and the rest if it's online.
  • iamleppert 1596 days ago
    It must be hard to be someone like him: always chasing the original high that made him famous, with an enormous pressure to succeed that must be nauseating.

    Instead of retiring, relaxing, and looking back on an impactful and lucky career, he keeps going; it says something about how powerful the original emotions were that led him to his current point.

    He will do anything to get back to that state, that place in time, even sacrificing what are supposed to be the good years of his life stuck behind a screen.

    • neonate 1596 days ago
      How do you know he isn't just doing what interests him? Not everyone wants to retire, especially highly creative people.
  • ausjke 1596 days ago
    He is 49 this year and considered one of the most brilliant programmers on earth. There are professors still doing real work at 90+ years old (yes, UT professor Goodenough, of Nobel Prize fame). John has a long way ahead; best of luck!
    • chasd00 1596 days ago
      I've heard he was pretty good; I didn't realize he was this widely respected and admired. I remember him for that fast inverse square root hack, but that's about it.
      • galangalalgol 1596 days ago
        I think he got that from someone at SGI.
        • galangalalgol 1596 days ago
          Or was the second person to independently come up with it.
      • zhynn 1596 days ago
        He's already a legend, and he's not even 50.
    • streetcat1 1596 days ago
      Just out of curiosity, how do you define a genius programmer? (vs regular programmer).
      • hunterjrj 1596 days ago
        How about “wrote the Doom engine, then the Quake engine”. Good start?
      • tirewarehouse 1595 days ago
        He ushered in modern gaming, and everything he did in that domain was way ahead of the curve.
  • mahesh_rm 1596 days ago
    This post feels a little bit like something that'll be upvoted to top spot many years from now, for one reason or another. :-)
  • xwdv 1596 days ago
    Imagine inventing artificial general intelligences and then there’s some public outcry for products powered by natural, organic intelligence instead.
  • mentat 1596 days ago
    He invented modern graphics as a practical problem by himself, as the sole researcher. Given the tools at the time, that may have been the harder problem.
    • randomidiot666 1596 days ago
      > He invented modern graphics as a practical problem by himself as the sole researcher

      That is a ridiculous exaggeration. Carmack was clever enough to gain ~1 year advantage in performance over his competitors for the Doom engine, using Binary Space Partitioning, which was first applied to 3D graphics in 1969, before he was born. The Quake engine got a significant performance boost from Michael Abrash, who is a specialist in code optimization.

    • justin66 1596 days ago
      > He invented modern graphics as a practical problem by himself as the sole researcher.

      No, he didn't, and that is not a claim that he would ever make himself.

      • pixelpoet 1596 days ago
        Agreed, and if anyone could make that claim it'd be Eric Veach (who then went on to develop Google Adwords).
    • sterlind 1596 days ago
      Yes, a practical problem. The math behind computer graphics (i.e. optics) had been around for hundreds of years. The trick was using numerical analysis to optimize and approximate on limited hardware.

      We don't have the laws of AGI like we had the laws of optics (Asimov notwithstanding). Tons of research effort was poured into the wrong avenues in vision (hand-tuned HoG features, transforms, optical-flow analysis) and ML (support vector machines, computational learning theory) until a chain of breakthroughs hit on the right mathematical approach for vision and for supervised learning more generally.

      We have some mathematical approaches to try for AGI (e.g. policy optimization / max-Q in reinforcement learning), but the equations are plagued with fundamental issues (e.g. reward sparsity, easily gamed artificial objectives).

      Carmack optimized some very difficult equations when he worked on graphics, but in AGI we still don't have the right equations to optimize.
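      For concreteness, the "max-Q" equation mentioned above is the Bellman backup of Q-learning. A minimal tabular sketch on a toy chain environment (everything here is illustrative), where the single far-end reward also exhibits the sparsity problem:

        import numpy as np

        n_states, n_actions = 5, 2
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.95, 0.1
        rng = np.random.default_rng(0)

        def step(s, a):
            # action 1 moves right, action 0 resets; reward only at the far end
            s2 = min(s + 1, n_states - 1) if a == 1 else 0
            return s2, (1.0 if s2 == n_states - 1 else 0.0)

        s = 0
        for _ in range(5000):
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r = step(s, a)
            # move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = 0 if s2 == n_states - 1 else s2

        print(Q.round(2))   # action 1 dominates once reward has propagated back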

      • bobsil1 1596 days ago
        A dog brain can't design an artificial dog brain, and a human brain probably can't design a human-level AGI. It will likely be machine-evolved on cheap, massively parallel hardware, with the key problem being speeding up evolutionary search.
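        A minimal sketch of the kind of evolutionary search this gestures at: keep a population of parameter vectors, mutate, retain the fittest. The quadratic fitness target is a stand-in; real work would evolve network weights or architectures, massively in parallel:

          import numpy as np

          rng = np.random.default_rng(0)
          target = rng.normal(size=8)

          def fitness(x):
              return -np.sum((x - target) ** 2)   # higher is better

          pop = rng.normal(size=(32, 8))
          for _ in range(200):
              scores = np.array([fitness(x) for x in pop])
              elite = pop[np.argsort(scores)[-8:]]                # keep the winners
              children = (elite[rng.integers(8, size=24)]
                          + rng.normal(scale=0.1, size=(24, 8)))  # mutate them
              pop = np.vstack([elite, children])

          print(fitness(max(pop, key=fitness)))   # approaches 0 as search converges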
        • solipsism 1596 days ago
          A dog brain can't design an artificial dog brain, and a human brain probably can't design a human-level AGI.

          That's ridiculous. A dog can't draw a dog, therefore a human can't draw a human?

          • wpietri 1596 days ago
            Technically, a human can't draw a human. A human can only draw something that looks like a human to another human. The human viewer is doing most of the work by imagining that the drawn human is real (or by recalling the real human suggested by the drawing).

            As an example, consider all of these ASCII stick figures:

            http://www.ascii-art.de/ascii/s/stickman.txt

            Clearly, it isn't necessary to deeply comprehend the essential nature of humanity in order to draw one.

            Knowing how a mind works well enough to build one is an immense task. A better analogy would be building a human body from scratch, which is also something humans can't do.

            • solipsism 1596 days ago
              The philosophical musings about what it means to "draw a human" are pedantic and really not relevant. My point is that "dog cannot do dog-related task" does not imply or even suggest that "human cannot do human-related task". It's pseudo-logic of the sort that is unfortunately often very convincing to people.
              • wpietri 1595 days ago
                They are relevant because you used "draw a human" to prove something wrong. If mine was irrelevant, so was yours.

                The reason "dog brain can't design an artificial dog brain" is a useful contribution is that it gives people an intuitive understanding of a complex truth: things can't fully model themselves. Dogs can't fully understand dogs. People can't fully understand people.

                It's plausible to me that humans can evolve something akin to AGI. It's also plausible to me that a vast number of humans working together will manage to stumble into creating AGI. But I see no reason to think that humans have the intellectual capacity to understand a human-level mind well enough to build one intentionally.

          • sterlind 1596 days ago
            It is sort of interesting to wonder how far we can bootstrap ourselves. We went from trees to moon landings and neural networks; maybe we and our tools can rise to the top of the Kardashev scale (barring any cataclysms).

            If so, it implies there's a sort of intelligence-completeness, where species can accomplish any physically possible objective.

            Or maybe we will peter out before achieving practical fusion, quantum gravity or AGI. Either way, "X can't create X" is a silly argument. Humans create humans, there's whole websites dedicated to that.

            • bobsil1 1596 days ago
              We've never hand-designed anything as complex and novel as what biological evolution discovers. We can't even ship error-free word processors, and the comparatively bug-free Space Shuttle code doesn't do much relative to AGI.

              "Humans create humans" via what biological evolution found.

  • unityByFreedom 1596 days ago
    One could imagine him beginning to work in gaming AI, but he says he wants to work on AGI, not simply gaming AI.

    I'm personally not convinced that it is encouraging when someone bright sets their sights on AGI, particularly someone who appears to have never competed on Kaggle. It screams hubris.

    • bitexploder 1596 days ago
      I am not convinced current AI is the approach that leads to AGI. I think it is at least feasible that someone outside of this sphere has a reasonable shot at it. It feels like many AI researchers get caught up in refining existing techniques that amount to fancy statistics algorithms and data crunching, but not AGI. Current AI techniques may be synthesized or used in part by some AGI, but it's clear there is a revolutionary step to be made. Kaggle is almost just an optimization fest, not really advancing toward AGI.
      • TeamSlytherin 1596 days ago
        The announcement today has the potential to cut the estimated number of years to reach AGI in half, and I'm sure VC funds are DM'ing him non-stop right now. But we don't know which path Carmack will take, and as you rightly point out, current trends in AI are mostly ML/xNNs with a goal of turning data into profit (a heavy focus on products/markets). Even those talking about AGI are fractured into different groups. Oddly, many in ML are talking about future abilities that are mostly defined by AGI research (even if they see AGI as a distraction). From his post, I don't think "product development" is a goal. It's not clear what challenges or milestones he will set for himself (just having a better testing suite has become an AGI issue of late, so maybe he will contribute to that first).
      • unityByFreedom 1596 days ago
        > I am not convinced current AI is the approach to an AGI

        This is a non sequitur. I didn't argue that current AI theory, or Kaggle, will lead us to AGI.

        > Kaggle is almost just an optimization fest

        Public machine learning competitions have produced a lot of innovative learning techniques. If Kaggle is so easy, and AGI so hard, it would follow that anyone tackling AGI would have some experience applying machine learning competitively in some public space. It doesn't necessarily need to be Kaggle. Kaggle just happens to be good at hosting such public competitions, and in fact has surfaced several state-of-the-art implementations. The difference between prize money (~top 5) and no-prize-money (5-10) may be an "optimization fest", but without the competition, the solutions presented would have entered the public sphere at a much slower rate.

        EDIT: Please be kind and explain your downvotes.

        • jacobush 1596 days ago
          Ok, I'll take a shot: "it would follow that anyone tackling AGI would have some experience applying machine learning competitively in some public space".

          No, that would absolutely not follow. (I'm a pretty good devil's advocate, but I can't with this one.)

          And given that AGI would come from some completely new breakthrough unrelated to the current practice of "machine learning", competitions may be completely moot. They may be great for finding nice increments in the state of the art of machine learning, but they are unlikely to help much with AGI.

          I could even imagine a stumbling AGI being very stupid compared to just about any machine learning solution thrown at it, yet being undeniably AGI. Like a dog not being very good at Dota, StarCraft, or chess, yet undeniably possessing some kind of general intelligence.

          • tarsinge 1596 days ago
            > And given AGI would come from some completely new breakthrough not related to the current practice of "machine learning"

            I’m not so sure of that. Intuitively, AGI feels like being able to generalize and automate what is already done in specialized problems, like having a meta program that orchestrates and applies specialized subsystems and adapts existing ones. If playing Go, StarCraft, speech recognition, and computer vision are already built from the same building blocks, it feels like a meta program that's just trained to recognize the type of problem and route it to the appropriate subsystem, with some parameter tweaks, is a path to AGI. In the dog example you don't even need subsystems that are individually better than humans.

            Edit: my point is I feel like AGI is the interface and orchestration between specialized subsystems we already know how to create. Trying to train one big network, like a generalized AlphaGo, is a dead end, but having simpler subnetworks ready to be trained on a specific problem seems feasible. Much like the brain at first looks like one big network, but in practice has specialized areas. The key is how these networks are interfaced and which information they exchange to self-adapt. Maybe these interfaces themselves are subnetworks specialized in the problem of interfacing and "tuning hyperparameters".

            In short: I think when we figure out how to automate Kaggle competitions (recognize the pattern of the problem, then instantiate and train the relevant subsystem), we'll have taken a good step toward AGI. We don't need better performance in, e.g., image recognition, just figured-out orchestration.
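            A toy sketch of that orchestration idea (all names are stand-ins, and the keyword "classifier" is a crude placeholder; a real meta-controller would itself be learned):

              def vision_subsystem(task):
                  return f"[vision net would handle: {task}]"

              def game_subsystem(task):
                  return f"[self-play/RL net would handle: {task}]"

              def speech_subsystem(task):
                  return f"[speech net would handle: {task}]"

              ROUTES = {"image": vision_subsystem, "go": game_subsystem,
                        "starcraft": game_subsystem, "speech": speech_subsystem}

              def meta_controller(task):
                  # recognize the type of problem, route to the right subsystem
                  for keyword, subsystem in ROUTES.items():
                      if keyword in task.lower():
                          return subsystem(task)
                  return "[no match: instantiate and train a new subsystem]"

              print(meta_controller("Label the objects in this image"))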

            • krak12 1596 days ago
              Some people have already developed prototypes in that direction.
          • unityByFreedom 1596 days ago
            Thank you for your reply.

            Turning a blind eye to existing knowledge may result in reinventing things that already exist. Nobody expects students to follow the same concepts as their teachers; the point is just to leverage existing knowledge.

            > I could even imagine a stumbling AGI being very stupid compared to just about any machine learning solution thrown at it - yet being undeniably AGI. Like a dog not being very good at DOTA, Star Craft or Chess, yet it undeniably possesses some kind of general intelligence.

            People have debated whether animals are intelligent for ages. This is another type of problem: how to define intelligence. The most famous attempt in recent times is the Turing test.

            • jacobush 1595 days ago
              Another type from what? If you can't define it, how can you optimize for it? (Certainly not in the online competitions of today.)
    • de_watcher 1596 days ago
      Artificial Gaming Intelligence (AGI).