The BS-Industrial Complex of Phony A.I.

(gen.medium.com)

539 points | by scottlocklin 1766 days ago

56 comments

  • atoav 1766 days ago
    At the Biennale in Venice (one of the most important art shows there is) I saw a work which looked like this:

    There was a metal frame holding two glass plates with Venetian sediment in between (sand, soil, mud). In the center there was another metal frame which formed a hole. There were also two PCBs with ATmega microcontrollers.

    In the text the artist claimed she controlled the biome of the soil with an AI using various sensors and pumps.

    This was clearly a fake, as you could see nothing like that on the PCB.

    Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI, even if it is a simple algorithm. AI is the phrase behind which magic hides, and people love magic. Everything that has the aura of “humans don’t fully understand how it works in detail” will be used by charlatans, snake oil salesmen and conmen.

    If even artists slap “AI” onto their works to sell them, you know we are past the peak now.

    • YeGoblynQueenne 1766 days ago
      >> Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI even if it is a simple algorithm.

      Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

      So are many other AI algorithms, some of which are simple enough to be understood so well that most people don't recognise them as AI anymore: search algorithms like depth-, breadth- or best-first search, game-playing algorithms like alpha-beta minimax, and gradient descent / hill climbing are the examples that readily come to mind.

      I think the above article and your comment are assuming that, for an algorithm to be "AI", it must be very complicated and difficult to understand. This is common enough to have a name: "the AI effect". A few years down the line I bet people will say that "this is not AI, it's just deep learning".

      There's no reason for AI algorithms to be complicated. Very simple algorithms can create enormous complexity, even infinite complexity. The state of deterministic systems with even a couple of parameters can become impossible to predict after a small number of steps if they have the chaos property. Language seems to be the application of a finite set of rules on a finite vocabulary to produce an infinite set of utterances. Complexity arises from very simple sources, in nature.
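
      To make that concrete: below is a minimal sketch of backpropagation itself (mine, not from the article), a 2-4-1 sigmoid network learning XOR in about twenty lines of numpy. Every step is just the chain rule followed by a plain gradient descent update; there is nothing complicated hiding in it.

        # Minimal backpropagation sketch: a 2-4-1 sigmoid network learning XOR.
        # Illustrative only; the seed and learning rate are arbitrary choices.
        import numpy as np

        rng = np.random.default_rng(1)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
        W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):
            h = sigmoid(X @ W1 + b1)             # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)  # backward pass: chain rule,
            d_h = (d_out @ W2.T) * h * (1 - h)   # one layer at a time
            W2 -= h.T @ d_out                    # gradient descent step
            b2 -= d_out.sum(axis=0, keepdims=True)
            W1 -= X.T @ d_h
            b1 -= d_h.sum(axis=0, keepdims=True)

        print(out.round(2))  # typically converges towards [[0], [1], [1], [0]]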

      • atoav 1766 days ago
        The point was that her PCB wasn’t connected to anything at all. She claimed there were pumps and sensors, but there was literally nothing. There were cables etc and it certainly would fool someone who has no idea of circuit design and electronics, but I happen to know a bit about it and the circuit almost certainly didn’t do what it claimed it did.
        • YeGoblynQueenne 1766 days ago
          Ah, I see. I must have misread your comment. I thought you meant that the PCB didn't have anything like (a hardware implementation of?) an AI algorithm on it, not that it had nothing at all on it.
        • pts_ 1766 days ago
          That's a horrible example to call out AI.
      • cr0sh 1766 days ago
        > Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

        As time rolls on and we see more articles like this one calling out the "AI BS" - which I agree should be called out...

        I worry that a new "winter" will set in, and funding will be cut, and research towards how biological neural networks actually work vis-a-vis artificial neural networks will suffer.

        Because from what I understand, we currently don't know how such biological networks actually "learn" - because there isn't a "mechanism" for backpropagation to occur.

        IIRC, there are still questions about how information is propagated through biological networks; our artificial representations of them are constrained to approximately a single dimension of the real thing (and even that doesn't capture the biology, hence the idea of "spiking neural networks") - but there may be other avenues of information diffusion that are important as well, still to be revealed in the biological makeup.

        We know for certain we are missing something fundamental: even if you could scale up some of today's best deep learning systems to data center scale, they wouldn't approximate anything close to what goes on in the human brain, given the brain's size and power constraints.

        Figuring this out could be set back, when funding becomes scarce once more.

        • autokad 1765 days ago
          A lot of people think it will mirror how the internet was: lots of hype, people threw money at it while not really understanding it, but when the profits didn't roll in, the winter came. People forget how desolate tech was, and it wasn't just 2001; probably from 2001 to somewhere in 2005. Anyhow, those who figured out how to make good use of the internet were incredibly successful. Once the winter was over, tech was among the hottest industries for a long time.

          AI might end up the same and enter a winter. People will talk about how silly endeavours into AI were, but a few companies will really figure it out, a huge explosion will occur, and people will wonder how anyone did anything without AI. Something like that.

          For those who forgot what that winter was like: 1999, 2:10: you're not a pure internet company... (insinuating that's bad) https://www.youtube.com/watch?v=GltlJO56S1g&t=316s

          The .com bubble burst somewhere around March 2001.

          2002, 2:22: Netflix represents one of the few success stories of an otherwise desolate tech sector https://www.youtube.com/watch?v=YBLAwGhyV5k

          2004: who in their right mind brings a company to market in the doldrums of August in a down tech... https://www.youtube.com/watch?v=HxOoeCHc47Q

      • Chris2048 1766 days ago
        Back-propagation can't be used in total isolation (it needs data and a network), so by itself it isn't "AI", even if it is an "AI algorithm".
      • drdeca 1766 days ago
        Sure, backprop is fairly simple, but the thing it produces is somewhat complex, and we seem to find it hard to explain “why” the weights it finds work (even though we understand clearly how it finds weights that do work), right?

        That seems sufficiently “black-box-ish” to me?

      • pslam 1766 days ago
        > Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

        Back-propagation is not an "AI" algorithm.

        You are ironically doing exactly what this article is about.

        • YeGoblynQueenne 1765 days ago
          Without meaning to sound patronising, I believe I understand your confusion. Allow me to explain.

          My comment is making an entirely uncontroversial statement: that "backpropagation is an AI algorithm". Not that "backpropagation is AI". The latter could be taken to mean that backpropagation is itself artificially intelligent, that it exhibits some kind of intelligence (leaving aside for the moment the fact that we have no agreed upon definition of "intelligence", artificial or otherwise). If I understand your comment correctly, this is the interpretation you make of my comment.

          However, what my comment says, and this should be clear from the context ("most researchers will agree"), is that backpropagation is an algorithm from the field of research that is known as AI.

          In that context, "AI", "Artificial Intelligence", is the field of research that investigates methods to construct "AI", "Artificial Intelligence(s)". Backpropagation is a component of one such method, neural networks.

          I think then that the confusion, which is also discussed, and exhibited, in the article, stems from the fact that the same word is used to describe both "artificial intelligence" and the field that researches artificial intelligence.

          And I hope this clarifies the confusion.

        • sgt101 1766 days ago
          This is not serious: back-propagation is the origin point of modern AI. This is the algorithm that powers the deep network revolution. It's not the end point, it's not a magic box, but it is fundamentally an AI algorithm.

          Just saying "no it isn't" is not helpful or useful.

          • pslam 1766 days ago
            You are missing the entire point of the article if you continue to call these algorithms "AI". Inflating simple things like this to mean "AI" has led to the term being meaningless.

            You are the example it is making.

            • anon284271 1765 days ago
              I thought one of the key principles in this discussion is that the goalposts keep moving: before it's a solved problem, it requires Artificial Intelligence; once solved, it's just basic algorithms. The bar for what constitutes 'AI' keeps getting raised.
              • p1esk 1765 days ago
                People have often believed some narrow tasks require general AI (AGI); however, it turns out that almost any specific task can be solved without building an AGI first. This does not change the meaning of "AGI": a system that is able to perform any mental task as well as an average human.
                • anon284271 1764 days ago
                  Most of the use of the term 'AI' I see isn't referring to AGI at all.
            • sgt101 1765 days ago
              OK: define Intelligence. Define Life.

              There are no accepted definitions of these terms. Are they meaningless?

              AI that does not include backpropagation or logical deduction or GAs or optimisation is... is... magical thinking. AI without the nuts and bolts from the last 50 years of work is meaningless. The article is heartfelt, and we all agree with the tenet that people pretending they are using AI when they are really using a database isn't a good thing, but if you take any current system and look right down inside it, all you will find is a Shannon-type implementation of Church-Turing.

              • Wiretrip 1765 days ago
                There are tonnes of other approaches to machine learning that don't involve backprop! There are also other approaches to neural networks that don't use backprop (have a look at Numenta's stuff for example). I suggest you watch Pat Winston's brilliant MIT AI lectures to see how huge the range of techniques is.
                • sgt101 1765 days ago
                  I direct you to lectures 12a and 12b.
    • hx2a 1766 days ago
      Is this it? Biologizing the Machine (terra incognita) by artist Anicka Yi?

      https://bagrifoundation.org/anicka-yi-la-biennale-di-venezia...

      If this is what you are talking about, I'm disappointed that the work is crap. She also won the Hugo Boss prize in 2016. This is a highly respected award in the art world.

      The art world isn't doing a good job understanding AI. Part of this is media hype causing people to be misled. Another part is the cognitive tools artists have developed to understand the world are inappropriate for understanding these technologies. For this work to be believable by technologists there would need to be some kind of demonstrable scientific rigor involved. There is none here, but people from the art world who are evaluating these things don't notice it is missing.

    • marvin 1766 days ago
      You should track down the artist and ask them whether this was intentional, and if not, point out the unintended, poignant commentary it represents on the state of AI in the industry. I bet they’d love it!
    • mikk14 1766 days ago
      Maybe that's exactly the point she was trying to make :-)
      • michalu 1766 days ago
        With 90% of modern art, the "point they're trying to make" is whatever is most convenient on the occasion. The artists themselves rarely know, as modern art is as much a BS-industrial complex as AI.
        • DonHopkins 1766 days ago
          How To Deconstruct Almost Anything

          My Postmodern Adventure

          by Chip Morningstar chip@netcom.com 5-July-1993

          "Academics get paid for being clever, not for being right." - Donald Norman

          https://www.donhopkins.com/home/catalog/text/decon.html

        • minikites 1766 days ago
          Art is more complicated than that and your dismissive attitude misses the point: https://www.youtube.com/watch?v=67EKAIY43kg
          • michalu 1766 days ago
            Art is not complicated at all. Art marketers and pseudo-artists like to make it complicated to keep the "contemporary art" scam alive and profitable.

            That's because real art, like anything (engineering, piano, sports), takes years and years of hard work to perfect the skill which gives the ability to translate your ideas into something beautiful and self-explanatory (when you don't need to be an expert to "understand the meaning", as it is with masters like Michelangelo).

            A great example is when Henry Moore saw Michelangelo and started to cry and confessed the only reason he makes such statues is because he knows "he will never be able to create anything as beautiful as that"

            Perhaps he would if instead he worked on things that are hard.

            It's hilarious how modern art lives on creating documentaries, popularizing an "artist", then selling art for investment while fooling people by using shaming tactics like "you just don't understand art" or "that was not the point the artist was trying to make".

            If the artist has to explain her work in words, she should have chosen literature as a form of expression, for clearly she failed to do so using her current means.

            • PhasmaFelis 1766 days ago
              There's a lot of bullshit around art in general; it's not limited to modern art. If we only cared about the quality of the work, then a perfect forgery of da Vinci's style would be valued exactly as much as an original da Vinci.

              I can't agree with the idea that art has to be technically difficult to be "real." If a simple, abstract sculpture gives me more joy than a portrait from a great master, that doesn't mean my understanding is defective. (Nor is yours if you disagree.) Meaning isn't limited to what the creator says it is, either.

              People are obsessed with nailing down a precise definition of "art," and it always turns out to be "the stuff I like, but not that crap you like."

              • michalu 1766 days ago
                I take your point on joy, and art perhaps has a broader spectrum, like music. I can enjoy a nice simple pop song but I don't confuse it with Chopin.

                On technical level. I disagree. There's a difference.

                A piano player that plays a flawless Beethoven is not Beethoven, because there's a difference between the ability to compose and the ability to play a composed piece.

                And so it is with your analogy of Da Vinci vs. forgery.

            • nathan_long 1766 days ago
              > A great example is when Henry Moore saw Michelangelo and started to cry and confessed the only reason he makes such statues is because he knows "he will never be able to create anything as beautiful as that"

              Interesting. Citation?

              I personally love this epic ownage: Norman Rockwell painted a Jackson Pollock inside of his own realistic painting. https://triviahappy.com/articles/norman-rockwell-made-fun-of...

              • michalu 1760 days ago
                In the film "Carving a Reputation" (BBC, 1998), it is said that when he saw the Medici Tombs (1524-31, in the Medici Chapel in Florence) during his travel scholarship, when he was a student of sculpture at the Royal College of Art, he didn't want to look at Michelangelo's work at first, but finally admitted that those figures possess "a tremendous monumentality. (…) a grandeur of gesture and scale that for me is what great sculpture is", as Moore wrote in his diary. The nobility and grandeur of the Italian tradition was a humbling experience that threw Moore into a profound depression. To be a great sculptor, this is what he had to compete with.

                He later claimed the reason he's "inspired" by "Sumerian" art is that he feels Greco-Roman art is over-represented.

              • michalu 1766 days ago
                >Citation

                The anecdote is from Moore's exhibition in Krakow, it was discussed there. I will get a precise source in 2 days and post it here.

          • Chris2048 1766 days ago
            So, at one point the video says "The quality and character of his line work is astounding. The restrained use of color is exquisite."

            Do you believe the scribbles are "amazing scribbles"?

      • atoav 1766 days ago
        That is why I added the “(?)”, I wasn’t sure about it either :)
    • dspillett 1766 days ago
      Dubiously claiming "I" has worked for many humans in many walks of life; I don't see dubiously claiming "AI" as much different!

      A key problem is that AI is a very wide and vague concept.

      What she might have is a simple "expert system" monitoring the inputs, which is what a lot of systems called AI in the 80s were (and many today probably are too), and she may have programmed it initially, in its knowledge acquisition phase, with the aid of a machine learning arrangement. That would qualify as "using AI", depending on which interpretation of "AI" you are working by. An expert system can be called a rudimentary AI, and could be implemented as simple combinatorial logic if new learning during operation is not needed, as the sketch below illustrates.
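
      As a purely hypothetical illustration (nothing to do with what the installation actually contained), such a rule-based expert system for a soil biome could be little more than a handful of hand-written rules mapping sensor readings to actuator commands; every name and threshold below is invented:

        # Hypothetical sketch of a rudimentary rule-based "expert system":
        # fixed rules mapping (invented) sensor readings to actuator commands.
        RULES = [
            (lambda s: s["moisture"] < 0.2,     "run_water_pump"),
            (lambda s: s["ph"] < 5.5,           "dose_lime"),
            (lambda s: s["temperature"] > 30.0, "run_fan"),
            (lambda s: True,                    "idle"),  # default rule
        ]

        def decide(sensors):
            # First matching rule wins; the catch-all keeps the system defined.
            for condition, action in RULES:
                if condition(sensors):
                    return action

        print(decide({"moisture": 0.15, "ph": 6.8, "temperature": 22.0}))  # run_water_pump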

      • atoav 1766 days ago
        The thing is, her PCB wasn’t at all connected to the frame electrically. There were decoy cables but they were all connected back to the PCB as far as I could tell.

        The remaining circuitry was way too simple for the claimed functionality. Basically a DIY Arduino on a bigger PCB with a few LEDs.

        Around the inner frame there was transparent silicone; how her “sensors” were supposed to get through that layer of see-through insulation without being seen themselves is beyond me.

        This is why I think it is a work of fiction. I don’t know whether it was on purpose or because they thought nobody would notice. I certainly found it to be an interesting commentary on AI :)

        • cr0sh 1766 days ago
          While I don't doubt what you saw or your explanation/interpretation of the project (that is, what was likely a BS application of the terminology), it should be understood that such a system could in theory be constructed.

          It is possible, for instance, to implement a small neural network on a regular Arduino; a simple google search for "arduino neural network" should yield some information. It would be a small step from there to interfacing such an implementation to something else as part of an art project.

          Again, I am not doubting your story; I just wanted to point out that what appears to be a simple system could very well have an actual artificial intelligence aspect to it, even given the seemingly unreasonable processing constraints of an Arduino.
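
          For a sense of scale, here is a sketch (in Python, with made-up weights purely for illustration) of the inference step of a 2-3-1 network: roughly a dozen multiply-adds, which would translate directly into a few lines of C on an ATmega. Training would happen offline; only this forward pass needs to run on the device.

            # Sketch of the forward pass of a tiny 2-3-1 network with hardcoded,
            # made-up weights: about a dozen multiply-adds, well within reach of
            # an 8-bit microcontroller once rewritten in C.
            import math

            W1 = [[0.8, -0.4, 0.3], [0.2, 0.9, -0.7]]  # input -> hidden (invented)
            B1 = [0.1, -0.2, 0.05]
            W2 = [0.6, -0.5, 0.9]                      # hidden -> output (invented)
            B2 = 0.0

            def sigmoid(z):
                return 1.0 / (1.0 + math.exp(-z))

            def predict(x):  # x = [moisture, temperature], normalised to 0..1
                hidden = [sigmoid(x[0] * W1[0][j] + x[1] * W1[1][j] + B1[j]) for j in range(3)]
                return sigmoid(sum(h * w for h, w in zip(hidden, W2)) + B2)

            print(predict([0.3, 0.7]))  # e.g. interpreted as a pump duty cycle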

    • Iv 1766 days ago
      AI may have a definition that is a bit vague but I find it unfair to say that you can call any black box "AI". (Though I'll gladly grant artistic license to your specific example)

      I was recently asked to make a very simple introduction to AI. It made me think a bit as I have been annoyed at the confusion between ML and AI that is so frequent nowadays.

      I proposed that AI started with Turing's hypothesis that brains are Turing machines and that AI was the field that tries to bring human capacities to computers.

      My next slide says "AI is not really a field. It is more of a goal and a theme. And now a set of techniques used in many different fields."

      • mcguire 1766 days ago
        Your definition is interesting. Unfortunately, we were discussing the definition used by the rest of humanity. Which is complete gibberish.
    • DEADBEEFC0FFEE 1766 days ago
      Anyone who read science fiction as a youth must surely cringe at the way AI is used. It's a barely disguised synonym for magic.
      • antod 1766 days ago
        Not necessarily. The main example (that stuck with me) of AI in books I read as a kid 35 yrs ago was the Sirius Cybernetics Corporation. Now I think that was eerily prescient.
      • robertAngst 1766 days ago
        AI in 2019 is useful math + programming.

        AI in books is impossible in 2019.

        I don't cringe, but then again, my boss is really happy with the AI program I built. Maybe it's time to change definitions or terms.

        • wolfgke 1766 days ago
          > Maybe its time to change definitions or terms.

          You should not change definitions/terms, but rather introduce new definitions/terms if you want to express something different.

        • eli_gottlieb 1766 days ago
          >Maybe its time to change definitions or terms.

          Nah, I prefer having something to keep me cynical and disappointed every time another "cutting-edge AI" application comes out. As a research community, we should be held accountable for our failures to achieve the real shit, the full-on Charles Stross vision.

    • swebs 1766 days ago
      >all that counts is that you call it AI even if it is a simple algorithm

      Is that incorrect though? I think a lot of people here are making a lot of assumptions about a very vague and ill-defined term. To me it seems like AI is just a way of saying that a program is capable of making decisions. There is no implicit tie-in to machine learning or neural networks or anything like that. These decisions can come from 5 lines of "if" statements (as is the case with most video game AI).
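
      To illustrate, a hypothetical guard "AI" of the kind most games ship (names and thresholds invented) really is just a handful of if statements evaluated every frame:

        # Hypothetical game "AI": a few if-statements deciding an action each
        # frame. All names and thresholds are invented for illustration.
        def guard_ai(distance_to_player, health, has_ammo):
            if health < 20:
                return "flee"
            if distance_to_player < 5 and has_ammo:
                return "attack"
            if distance_to_player < 15:
                return "chase"
            return "patrol"

        print(guard_ai(distance_to_player=3, health=80, has_ammo=True))  # attack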

      • danbruc 1766 days ago
        If we start calling sorting algorithms artificial intelligence, then the term loses its last bit of meaning and becomes synonymous with algorithm.
    • walrus1066 1766 days ago
      "AI is the phrase behind which magic hides and people love magic. Everything that has the aura of “humans don’t fully understand how it works in detail” will be used by charlartans, snake oil salesmen and conmen."

      100% this!

      • usgroup 1766 days ago
        ...that should really be quantum computing right now.
        • Wiretrip 1766 days ago
          Quantum will be the next 'AI', like blockchain was the last one. :-)
          • dreamcompiler 1766 days ago
            I think you're right. I'm hoping that "show me your qubits" will put a stop to that. It's easy to see qubits because they're hardware; it's a little harder to see a local-minimum-avoiding heuristic search algorithm.
          • madcaptenor 1766 days ago
            What about the quantum blockchain?
    • Balgair 1766 days ago
      Amazing! I love it!

      https://en.wikipedia.org/wiki/The_Treachery_of_Images

      It's like The Treachery of Images but in reverse.

      This is an AI/mind

      c'est un cerveau / une intelligence artificielle

    • ranit 1766 days ago
      Good observation. This has happened with the word robot for many years. It is AI time now. :(
    • solidasparagus 1766 days ago
      We might be headed for the trough of disillusionment, but in terms of technology and industry applications, we are still in the early phase of modern AI (big data + ML).
      • Wiretrip 1766 days ago
        Exactly, we are in a very early phase. However, the hype wave is driving a dangerous level of premature deployment of half-baked parlour tricks, or over-reliance on simple algorithms where there are real consequences, e.g. prison sentences, insurance eligibility and yes, autonomous vehicles.
        • cr0sh 1766 days ago
          I don't know if this is against HN "rules" - but I'm going to give you an explanation anyhow.

          I downvoted one of your comments that was essentially a cut-and-paste of this comment here; I have noticed that you have done this more than once here within the comments.

          Please refrain from doing this; it seems to add nothing but noise to the conversation. If you believe your commentary has merit within the context of the thread at hand, do your best to restate that opinion in an original manner, not by simply copying and repeating the exact same statement.

          I hope this explanation clarifies why I downvoted you; I believe your comment does have merit to the discussion as a whole, but posting it once should be enough to reach the audience with its intended message.

          • Wiretrip 1765 days ago
            OK, under most circumstances that is a fair point, and I personally hate astroturfers, but in this case I feel the comment was equally valid as an answer to two threads that were physically separate - and could have ended up in vastly different places. If there is a way to link to an existing comment then let me know and I'll use that method in future :-)
        • solidasparagus 1765 days ago
          Very good point. At least you can feel the pushback against that in the industry. It's a fraught time, but the AI community feels extremely proactive.

          It is however an uphill battle when these 80% systems offer such immediate gains. We really need governments to be informed and active.

      • VBprogrammer 1766 days ago
        I'm doing a bit of hiring at the moment. It's hard to find a single CV which doesn't have some kind of machine learning slant. As much as I think there are plenty of advances in ML left to take, I doubt every graduate will step out into a good application for existing machine learning tools.
        • misterman0 1766 days ago
          Just like not all astrophysicists reach Nobel prize level achievements, not all machine-learning graduates will innovate in their field. But to help you out: in order to find people who are likely to innovate, find out if that is their goal. If it is not, then you know not to hire them. But if that's their goal, maybe start some kind of "program" where you give these people a chance. 6 months. A year maybe. If they are close to something, or if you're pleased with them even though they aren't really innovating, then give them a "tenure" so to speak.

          An evaluation process; it's very common here in Sweden. I mean, how much money can you really lose in six months? You can kick that shithole straight out day one if that's your prerogative. It's a good deal for both parties.

          • VBprogrammer 1766 days ago
            Sorry if I wasn't clear, our problems are CRUD at modest scale. There is little room for the application of machine learning.
            • dspillett 1766 days ago
              > There is little room for the application of machine learning

              You see people trying to shoehorn ML into many such systems though, and there is money in it, which is why people are chasing it to have it on their CVs.

              Like the noSQL hype of some years ago, it'll settle down and people will gravitate back more towards the right tool for the job (which will sometimes be ML based, but often not, just as "noSQL" is sometimes the right tool or right enough). ML will survive where it is the best tool for the job, or at least where it can be genuinely useful and not significantly sub-optimal.

              > our problems are CRUD at modest scale.

              I see some of our client base looking into ML and "Big Data", and I despair a little because they often fail badly at getting "little data" correct. It is actually part of the sales pitch for ML: let the AI filter out the crap in your inputs and give you something approximating a decent answer as output. They'd be much better served working on fixing the data sources or using more traditional cleansing methods, but that seems like harder work compared to the new magic some consultant is extolling. ML isn't a magic bullet, but it is currently being sold as one.

        • glitchc 1766 days ago
          Does your job description list machine learning as a requirement? If so, there's your problem. Maybe a chat with HR may be in order.
          • DonHopkins 1766 days ago
            Maybe the problem is that HR is using machine learning to filter the resumes.
          • VBprogrammer 1766 days ago
            Definitely not.
        • pts_ 1766 days ago
          What are you hiring for if you don't mind?
    • dreamcompiler 1766 days ago
      Which gallery? I'm going there in a few months and would like to see that.
    • olalonde 1766 days ago
      Same thing with "blockchain".
    • robertAngst 1766 days ago
      Okay, but as much as we believe profitable companies are run by idiots, they aren't. They need to stay profitable or create profit, so they hire technical managers who know that AI is really just finding uses for data.

      All of tech is magic to the non-tech people.

      • wolfgke 1766 days ago
        > Okay, but as much as we believe profitable companies are run by idiots, they arent.

        Rather: many customers are idiots, too.

  • dreamcompiler 1766 days ago
    This happened right before the first AI Winter in the late 80s: AI (in the form of expert systems) solved a number of hard problems and was hyped as being able to solve every problem. Reality set in when we figured out:

    1. It didn't scale and

    2. Getting 80% of the problem solved was easy, but getting that last 20% was very, very hard. Maybe several orders of magnitude harder than the first 80%.

    Nowadays we don't seem to have problem 1 quite so much, but problem 2 is still there in a big way. Witness self-driving cars, where driving on an interstate highway in broad daylight is easy, but driving through a snow-covered construction zone at night is impossible. Or just dealing with a bicyclist on the road without killing them.

    We're not going to have AGI any time soon.

    • bjourne 1766 days ago
      The first "AI winter" happened in the 60's. In 1949 Warren Weaver published a memorandum outlining his plan for universal machine translation. He likened the problem to cryptography, a field that had exploded due to WWII, and thought it would be solved in a few years. In hindsight, we know that his ideas were very naive but Weaver was an influential and charismatic person so they inspired lots researchers. They also attracted lot of funding from the US government, likely because of the envisaged military and political uses of machine translation. Then in 1966, the ALPAC committee setup by the US government published a report that concluded that machine translation were infeasible and thereby killing the funding.

      Work on machine translation resumed at IBM in the late 80's, but then based on statistical methods.

      • sjy 1766 days ago
        It feels harsh to call him naïve because I'm sure I wouldn't have had any more insight, but it is surprising how little we knew about the science of language pre-Chomsky, considering that we have been learning languages for millions of years. Now that we have had several generations of computer technology fall short of the abilities of human translators, it seems obvious that you're probably not going to be able to write a machine translation program on a computer with a few megabytes of memory.
        • skybrian 1766 days ago
          Did you mean post-Chomsky? It's my understanding that machine translation started to become semi-useful after abandoning traditional grammars (or anything Chomsky would recognize) and switched to statistics.
          • sjy 1763 days ago
            I did mean pre-Chomsky. I was using him as a rough reference point for when linguistics started to get serious as a scientific discipline.

            Because people have been informally studying linguistics for all of human history (by trying to learn new languages in adulthood), it's surprising to me that we ever thought that natural language processing would be easily solved by computers. To me, it seems quite obvious that this is a fundamentally very challenging problem for computers, like theorem proving or painting, rather than accounting or printing.

            I was born with the benefit of hindsight here, but I find it strange that computability theory was formalised in the 1930s (before we could actually build powerful computers), and yet decades later, leading experts still failed to realise that language processing was fundamentally difficult, and intractable on the hardware they had at the time. This seems like an obvious consequence of a number of facts which they did know, like "describing an algorithm is very different to having a normal conversation," and "texts are open to interpretation and cannot be perfectly translated into other languages."

          • cr0sh 1766 days ago
            > It's my understanding that machine translation started to become semi-useful after abandoning traditional grammars...and switched to statistics.

            What's curious about this statement (and forgive me if I am in error here; I am not extremely familiar with the machine translation field) is that the community chose to go down the route of "traditional grammars" and their rules, for the purposes of translation, rather than down the route of statistics to begin with.

            Because as it was noted, this happened after WW2 and the advances in cryptography, which were by and large also driven by statistical analysis and other similar mathematically based advances; one would think that such insights would have carried over into the machine translation realm.

            But they didn't, which is simply a curious historical footnote. It's also something we see often in history, particularly as it relates to technological advances: certain paths are taken that lead to virtual dead-ends decades later, even though the path that should have been taken was presaged by earlier work yet, for one reason or another, wasn't pursued.

            If we only had a way to avoid such "wrong turns", we could be much further along technologically - but of course, that would also be tantamount to "telling the future"...almost.

            • mcguire 1766 days ago
              The downside of statistical models has always been understanding and explaining why they do whatever they do. Any statistical model will generate a result, either right or wrong, but it's impracticable to figure out how it came up with that result: the only knob you have is training data.

              "Symbolic" techniques are the usual first choice because in order to get them to work reasonably well (i.e. beyond the 60%-80% threshold), you have to understand what they're doing.

          • burnte 1766 days ago
            "It's my understanding that machine translation started to become semi-useful after abandoning traditional grammars (or anything Chomsky would recognize) and switched to statistics."

            The same thing happened in speech recognition. IBM started throwing statistical modeling and horsepower at the speech recognition problem, and folks like Kurzweil thought it was inelegant and not "true" speech recognition. However, it serves the purpose, and we know that humans use similar tricks when listening to augment pure recognition. It's harder to hear someone in a loud room if you can't see their mouth, or don't know the context of what they're saying.

        • raverbashing 1765 days ago
          To be very honest you don't need Chomsky to realize how complicated languages are

          People who think it's easy usually don't speak a 2nd language (amongst other problems in communicating with non-natives for example) so I'm going to bet on that.

      • dreamcompiler 1766 days ago
        Good point. I thought you were going to talk about Minsky and Papert and Perceptrons. That caused another mini-winter, at least w.r.t. connectionist models.
    • Wiretrip 1766 days ago
      So true. The real danger with 'AI' at the moment is the premature deployment of half-baked parlour tricks, or over-reliance on simple algorithms where there are real consequences, e.g. prison sentences, insurance eligibility and yes, autonomous vehicles.
      • tempodox 1766 days ago
        > ...premature deployment of half-baked parlour tricks...

        And the “fake it till you make it” attitude in SV only makes this worse.

    • jayd16 1766 days ago
      You're always going to see #2. Use cases get marketable before they get fully solved, so you'll always see pretty good but not great examples. In the meantime there are a lot of things solved by neural nets that we take for granted, such as voice and face recognition.
      • hodr 1766 days ago
        Voice is solved? Then wider hall cunt my giggle nexus ever get thaddeus tribbleshits?
        • burnte 1766 days ago
          That's MISTER Thaddeus Tribbleshits to you.
    • tintor 1766 days ago
      AGI is not needed for self-driving.

      None of what is mentioned above is a deal-breaker for self-driving car service:

      - lidar at night works just fine

      - plenty of cities with no or very little snow

      - construction zones: blacklisting, remote monitoring & manual mapping, detection of cones, barriers, re-painted lanes

      - self-driving cars with 360 degree view and plenty of patience and no distraction are safer for bicyclists than manually driven cars

      • carapace 1766 days ago
        If navigating the world were that easy we wouldn't be (non-A)GI. The bulb on the end of our spine doesn't use 1/5th of our oxygen intake (or whatever huge amount it is) because it makes our foreheads look sexy.

        An auto-auto (I'm using that term unabashedly for "self-driving cars", you can too) has to be able to perceive and understand that the car in the next lane with the mattress poorly tied down might, at any moment, suddenly become two large moving objects. And so on.

        If we set it up right the whole fleet will learn from the experiences of each member, likely in near-realtime. More than just traffic conditions, this will help with things like emergency response.

        (We should be making self-driving nerf golfcarts with a top speed of maybe 15kph. Duh! We could make those today and sell a million and then incrementally make f'ing sportscars and sh!t.)

        • marcosdumay 1766 days ago
          That bulb is there because of politics, and because speech sounds sexy. There are many animals capable of navigating the world even better than us (most fly, for obvious reasons) that don't have anything near the size of our brains.
      • patrick5415 1766 days ago
        >- lidar at night works just fine

        Yes, let’s rely on a single sensor.

        > - plenty of cities with no or very little snow

        But plenty of cities have plenty of snow. So perhaps not a deal breaker in a limited set of circumstances.

        > - construction zones: blacklisting, remote monitoring & manual mapping, detection of cones, barriers, re-painted lanes

        Why should we add to already high construction costs just because the self driving tech isn’t up to snuff?

        >- self-driving cars with 360 degree view and plenty of patience and no distraction are safer for bicyclists than manually driven cars

        Citation?

        • skybrian 1766 days ago
          Why shouldn't surrounding infrastructure adapt a bit for self-driving cars? Look at everything that had to change for railroads, cars, and airplanes.

          If railroads were invented today they would never be allowed. They can't stop for miles? You have to teach everyone everywhere to stay off the tracks? How's that supposed to work?

          It's good that our safety standards are a lot higher today for new tech, but perhaps there are a few common sense rules that could be taught, rather than requiring perfection?

          • patrick5415 1766 days ago
            Honestly? Because I see self driving cars as expensive toys for rich people[1], and I don’t feel like spending tax dollars or making other concessions to subsidize that.

            I think the analogy to railroads is a bit tenuous. There are immediate benefits to connecting two sufficiently separated points with a rail line. A single stretch of properly marked and instrumented roadway is relatively worthless[1].

            [1] Trucking being a potential exception.

            • povertyworld 1766 days ago
              You do realize horseless carriages were also expensive toys for rich people when they were first introduced?
              • kiliantics 1757 days ago
                Yes, and the infrastructure we ended up building for them has made horrible changes to most cities and had extreme negative effects on the environment
        • AstralStorm 1766 days ago
          You also have night vision and radar to detect metal objects. Plus sound which is still barely utilized besides Doppler.

          Ultimately humans cannot match the senses of an automated car; the problem is in processing and integrating, which we are pretty far from solving in the general case.

          • mcguire 1766 days ago
            I'm not so sure about that. For one thing, unless they've changed a lot more than I think they have since the last time I got to fool with one myself, the sensors have very limited range. Take a look at the screens displaying "what the car sees" in this self-promotion video from Waymo:

            https://youtu.be/B8R148hFxPw?t=148

            What the car is in essence doing is driving by GPS and maps while staring at its feet. It is not as bad as driving with your eyes closed, but it's not that much better.

            Further, some of the infrastructure it is relying on, like lane markings, is very poorly maintained.

      • cm2187 1766 days ago
        > plenty of cities with no or very little snow

        But what happens that rare day it snows? Hundreds of deaths? Cars get recalled for much less than that.

        • laichzeit0 1766 days ago
          Drive it in manual mode like we do right now? I mean even an autonomous car that could handle 80% of normal driving just fine would be great. And by 80% I'm talking about driving on a freeway without crashing into a barrier in broad daylight.
          • mcguire 1766 days ago
            Gonna suck for those people who don't own cars and are relying on the fleets of privately-owned pseudo-taxis.
        • tim333 1766 days ago
          There have been a bunch of prototype self driving cars driving in snow. I'm not sure it's fundamentally a much different problem to self driving in general. eg https://www.engadget.com/2018/05/08/waymo-snow-navigation/

          Sure they may perform worse in snow but humans do too.

        • Majestic121 1766 days ago
          I don't know if you live in such a city, but I can tell you that in mine, on the rare days it snows, traffic is extremely chaotic, with people having slow-paced crashes all over the place, and I'd wager AI is good enough to do that as well.
          • cm2187 1766 days ago
            My understanding is that AI has different failure modes. If it doesn't see the road, it might not realise it doesn't see the road and just go off road. It's not necessarily going to react like a reasonable human.
            • Majestic121 1766 days ago
              I see your point, and it's indeed valid : I do remember a tesla crash where the autopilot simply followed an old line on the road instead of the regular one, thinking all was fine until it hit a column, killing the passenger.

              But in this specific case of snow, I think this could be handled in a decent way by AI, as I would not expect snow conditions to be too hard to detect. Even something like "grip is not good enough for autopilot/snow detected on/around the road, stop on the side or slow down to the point of being impossible to have a fatal car crash until driver takes over" would be good enough.

      • TheOtherHobbes 1766 days ago
        Self-driving cars may be safer than manually driven cars in certain contexts.

        What's missing from AI is reliability. It's brittle, it works well in some situations and not at all in others.

        Which is a problem, because you can never be sure how well it's working for you.

        There's no anti-Dunning-Kruger-function to say "I'm sorry Dave, I can't handle this, you should take over" - partly because that's not something you want to experience driving into a hail storm at 70mph on a freeway if you're asleep, but also because it would require a level of domain-specific AI self-awareness that is barely on the radar in most domains.

        • yonkshi 1766 days ago
          Unfortunately any AI system will encounter the verification dilemma: the more powerful an AI system becomes, the less verifiable it is.
          • AstralStorm 1766 days ago
            Verification is easy, just like with humans. It's called a driver's license. And since the AI does not tire and can probably be sped up, you can put it through hundreds of thousands of hours of driving quickly in a good simulator with adversarial and normal situations, then rate it at various tasks. Just like we should do with human drivers, but fail to.

            Explanation is harder. But we probably shouldn't care; even in courts, people often cannot explain what they did while driving and why, or they just lie. The thing is, for liability purposes you have to ensure it is not a systematic defect, and that a good human driver would not have been able to handle the situation either.

            • ak39 1766 days ago
              In the world of investments, that would buzzwordily be called “back testing”.

              But is it sufficient to answer _why_ the machine chose to act a certain way given a certain set of instantaneous input criteria?

              • lmm 1766 days ago
                No. But, as the parent post said, we don't always know why a human driver chose to act a certain way either. So that shouldn't be a blocker.
                • bumby 1766 days ago
                  But we evolved to have empathy to help understand how humans act in the cases we don't have complete or good information.

                  This is why "crazy" people make us so uneasy. They don't fit our mental models for how a human should act. Would you be comfortable driving with a road full of unpredictable "crazies"?

                  I wonder if we'll ever have the same level of trust with AI as humans if it is still being used at a black box level.

          • DonHopkins 1766 days ago
            Bad AI will suffer from the Dunning-Kruger effect and overestimate its abilities, while good AI will suffer from Imposter Syndrome and underestimate its abilities.
      • 0xfaded 1766 days ago
        A standardized QR code printed on cones to say "construction ahead, do this" seems relatively simple compared to correctly interpreting the construction site. It would also produce automatically labeled training data.
        • skgoa 1766 days ago
          And then someone runs over the cone and 100 following cars crash into construction workers at full speed. The core issue of automated driving isn't solving any of the 1000 simple driving tasks. It's creating an entire distributed system that is inherently robust against errors, failures or manipulation.
      • SomeOldThrow 1766 days ago
        If I just wanted to drive around a city I’d grab a smart car. I have taxis to take me home at night when I’m drunk and the busses have stopped running. This solves a problem only rich people have: traffic is tedious. The rest of us just take public transit and listen to podcasts or music to bypass it.

        From my perspective the idea of a self driving taxi is the opposite of public transit in the worst possible way: pay more so fewer people have a job and you’re even more lonely and alienated than before. If I indulge I’m just making myself and everyone around me more miserable, and no amount of marketing will convince me otherwise.

        The area where I’d appreciate this the most is the country where uber and lyft haven’t reached yet, and that’s precisely the environment where self driving cars will take the longest to reach. We’ll see.

        Plus, I can’t wait to see the game of “beat the shit out of the corporate empire’s self driving car for fun when drunk”.

    • JustSomeNobody 1766 days ago
      These are made up numbers, but I use this to explain the scale to people. If it costs $1Bn to reach 80%, it'll take another $100Bn to reach 90%. There isn't enough money to reach 100% for two reasons. 1) You can't get to 100% and 2) It will take so long people lose interest. You can't sustain the hype that long.
    • computerex 1766 days ago
      Right. There has been a lot of progress within the last decade in narrow AI, but AGI remains as elusive as ever.
    • ilaksh 1765 days ago
      The best self driving systems do not have trouble dealing with bicyclists. The one time it happened with Uber was a combination of user errors starting with a very serious misconfiguration of the system.

      Also, AGI and self-driving cars have almost nothing in common. Driving cars is a very narrow AI task.

    • Veedrac 1766 days ago
      What GOFAI technique got even close to 80% of the way? Maybe pathfinding, if I'm being particularly generous?
    • jsinai 1766 days ago
      These days it’s more 95% easy, last 5% hard.
    • misterman0 1766 days ago
      >> We're not going to have AGI any time soon.

      In your mind, how many inventions are we from AGI? 1? 1000?

      Also, given we have our very brightest working in the problem space (MS, Google, FB, OpenAI ...), how long would each remaining invention take?

      It seems plausible to me that we are only one invention from AGI, one novel combination of two or more existing solutions, and that this may come from anyone and at any time.

      • onion2k 1766 days ago
        We've drawn a couple of circles and we want to draw the rest of the owl, and you're telling us there's only one step.
        • misterman0 1766 days ago
          Yes. Because we don't know it is an owl we should draw. It's equally likely AI is just three circles combined.
          • lsc 1766 days ago
            exactly... we don't even have a definition of consciousness that is better than Jacobellis v. Ohio - For that matter, I'm not sure we even have it that good. I'm not sure I'd recognize a non-human consciousness when I saw it. I'm not sure anyone would.

            I mean, sure, I can totally see the idea that consciousness is maybe some emergent property of complex systems... in which case, sure, we could stumble upon it by accident. That is totally possible; we're creating complex self-replicating systems all over the place.

            But as far as intentionally constructing the thing that will come after humanity? we can't do that until we know what, exactly, it is we are constructing, and as far as I can tell, we're pretty far away from any idea of exactly what that is.

            • pure-awesome 1766 days ago
              Is consciousness a necessary component for AGI, though?
              • misterman0 1766 days ago
                I don't think it's necessary for us to get a machine to reach "consciousness", whatever that is, before we realise it's overtaken our mental ability to combine two existing concepts in order to create a new one.

                I imagine this is what's going to happen: we humans have drawn two circles (thank you, dear fella, for that analogy) and we are about to draw a third, combining two existing concepts.

                At some point we will have drawn so many circles that the next one will not be drawn by us. It will be drawn by AI.

                Who knows when we'll reach that level of machine intelligence and if it even requires them, the machines, to be conscious? But we will, without any doubt, reach a point where there have been so many circles drawn that they are now smarter than us at drawing them. Absolutely without any sort of doubt in my mind this will happen.

                Tens of thousands of people comprehend Einstein's work. It used to be hundreds. We're smart as shit these days. Fuck me if it's not before I die.

                • lsc 1766 days ago
                  >Tens of thousands of people comprehend Einstein's work. It used to be hundreds. We're smart as shit these days.

                  In some ways, I feel like this is the real singularity. This trend towards feeding and offering education to everyone. Like, it used to be that to be an intellectual, you'd need to be born rich to get the free time and education. I mean, being born rich still helps, sure, but a lot more people are getting a shot.

                  On the other hand, it's a process that levels off at some point, like everything else. The population is leveling off; as we bring education and leisure time to more and more of the world, eventually most of the people who have the capability to do this sort of thing will have done it.

                  I mean, I see that with computers, too... like with the leveling out of Moore's law; exponential processes, in nature, tend to not remain exponential.

                  • marvin 1766 days ago
                    If you subscribe to the theory of great and continuing, accelerating change due to technological improvements, advanced technological education for everyone is an expected observation during the decades or centuries it's occurring :)
              • lsc 1766 days ago
                >Is consciousness a necessary component for AGI, though?

                I always thought so. Like, without volition, it's just a really good natural language tool, and not a general intelligence.

                What would an artificial general intelligence without consciousness look like to you?

                • TheOtherHobbes 1766 days ago
                  Like the intellectual equivalent of a beast of burden - good at solving open problems by (apparently) intuiting connections and making inferences, probably with the appearance of creative operation and experimental learning, but lacking any genuine desire or personal agency.

                  IMO this is much more plausible as AGI than some kind of mythical general meta-intelligence that has a virtual soul.

                  I don't see why consciousness or agency are at all necessary for AGI - except as a science fiction trope.

                  • eli_gottlieb 1766 days ago
                    Besides which, consciousness and agency are two different things, damnit! I've got a github repo in which I train agents by policy optimization. I have no illusions that my Pytorch models are conscious.
                  • lsc 1766 days ago
                    >good at solving open problems by (apparently) intuiting connections and making inferences, probably with the appearance of creative operation and experimental learning, but lacking any genuine desire or personal agency.

                    so, like a philosophical zombie? like something that appears to be conscious but isn't?

                • fouc 1766 days ago
                  I mean, we could call it an "artificial general intelligence" or we could call it a "generalized optimizer & problem solver"

                  So basically if we can have a system that can solve any problem thrown at it, that's essentially the AGI.

                • AstralStorm 1766 days ago
                  A really efficient slave. Essentially it does not get creative and is ultimately predictable, even in failure modes. It would also have no subconscious drives to do unexpected things.

                  Capitalistic dream.

                  • carapace 1766 days ago
                    Asimov's Three Laws of Perfect Slavery (Forgive me Isaac!)

                    If such beings were conscious they would be saints.

                    One of my favorite AI jokes:

                    Q: What's AI?

                    A: When the machine wakes up and says, "What's in it for me?"

                    It would be fetish to actually make humanoid servitor robots. IMO.

                  • lsc 1766 days ago
                    without creativity... I don't think it would make for a very effective slave in a world that is constantly changing.

                    Is creativity possible without consciousness? I don't pretend to be an expert.

                    • AstralStorm 1766 days ago
                      Probably possible, yes. Subconscious drives are a thing. You could have sentience without consciousness. (The difference is that consciousness can change or modify drives and recognize its own actions on the world - its own agency. It is related to sentience and the line between the two is thin and blurry.) I'm not sure it is possible to be conscious without having been at some point sentient.

                      The idea was that such an AI would never question its priorities or change them, not that it wouldn't be able to solve novel problems.

                      (Which is the definition of intelligence, duh. And shows how far we're from that. What we have is glorified pattern matching algorithms and some basic symbolic logic and clustering.)

                  • walterstucco 1766 days ago
                    You don't need consciousness to be creative.

                    You don't even need it to be free.

                    Are bacteria slaves?

                  • mruts 1766 days ago
                    That sounds more like a Communist dream. Capitalism doesn’t reward really efficient slaves.
          • mcguire 1766 days ago
            I may not know an owl until I see it, but I do know that's not it.
      • dorkwood 1766 days ago
        Applying this method of thinking, is there anything that we aren't one invention away from?
        • solveit 1766 days ago
          Large scale colonization of Mars would require multiple inventions. We know this because we have a good understanding of what a successful colonization of Mars entails, and have a pretty good idea of the main technical obstacles. AGI, on the other hand, might be one breakthrough away or two hundred breakthroughs away. We just have no idea what makes intelligence tick.
        • misterman0 1766 days ago
          I'm just trying to stay positive.

          If you talk optimistically about AI (or big data for that matter) you quickly get downvoted these days. I'm curious as to why that is. I thought this was 'marica. Isn't this 'marica?

          Edit: my apologies, that was condescending. I should have finished off with a wink of an eye.

          • striking 1766 days ago
            There's optimism and there's being too vague to disprove. You might be able to dodge downvotes by adding more objective content to your comments.

            History has played out a particular way quite a few times (https://en.m.wikipedia.org/wiki/AI_winter). If you can tell us why today isn't like all the other days using facts, I'll bet the downvotes will stop.

      • EliRivers 1766 days ago
        General intelligence? Does it even exist? I find the evidence that humans have it unconvincing, and I've seen nothing that makes me think we're "one invention" away. The direction of machine learning seems to be specialised tasks on specialised data sets; surely the very opposite of any kind of general intelligence.
        • z3phyr 1766 days ago
          So get whatever intelligence humans have into computers. That was the goal. Call it general intelligence or call it x.
          • EliRivers 1766 days ago
            Human intelligence works well [0] from inside human shaped objects with human shaped biology, in human shaped societies, and to no small degree for doing human-style tasks.

            I am not convinced that putting that human intelligence, whatever it is, into a silicon box will give us the results we hope for (or is even a sensible ambition, given the degree to which our intelligence is part of our biology).

            [0] Actually, maybe it doesn't work really well - maybe it's a slow-motion car crash over millions of years. But it works well enough that we clearly want more of it.

            • antepodius 1766 days ago
              Whatever human intelligence is, there's clearly some part of it that can do things like design computers and build houses. That part, at least, people would like to be able to match in performance using machines. It's 'intelligence' in the sense of being able to do things like think up clever strategies in novel environments that people want to put in a silicon box.
              • EliRivers 1766 days ago
                If that is the goal, it's certainly not what the current batch of machine-learning and hype-train riders are chasing. They (or at least, the ones doing it via huge training sets) seem to be working on the very opposite of novel environment situations. I wonder if anyone is seriously pursuing that.
      • dreamcompiler 1765 days ago
        That's exactly what they thought in the 1950s and 1960s, 1970s and 1980s. And oh by the way, our brightest minds were working on it then too.
  • pron 1766 days ago
    In 1949, some years after the invention of neural networks, Norbert Weiner, one of the leading minds of the time, was convinced that AI (AGI as you may call it) or a full understanding of the brain is no more than five years away. Alan Turing thought Weiner was delusional, and that it may take as much as fifty years. Seventy years later, we are nowhere near insect-level intelligence.

    I don't see any fundamental barrier preventing us from achieving AI, but if someone from the future came to me and said that AI will be achieved in 2130, I would find that quite reasonable. If they said it will be achieved in 2030 or 2230, I would find those equally reasonable. Our current scientific understanding is that we have no idea how far we are from AI, we don't know what the challenges are, and we don't even know what intelligence is. We certainly have no idea whether the approach we are now taking (statistical clustering, AKA deep learning) is a path that leads to AI or not.

    In the sixties, the leading minds of that time were also working hard on the problem and did not find it any further away than we do today. That some people are optimistic is irrelevant. The fact is that we just have no idea.

    • cr0sh 1766 days ago
      > Seventy years later, we are nowhere near insect-level intelligence.

      That's arguable: For instance, we have the entire connectome of c. elegans mapped out; we can easily simulate it, and it seems to act the same as the actual nematode. So, in one sense, we are at that level.

      However, we still have no clue how such a simple system actually works to produce the level of "intelligence" it has. So in that sense, we're not at that level at all.

      > We certainly have no idea whether the approach we are now taking (statistical clustering, AKA deep learning) is a path that leads to AI or not.

      One clue we do have:

      We may not be on the right path with that method; it's something the "grandfather" (or whatever) of AI (Hinton) has mentioned, and which I have written about before...

      That is, the fact that we currently have no understanding of the mechanism by which biological neural networks implement anything like "backpropagation". As far as we currently understand, we have yet to find such a mechanism that would allow for it.

      It's also one of the leading reasons why our current artificial neural networks consume so much power, as compared to biological systems...

      • pron 1766 days ago
        > For instance, we have the entire connectome of c. elegans mapped out... So, in one sense, we are at that level.

        Well, whatever "intelligence" C. elegans has, I think everyone would agree that it's far from insect-level; it's microscopic-nematode-level. But I am not sure a simulation of C. elegans rises to the level of "artificial". As you note, we don't understand it yet. But we may have already built systems that are more "intelligent" (whatever that means) than C. elegans, and we may have done that decades ago.

        > From what we currently understand, as I currently understand it, we have yet to find such a mechanism that would allow for it.

        True, but our path to artificial intelligence may not end up going through neural networks at all. We've not achieved flight by mimicking biological flight. I'm not saying it won't, either, but we cannot say for sure that it will. We really don't know.

      • synthmeat 1766 days ago
        > We can easily simulate it.

        To my knowledge, this is nowhere near the truth. They got it to wiggle by basically scripting muscle contractions; no neural networks are involved in the process. When you turn on the network, it does nothing. (And I love the project, the idea and the community behind it.)

    • carapace 1766 days ago
      But Weiner's Cybernetics withered on the vine, or floated off into fluffy "Second Order" cybernetics.

      There was an experiment, I don't have the details to hand at the moment, I'm sorry, but Gordon Pask and someone else made a cybernetics "machine" out of a dish of chemicals, and got it to grow an "ear" (filaments that were sensitive to certain sound vibrations, just like the hair cells in your inner ear)!

      If you really think about what they did (and you have to know some Cybernetics to understand it) then it's actually pretty scary. Like more-scary-than-atom-bomb scary.

      I'm only mentioning it here because we're about to need to grapple with this sort of thing in a minute or two...

      "Introduction to Cybernetics" W. Ross Ashby (1956) http://pespmc1.vub.ac.be/ASHBBOOK.html (PDF kindly made available from that page.)

      • pron 1766 days ago
        My point wasn't about the particular form Weiner's cybernetics ended up taking; I was only commenting on his optimism regarding neural networks in the late forties and early fifties (you can say it's about his vision of Cybernetics rather than it's actual manifestation).
        • mbeex 1766 days ago
          Sorry for a German's nitpicking, but it doesn't seem to be a typo: the man's name is Wiener.
        • carapace 1766 days ago
          Fair enough.
    • DonHopkins 1766 days ago
      >Seventy years later, we are nowhere near insect-level intelligence.

      While the insects have been getting smarter!

      https://www.tedmed.com/talks/show?id=7286

      https://www.dailystar.co.uk/news/latest-news/403924/spiders-...

  • YeGoblynQueenne 1766 days ago
    >> Deep learning algorithms have proven to be better than humans at spotting lung cancer, a development that if applied at scale could save more than 30,000 patients per year.

    It's not easy to scale deep learning because deep neural nets have a very strong tendency to overfit to their training dataset and are very bad at generalising outside their training dataset.

    In a medical context this means that, while a particular deep learning image classifier might be very good at recognising cancer in images of patients' scans collected from a specific hospital, the same classifier will be much worse in the same task on images from a different hospital (or even from a different department in the same hospital).

    To overcome this limitation, the only thing anyone knows that works to some extent is to train deep neural nets with a lot of data. If you can't avoid overfitting, at least you can try to overfit to a big enough sample that most common kinds of instances in your domain of interest will be included in it.

    So basically to scale a diagnostic system based on deep neural net image classification to the nation level one would have to train a deep learning image classifier with the data from all hospitals in that nation.

    This is not an easy task, to say the least. It's not undoable, but it's not as simple as having someone at Hospital X download a pretrained model in Tensorflow and train its last few layers on some CT scans.
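
    For illustration, a minimal sketch of that "retrain the last few layers" recipe, using torchvision here rather than Tensorflow and a hypothetical ct_scan_loader. The mechanics are the easy part; curating and validating the data is the hard part:

      import torch
      import torch.nn as nn
      from torchvision import models

      # Generic ImageNet backbone, frozen; only a new 2-class head gets trained.
      # (torchvision >= 0.13 weights API)
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for p in model.parameters():
          p.requires_grad = False
      model.fc = nn.Linear(model.fc.in_features, 2)

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      # ct_scan_loader is hypothetical: batches of labelled scan crops. Building
      # and validating that dataset is where the real difficulty lies.
      # for images, labels in ct_scan_loader:
      #     loss = criterion(model(images), labels)
      #     optimizer.zero_grad(); loss.backward(); optimizer.step()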

    • bodono 1766 days ago
      This statement is false, as recently demonstrated by DeepMind on retinal scans. Not only did they generalize outside of the training dataset but they were able to use the features learned by the model on an entirely different type of scanning device.

      https://www.nature.com/articles/s41591-018-0107-6.epdf?autho...

      "Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device."

      • YeGoblynQueenne 1766 days ago
        In the paper you link to, the researchers trained an image classifier on data collected from 32 sites of the Moorfields NHS trust. The trained model was tested on, presumably held-out, data from the same dataset.

        This is an example of scaling a model beyond a dataset collected from a single site. It is not contrary to what I say in my comment.

        The researchers further tested their model on data obtained from a different device than it was originally trained on. This data was collected from the same hospital sites. The original model performed poorly on this new data and was re-trained to improve its performance.

        This does not demonstrate an ability to generalise to unseen data, only an ability to adjust a model to new data, by re-training.

        • bodono 1766 days ago
          It contradicts your statement: "it's not as simple as having someone at Hospital X download a pretrained model in Tensorflow and train its last few layers on some CT scans" Because in this case it was as easy as taking a model from a totally different modality and retraining the first (in this case) few layers to accommodate the new device. Furthermore the original training used 15k scans and the retraining only required 152 scans. This is totally reasonable and clear evidence of transfer and generalization. Moreover, even human operators require retraining on new devices!
          • YeGoblynQueenne 1766 days ago
            My Tensorflow comment was a bit unclear. I meant that you can't just download a generic model like the kind that is readily available, e.g. one trained on ImageNet or CIFAR etc, and expect that you can retrain it easily and get a diagnostic tool that is competitive with an expert. The models in the paper you link were specifically trained on medical imaging data.

            My point is that you need a lot of work to make this work even for one hospital, let alone scale to many, even more so scale at the level of a national health service. I don't see that the paper you link contradicts this.

            Edit: if I may summarise: I said "it's not simple" not "you can't do it".

            Transfer learning is not generalisation to unseen data. If the pre-trained model and the end model don't have any common instances it doesn't work [Edit: "don't have any instances with a common feature space" is more clear].

            Also, you're talking about generalisation to new devices. My understanding is that this is only one aspect of the difficulties with scaling image recognition for medical diagnoses to data from different sites.

    • hadsed 1766 days ago
      It is getting easier to scale deep learning in difficult domains. It'll likely be some combo of pretrained semi- or self-supervised models that are transferred and fine-tuned. You mention large data, but we also have the knobs of inductive bias and training objectives. Once we crack inductive biases for CT scans, perhaps by analyzing a model trained on large amounts of data, then scaling gets easier. I don't think the situation is too dire, it's just a very difficult and high-risk domain. It could also get a lot easier once we figure out better training objectives, but just like inductive biases, those are pretty domain specific so they take some time to discover.
    • epiphanitus 1766 days ago
      >>the same classifier will be much worse in the same task on images from a different hospital (or even from a different department in the same hospital).

      Has anybody figured out why this is the case? Could it be socioeconomic factors? Or the presence of different toxic pollutants across different communities?

      • mcguire 1766 days ago
        More likely different imaging set-ups.
  • Barrin92 1766 days ago
    In my opinion the term intelligence itself is misplaced for machine learning tasks. Every problem that is solved with ML and "big data" appears to me to be a perception problem (which wouldn't be surprising, because the mechanism is inspired by human vision, not cognition, so perception is what it naturally lends itself to).

    As a specific example, a few months ago or so openai released their text generation tool and branded it as "too dangerous to release", claiming it could, with the help of AI, generate believable texts.

    But what it generated was simply natural-sounding gibberish. There were plenty of sentences in the text along the lines of "before the first human walked the earth, humans did...".

    What, for me at least, lies at the core of intelligence is understanding semantics. An intelligent system can recognise the sentence above as flawed because it could extract meaning.

    Everything coming out of the field of ML seems to me just like sophisticated statistics. In many ways symbolic AI to me still seems more valuable, profit aside.

    • kaolti 1766 days ago
      I agree. Extracting meaning is NOT a math problem, meaning comes from the human context and context is infinite. Hence different humans extract different meanings.

      I think AI and ML are great for processing large amounts of data and looking for patterns. Patterns on their own don't mean anything though, it's always up to us to interpret them.

    • ilaksh 1765 days ago
      Right, that tool makes gibberish and didn't understand much of anything.

      AI research actually started by focusing on symbolic AI but eventually it was found to be too difficult to define all of the symbols. See the Cyc project.

      AGI as a field aside from narrow AI/narrow ML has been making useful but not mind-blowing progress for decades. The sidebar and recent posts/post history on Reddit r/agi has useful links for learning about the field. Also more and more posts on r/machinelearning are providing more general purpose tools that address some problems like better semantic understanding.

      There is a promising strain of research that is focusing on core AGI requirements. One of the big challenges is bridging the gap between low level sensory information and high level concepts. This is known as the symbol grounding problem. In my mind the approaches tackling that type of challenge have a lot of promise. And the amount of research in that area is growing.

    • Wiretrip 1766 days ago
      In the text generation tool outlined above (and indeed many of the convnet-based visual networks), the hidden layers are there precisely to extract 'meaning'. The lower layers (closer to the source input) deal with syntax and feed upwards to hidden layers that extract semantic features, which in turn feed upwards to more layers, each with a bigger overview of the semantic features and thus ultimately the context. That's the idea anyway.
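
      As a toy sketch of that layering idea (a made-up stacked encoder with forward hooks to inspect the intermediate representations - whether the upper layers really capture "meaning" is exactly the open question):

        import torch
        import torch.nn as nn

        # Made-up stacked encoder: token ids -> embeddings -> two hidden layers -> head.
        model = nn.Sequential(
            nn.Embedding(1000, 32),           # "syntax-level" input representation
            nn.Linear(32, 64), nn.ReLU(),     # lower layer: local features
            nn.Linear(64, 64), nn.ReLU(),     # upper layer: features of features
            nn.Linear(64, 10),                # task head
        )

        activations = {}
        def grab(name):
            def hook(module, inputs, output):
                activations[name] = output.detach()
            return hook

        model[2].register_forward_hook(grab("lower"))   # output of first ReLU
        model[4].register_forward_hook(grab("upper"))   # output of second ReLU

        tokens = torch.randint(0, 1000, (1, 7))         # a hypothetical 7-token sentence
        _ = model(tokens)
        print(activations["lower"].shape, activations["upper"].shape)
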
      • foldr 1766 days ago
        >the hidden layers are there precisely to extract 'meaning'

        That is just wishful thinking, no? I mean, there is no particular reason to think that the hidden layers will actually do this with any high degree of success.

  • xiaolingxiao 1766 days ago
    I can attest. While doing research at a T1 university, all the professors were mildly disgusted by the hype pushed out by startups, and even by Google's own internal marketing department.

    Nonetheless, they too are minting the same nonsense in the "introduction" sections of academic papers. It's a clear case of "everyone is playing the game, so I have to play or be left behind."

    • ImaCake 1766 days ago
      I used to work on fundamental molecular microbiology. We looked at what happened when DNA replication went wrong in E. coli.

      What I used to do when writing or speaking about it was to start with cancer or antibiotic resistance as if anyone in my field gave a crap about either of those topics. Sure, we do care about those things in the broad sense, but we didn't consider ourselves to be on the front line of solving either of those problems.

  • solidasparagus 1766 days ago
    The author seems confused about what artificial general intelligence is. People have not meaningfully moved towards AGI - it's still a distant pipe dream.

    The closest we've gotten is probably a Dota bot that's pretty good as long as you give the bot a huge advantage. Which is an incredible piece of technology, but about as close to AGI as an ant is to a human.

    • ramraj07 1766 days ago
      The ant-to-human analogy is surprisingly apt in a way you might not consider though - evolutionarily speaking, ants and humans diverged relatively recently, if you count things from the beginning of life. In that way, we might also be closer to AGI than some might think.
      • slavik81 1766 days ago
        Ants are from the other branch of creatures with bilateral symmetry. That fork would have occurred before the Cambrian explosion, so that's at least half a billion years ago. That's a long time!
        • stoobs 1765 days ago
          Which is relatively new compared to geological or cosmological time.
          • slavik81 1764 days ago
            It's a while even on a geological time scale. By comparison, Pangea was a mere 175 million years ago. Half a billion years is 1/9th of the age of the Earth. Another half billion years in the future, geological changes are expected to have ended the carbon cycle, killing most plant life on Earth.
    • isolli 1766 days ago
      Agreed. This paragraph (in an otherwise insightful essay) was particularly jarring:

      "Remarkable things are happening in the field of artificial (general) intelligence. Deep learning algorithms have proven to be better than humans at spotting lung cancer."

      This is very emphatically narrow AI.

      • AstralStorm 1766 days ago
        Pattern matching is not intelligence. Calling it AI rather than ML is disingenuous.

        Tell me when a generic algorithm can solve many different games in different environments, at the very least.

        • solidasparagus 1765 days ago
          Pattern matching is certainly intelligence - classical AI was focused on identifying patterns and reacting to them with techniques like decision trees.

          You can make a fairly convincing argument that intelligence is nothing more than a hierarchical system of pattern matchers.
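
          As a minimal illustration of that classical sense of pattern matching, a toy decision tree (scikit-learn, made-up features) learns explicit if/then rules from a handful of examples:

            from sklearn.tree import DecisionTreeClassifier, export_text

            # Made-up features: [has_wings, lays_eggs, num_legs]
            X = [[1, 1, 2], [0, 1, 4], [0, 0, 4], [1, 1, 6]]
            y = ["bird", "reptile", "mammal", "insect"]

            clf = DecisionTreeClassifier().fit(X, y)
            print(export_text(clf, feature_names=["has_wings", "lays_eggs", "num_legs"]))
            print(clf.predict([[1, 1, 2]]))   # -> ['bird']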

    • H8crilA 1766 days ago
      But that's precisely the point. A microtargetting solution for McDonalds isn't A[G]I, and, as the article states, everyone knows it. But the brand power of calling something AI is too strong to resist.
    • Causality1 1766 days ago
      Not even an ant. If AGI is a human then what we have is the equivalent of synthetic RNA molecules.
      • klmr 1766 days ago
        What? Ants don’t have general intelligence, and even ant colonies’ decision making (= simple swarm intelligence) is readily replicable in a programmed system, and has been, for a while.

        I don’t think a gradual scale is very helpful because I don’t think that the progression from current-generation AI to AGI is going to be gradual (it will require at least one paradigm shift). That said, if you want to compare AI progress to actual animals then our current-gen AI way beyond ants. Note that, while we haven’t fully mapped the neurons/connectome of ants yet, this is unnecessary to emulate their decision-making power. And we have mapped (and can simulate) the full connectome of simpler animals (e.g. C. elegans, P. dumerilii) so we’re definitely a long way beyond single molecules.

        • computerex 1766 days ago
          If you are referring to the open worm project, then the conclusions you have drawn are exactly the opposite of the ones I have drawn.

          As I understand it, OpenWorm is a hodgepodge of statistical and numerical methods that try to replicate the sensorimotor behavior of C. elegans. OpenWorm is neither complete, accurate, nor elegant, despite our knowing the C. elegans connectome and having mapped the roughly 900 cells in the worm's body.

          • klmr 1766 days ago
            I wasn’t explicitly referring to that, it’s just one of many efforts. Anyway, you’re certainly right that none of the existing efforts are “elegant” but that’s hardly relevant. What matters is that the connectome is fully mapped, and that we can accurately simulate arbitrary behaviour. The issue with projects such as OpenWorm is that they have so far not been successful in generating new insight (this may be connected to your issue with lack of elegance) but this is distinct from being able to accurately simulate behaviour. Another issue is that of simulating the physical environment because — surprise, surprise — simulating the worm neurons without any realistic external stimuli is a pretty pointless exercise for most purposes.

            But pick any set of stimuli you like, feed it into the models and you get a response that corresponds exactly with empirical observation. I’d therefore definitely call the neuronal model itself accurate and complete.

            • computerex 1766 days ago
              No actually we can't do arbitrary simulation of c elegans. Can you link me towards a publication which contains validated results supporting your assertion?
      • tim333 1766 days ago
        For RNA molecules they are doing quite well at trashing us at go, chess, dota and starcraft.
    • neonate 1766 days ago
      > a Dota bot that's pretty good as long as you give the bot a huge advantage

      Excuse my ignorance but what is the huge advantage? And what happens if you don't give the bot that?

      • solidasparagus 1766 days ago
        The details require a little knowledge of Dota, but essentially the bot only knows how to play a much simplified version of the game.

        They play with a reduced number of playable heroes (5 of the 100+). Each hero changes the dynamic of the game and many heroes have unique interactions with each other, so this is a very substantial simplification.

        Additionally, Dota is a game where mechanics (the ability to quickly and precisely click the thing you intend to click) play a huge role and bots have a natural advantage there.

        Another important skill in Dota is watching everything that's going on, positioning your screen in the right place and paying attention to the minimap. As bots don't interact with the game via a screen (I believe the game exposes variables that describe the full state of the game that the bots can see), this is another advantage they have.

        • neonate 1766 days ago
          Thanks, that's interesting. I don't recall those caveats being explained very prominently when the "machines beat humans" articles came out about that.
          • solidasparagus 1765 days ago
            The DotA bot is such an impressive accomplishment that the caveats aren't usually worth highlighting.
    • computerex 1766 days ago
      The DotA bot is a very specific subproblem. Look at the open worm project for progress towards AGI
      • solidasparagus 1766 days ago
        There are broadly two approaches to AGI. Behavior-driven or biology-driven. That's a cool biology-driven project, but for now, OpenAI Five is much closer to AGI than OpenWorm is.
        • computerex 1766 days ago
          Do you care to cite your assertions? The dichotomy you are referring to doesn't exist. Do you have a reliable source that says that there are two broad approaches to AGI, behavior driven and biology driven?

          https://en.m.wikipedia.org/wiki/Embodied_cognition

          There is widespread consensus that embodiment is significant for AGI. Modern approaches to AGI tend to be biologically inspired.

          OpenAI five is not even remotely in the same class as an AGI system.

          • solidasparagus 1765 days ago
            I doubt I can find something to cite - it's an observation that I didn't think was particularly questionable. We have researchers like OpenAI trying to move towards AGI by advancing RL through specific applications. Others like OpenWorm are trying to mimic the building blocks of life. Of course biology inspires all AI, but given the nascent state of AGI, it seems like you have to choose between those two approaches.

            And if we're being pedantic, can you cite your claim that there "is widespread consensus that embodiment is significant for AGI"? I find it hard to believe that there is widespread consensus about anything around AGI.

  • derka0 1766 days ago
    The hype is BS but narrow AI in the context of automation is here. Jobs are so specialised nowadays (driving, cashiers, fulfilment, paralegal, diagnostician ...) that a narrow AI (i.e. a glorified automation algorithm) that can do just 10% better at a cheaper cost will take down the job. The confusion is real (AI, AGI, terminator...) but pattern recognition software powered by big data has already proven its business value and is here to stay.
  • mindgam3 1766 days ago
    > The technologists know it’s bullshit. Fed up with the fog that marketers have created, they’ve simply ditched A.I. and moved on to a new term called “artificial general intelligence.”

    Not to detract from an otherwise excellent BS takedown, but unfortunately the author fails to mention that there’s a non-zero possibility that AGI itself is merely taking the bullshit to the next level.

    It continues to astound me how some technologists actually believe AGI is not just inevitable but around the corner, when from my naive perspective (as a rank amateur in machine learning, but with several decades of experience as a professional human being) all I see is machines that can do some form of pattern recognition, but nothing resembling the common sense that the words "general intelligence" seemed to indicate at one point.

    Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.

    • roenxi 1766 days ago
      The real issue we are facing is that everything that we thought was not going to be pattern matching and tree search has turned out to be pattern matching and tree search. I remember my father telling me computers were never going to be able to play Chess, because it required creativity for example. Nowadays a neural network with tree search plays chess that looks remarkably human. A lot of problem domains have fallen to what is basically pattern match and tree search.

      Extrapolating the trend of the last 30 years, there is evidence that computers will be able to solve every task a human can using pattern matching. If that isn't AGI, it might turn out to be better than intelligence.

      The technological future is unknowable, so believing AGI is certain is too much. But believing it certainly isn't around the corner is also too little. If computers can do anything a human can intellectually, they have reached AGI. The list of discrete tasks (games, decision making once the parameters are defined) a computer can't do is a very short list.

      If someone finds an objective function for deciding what decision parameters are important, AGI could be upon us very quickly. As a postscript, I think people radically overestimate human intelligence.
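
      To make "pattern matching and tree search" concrete, here is a bare-bones alpha-beta search sketch; in an AlphaZero-style engine the evaluate() stub below would be the learned pattern matcher. Purely illustrative, not any particular engine's code:

        def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
            # children(state) enumerates legal successor states;
            # evaluate(state) stands in for the learned pattern matcher.
            moves = children(state)
            if depth == 0 or not moves:
                return evaluate(state)
            if maximizing:
                value = float("-inf")
                for child in moves:
                    score = alphabeta(child, depth - 1, alpha, beta, False,
                                      children, evaluate)
                    value = max(value, score)
                    alpha = max(alpha, value)
                    if alpha >= beta:
                        break   # prune: the minimizing player will avoid this line
                return value
            value = float("inf")
            for child in moves:
                score = alphabeta(child, depth - 1, alpha, beta, True,
                                  children, evaluate)
                value = min(value, score)
                beta = min(beta, value)
                if alpha >= beta:
                    break       # prune: the maximizing player will avoid this line
            return value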

      • rhacker 1766 days ago
        I kinda see it this way:

        AGI is Data in Star Trek TNG - trying to be human, making decisions to want to be alive, eventually dreaming and finally using an emotion chip. Another alternative here would be Moriarty or the various doctors in Voyager.

        AI is the Ship in TNG - lots of heuristics to figure out what the user is trying to do. Past usage of commands and relating major events outside the ship with algorithms for battle, life support, etc.. Events categorized by importance and automatic handling to save lives when necessary. Basically an extremely advanced Siri that doesn't really misunderstand you - while at the same time not really caring about you or knowing anything about being alive other than the priorities built into its software.

        I think for the next 100 years we're going to have AI progressing like the ship in TNG. I don't think we'll have AGI until maybe 100-200 years.

        Then again when I was born no one had a fucking clue eventually we would have something like the iPhone and talk to someone in China with <1 sec lag. So my estimates could easily drop to half.

        • TomVDB 1766 days ago
          Your comment reminds me of this xkcd cartoon: https://xkcd.com/1425/

          It's 5 years old now (coincidentally the time span quoted to develop a solution), but recognizing a bird was already considered a solved problem 3 years ago, less than 2 years after the publication of the cartoon.

          Predicting the future is hard.

          • AstralStorm 1766 days ago
            Surprisingly, recognizing a bird is much harder when you count rare species and running birds. Thus, sparse data. Does it recognize penguins too?

            AIs today still fail at it. Some folks were trying to train one to match endangered species and they had to pull mighty tricks to have some 70% accuracy. I think it was here on HN some time ago, but can't recall a link.

            • visarga 1766 days ago
              And yet, average humans can classify even fewer species.
              • tomp 1766 days ago
                With the same amount of training/data as NNs? I doubt it...
                • TomVDB 1766 days ago
                  Do you take into account billions of years of training due to evolution?
                  • tomp 1765 days ago
                    Technically that’s network architecture, not training data... admittedly though humans are “pre-trained” from birth.
              • nikbackm 1766 days ago
                Also after extensively studying ornithology?
        • TeMPOraL 1766 days ago
          AGI as discussed by people working on it is closer to the Ship than to Data. The defining thing about AGI is that can figure its way around arbitrary challenges just like humans do, but not necessarily the same way humans do. There's a concept called "orthogonality thesis" that tells you that intelligence and values are orthogonal. That is, there's no reason why a powerful enough AI would have to develop values similar to those of humans (like Data, trying to be humans) - you could have an AGI that's smarter than humans in everything, but is "not really caring about you or knowing anything about being alive other than the priorities built into its software".
          • ForHackernews 1766 days ago
            > not really caring about you or knowing anything about being alive other than the priorities built into its software".

            Is that really "general" intelligence, then? I would argue we don't know today whether consciousness and agency are separable from general intelligence. You could say that "general intelligence" implies some degree of intelligence on any topic, which would include self-reflection and metacognition.

            It sounds to me like you're describing something like a p-zombie[0], which we don't know to be able to exist.

            [0] https://en.wikipedia.org/wiki/Philosophical_zombie

            • TeMPOraL 1766 days ago
              I believe it is. I'm also not describing a p-zombie, since I find the whole concept of p-zombies utter nonsense.

              The general AI I described would have self-reflection, agency and arguably consciousness, yet - per orthogonality thesis - it may not have anything resembling human values.

              • ForHackernews 1766 days ago
                So like, a sociopath, then? I guess we do know that those can exist.
                • TeMPOraL 1766 days ago
                  Yeah, sure. The orthogonality thesis essentially implies that a GAI randomly plucked out of the space of possible minds will likely be considered sociopathic by our standards. That is, if we can comprehend its thinking at all. "Not sociopath" is a very particular set of values.
                  • ForHackernews 1765 days ago
                    > a GAI randomly plucked out of space of possible minds

                    Sorry, but I don't think you have any rational basis for imagining what the "space of possible minds" represents. The only minds with human-level intelligence we know of are human minds that have (with variation) human values.

                    The claim you're making is analogous to saying "any extraterrestrial life we find won't be carbon-based because out of the space of all possible substrates for life, carbon is a very particular one", but that's an ill-founded supposition because we have a sample of N=1 and maybe carbon-based life is the only kind of life there is.

                    Maybe a true GAI mind will be "like us", maybe it won't be, but we don't have anywhere near enough data to speak with confidence about it.

      • mindgam3 1766 days ago
        Although I will admit to having used and benefited from the myth that chess skill is related in some way to general human intelligence, it is just that: a myth.

        Key differences between chess and real world: perfect information game, finite problem space, well defined rules. It blows my mind that serious people believe that ability to outperform human chess players using massive compute is some kind of major step towards AGI. If only it were that simple.

        It’s not that people overestimate human intelligence, it’s that they underestimate the meta-cognitive reasoning that we call common sense.

        • Quekid5 1766 days ago
          I suppose it's a subset of "well defined rules", but it's worth calling out explicitly, I think: Chess also has an objective (and trivially verifiable) win condition.

          There are few interesting situations in real life where such a thing exists.

          • w0utert 1766 days ago
            In that regard, AlphaGo is a lot more impressive, not just because it is a vastly more complex problem to find a loss function for playing Go (compared to chess), but also because AlphaGo basically learned to play the game by itself without even having a model of the game rules initially (or so I've heard).

            That said, I would still not consider it anything like generalized AI, if only because the set of possible (valid) actions at any point in the game is tiny, while in real-world problems it is basically infinite.

            • z3phyr 1766 days ago
              Now I want AlphaGo to teach me. Verbally. Like humans do.
          • mindgam3 1766 days ago
            Agreed and thanks for calling it out. It is so much harder to “learn” without obvious win states.
        • roenxi 1766 days ago
          > perfect information game, finite problem space, well defined rules

          Chess isn't reality, but the things you list are all mostly things that humans can't deal with either. Take imperfect information - humans can't make decisions using information they don't have any more than machines can. Humans certainly can't deal with the infinite (and their approximations to do so are probably measurably worse than those a computer uses, because a computer can use honest-to-goodness probability formulas).

          Operating without rules is not clear cut, but most people do invent a whole heap of funny rules because they can't operate without clear rules either. Humans often literally hate and fear things that look different or don't follow all the funny rules they come up with.

          These are the same arguments as are deployed against self driving cars - if a computer doesn't have the information needed to make a decision then neither will a human in the same situation.

          The threat, opportunity and potential of AGI is very real. Once technology settles down and stops changing then we'll know that the situation has stabilised. But even as it stands what we have now would easily pass muster as AGI for the 1910s and it is still improving extremely rapidly.

          • atoav 1766 days ago
            I remember when I was a teenager: a moonless night in the woods and my bicycle light was broken. It was so dark that I literally couldn’t see my hand in front of me. I could ride (slowly) because I knew that part of the street by heart and I knew there was a gravel patch on each side of the road. So every time I hit the gravel on the left I just turned a little more to the right, and vice versa. And I knew I had hit gravel by ear.

            There was also a faint red light from a memorial site candle that I could use as an orientation point.

            The thing was that I had never done this before (or since), nor did I think I would ever find myself in such a situation. I had ridden that very road often at night, often without a light because I had shitty broken bicycles, but this night was truly exceptional, the darkest I have seen before or since.

            The question is, what would an AI have done in a similar situation (sensors go dark for some reason)?

            • visarga 1766 days ago
              Probably have the common sense to stop. It's a trivial test case.
              • tomp 1766 days ago
                And also an inferior solution.
      • thom 1766 days ago
        My fundamental problem with this viewpoint is that people appear to radically underestimate the amount of training data (all of it incredibly rich in context) that humans have had access to over the course of their lifetimes.
        • glitchc 1766 days ago
          The pool of accurately labelled, relevant, sufficiently diverse, training data is actually rather small per problem.
      • otabdeveloper2 1766 days ago
        > The real issue we are facing is that everything that we thought was not going to be pattern matching and tree search has turned out to be pattern matching and tree search.

        Absolutely false. It just seems that way because you read biased research and articles.

        There's a boatload of problems that cannot be solved with pattern matching and tree search. Even really simple ones.

        One example is estimating/predicting binomial proportions adequately.
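
        One standard "adequate" estimator for that case is the Wilson score interval - a minimal sketch, just to make the example concrete:

          import math

          def wilson_interval(successes, trials, z=1.96):
              # Wilson score interval for a binomial proportion (95% by default);
              # behaves sensibly even for tiny samples, unlike the naive estimate.
              p = successes / trials
              denom = 1 + z * z / trials
              centre = (p + z * z / (2 * trials)) / denom
              half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
              return centre - half, centre + half

          print(3 / 10)                  # naive point estimate: 0.3
          print(wilson_interval(3, 10))  # roughly (0.11, 0.60): honest about the uncertainty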

      • jbay808 1766 days ago
        I'd be interested in knowing what counterarguments the people downvoting this comment might know of, which I apparently don't.
        • nineteen999 1766 days ago
          >> The list of discrete tasks (games, decision making once the parameters are defined) a computer can't do is a very short list.

          > I'd be interested in knowing what counterarguments the people downvoting this comment might know of, which I apparently don't.

          I didn't downvote, but I will cite artistic endeavours.

          How long until a film crew of computers can shoot, edit and score a documentary or film that would be interesting to humans to watch?

          How long until they could develop an AAA computer game worth playing?

          How long until we could assemble an orchestra of computers that can interpret sheet music with feeling well enough to impress a human audience?

          I could go on and on finding other examples of human endeavours that computers/robots/AI will suck at for an extremely long time, if not forever.

          As we've seen recently, we are further from completely autonomous self driving cars than was hyped over the past few years.

          In my experience, programmers typically like to underestimate the breadth of human endeavour outside the domain of programming, particularly where the arts are concerned. And larger groups of humans working on larger artistic endeavours will take even longer to be displaced by AI, IMHO.

          • ahartmetz 1766 days ago
            I don't think that you need to limit creativity to the arts, which is unfair to machines because art really depends on human emotional quirks. That is even kind of the whole point of art.

            What about technical inventions? Can AI invent, let's say, the process to produce aluminum? Or planar semiconductors? Or a rocket engine? These are also creative works.

            (I do think that AGI soon claims are rubbish)

            • nineteen999 1765 days ago
              Agree completely, just used art as it's an easy example.
          • pfisch 1766 days ago
            > How long until we could assemble an orchestra of computers that can interpret sheet music with feeling well enough to impress a human audience?

            I think AIs making music will come long before AAA games and movies, both of which encompass music as an art and then throw in like another 5 artistic pursuits on top.

            I think AIs will be making music inside 10-20 years or less. Honestly I think it could happen in like 1-2 years if a company with lots of AI resources chose to focus on it.

            • atoav 1766 days ago
              Note that the parent comment wrote "interpret" music, not "make" it.

              I am a musician and there were already impressive algorithmic compositions in the 70s using Markov chains, and today we use machine learning to create something that resembles the works of J.S. Bach.
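
              The Markov chain trick itself is tiny - a first-order chain over pitches can be sketched in a few lines (made-up corpus, purely illustrative):

                import random

                # First-order Markov chain over pitches, "trained" on a made-up toy corpus.
                corpus = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]
                transitions = {}
                for a, b in zip(corpus, corpus[1:]):
                    transitions.setdefault(a, []).append(b)

                note = random.choice(corpus)
                melody = [note]
                for _ in range(15):
                    note = random.choice(transitions.get(note, corpus))  # fall back if unseen
                    melody.append(note)
                print(" ".join(melody))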

              But how is that creativity? Creativity means coming up with new and interesting things (and fair enough many musicians make quite uncreative decisions all the time).

              These statistical improvisers can come up with some interesting combinations, but they are completely unaware of them and never repeat them again.

              IMO we haven’t even solved composition, because it also requires a good feel for how humans react to a piece, how a given piece or instrument fits culturally, and what emotions it evokes for what reasons.

              Interpretation is yet another thing, it means interacting with an audience in one way or another.

              To think we managed to solve composition by scrambling together some melodies from an input of a thousand melodies and thinking we are done with the hardest part is hubris at its best.

              • 0815test 1766 days ago
                > To think we managed to solve composition by scrambling together some melodies from an input of a thousand melodies and thinking we are done with the hardest part is hubris at its best.

                To me, that's missing the point. The output of systems like BachBot and DeepBach is musically interesting precisely because of how weird and serendipitous the results of that "scrambling together" are. IOW, it's not just scrambling, but scrambling that manages to learn and preserve at least the short-term structures that we associate with "music". (That's a big improvement over simple Markov models.) It's nowhere near what humans would make, same as a picture of an actual dog is not similar to what the DeepDream network outputs as "dog-like" - but it's already interesting in its own right.

              • TeMPOraL 1766 days ago
                Creativity and purposeful manipulation of human emotions are orthogonal concerns, even if often bundled together under the term "creativity" in context of arts.

                To disentangle them I propose a simple test: take these 'impressive algorithmic compositions that resemble the works of J.S. Bach' and play them to people, telling them they were composed by a gifted human. Then ask what they think of their creativity, and of emotions the author intended to communicate.

                That's creativity. As for purposeful manipulation of human emotions, this is harder and would require an AI with a theory of (human) mind, or some equivalent of that. Doesn't sound insurmountable though.

                • nineteen999 1766 days ago
                  > Creativity and purposeful manipulation of human emotions are orthogonal concerns, even if often bundled together under the term "creativity" in context of arts.

                  No, absolutely not, I disagree. If we know that a piece of music is generated by software algorithms, it instantly loses any real meaning.

                  > To disentangle them I propose a simple test: take these 'impressive algorithmic compositions that resemble the works of J.S. Bach' and play them to people, telling them they were composed by a gifted human. Then ask what they think of their creativity, and of emotions the author intended to communicate.

                  The problem here is you are lying to people in order to achieve an effect. Deception has been known to cause a significant backlash amongst music fans [1]

                  There are enough music fans out there that will honestly want to know whether the music was created by a human or a computer program. If it ever becomes common place that procedurally generated music is misrepresented as the works of a human or group of humans, then there is a large segment of the population that will turn their noses up, and only listen to live music, where it is apparent that the music is performed by humans.

                  The rest of the audience, meh, if they want to hear meaning in music composed by machines, that's their prerogative I suppose. Most people I know who actually play instruments or sing are appalled by the idea, or at best, slightly bemused.

                  I will assert, those who have never stepped away from their computer keyboard long enough to have taken a deep breath, stood in front of a microphone, plucked an electric guitar at high volume or smashed drum skins with sticks in front of a live audience of 100 to 1000 cheering people may just be incapable of understanding this. The feeling in the air can be electric.

                  Humans connecting with other humans through the creation and reception of music will never be correctly emulated or simulated through silicon, no matter how good the facsimile.

                  [1] https://en.wikipedia.org/wiki/Milli_Vanilli

                  • TeMPOraL 1766 days ago
                    > No, absolutely not, I disagree. If we know that a piece of music is generated by software algorithms, it instantly loses any real meaning.

                    Which is kind of my point, so I feel we're at least partially in agreement.

                    The point I'm making is this: creativity in software is either easy to achieve, or near impossible, depending on what you really mean by the term "creativity".

                    If you have a piece of performance in front of you, and your judgement of whether or not it's "creative" changes when you learn whether the author was a human or a machine, then that "creativity" is impossible for machines by definition - and, frankly, it's also not worth talking about, because it does not depend on the author, but whether or not you consider the author human.

                    If, however, your opinion wouldn't change upon learning whether the author was human, that kind of creativity is trivial to achieve for machines - it involves relaxing the constraints of whatever algorithm is used to create the performance, and injecting some randomness into it.

              • TheOtherHobbes 1766 days ago
                Interpretation - better than adequate, not necessarily world class - is a solved problem.

                The heuristics for dynamics and tempo changes aren't particularly complicated. You can do a lot with fairly simple phrase recognition. You don't even need a full harmonic, melodic, and structural analysis.

                Composition is a much harder problem - especially at the Bach level. And the state of the art is nowhere close to being able to produce satisfactory Bach-level compositions. (In spite of what David Cope says about his work.)

          • jbay808 1766 days ago
            I think music will be the first of these milestones, and before long AI will be writing enjoyable compositions and pop chart hits.

            Next AI will write passable stories, and then decent ones, and then marketable novels with a coherent plot.

            AI assist tools will be able to help animators create scenes, backgrounds, and characters using just keyword descriptions. Then with the ability to write scripts will come the ability to create movies. It will take some editing to filter out the nonsense but we'll probably have AI film productions within 30 years.

            That's my guess, purely based on what's been demonstrated so far.

            • arugulum 1766 days ago
              I actually think AI will never really make great "pop chart hits", for the mundane economic reason that once we figure out how to make good music with AI, it will be possible to absolutely flood the market with that genre of music, making it impossible for any potentially great hits to stand out.

              That said, I absolutely believe that AI-generated music can and soon will easily replace any sort of background or generic license-free music use, where it simply needs to be "good enough" rather than great.

          • TeMPOraL 1766 days ago
            > How long until we could assemble an orchestra of computers that can interpret sheet music with feeling well enough to impress a human audience?

            As others wrote, this will be the first to go. In fact, I believe current breed of NNs can do this already.

            Artistic creativity is a trivial problem compared to all others. All you have to do is inject a bit of randomness to the process and then not tell people that the work was done by a computer program.

            The "interpret sheet music with feeling" actually happens within the brain of the listener, who tries to connect the music and feelings it evokes in them to a vision of a human being who created that music. In other words, emotions are actually projected. The mechanism works somewhat well if the artist is a human with clear intent of creating emotional impact. But when the artist doesn't intend to create that impact, the audience will find one in there anyway. And so will they if the artist is actually a matrix multiplicator running on a stack of GPUs.

            The same phenomenon happens in writing[0], but writing (and similarly, painting) is harder, because the sentences have to have at least some semblance of sense[1]. In music, anything that's not just pure white noise can get accolades for creativity if you insist hard enough that it was composed by a gifted human.

            --

            - [0] - Ever heard of the stories about people building these whole towers of interpretations of a literary work, and then when someone asks the author whether they meant any of that, it turns out the whole edifice is just a pile of bullshit, and the author really just wanted to write a story they liked?

            - [1] - But see https://slatestarcodex.com/2019/03/14/gwerns-ai-generated-po..., in particular near the end of the post.

            • jbay808 1766 days ago
              Yes exactly. You can even get a head start by training on award-winning performances by Itzhak Perlman and such, comparing them to the sheet music. Then your NN will be able to read sheet music in the style of Perlman, the same way that GPT2 can write in the style of Tolkien.

              Kasparov remarked that Deep Blue seemed to make insightful moves in a way that wasn't machine-like, and he suspected it was human assisted. I don't know if he was right or wrong, but I'm sure today's chess software will feel at least as insightful as Deep Blue did to a chess grandmaster, especially if they think they're playing against a human.

      • goatlover 1766 days ago
        > Extrapolating the trend of the last 30 years, there is evidence that computers will be able to solve every task a human can using pattern matching. If that isn't AGI, it might turn out to be better than intelligence.

        Only if the trend lasts and is applicable to every human task. Those are pretty big assumptions.

        > If someone finds an objective function for deciding what decision parameters are important AGI could be upon us very quickly.

        And if that function doesn't exist, because the real world (not a board game) is messy, dynamic and complex?

        > As a postcript, I think people radically overestimate human intelligence.

        Individually maybe, but as a group we're pretty damn impressive. I think you're radically underestimating the species. But this has been the case for strong AI proponents since the 1950s. AGI is always 20 years and one good algorithm away.

      • dboreham 1766 days ago
        Agreed (with the analysis, not necessarily that we're anywhere close to achieving GAI). Nice to finally see proof that John Searle was 100% wrong with his assertion that intelligence couldn't possibly be algorithmic though.
      • marcosdumay 1765 days ago
        I don't know when your dad used to say that, but any researcher from the 50's would quickly tell you that Chess is just a giant tree search problem.

        We have a huge number of unsolved pattern matching problems around, so AI still has a lot of value to bring. But we don't have that much evidence that it suffices for everything.

    • povertyworld 1766 days ago
      While it's true the AI hype is going a bit overboard, couldn't one make the argument that humans, and to a lesser extent other animals, are just a collection of responses to pattern recognition, like "when I see a thing that looks like this I should eat it, but when I hear a thing that sounds like that I should run away"? After all, what is common sense but "believe it when I see it" type thinking, which is most certainly pattern intuition? The overhype part is that there are armies of lowly paid Mechanical Turk tier workers tagging all these data sets, so that these pattern recognition algorithms have something to train on.
      • Thorentis 1766 days ago
        Sure, one could make that argument. But suggesting that we are close to creating a general intelligence on-par with humans is entirely unfounded.

        Even if we assume that the only thing differentiating humans from other forms of intelligence (including artificial neural networks) is more patterns to learn, then that still requires data sets millions if not billions of times the size of what we currently have. Plus, we know very little about the types of patterns we would need to train an AI on in order for it to reach the AGI level.

        I think it's safe to say that "artificial general intelligence" is just another iteration of the BS Industrial Complex.

        • visarga 1766 days ago
          > a general intelligence on-par with humans

          Humans aren't general intelligences. We're good at things that make us survive, and that's about it. We're not good at 'general' problems, it takes us decades or hundreds of years of small steps and theories, and we may never find the answer. Most of us can't even grasp the fine nuances of math and physics, even after 12-16 years of education. Look at how hard it is for us to even imagine quantum and relativistic effects, or 5-dimensional geometry.

          Consider how long it took us to create the germ theory of disease and how many of us died without rising to the challenge of understanding the cause. Look at what we believed about nature just 500 years ago. Humans without the larger system of culture, society, industry, lots of time and resources, can't do much. AI will inherit our tech, culture and scientific advances right from the start.

          There is a limit to how intelligent a system can become, and this limit is given by the complexity of the survival problem and the environment. You don't become smarter than the problems you have to deal with require. And environments don't evolve exponentially, so AI won't evolve exponentially fast either. Evolution is a series of sigmoids, not an exponential, and there can never be really exponential processes in nature, there's always an upper limit.

          I'd say it would be enough to have AI that can survive by itself and sustain us along with it, instead of AGI.

      • mindgam3 1766 days ago
        > After all, what is common sense but "believe it when I see it" type thinking which is most certainly pattern intuition?

        I guess I would argue that common sense encompasses something outside of pure pattern intuition, which is the ability to synthesize not just solutions to a known problem/question but to figure out what question to ask in the first place, across a variety of situations including some that have never been seen previously.

        The overhype part isn’t just the Mechanical Turk layer, although that’s part of it. It’s also the assumption that pattern recognition within any given domain can generalize to this meta-ability to essentially level jump outside previously established boundaries or “training data.”

        • zubspace 1766 days ago
          So at what point in a life of a human does general intelligence, or common sense, develop? Watching a baby do things feels like watching a robot trying to perform a task for the very first time. It seems like pattern matching overall with a very complex and fine grained sensory input system.

          Also some things are definitely preprogrammed and passed on from one generation to the next. There are many examples regarding animals, like small pigeons reacting to the shape of an eagle which they have never seen before.

          Now add a decade long learning phase and you get general intelligence? Or is it all just layers and layers of pattern matching, imitation and preprogrammed behavior?

          Could be the reason why humans are so bad at addressing some problems like climate change, plastic pollution or poverty. There is no clear pattern we can match, learned through individual sensory input, leading us to a solution.

          • mindgam3 1766 days ago
            Fair question. I can't claim a definite answer as to when this general/commonsensical ability develops, but I think to some degree we are born with it/preprogrammed as you point out. Certainly by toddler phase most humans have figured out visual pattern recognition better than the world's most advanced machine learning algorithms.

            My own thinking on this has been very much influenced by an early AI researcher-turned-philosopher and HCI practitioner, Terry Winograd. His book Understanding Computers and Cognition has some solid arguments for seeing cognition as a trait evolved as a form of "structural coupling" between a living organism and its environment. Highly recommended for anyone looking for an alternate perspective on AI from someone who was doing it way, way before it was cool.

            • zubspace 1766 days ago
              Thanks for the book recommendation.

              We humans are really fascinating machines, continuously fed with a huge stream of data which we learn to filter and analyze. It takes time to accumulate the data and refine our pattern matching until we seem to be intelligent. It's hard to draw a line when this happens.

              Maybe everything happens so fast and seamlessly that we consider it to be something magic and simply call it intelligence? Or maybe there really is something in there, reading in-between the lines, a soul?

              We all know that each of us is looking outside from somewhere within a brain and you are able to influence your actions. But, is it some kind of feedback loop which can be reproduced mechanically or something supernatural?

              As long as we don't know this, every attempt to recreate general intelligence may be futile.

          • zimpenfish 1766 days ago
            > Could be the reason why humans are so bad at addressing some problems like climate change, plastic pollution or poverty.

            I think that basically comes down to greed and ignorance.

            > There is no clear pattern we can match [...] leading us to a solution.

            Except, as best I know, both the problem and solution were recognised in the 80s but it wasn't in the companies' interests to torpedo their profits at the time.

            e.g. https://www.theguardian.com/environment/climate-consensus-97...

            • zubspace 1766 days ago
              It's astounding what mankind was able to achieve. Every invention is an ingenious marvel when you look at it in an isolated manner. But you could argue that each invention was based on previous attempts, combined, refined, either through complex analysis or through sheer luck.

              Societal problems are a different beast. There are so many feedback loops, strategies and socioeconomic forces, that it's hard to find a clear cut solution even if you know the problem.

              Stupid example: If you play Sim City, are you able to handle overpopulation, collapse of transportation and disasters? After a few games you definitely get better, recognize the patterns and handle accordingly.

              So why doesn't it work in real life, climate change for example? Maybe the data is just too overwhelming, contradictory or incomplete? Maybe we just don't know any patterns which could help us? Or maybe we humans are simply not intelligent?

              • zimpenfish 1766 days ago
                > After a few games you definitely get better, recognize the patterns and handle accordingly.

                Because there's a benevolent dictator (you) who can impose societal solutions without contradictory structures (capitalism, etc.) getting in the way. If you were playing multiplayer Sim City with everyone's goal being "make their city the best", I guarantee that most games will end up with societal disasters being unhandled.

                > So why doesn't it work in real life, climate change for example?

                Capitalism.

                > Maybe the data is just too overwhelming, contradictory or incomplete?

                Nope, it's complete, non-contradictory, and perfectly clear. The only problem we have dealing with the climate crisis is that it will cost money and discomfort and, unless it's imposed from above, people generally aren't interested in worsening their lives to help distant others (cf tax rates, zoning laws, racism, ...)

    • jcelerier 1766 days ago
      > all I see is machines that can do some form of pattern recognition

      what proof do we have that the brain isn't just doing that ?

    • blablabla123 1766 days ago
      But also the whole concept of AI is so deeply fascinating and even frightening to people that maybe there is no other way than to buy into this. Once it happens, everybody wants to be a part of it ;)
    • runciblespoon 1766 days ago
      @mindgam3 .. “Minor quibbles about truth and meaning of words aside, I have to support any article that skewers the soft underbelly of the phony AI ecosystem as effectively as this one does.”

      I fully concur ..

    • thrwo434234 1766 days ago
      > AGI itself is merely taking the bullshit to the next level.

      You sir are a heretic! Are you saying that transformers along with a gazillion GPU-years of compute, and tons of intern-descent at the hallowed halls of OpenAI, Google and Facebook, can't solve AGI ? You must be out of your mind!

    • baybal2 1766 days ago
      We have not yet invented "Artificial Intelligence," but are already making big leaps in "Artificial Ignorance"
  • lkrubner 1766 days ago
    I've been collecting examples of where the ads that I see are based on extremely simple algorithms of the type that could have easily been supported 30 years ago, and yet I keep reading articles that suggest that the advertising industry is deploying sophisticated tools to target ads to me. I wrote about this recently:

    -------------------------

    Despite much talk about Machine Learning and AI improving advertising results, what I’m seeing is getting worse and worse. Despite billions invested, the ads shown to me are much less relevant than the ads that I saw on the Web 10 years ago.

    I hired 3 developers from Fullstack Academy. They were all great, so I went and checked out the website, curious about the curriculum. And now, every website I go to, I see an advertisement for Fullstack Academy. (See screenshot.)

    I’ve been writing software for 20 years. I’ve written semi-famous essays about software development. I am not going back to school. I do not need to go to a dev bootcamp. So why show me ads, as if I’m thinking of going to school?

    For the last several years I’ve been seeing articles about the surveillance economy. In theory, advertisers know more about me than ever before. In theory, they know about my entire life. And yet, the ads I see are less targeted than what I used to see online 10 years ago.

    http://www.smashcompany.com/business/when-will-machine-learn...

    • onion2k 1766 days ago
      So why show me ads...

      To keep the brand in your head so you post about it on Hackernews.

      • lkrubner 1766 days ago
        That’s a “just so” story. You’re looking at something that is easily explained by incompetence, stupidity and irrationality, yet you’re working to transform it in your head into something rational. Take a moment to think of the money they’ve wasted: what do they gain? How likely is a sale? Did Fullstack imagine this scenario when they authorized their marketing firm to spend this money? Or is the marketing firm simply trying to spend money so they can bill for something?
        • onion2k 1766 days ago
          I was being a bit facetious about you posting on here, but the point I was making was serious. In advertising there's a thing called the "effective frequency"[1] which is the number of times you need to see an ad before it has an impact on you. Obviously this series of adverts has worked on you - you know the brand and you use it as an example of which ads you remember. If the company is advertising in order to raise the level of engagement they're getting that's a fail; if their ads are intended to get people talking about the company that's actually a pretty good result.

          There are more reasons to advertise your business than simply "getting more sales". Indirect communication is very useful.

          [1] https://en.wikipedia.org/wiki/Effective_frequency

    • NeedMoreTea 1766 days ago
      Matches my experience.

      I'm reminded of 80s and 90s sales lead phone lists - those used to be marketed as precision means to reach your choice of age, job-level, city, income etc. I once worked for a company that tried a few of these, from allegedly fresh, first generation data. They were universally crap, with the same errors and copies of everyone else's wildly wrong and obsolete garbage. Lists priced per record. Aha!

      Adtech is burning down the web and everyone's CPU with all that precise tracking and ML targeting that tells them nothing. Priced per click. How surprising. When Google and Facebook opened some of their profiles to be looked at, maybe 5 years or so back, Google got every major thought about me wrong - my gender, my age, my interests. As it had with most in the office. The whole myth around precision seems no more than a marketing fairy tale to sell ads and justify tracking, very badly.

      Peak for advertising being useful was very early web with static page ads, simple keyword ads on search, and the odd site sponsorship. Oh, and "customers also bought" on Amazon, that worked well for books and CDs, but doesn't work at all for the 499 other categories they now sell.

      These days I block everything - JS, uBlock, PiHole. I think there's 10 or 20 sites allowed a little JS, and the odd reluctant exception for bloody hateful reCaptcha. The web's speed is lovely again. Haven't seen a web ad for years - except the odd one or two at work.

    • dijksterhuis 1766 days ago
      I'm willing to take a punt and guess that the models they’re using rely on short-term/isolated data, not 10 years' worth of your entire browsing history.
      • plaidfuji 1766 days ago
        Haha, yeah. When n=1, both human and computer will say “keep doing what you’re doing!”
    • foldingmoney 1766 days ago
      Apparently you didn't get the targeted ad telling you that advertisers' dark patterns have become so sophisticated that free will is literally fiction now.

      /s, obviously

  • benreesman 1766 days ago
    If everyone sophisticated enough to be on this site would just use the term “applied computational statistics” (even just in their own thoughts) instead of “deep learning” or AI, the world would be a better place. Gradient descent finds some fun minima (my current venture is heavily based on that idea) but to assign more agency to Adam or RMSProp than they merit is just an exercise in feeding the trolls.
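
    (A minimal sketch of the point, in plain Python with a made-up quadratic loss: no Adam, no RMSProp, no agency, just nudging a parameter downhill along the slope.)

      # Minimal gradient descent on a one-dimensional loss surface.
      # Nothing here "thinks"; it just follows the slope downhill.

      def loss(w):
          return (w - 3.0) ** 2 + 1.0      # quadratic bowl with its minimum at w = 3

      def grad(w):
          return 2.0 * (w - 3.0)           # analytic derivative of the loss

      w = -10.0                            # arbitrary starting point
      learning_rate = 0.1
      for step in range(100):
          w -= learning_rate * grad(w)     # the entire "learning" rule

      print(w, loss(w))                    # w converges toward 3.0
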
    • mbeex 1766 days ago
      Couldn’t agree more. All these delusional discussions: Is it intelligence? Is it true intelligence? How far is it to become true...? Skynet rising?

      To be fair: the last question is certainly adequate regarding the application of unverified/unverifiable algorithms, in a life-changing incarnation, as a virtually unsupervised decision-making capacity. This is horrible. But it is a different question (and better answered by skipping the pseudo-philosophical part).

    • YeGoblynQueenne 1766 days ago
      Could you please explain in what sense deep learning is "applied computational statistics"?

      What about classical planning, SAT solvers, automated theorem proving, game-playing agents and classical search? Could you please explain how one or more of those are "applied computational statistics"?

      Further- I don't understand the comment about "agency". Could you clarify? Why is "agency" required for a technique or an algorithm to be considered an AI technique?

      • plaidfuji 1766 days ago
        I don’t know anything about the underlying algorithms for the examples you rattled off, but deep learning trains a graph of neuron weights such that they are statistically optimized to minimize error in computed output labels for some domain of input data. Very much “applied computational statistics”.
        • YeGoblynQueenne 1766 days ago
          The examples I gave are classic AI algorithms that are very easy to look up on wikipedia. They do not compute any statistics.

          I'm not sure what you mean about "neuron weights that are statistically optimised". Modern-era, deep neural nets train their weights with backpropagation, which is basically an application of the chain rule, from calculus. They do not use statistics for that.
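
          (To illustrate, here is a toy two-weight "network" I just made up; the gradients below are nothing but the chain rule written out by hand, no statistics involved.)

            import math

            # Toy two-parameter "network": y = w2 * tanh(w1 * x), squared-error loss.
            # Backpropagation here is literally the chain rule applied by hand.

            x, target = 0.5, 1.0
            w1, w2 = 0.3, -0.2

            h = math.tanh(w1 * x)                # hidden activation
            y = w2 * h                           # output
            loss = (y - target) ** 2

            dloss_dy = 2 * (y - target)          # dL/dy
            dy_dw2 = h                           # dy/dw2
            dy_dh = w2                           # dy/dh
            dh_dw1 = (1 - h ** 2) * x            # d tanh(w1*x) / dw1

            grad_w2 = dloss_dy * dy_dw2          # chain rule
            grad_w1 = dloss_dy * dy_dh * dh_dw1  # chain rule, one link longer

            print(loss, grad_w1, grad_w2)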

          For example, calculating the mean of a set of values or calculating the pearson correlation coefficient of two variables are computations typical in statistics.

          Could you please clarify what you mean by (applied) "computational statistics", so that I don't have to second-guess you?

          Edit: Do you really not know what a SAT solver is? Not to be rude but if that is the case, from where do you draw your confidence about the correct terminology to use for AI?

          • tnecniv 1766 days ago
            He means that neural networks are applied statistics in that they solve a statistical regression problem. It's not conceptually different from classical methods of regression like least squares. The phrase "statistically optimized" is certainly a funky one, but regression is certainly as much a part of statistics as the two problems you mentioned.
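
            (A minimal sketch, on invented data: ordinary least squares in numpy, which is the same regression problem a single linear unit trained on squared error ends up solving.)

              import numpy as np

              # Ordinary least squares: the textbook statistical regression that a
              # one-layer linear "network" with squared-error loss also ends up solving.

              rng = np.random.default_rng(0)
              x = rng.uniform(-1, 1, size=(100, 1))
              y = 2.0 * x[:, 0] + 0.5 + 0.1 * rng.standard_normal(100)   # noisy line

              X = np.hstack([x, np.ones((100, 1))])          # add an intercept column
              theta, *_ = np.linalg.lstsq(X, y, rcond=None)  # closed-form fit

              print(theta)   # roughly [2.0, 0.5]: slope and intercept recovered
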
            • YeGoblynQueenne 1765 days ago
              That doesn't sound like what the OP was saying.
    • theferalrobot 1766 days ago
      There are non-statistical methods for training neural nets (no backprop), so 'applied computational statistics' really wouldn't capture it. Beyond that, what is wrong with the term deep learning? I can at least understand objections to the use of the term AI (even though it was originally used to refer to narrow AI but was appropriated by Hollywood), but deep learning seems like a fine term to me.
  • kranner 1766 days ago
  • waynecochran 1766 days ago
    Getting ready for the next AI winter.... this is a cyclic phenomenon.
    • raverbashing 1766 days ago
      Hopefully we don't take decades again for a simple but important change like changing tanh to relu activations.
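
      (For the curious, the whole change amounts to swapping one of these two little functions for the other:)

        import math

        # The "simple but important change": the two activations side by side.
        def tanh(x):
            return math.tanh(x)      # saturates for large |x|, gradients vanish

        def relu(x):
            return max(0.0, x)       # stays linear for x > 0, gradients survive

        for x in (-3.0, -0.5, 0.5, 3.0):
            print(x, tanh(x), relu(x))
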
      • dijksterhuis 1766 days ago
        my bet is on capsule networks, Hinton is usually on point with his stuff
    • tim333 1766 days ago
      Or the singularity. Then it isn't.
      • waynecochran 1765 days ago
        Not yet ... not anywhere close to the human cortex ... still several orders of magnitude to go...
  • AstralStorm 1766 days ago
    Next: NoAI, like NoSQL. All natural real intelligence, fully organic and explainable. Just add caffeine. ;)
  • DonHopkins 1766 days ago
    In 1996 I made this AIML (Artificial Intelligence Marketing Language) parody by taking an actual VRML article from some shameless trade rag, and globally replacing "Virtual Reality" with "Artificial Intelligence".

    (from "ArtificialPostModernIntelligenceInterActivity", V2 #4 April 1996, p. 20)

    https://www.donhopkins.com/home/catalog/text/SupportForAIML....

    Another closely related technology is BSML: Bull Shit Markup Language. (Note: most of the features described in the BLINK tag extension were eventually implemented by FLASH!)

    https://www.donhopkins.com/home/catalog/text/bsml.html

    At one point years later, somebody actually emailed me, asking me to take it down, because they were developing a "real AIML [TM]" product, and found my parody of their unique original idea to be beneath their dignity, distracting, and confusing to their potential customers using google to search for their prestigious "AIML" product.

  • throwaway287391 1766 days ago
    > In this way, Dynamic Yield is part of a generation of companies whose core technology, while extremely useful, is powered by artificial intelligence that is roughly as good as a 24-year-old analyst at Goldman Sachs with a big dataset and a few lines of Adderall. For the last few years, startups have shamelessly re-branded rudimentary machine-learning algorithms as the dawn of the singularity, aided by investors and analysts who have a vested interest in building up the hype. Welcome to the artificial intelligence bullshit-industrial complex.

    As an AI researcher, I think a lot of people are a little too sensitive to the term "AI" and make a lot of big assumptions upon hearing it. It's a very general term that doesn't really imply any particular degree of complexity or sophistication. Labeling simple machine learning algorithms and heuristics as "AI" isn't at all unique to this era of hype that began in the last ~5 years -- rather that's how the term has been used in academia for many decades. If you took a college class called "AI" or looked up some of the most popular textbooks on AI [1], you'd find that a lot of it is dedicated to search algorithms (breadth-first, depth-first, A*), linear classifiers, and feature engineering. If you think "artificial intelligence" is a bad name for these things, fine -- but don't blame the recent wave of hype, this is what the term AI means and has pretty much always meant. So go ahead and call your startup's linear regression "AI", and if the VCs leap to fund you under the impression that it means you'll be behind the singularity, that's on them. AI != deep learning. AI != AGI.

    [1] e.g., "Artificial Intelligence: A Modern Approach" by Russell and Norvig
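
    (As a concrete example of the kind of thing those textbooks file under "AI": a minimal breadth-first search over a small made-up graph. No learning, no statistics, still canonically AI.)

      from collections import deque

      # Breadth-first search, straight out of an intro "AI" textbook chapter,
      # run on a small hypothetical graph.

      graph = {
          "A": ["B", "C"],
          "B": ["D"],
          "C": ["D", "E"],
          "D": ["F"],
          "E": ["F"],
          "F": [],
      }

      def bfs_path(start, goal):
          frontier = deque([[start]])      # queue of partial paths
          visited = {start}
          while frontier:
              path = frontier.popleft()
              node = path[-1]
              if node == goal:
                  return path
              for neighbour in graph[node]:
                  if neighbour not in visited:
                      visited.add(neighbour)
                      frontier.append(path + [neighbour])
          return None

      print(bfs_path("A", "F"))   # ['A', 'B', 'D', 'F']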

  • Iv 1766 days ago
    "Deep Learning projects are typically written in Python. AI projects are typically PowerPoints."
  • ackbar03 1766 days ago
    Of all the hypes going around (blockchain mostly lol) I think AI is going to have the most substance to it though. I would say the breadth of problems being solved is much wider, and there is still a lot of research which hasn't really found its way to actual implementation yet.
  • ecmascript 1766 days ago
    I honestly think Westworld (yes, the TV series) has the best explanation of why general intelligence is a hard problem to solve.

    They mention consciousness but I think the same applies to intelligence in general. Humans in my mind aren't different from, say, a program you write, except that we have a lot more inputs and possible outputs depending on a much larger variety of external variables.

    If we could build machines that have eyesight just as we do, muscles just as we do etc I'm sure we could reverse-engineer the human being.

    https://www.youtube.com/watch?v=S94ETUiMZwQ

    • toxik 1766 days ago
      I find this analysis reductionist. You're basically saying "brains aren't hard to reproduce once you have biological sensors and actuators." Why not? They're _extremely_ delicate, intricate organs.

      Claim 2 is also a difficult one: of course you can easily claim consciousness doesn't exist, but it is impossible to argue by logic. You need a metaphysical philosophical framework, and then it's already left the realm of empirically observable truths.

      • dspillett 1766 days ago
        I'm not sure the claim is that consciousness doesn't exist. More that it is an emergent property of complex systems rather than something that is (or can be) deliberately programmed.
        • ecmascript 1766 days ago
          Precisely. It's the complex system that gives us an illusion of consciousness. At least, that is what I naively believe in since there is a lack of evidence for anything else.
          • goatlover 1766 days ago
            So you think experience is itself an illusion? When you kick a rock and feel pain, you're not really experiencing pain? Is the rock also an illusion?
            • ecmascript 1765 days ago
              Well it depends on how you view it. You feel the pain from kicking the rock and remember it, so you won't kick the same rock again a few minutes later.

              That is an experience to me. An experience is simply a memory of an event/feeling etc. Without any memories, you won't remember any events or feelings and will gladly kick the rock again since you won't have any memory of it hurting you.

              Or how else would you define an experience? A memory isn't an illusion; there is definitely something physical in your brain that says that that specific event has happened. But you can also remember things that haven't happened, which is probably why a lot of people believe in ghosts, religion etc.

              I don't know why, but it probably serves a biological purpose and people are probably more likely to survive if they are afraid of things and are careful.

            • dspillett 1765 days ago
              I can't speak for the other guy, but from my PoV something being an emergent behaviour doesn't necessarily mean that it is an illusion. Patterns can really exist without being explicitly optimised or programmed for.
      • ecmascript 1766 days ago
        No this is not what I am saying. I am saying it isn't some kind of magical thing going on that we can't replicate given enough technical progress and skills.

        Yeah sure, but so is the claim that consciousness does exist.

        • evanagon 1766 days ago
          It might not be magical, but I wouldn't underestimate the complexity of replicating even simple cells. We haven't been able to replicate an amoeba let alone neurons let alone a brain.
          • ecmascript 1765 days ago
            This is a valid and good point.
  • mattigames 1766 days ago
    Everything is bullshit until it is not. Humans were talking about transportation without animal power for decades before it became a reality, and a lot of people were highly skeptical of such a thing even being possible until it actually happened in 1804 (the first steam train). The same thing happens with Artificial Intelligence, and we are in such uncharted territory that someone could say AGI is just 10 years away and someone else say 100 years away and both get the same amount of credibility, meaning near none, because we don't even know what it is that we don't know in order to achieve AGI.
    • dboreham 1766 days ago
      Your example isn't quite as it seems : "trains" (cars running on rails) were used in mining for hundreds of years prior. The steam engine was first documented in 1698. What happened in 1804 was someone figured out the manufacturing processes to make a steam engine light enough and powerful enough to usefully pull a train of cars over some reasonable distance.
    • tim333 1766 days ago
      Unless you believe the Kurzweil argument that we will figure out what is needed by reverse-engineering the human brain, in which case you can guesstimate a timeline.
  • m0zg 1766 days ago
    There is a lot of froth, as in any hot field. However, unlike before, there are many cases where AI actually works now. Some perceptual tasks work better than a human, in fact. We can quibble about the naming and whatnot, but that's not something you can say about the last AI winter. It's sort of like the dotcom bust of '00: sure, things imploded back then, but there's no sign whatsoever that e-commerce will implode at any time in the future, because unlike before it actually works this time.
    • ethbro 1766 days ago
      > Some perceptual tasks work better than a human, in fact. [...] that's not something you can say about the last AI winter

      Eh. I'd say that's somewhat apples to oranges.

      A) There were some useful and successful expert systems.

      B) Things seemed to be going swimmingly, until they hit a fundamental wall.

      C) We're working with a few orders of magnitude greater compute than they had access to.

      • m0zg 1766 days ago
        Sure, but we did figure out how to make things more robust and generalizable, at least for perceptual tasks so far. Knowledge representation and probabilistic reasoning are still non-existent, though. Moreover, nobody is even working on any of that, for fear of being compared to Doug Lenat.
        • Quetelet 1766 days ago
          Representation learning and probabilistic methods are huge sub-areas of modern machine learning, just take a look at the proceedings of ICLR2019.
          • m0zg 1766 days ago
            Representation learning != knowledge representation, probabilistic methods != probabilistic reasoning. I'm talking foundations of AGI, which as far as I'm aware, nobody is seriously working on at the moment.
        • snaky 1766 days ago
          So robust and generalizable that adding a pixel here and a pixel there, too small to even be noticed by a human, confuses the image recognition system to the point that it mistakes a panda for a gibbon?
          • Quetelet 1766 days ago
            These adversarial examples are generated in a very artificial way that will not be present in natural images (and if you’re thinking of security issues, the attacker needs access to your model...)

            They’re still an interesting topic to explore but hardly evidence that neural nets don’t generalize.

            • goatlover 1766 days ago
              Wouldn't the worry be that generalizing to real world tasks involves enough variation that the same issue could arise naturally?
  • yonkshi 1766 days ago
    AGI is a gradient, not an arbitrary threshold.

    We are not capable of recreating human-level intelligence yet, but our modern algorithms have become orders of magnitude better at generalization and sample efficiency. And this trend is not showing any signs of slowing down.

    Take PPO for example (it powers the OpenAI Five Dota agent): the same algorithm can be used for robotic arms as for video games. Two completely different domains of tasks are now generalizable under one algorithm. That to me is a solid step towards more general AI.

    • jhanschoo 1766 days ago
      Asking how close our computers and algorithms are to AGI is like asking how close our machines and power systems are to "human physicality".
    • taurath 1766 days ago
      It’s a gradient but according to the marketers it’s basically going to overtake humanity any week now.
      • yonkshi 1766 days ago
        I agree. I think a big part of this problem is that smaller companies usually cannot afford AI research. I would even go as far as to say there are more AI companies than capable AI researchers, and this causes a large number of faux-AI companies poisoning the AI branding.
      • misterman0 1766 days ago
        "AI any week now"

        What marketers proclaim that? Are they saying that or are they saying there is _utility_ in AI, now? Because me thinks, there is real utility, now, but it's going to take years until it overtakes us. Years!

      • SomeOldThrow 1766 days ago
        For the problem you’re trying to tackle, this startup has already solved it and will show you insights previously impossible!
  • arbuge 1766 days ago
    Consider this article:

    http://fortune.com/longform/single-family-home-ai-algorithms...

    If you read it, you'll find that their methods to value homes and renovations are based on algorithms written to value mortgages in the 80s, 90s, and early 00s.

    I'm going to bet that there's not much of what the average HNer would think constitutes AI going on in there.

  • galaxyLogic 1766 days ago
    What's the most difficult thing AI should be able to solve but cannot as of yet?

    I would say it is writing a program which writes an AI program. Why? Because it is so difficult for us to define what exactly an AI program should be able to do.

    This shows that we have an issue with not being able to ask the right question. If we could answer exactly what the AI should be able to do then it would be much easier to create such a program and also create a program that writes such a program.

    We could say that an AI program should pass the Turing Test, and many have written programs that more or less pass it. But now, write a program that writes several different programs that all pass the Turing Test, each one better than the previous one.

    I don't really have an idea how I would start writing such a program that writes a program that passes the Turing Test better than previous AI programs. That makes me guess we are still far off from General AI. But I of course may be wrong, just because I don't know how to do something does not mean others would not.

    • chrshawkes 1766 days ago
      We know what we want it to do: we want it to have some basic ability to think for itself. That is something we just simply can't do. Back propagation is far from a spanking for acting out of line. AI has no ability to understand it's acting like a fool, or how to deal with uncertainty, or emotions which cause us to act without regard to consequences and in many cases reality. It lacks understanding of what future consequences it's trying to prevent, such as our daily decision to get up and go to work each morning. The AI has no understanding of its future and the consequences of not going to work until it's been fired 40,000 times for not showing up or its children are taken from him/her/it.

      I'm glad people are finally waking up to the fact that AI is not ML and AI is all hype at the moment. Google used algorithms quite effectively to adapt and learn, but they have no greater understanding of what we want, just what we and others have wanted in the past.

    • IRLIamOffline 1766 days ago
      I agree that defining a goal along with metrics would be very helpful to make meaningful progress towards AGI. However, defining this test is extremely hard, to a point where I'm not sure if we could define a test like this. So far it seems, that just by defining a test we can come up with a narrow AI that optimizes for this test.
    • jbay808 1766 days ago
      By this argument, humans also aren't general intelligences, because we haven't been able to write one yet either.
    • foldingmoney 1766 days ago
      >We could say that an AI program should pass the Turing Test, and many have written programs that more or less pass it. But now, write a program that writes several different programs that all pass the Turing Test, each one better than the previous one.

      That's essentially what machine learning is, though.

  • moneytide1 1766 days ago
    These types of AI promotion seem to be a sort of cop-out, suggesting we all look forward to a hands-off future where computers will be able to do everything for us.

    Then human minds will be allocated away from thoughtful interaction with their environment and into an all-hands-on-deck scenario where neural net operations are given top priority so they can churn out some answers.

    • tachyonbeam 1766 days ago
      My main short-term fear is that increased automation will lead to an increasingly isolated society. I can already get almost everything delivered through amazon, order takeout through an app without speaking to anyone. Watch movies on Netflix without needing to go to a video store. What's the world going to be like when drone deliveries become a thing, and I don't even have to speak to a delivery driver? How will it affect kids if they do all their schooling online?

      I think that, even before AGI happens, AI assistants will become placeholder friends for a lot of people. You'll be able to have a conversation with Siri or Alexa. Eventually, people might have pseudo relationships with robot boyfriend/girlfriends. Imagine having a friend who is anything you want them to be, does everything you want, and most importantly, never challenges you or tells you anything you don't want to hear. People will get used to that, and it will become difficult for them to have real human relationships.

      In other words, technology is enabling everyone to function without directly interacting with others. People might choose not to interact with other humans out of convenience, insecurity, fear. Japan already has a population of "herbivores", people who choose not to get into relationships, and the rest of the world could become like that too. I hope we find a way to reverse this trend.

      Short documentary on hikikomori in Japan: https://www.youtube.com/watch?v=wE1UIK85E3E

      • jcranmer 1766 days ago
        I think your fear is misguided. People have been complaining about how technology is causing humanity to become more socially isolated for literally thousands of years, and the actual evidence has been that those complaints are unfounded. If anything, we've probably become more socially interconnected, but that's more due to the increased population density of our environs than technology changes.

        What a lot of people miss, I think, is that human beings are fundamentally social animals, and we crave social interaction. And I say this as a strong introvert--as someone who has to be alone to recharge myself emotionally. Things like distance learning or working from home are not well-received by most people, especially not on a long-term basis. Sure, some people will find it comfortable, but those people are a tiny minority, and I should point out that it's not a new phenomenon: Emily Dickinson, for the last 10 years or so of her life, refused to meet visitors face-to-face and rarely left her house, which is more severe than most hikikomori.

      • moneytide1 1766 days ago
        Your short-term fear is seeming to be validated over time as your listed "person-less" capabilities are constantly being implemented. I myself have lived brief bouts of the hikikomori lifestyle, although I still provide for myself as opposed to being taken care of by family as many are in the video you linked. But over the past few months, I've been prioritizing face-to-face conversations, especially with new people. This is the timeless, natural "neural net":

        When we interact in this way, we isolate certain topics that are more relevant to our modern condition because both parties involved understand there is a time constraint. There is a sort of natural algorithmic process going on in both our minds because we are calculating the ideal things to say with this "new person". If we are speaking in proximity to other people, our conversation can take on a whole new shape because there is a "public" element and perhaps we are trying to dialogue for an audience that could eventually participate if they choose to. All of these mechanisms that keep people in check with each other are completely lost when out-sourced to programmed automation and looped control.

        Perhaps it is not all bad though, because ordering through services like eBay/Amazon quickly and efficiently could ultimately save resources/emissions as we are having items routed to us that would historically take up space in a brick/mortar location. But the concern we are sharing here is that community is compromised when everyone has the option to anonymously do everything.

  • bernardv 1766 days ago
    I totally agree with the gist of this article. This hype is being propagated by a lot of folks who are willingly clueless, as for example, in the data science crowd. This band-wagon is crowded and isn’t stopping any time soon.

    It irks me to no end to come across tutorial-style articles proclaiming to teach an AI algorithm, also known as ‘linear regression’.

    What bugs me the most, though, are the countless ‘influencers’ on LinkedIn who spew rubbish about machine learning, AI and all the wonderful things that are just around the corner.

    Lastly, it doesn’t help when countless articles/books are written on the subject of AI dangers, AI ethics and are ‘robots coming for us?’. These add fuel to the fire of hype.

    In the end, this behavior will only guarantee the eventual bursting of the bubble, when promises are not delivered.

  • nottorp 1766 days ago
    Is Medium for pay now? They told me to sign up to get "one more free story".
    • Veedrac 1766 days ago
      Medium lets writers opt-in to a paywall. It is not the default, but does come with some perks for authors.
  • mikorym 1766 days ago
    So can I call it "second year linear algebra" now instead of "AI"?
  • plaidfuji 1766 days ago
    Sure, “AI” as it is used today implies “software that codifies decision-making using data”. No, it’s not the T3000. But as the author acknowledges:

    > Dynamic Yield can pay for itself many times over by helping McDonald’s better understand its customers

    Ok, so it’s not hype - it is delivering real value. “AI” is just a marketing term to help C-suite suits and Silicon Valley sales reps get on the same page about what’s being sold with as few words as possible. What’s being sold is software that helps make optimal decisions using data.

    AI isn’t a rigorously defined academic term, so people will use it how they want. It’s only hype when real value isn’t delivered.

    • epr 1766 days ago
      > What’s being sold is software that helps make optimal decisions using data

      Doesn't this apply to virtually all software?

      • plaidfuji 1766 days ago
        In an extremely reductionist sense, maybe. Do I use Microsoft Word to automate decision making? No. Does Facebook help me make important life choices? Heh.

        How about this: Amazon.com is not AI, but their recommendation engine is.
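
        (To make that concrete: a bare-bones sketch of such an engine, item-to-item cosine similarity over an invented purchase matrix. The real thing is surely fancier, but nothing deeper is required to earn the label.)

          import numpy as np

          # A minimal "customers also bought" recommender: cosine similarity
          # between item columns of a made-up user-item purchase matrix.

          purchases = np.array([      # rows = users, columns = items
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1],
          ], dtype=float)

          norms = np.linalg.norm(purchases, axis=0)
          similarity = (purchases.T @ purchases) / np.outer(norms, norms)

          item = 0                                   # recommend items similar to item 0
          ranked = np.argsort(-similarity[item])
          print([i for i in ranked if i != item])    # most similar items first
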

  • cirgue 1766 days ago
    There is a massive positive, though, for the "geeks building the future": AI is where everyone else is looking. If you know where you should be looking instead, you have a decisive advantage over the rest of the market.
    • bombingwinger 1766 days ago
      Doesn’t your last sentence go for literally everything?
      • cirgue 1766 days ago
        Of course it does, but we can say with confidence that attention and capital are misallocated toward a specific, identifiable set of activities. That's rare.
      • chrshawkes 1766 days ago
        Everything but the inevitable AI winter to come. :)
  • soobrosa 1759 days ago
  • chewz 1766 days ago
    > The Turk, also known as the Mechanical Turk or Automaton Chess Player (German: Schachtürke, "chess Turk"; Hungarian: A Török), was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854 it was exhibited by various owners as an automaton, though it was eventually revealed to be an elaborate hoax.[1]

    https://en.wikipedia.org/wiki/The_Turk

  • colechristensen 1766 days ago
    Progress of civilization could be summarized as the slow march of BS elimination, in parallel with the creation of creative new forms of BS (people don't actually learn anything, they just form the same crazy opinions about something new).

    Strike out "of Phony AI." The BS-Industrial Complex is huge, and the rise of the Internet has made it worse by empowering the less-informed to share ideas. That is somewhat the price you pay for progress.

    The hopeful idealistic information superhighway myth of the 90s turned into something else.

    • ethbro 1766 days ago
      I look at BS as an inevitable symptom of the Singularity.

      As we approach the capacity of human reason, fewer people are able to keep up with the world, and are therefore more susceptible to it.

      • colechristensen 1766 days ago
        I don't know, look back two thousand years and you see plenty of it. More like it's a symptom of humanity. Animals are stupid machines, humans aren't nearly as far away from them as we'd think ourselves.
  • nl 1766 days ago
    AGI will arrive as soon as someone can arrive at a reasonable definition of intelligence.

    Try it. Everything I've seen is already achievable by computers.

    • AstralStorm 1766 days ago
      Solving novel problems. Show me.

      By novel I mean multiple categories. A system that can serve as archive, mathematician, calculator, can move a robot, drive a car and additionally make coffee from scratch. Oh and talks (speaks and understands and acts upon orders) in 3 human languages at decent levels plus can roughly explain what it's doing. Oh and can learn more unrelated skills.

      Hey, people do it all the time.

      • nl 1766 days ago
        A system that can serve as archive, mathematician, calculator, can move a robot, drive a car and additionally make coffee from scratch. Oh and talks (speaks and understands and acts upon orders) in 3 human languages at decent levels plus can roughly explain what it's doing. Oh and can learn more unrelated skills.

        I'm a bit unclear if this is supposed to be a definition of intelligence.

        Stephen Hawking would fail this test, but no one would argue he isn't intelligent.

  • bjoernbu 1766 days ago
    Imho it has gone further. In a way, all the things described as not actually AI now "are" AI, because the term AI has been used in that way so many times.

    I don't think we'll ever use a better (more accurate) term for the ML- and data-driven value current systems create. Instead, "true" AI will get a new fancy name to build the next hype around in several years.

  • dr_dshiv 1766 days ago
    We should be focused on designing "smart systems" that optimize measurable outcomes

    Who cares how complex the algorithm is! What matters is that it works better. Is there a measurable outcome that matters? Can the system optimize that outcome over time, through a coordination of human processes and technology design?

    That is what organizations need. Not hyperparameters.

  • nsajko 1760 days ago
    It seems the author has deleted the post. Maybe Dynamic Yield asked him to take it down? Anyway, currently it is accessible through https://outline.com/FP487e
  • a_imho 1766 days ago
    It is pretty much spot on, but I'm not convinced anyone should really care. When was software not hype driven?
  • JustSomeNobody 1766 days ago
    This is no different than anything else. You hype what you're working on so people get interested and throw money at you. AR/VR glasses, AI, self driving cars, it's all the same. You generate interest, make lots of money and who cares if it ever gets to market.
  • orpep90nxkfo 1766 days ago
    This reminds me of the article the other day about the internet being an SEO wasteland

    Basically our business networks run the same way (not a shock at all): sycophants spam aristocratic investors with half assed bullshit solutions to juice the odds of hooking one

  • East-Link 1766 days ago
    Rudimentary machine learning algorithms are indeed AI, by common usage.

    Try typing into Google Images something like "ai machine learning deep learning venn diagram" and you'll see that by common usage, machine learning is a strict subset of AI.

  • Wiretrip 1766 days ago
    For a real emperor's new clothes moment, look at SpinVox!

    https://en.wikipedia.org/wiki/SpinVox

  • diehunde 1766 days ago
    The problem is when you work at a company that tells you, "we are different, we are not BS like the other A.I. companies"
  • tabtab 1764 days ago
    Just AI? IT is filled with BS and fads. Dilbert is a documentary, not just a comic strip.
  • holografix 1766 days ago
    Repeat with me: Machine Learning != AI
  • dijksterhuis 1766 days ago
    I despise the term Artificial Intelligence. This is all PROBABILISTIC MODELLING. Nothing to do with AI/AGI/whatever.

    The computers aren’t thinking or learning. It’s just fancy probabilistic modelling.

    E.g. classical neural networks are basically a load of linear regression equations with an activation function stuck on the end of each of them. No magic. Just lots of linear regression.
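
    (A quick sketch of what I mean, with made-up random weights: every unit below is just a weighted sum pushed through an activation, stacked twice.)

      import numpy as np

      # A "classical" neural network layer, spelled out: each output unit is
      # just a weighted sum (a linear regression) pushed through an activation.

      def layer(x, W, b):
          return np.tanh(W @ x + b)        # linear combination, then squashing

      rng = np.random.default_rng(42)
      x = rng.standard_normal(3)           # made-up 3-feature input
      W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
      W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

      hidden = layer(x, W1, b1)            # four "regressions" in parallel
      output = layer(hidden, W2, b2)       # one more on top
      print(output)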

    This stuff only works when:

    1) you are trying to solve a specific problem that is suited to probabilistic models

    2) you have a data set that is sufficiently large, varied and specific

    3) the model is developed, trained, tested, implemented and updated in a rigorous and sensible manner

    • Quetelet 1766 days ago
      Actually most modern neural networks are not probabilistic, they are deterministic function approximators.

      Also your point 3) isn’t quite correct either, often a “standard” architecture and training procedure (e.g. ResNet50 with Adam) will work on a new task with sufficient training data and minimal modification of the model.

      • dijksterhuis 1766 days ago
        The only nnets I mentioned were “classical”, as a purposefully over-simplified example. Yeah, they can model any function, but historically they were used for probability density functions (if I remember correctly).

        Most of what the article talked about can be done with much simpler models, which is what I get peeved about.

        Also, yes, you can transfer learn with resnet. But if I throw my bank statements at it, it’ll do bugger all.

        Similarly, if I throw new images at resnet in a silly way, it won’t transfer properly.

        • Quetelet 1766 days ago
          You might be confusing the historical use of the sigmoid activation function with probabilistic modeling, neural networks in the 80s were used similarly to how they are today, albeit at a much smaller scale due to hardware limitations at the time.

          The development of neural networks is a major contribution of the machine learning community, so even if you’d like to split hairs about whether the “computer is learning” (“learning” has a precise technical definition, by the way), NNs are not “just statistics.”

          • dijksterhuis 1766 days ago
            Ok, it seems like there are some crossed wires or missing context here. Also, widely off topic.

            I never said anything about the term machine learning. Check my bio, see what I’m working on. Fully aware of neural network contributions.

            I’m all for machine learning. Just not “AI”. “AI” is hype bullshit.

            “Learning” when used by the people who spout this BS is not the technical definition version, and is what I was referring to.

            Could probably have made that clearer, but I’m 1.5 days without sleep.

            What does feeding test data into a network yield? Inference results. Inference seems vaguely familiar from probabilistic modelling?

            Bayes rule applies to neural nets too. Two different models may give vastly different results. Whilst they can be very good approximators, they can also be very unreliable if care is not taken during training.

            G(x) ~ f(W2.f(W1.x + b1) + b2) is literally a stack of fancy weighted sums: linear regressions. It is some easy stats combined with a few other things that aren’t strictly necessary, e.g. the activation function can be the identity to cancel out f().

            EDIT Both the parameters of a network and the training data are variables in the application of Bayes rule. Which inherently deals with likelihoods (probability). /EDIT

            So at their fundamental, they are “just some stats” stuff. They may have a few more bells and whistles to make them complex (and better) systems, but they still output a classification/regression based on inference.

            You can, of course, approximate many functions with them. I’ve built a network with only weights of +1/-1, for example.

            But those examples have extremely specific use cases that are not applicable to anything the article discusses.

  • luc4sdreyer 1757 days ago
    Seems like the post has been taken down.
  • wolfi1 1766 days ago
    if there is no natural intelligence around, you need an artificial one
  • module0000 1766 days ago
    TLDR; machine learning == "AI", just as much as colocated servers == "cloud"
  • macawfish 1766 days ago
    Ben Goertzel.
  • antonvs 1766 days ago
    Paywall.
    • 3xblah 1766 days ago
      Not when you have Javascript turned off.
    • gumby 1766 days ago
      Just delete your medium cookies.
  • stareatgoats 1766 days ago
    > The BS-Industrial Complex of

    Brilliant! This is really a thing, and the computer industry is (and has always been?) rife with it.

    • harry8 1766 days ago
      IBM Global Services. Oracle. Accenture. Any company with 100+ employees who does consulting involving the design, implementation and maintenance of computer systems for any government bureaucracy.

      Is there anyone around here who thinks this industry sector is anything other than industrial-grade BS, and that if every single one of those companies disappeared overnight we would not be in a better place as a civilization very, very quickly, once we were forced to pick up the pieces?

      Industrial quantities of BS are the norm, right? Most of us do startups to do something more than to schmooze, threaten and ultimately bilk customers paying with other peoples money. We kind of want to do tech.

      • adev_ 1766 days ago
        > Most of us do startups to do something more than to schmooze, threaten and ultimately bilk customers paying with other peoples money.

        Do you know Theranos? That is the very definition of bullshit, and it was a "startup". https://en.wikipedia.org/wiki/Theranos

        Bullshit comes from companies with 5000+ employees and from companies with 5 dudes. Scale does not change anything.

        Business culture, profit as the only value, and the culture of "fake it until you make it" are the source of the problem.

        And against that there is no magic solution, except to trust a lot less the ones who speak and a lot more the ones who do. In the good old nerd world, we called that "show me the code".

  • mrbanks 1766 days ago
    i.e. Babylon Health
  • gok 1766 days ago
    "I was able to bullshit about A.I., so the whole field is bullshit."