The Economics of AI Today

(thegradient.pub)

83 points | by guidefreitas 1554 days ago

6 comments

  • Animats 1553 days ago
    That's kind of broad. Also, the robots shown have very little "AI".

    Maybe machine learning has reached a peak. It's routine now to make classifiers that are about 90% accurate, and really hard to get much beyond that. What we really have are systems which extract lots of signals from an input set and construct a statistical model that maps signals to results. This works moderately well with enough data, but hits a limit at some point. It's great for the class of problems where that's good enough. Like ad targeting and search. Not so great where a wrong result is a serious problem. Like self-driving cars and medical diagnosis.
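
    The signal-extraction picture above can be sketched in a few lines (the data and model here are invented for illustration): generate noisy "signals", fit a simple statistical model mapping them to results, and watch accuracy plateau short of 100% because of the noise in the data itself.

```python
import math
import random

# Toy "signals": two numeric features per example; the true label depends on
# their sum plus noise -- the noise caps the achievable accuracy.
random.seed(0)
data = []
for _ in range(500):
    x1, x2 = random.random(), random.random()
    noisy = x1 + x2 + random.gauss(0, 0.3)
    data.append(((x1, x2), 1 if noisy > 1.0 else 0))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Fit a logistic model by stochastic gradient descent: a statistical map
# from signals to results, which is all such a classifier really is.
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(200):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g = p - y
        w1 -= lr * g * x1
        w2 -= lr * g * x2
        b -= lr * g

correct = sum(1 for (x1, x2), y in data
              if (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1))
print(f"training accuracy: {correct / len(data):.2f}")
```

    No amount of extra training pushes this past the noise floor, which is the "hits a limit at some point" effect in miniature.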

    I wonder what the next idea will be. I'd like to see progress on "common sense", defined as being able to predict the consequences of real-world actions as a guide to what to do next.

    • V-2 1553 days ago
      Humans (doctors) aren't perfect at medical diagnosis either, so any improvement over human performance is still a net gain, even if it's still below 100% accuracy.
    • choppaface 1553 days ago
      Classifiers don’t need to be 100% accurate. They might need 100% precision, or 100% recall, and likely just for a controlled set of data thrown at them. The spam filter in your email client doesn’t need high accuracy, just high precision. The “recommendation system” in your favorite product doesn’t need high accuracy, it needs good content to surface to users.

      And there’s a spectrum of what constitutes a classifier. It might be a fancy deep model, or more likely just be some threshold functions (perhaps crafted based upon the results from some deep model that’s not too accurate and not yet ready for production).
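
      The accuracy-versus-precision/recall point is easy to make concrete (the confusion-matrix counts below are invented for illustration): when 95% of messages are ham, a filter that flags nothing already scores 95% accuracy, so precision and recall are what actually matter.

```python
# Toy spam-filter confusion counts: 1000 messages, 50 of them spam.
tp = 40   # spam correctly flagged
fp = 2    # ham wrongly flagged as spam
fn = 10   # spam that slipped through
tn = 948  # ham correctly passed

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)   # of what we flagged, how much was really spam?
recall    = tp / (tp + fn)   # of all the spam, how much did we catch?

print(f"accuracy:  {accuracy:.3f}")   # 0.988 -- but 'flag nothing' scores 0.950
print(f"precision: {precision:.3f}")  # what the user actually experiences
print(f"recall:    {recall:.3f}")
```

      High precision here means users almost never lose a real email, even though a fifth of the spam gets through; that trade-off, not raw accuracy, is the design decision.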

      The hype is that these classifiers are all 90% accurate now and thus all software is going to turn into AI. That’s garbage and can disenfranchise people who don’t understand AI. That storyline only benefits investors.

      What’s different today versus 2012 is we have some deep research models with impressive results. But more importantly, software has compounded itself. There are tools to store and mine data. Mobile compute is also now closer to where laptops were years ago. Competition requires taking hold of these developments in order to innovate. Products will inevitably become more intelligent, AI hype or not.

    • TulliusCicero 1553 days ago
      > Also, the robots shown have very little "AI".

      That's what happens when you change what "AI" means every time there's a breakthrough.

      • unishark 1552 days ago
        The bar has been getting raised overall though, as more computer tasks get taken for granted. Unless your requirement is nothing short of scifi machine consciousness.
    • 2sk21 1553 days ago
      Even the best attempts at task-oriented dialog currently using purely ML techniques are pathetically bad. Having worked in the field for a while, one thing that I have observed is that chatbots simply discard the bulk of the useful information that users provide them. They then fall back to asking users direct questions. Apart from this, they can only handle limited information retrieval tasks for which training data exists.

      The next challenge in my opinion is to create a task-oriented chatbot that can help users to actually solve real problems which may not be directly related to previously seen problems. Related to this, consider the problem of creating a chatbot to automate support for an entirely new product.

      I have played around a little in this space and feel certain that hybrid approaches will be necessary. For example, I created a car diagnosis/remediation chatbot driven by a Bayes net model of a car's subsystems. This actually showed signs of working - sadly I got distracted with other projects.
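
      A toy version of that hybrid idea can be sketched as exact inference by enumeration over a tiny network (the structure and probabilities here are invented for illustration, not the commenter's actual model): a dead battery explains both a no-start and dim lights, so observing both shifts the diagnosis sharply toward the battery.

```python
from itertools import product

# Hypothetical two-cause network:
#   DeadBattery -> NoStart, DeadBattery -> DimLights, BadStarter -> NoStart
P_dead_battery = 0.1
P_bad_starter = 0.05

def p_no_start(dead_batt, bad_starter):
    if dead_batt:
        return 0.95
    if bad_starter:
        return 0.9
    return 0.02

def p_dim_lights(dead_batt):
    return 0.9 if dead_batt else 0.05

# Inference by enumeration: P(DeadBattery | NoStart=True, DimLights=True).
num = den = 0.0
for batt, starter in product([True, False], repeat=2):
    prior = ((P_dead_battery if batt else 1 - P_dead_battery) *
             (P_bad_starter if starter else 1 - P_bad_starter))
    joint = prior * p_no_start(batt, starter) * p_dim_lights(batt)
    den += joint
    if batt:
        num += joint

print(f"P(dead battery | no start, dim lights) = {num / den:.2f}")
```

      A chatbot driven by a model like this can ask for exactly the observation that would most change the posterior, instead of discarding what the user has already said.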

      • Animats 1552 days ago
        I know; I've been playing around with "Rasa", a chatbot system based on Tensorflow. All the ML part does is match up canned answers with incoming questions. Someone has to provide all the answers and a few questions for each answer, then look at errors from user input and manually classify them for retraining. The rest of the system is just a template system for implementing phone trees.
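
        That match-up loop can be approximated in a few lines as nearest-neighbour matching over bags of words (a deliberately simplified stand-in for a learned intent classifier - this is not actual Rasa code, and the intents are invented):

```python
import math
from collections import Counter

# Canned intents: a few example questions per answer, exactly the data a
# human has to author by hand, as described above.
intents = {
    "hours":  (["when are you open", "what are your opening hours"],
               "We are open 9am to 5pm, Monday to Friday."),
    "refund": (["how do I get a refund", "can I return my order"],
               "Refunds can be requested from your order page."),
}

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question):
    q = bow(question)
    best, best_score = None, 0.0
    for _name, (examples, reply) in intents.items():
        score = max(cosine(q, bow(e)) for e in examples)
        if score > best_score:
            best, best_score = reply, score
    # Below a confidence floor, fall back -- the errors a human then has to
    # inspect and classify for retraining.
    return best if best_score > 0.3 else "Sorry, I didn't understand that."

print(answer("when are you open on friday"))
```

        Everything interesting - the canned answers, the example questions, the fallback handling - is authored by hand; the "ML" only scores the similarity.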

        Interestingly, MIT's START question-answering system is pretty good.[1] That project started in 1993, before the modern machine learning era, and it's more "traditional AI". Try it and comment.

        [1] http://start.csail.mit.edu/index.php

    • unishark 1552 days ago
      Don't know anything about the amazon robots specifically, but manufacturing robots commonly use computer vision methods which perform a lot better than 90 percent. I don't really agree with the claims of 99.9% accuracy and being generally superior to humans in this area. But in practical terms a vast number of new vision tasks are solved problems now, a really huge achievement.

      Though on the "curve fitting" versus "general intelligence" argument, I agree with you. The robots can still only do exactly what we tell them to do; we just don't need to be quite so exact in telling them how to do it. But for the most part, this is only gained by showing them very, very thoroughly how to do it, using tons of data.

  • sgt101 1553 days ago
    This is worth reading - a serious and interesting write-up.

    I'm concerned by the language around how the economy works here "that suggests that the market expects..." The market doesn't expect a thing - the market is made of a herd that flocks from shiny thing to shiny thing. It's not a rational system and modelling it as such is surely completely debunked at this point?

    Also the section on regulation is narrow; the responses cited - putting managers in, licensing a regulator to check outcomes - are weak and old-fashioned. Three fundamentals are not mentioned. First, an investigative and proselytizing enforcement agency, as in aerospace, that can identify specific routes to failure and then communicate them to the professional and business actors. Second, professions in the sense of proper engineers who are personally liable, insured, and required for the use of the technology (brew what you like in your room, but if you use it on people then prison beckons). Third, process and infrastructure that is required for use and that no professional would contemplate life without (like design drawings and stress analysis in engineering).

    The section on the political economy of AI is interesting; it put me in mind of the latest William Gibson book "Agency", in which there is a quote - something along the lines of "at the boundary of unauthorized military research and the most reckless kind of commercial use" - describing the origination of an AI. The framing and narrative of the book is much more plangent and potent than the literature of economic analysis; the data and logic of the economists so far are not as convincing as fiction and polemic. If I were an economist I would be concerned by that.

    • algo_trader 1553 days ago
      Has anyone read both Agency & The Peripheral? Which one is the lighter read?
      • sgt101 1553 days ago
        They are both pretty light and riproaring. Definitely novels of ideas not characters, fantastic language and clever plots.
  • ArtWomb 1553 days ago
    Well-sourced conference report. Actually saves me a lot of time. To summarize: it's still the Wild West in AI, but there is broad recognition that governance is essential.

    I'll just add one more report, as if the dozens already mentioned were not enough. It's from Berkeley's Center for Long-Term Cybersecurity, and it addresses the enormous challenge of securing AI systems from adversarial attack. A glimpse into the vortex of how the "industrialization of AI" creates a self-perpetuating, fractal-like cycle of eternal dependencies, requiring us to create ever stronger AI to protect and serve the AI on which our new engineering platforms will be founded.

    https://cltc.berkeley.edu/wp-content/uploads/2019/02/CLTC_Cu...

    It stands to reason then, the ultimate AI-mediated prediction problem is predicting the impacts of AI itself ;)

  • netcan 1553 days ago
    At least in my bubble, AI discussions have been going in predictable ways... and running aground in the same places.

    First, we try to build the (proverbial) foundation: What is AI? What is intelligence? Is it general? Lots of places to get stuck here. Can machines understand meaning? Is general intelligence statistical? Can it be?

    No real way of settling these, so we have poor foundations.

    Then we ask: What can it do? When? What will it do? Why? Can it automate driving? Other stuff? How big a deal is this economically? Will all cars be taxis?

    At this point, foundations crack. We're trying to predict the economic side-effects of a technology, its viability, timelines, regulation... and we're building these predictions on very abstract foundations. Obviously, it's all too squishy so we end up nowhere.

    Anyway, the effects of technology are very hard to predict... especially recent ones. Computerisation of offices has not measurably increased productivity since the 80s/90s[1], for example. A PC landed on every desk. Many more people work at a desk than before. What predictions would we have made in the 90s, when this was starting to look inevitable?

    [1] David Graeber, Tyler Cowen & others highlight this point. It's hard to define or measure, so most economists don't. But within wide margins of error, it does not seem to have.

    • ksec 1553 days ago
      >Computerisation of offices has not measurably increased productivity since the 80s/90s[1],

      As far as I can tell, it isn't AI, or any particular technology, that is failing to deliver productivity increases in business. When was the last time you saw a CRM / ERP replacement that produced any productivity increase? There seems to be a very clear divide between technology companies and non-tech companies: no one knows how best to integrate the two to maximise their potential. Only the ones with a foot on both sides (Amazon) seem to understand it.

      • netcan 1553 days ago
        I think it runs deeper than "can't figure out how to X."

        We're very good at measuring productivity when dealing with a factory or somesuch. A transport authority or a Facebook "business headquarters" doesn't interact with technology in the same ways.

        • ksec 1552 days ago
          >I think it runs deeper than "can't figure out how to X."

          Yes. I wish there were an in-depth article explaining some of these observations we see in real life.

  • trycrmr 1553 days ago
    Had hoped an article like this would shed some light on the environmental impacts of the additional compute required to build and maintain a feature backed by AI. My understanding is it takes significantly more compute + data, and therefore electricity, to build and maintain a feature backed by AI. As AI becomes increasingly accessible, particularly through offerings from cloud providers, the power consumption would increase faster than when folks were exclusively writing scripts/uploading binaries to process transactions. My hope is it's negligible. Haven't had the time to crunch the numbers to figure this out, as it's out of my daily lane.

    Maybe I skimmed the article too fast and missed it while enjoying my coffee and bagel sandwich. Let me know if that happened.

  • mistrial9 1552 days ago
    related to ide.mit.edu ?