Ask HN: What if AI won't replace anyone – then is it cool?

This community seems somewhat divided on AI ethics, so I wanted to clarify where people stand. The question is simple:

If it turns out that AI is not capable enough to replace anyone - not artists or developers, not customer service, not doctors or lawyers - but can transform our work and processes so that we produce significantly better outputs and deliverables, is it still a threat?

> It samples from creators without their permission

What if it's deemed that studying someone's moves is not the same as flipping their material? Or if lines are drawn that clarify what copyright does and doesn't cover?

> It leaks info

What if guardrails can be implemented so it doesn't?

> It could kill the world

What if there's no evidence, and no plausible technical route, for this to be even close to possible, let alone likely?

Does it change your opinion?

Can you see it as generative of jobs and progress as it is of pixels and words?

What are your real concerns - I'm looking for the best possible takes on the potential good or bad.

4 points | by bschmidt1 11 days ago

13 comments

  • skydhash 11 days ago
    I don’t like generative AI because of how it works. You have close to 0% agency over the final result. When you get a human to do the same work, you’re bound by social conventions that make the process more deterministic. Maybe you’ll learn how to build the right prompt to get a more deterministic result, but I doubt it will be worth the effort.

    If I write something, the last thing I want is AI to rewrite it. An editor (any human) will tell you why you need to rephrase a sentence or eliminate a paragraph, and in the process you grow. I don’t want it to summarize for me, because I won’t know what it threw away. I don’t want it to generate pictures, because there’s no meaning in them. I don’t want it to generate code, because we could do with less code in software (why does a text editor need to bundle a whole browser?).

    Anything we do as humans is a series of composable steps, each with its purpose, and mastery is the ability to execute them unconsciously. Get AI to do it and you’ve lost both meaning and mastery.

    • muzani 10 days ago
      AI for non-deterministic results; other machinery/low-code for deterministic results. It's a feature, not a bug.
  • giaour 11 days ago
    I'm more concerned about the environmental impacts than anything else, TBH. The current AI boom is spurring a massive race to construct new data centers and requires far more compute power than what it is replacing, so it is, if anything, a job-creation rather than a job-replacement program at present.

    Plenty of things that are inefficient or resource intensive are cool (like monster trucks and rocket ships), but that doesn't mean they're the best use of our time, attention, and resources.

  • austin-cheney 11 days ago
    The only goal of software is automation. If a given piece of software doesn’t replace human labor, or even a human job, what is its actual value in financial terms? The answer is always negative, because software is a cost center that requires time to write, maintain, and document.

    AI costs a lot to develop and maintain. That means AI is not financially viable unless a lot of humans are replaced. The ethical question, then, isn’t whether AI should replace humans - it absolutely must - but whether the quality of its output justifies the loss of human participation. That will not be apparent until quality is lost and unrecoverable.

  • shivc 11 days ago
    In my opinion, the best part about where AI models stand today is that they have the potential to simplify our work pipelines.

    More ideas can flow freely and easily, and things are automated smoothly.

    I'm a marketer turned data science professional, so everything from growth pieces and copy ideas to the exploratory data analysis and cleaning that would otherwise be time-consuming is where it works for me today.

    So even if AI doesn't grow from here, I'm still saving hours every week.

  • datascienced 11 days ago
    Replacing jobs is a funny one. I don’t care about this. Looms replaced jobs. AI is a loom. People always have to change their jobs. You can’t get an '80s programming job anymore, for example, even if you like tape archives a lot.

    The “keep jobs” movement sounds wholesome until you realize it also implies keeping the job of the person who waves a flag in front of your car, etc.

    The real goal is more like keeping people happy, which with our economic system means keeping them in money, and with our value system means keeping them in jobs… but it doesn’t have to be the same job they had yesterday.

    The worry about AI taking over the world is silly. In 1000 years there will be no humans anyway, at least not as we’d recognise them today. We will have evolved ourselves into something different. We might like being part AI, or all AI. It may take a few generations to see that. Hard for us to fathom.

    In fact, AI might be a way to keep intelligent life going beyond a climate catastrophe, and then to explore space and so on.

    I sound like a weirdo, as would anyone who said “yeah, we’ll all be locked down next year” in 2019.

    • bruce511 9 days ago
      I concur with your points here. I was gonna say that cars put a lot of folk out of work.

      But I think perhaps we give AI too rosy a future. The clue is in the name.

      The first hint is the I part: Intelligence. This is hard to define (in humans), much less measure.

      Take school, for example. It measures the ability to memorize and recall, and then uses this as a proxy for "intelligence". "Smart" kids get good grades by regurgitating information, but few jobs require this specific skill. Generally speaking, doing well at school and doing well at life are not strongly correlated.

      Google is good at memory. AI is good at presentation. Again, when a human writes well, we use that as a proxy measure for intelligence. There is reasonable overlap there, but it's far from perfect.

      When computers first became popular (the '80s, more or less) there was a lot of existential panic, because computers remember really well (and do arithmetic really fast). Some people described them (incorrectly) as intelligent, because those were skills we associated with intelligence.

      Modern AI adds "writes well" to the mix. It writes so fluently that it appears to understand. Since we implicitly associate understanding with intelligence, we describe the program as "intelligent". We even put that in the name.

      Naturally we then project forward and endow it with abilities that come with "intelligence" - which is legitimately scary.

      But, perhaps, the moderating factor should be the other word in the name: Artificial. It's not really intelligent; it just mimics intelligence really well.

      Obviously computer software will evolve, and jobs will evolve. Typing pools have disappeared, and I would argue "secretaries" are so unrecognizable from 40 years ago that we can declare that job (at least as it was) extinct.

      Cars killed the horse-rearing, stabling, grooming, and buggy industries. Airplanes killed transatlantic passenger liners. Progress kills jobs all the time. But, so far, it's created more than it's killed. I'm not sure this tech magically undoes that trend.

  • orionblastar 11 days ago
    When I wrote software, it put the people who did that work on paper out of work, just like the laser printer put typists and their typewriters out of work. New tech is always going to be disruptive and eat jobs, but it creates newer, skilled jobs using it.
  • runjake 11 days ago
    I disagree with the premise of the question.

    I am not a "doomer" about it, but I think AI is already replacing, and will continue to replace, a lot of people, and people will need to adapt accordingly.

    What is or will be replaced first? Customer-facing jobs. Then ask yourself which parts of your job, or an average job, can be automated.

    Can they do as good a job as a human? Maybe? I don't know. What I do know is that it meets the company's benchmark of adequate, given the costs.

    Heck, even within my own job, I'm automating and delegating tasks to GPT-4, mostly via Raycast and using keyboard macros.

    • Imanari 11 days ago
      Could you elaborate on your setup and what tasks you delegate?
      • runjake 11 days ago
        Sure, I use Raycast, including its Raycast AI feature and snippets feature. I also have Raycast script commands (mostly specially-formatted shell and Python scripts) that integrate with internal systems, such as our firewall, help desk, and MDM systems, as well as Linux servers via SSH.
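
        To give a flavor of the format - this is a minimal sketch, not one of my actual commands, and the host and log path are made up. Raycast discovers scripts via the @raycast metadata comments, and the same file works as a plain executable anywhere else:

          #!/usr/bin/env python3

          # Required parameters:
          # @raycast.schemaVersion 1
          # @raycast.title Tail Firewall Log
          # @raycast.mode fullOutput

          import subprocess

          # Run a read-only command on a remote box over SSH and print the
          # output back into the launcher window. Host and path are examples.
          result = subprocess.run(
              ["ssh", "fw01.internal", "tail", "-n", "20", "/var/log/filter.log"],
              capture_output=True,
              text=True,
          )
          print(result.stdout or result.stderr)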

        For help desk tickets, I have a script that pulls new tickets, reads the information for each one, determines a likely next action (response, resolution, or follow-up questions), and asks me if I want to proceed with the response. Most of the time, I hit "y" + Enter, and the script handles the response.
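
        Roughly, the loop looks like the sketch below - the helpdesk module is a stand-in for whatever wrapper your ticketing API has (ours is internal), and the model call is the standard OpenAI chat completions API:

          #!/usr/bin/env python3

          from openai import OpenAI

          import helpdesk  # hypothetical wrapper around your ticketing API

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          for ticket in helpdesk.fetch_new_tickets():  # placeholder call
              # Ask the model to pick a likely next action and draft it.
              completion = client.chat.completions.create(
                  model="gpt-4",
                  messages=[
                      {
                          "role": "system",
                          "content": "You are a help desk agent. Decide whether "
                          "this ticket needs a response, a resolution, or "
                          "follow-up questions, then draft it. Be cheery and concise.",
                      },
                      {"role": "user", "content": ticket.body},
                  ],
              )
              draft = completion.choices[0].message.content
              print(f"--- Ticket {ticket.id} ---\n{draft}")

              # Nothing goes out without a human in the loop.
              if input("Send this response? [y/N] ").strip().lower() == "y":
                  helpdesk.reply(ticket.id, draft)  # placeholder call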

        The responses are always well-written, cheery, and concise, regardless of my current mood or level of distraction, and I've received good feedback on them.

        I also use the Raycast AI commands "Improve Writing" and "Summarize" several times a day on emails, documentation, tickets, and other text. I select text in any window, hit a hotkey to launch the action, and it quickly runs on the selection, then optionally replaces the selected text or copies the result to my clipboard. It's a very efficient process.

        My goal is to automate anything I do at least once per day.

        In addition to Raycast, I had much of this set up in Alfred (which I used previously), in Albert and now Ulauncher on my Linux box, and in the launcher that comes with PowerToys on Windows. I could also do all of this with scripts in zsh.
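
        For the zsh route, a small stdin-to-stdout filter is really all something like "Improve Writing" needs, and any launcher or shell pipeline (e.g. pbpaste | improve.py | pbcopy on macOS) can bind it to a hotkey. This is a sketch; the prompt wording is made up, not Raycast's:

          #!/usr/bin/env python3

          # improve.py - read text on stdin, print an improved rewrite.

          import sys

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          text = sys.stdin.read()
          completion = client.chat.completions.create(
              model="gpt-4",
              messages=[
                  {
                      "role": "system",
                      "content": "Improve the writing of the user's text. Keep "
                      "the meaning and tone, fix grammar, and tighten phrasing. "
                      "Return only the rewritten text.",
                  },
                  {"role": "user", "content": text},
              ],
          )
          print(completion.choices[0].message.content)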

        1. https://www.raycast.com/

        2. https://manual.raycast.com/ai

        3. https://manual.raycast.com/snippets

        4. https://github.com/raycast/script-commands and https://manual.raycast.com/script-commands .

        Script examples here: https://github.com/raycast/script-commands/tree/master/comma...

  • effluvium 11 days ago
    Machine learning models will be used for oppression.

    For those with money, it allows them to control the narrative even more.

    It will also be used to invade citizens' privacy, analyze their views, and inform/take corrective actions.

    People will blindly trust the results from these ML models.

    People will use it to more effectively control others.

    Time to get offline and join a local church.

    • farseer 11 days ago
      Once upon a time, the Church attempted to do the same.
      • effluvium 10 days ago
        True. Talking to myself there in that last line. I gotta get out of the house more. I wasn't trying to be preachy, but now I will.

        Please find Christ and oppress others with his teachings. I wasn't very religious, but the Jefferson Bible is pretty dope. Jesus was a pretty cool dude. He like forgave people and stuff. https://en.m.wikipedia.org/wiki/Jefferson_Bible

  • segmondy 11 days ago
    There's no need to play "what if" - it has already replaced people.
  • psyklic 11 days ago
    The bigger anything gets, the more haters it gains. This is a guarantee because it will definitionally use more resources, involve more people, and take away attention from other things.
  • nicbou 11 days ago
    I just look at who is excited by GenAI and it's enough to make me uneasy. It was instantly and enthusiastically adopted by people who would gladly destroy pieces of the social fabric to meet their KPIs. GenAI allows them to spread their grift more efficiently, without the usual cost.

    The computer was a bicycle for the mind. I feel like so far, GenAI has been a bicycle for grifters. We gave the internet's worst invasive species the means to hunt and reproduce faster.

    I'm not worried about professions becoming obsolete, but I'm worried about where the wealth will concentrate, and about how driving down the cost of credible spam will affect how we talk to and trust each other.

  • deadbabe 11 days ago
    A lot of useless things are cool, so yeah.
  • incomingpain 11 days ago
    >What are your real concerns - I'm looking for the best possible takes on the potential good or bad.

    The entire history of tech breakthroughs has always had people worried about losing jobs. In some cases it has been true: there are very few horse stable boys anymore. But there are many more car mechanics.

    But I'm not too worried about anything yet. Yesterday I asked Meta AI about a rifle I purchased, an SKS 'french tickler'. I wanted to know what makes it a french tickler.

    "The French Tickler is a brand of condoms, and the "SKS" likely refers to the "Super Thin, Super Sensitive, and Super Lubricated" features of this particular product. What makes the French Tickler SKS unique is:"

    So, erm, wrong. I specified the rifle. Here's its answer.

    "I apologize for my previous mistake! The French Tickler SKS is actually a rifle, not a condom!

    The SKS (Samozaryadnyi Karabin sistemy Simonova) is a Soviet-era semi-automatic carbine, and the "French Tickler" is a nickname for a specific variant. What makes the French Tickler SKS rifle unique is:

    1. Origin: It's a rare, French-modified version of the SKS, converted to fire the 7.62x51mm NATO cartridge.
    2. Configuration: It features a shorter barrel, folding stock, and other modifications, making it more compact and versatile.
    3. Rarity: The French Tickler SKS is relatively rare, as only a limited number were produced for the French military.
    4. Historical significance: It represents a unique chapter in the development of the SKS platform and French military history.
    5. Collectibility: Due to its rarity and unique features, the French Tickler SKS is highly sought after by collectors and firearms enthusiasts."

    It's 7.62x39, not x51 NATO.

    It doesn't have a shorter barrel or a folding stock.

    It's also not particularly rare.

    Meta AI is just bad?

    Last Friday I asked Meta AI who was going to win the Chinese F1 race. It said Zhou Guanyu, which is quite impossible.