Digital dystopia: how algorithms punish the poor

(theguardian.com)

123 points | by pseudolus 1648 days ago

12 comments

  • benjaminjosephw 1648 days ago
    > Automating Poverty will run all week. If you have a story to tell about being on the receiving end of the new digital dystopia, email ed.pilkington@theguardian.com

    This is dangerous journalism. The reporter has clearly taken a stance and is not attempting to open up a nuanced and careful discussion of the factors at play here but is instead inciting fear and mistrust of technological progress in the public sphere.

    I can understand the fear and concern around the changing digital landscapes and the potential impact on different parts of society. What's needed is a public discourse about these things that doesn't conflate the issues into one single problem of a "digital dystopia". Are we really talking about a "flawed algorithm", for example, or simply the encoding of a badly designed government process?

    Technology _is_ changing how poor people interact with the state and this is an important topic to discuss openly and broadly. We'll never have the quality of discussion that's needed while there are fear mongering reporters like this subverting that conversation to the level of luddism.

    • Sileni 1648 days ago
      I go back and forth on the value of a "devil's advocate". I hope someone else will chime in to help me out if I don't explain this eloquently.

      This is a case where I believe the topic won't be properly discussed unless someone is willing to take a hard stance on the negative side. Someone has to point out all the potential failings of the system that so many people are pouring their lives into building. Even if that person ends up taking a much harder stance than they actually believe in, and becoming a little too disconnected from reality.

      The situations described in the article are horrifying, and sound an awful lot like what you might expect from software bugs in their early stages. That wouldn't be a problem if there were an appropriate human force behind the systems going online, but we've all seen stakeholders push systems into production before they're ready, without adequate support.

      You're probably on the right track to call it a "badly designed government process", but I'd wager the human element is what softened the blow from that bureaucracy. Removing it shifts the burden of proof from the case worker to the support seeker. It changes the conversation from "You think you deserve benefits? Let's look over your evidence" to "You've already been rejected, why should we support you?". When you're talking about people's survival, that's a significant difference.

      • whatshisface 1648 days ago
        >Even if that person ends up taking a much harder stance than they actually believe in, and becoming a little too disconnected from reality.

        Sure, that will get the issues discussed - discussed and then promptly rejected. A bad arguer arguing for the right side can do a lot of damage. For example, imagine how successful climate change skeptics would be if someone got into the news by saying that New York was a month away from flooding.

      • antisthenes 1647 days ago
        Devil's advocate only works if the person is truly informed about the underlying science of both viewpoints.

        Most often the people who call themselves that aren't even competent enough to explain their own viewpoint.

    • 0-_-0 1648 days ago
      In addition, there is nothing in the article to support the point it's trying to make. Not a single number to compare against. It says "new thing X is bad", when it should be saying "new thing X is worse than old thing Y, and here is how that was measured". Otherwise, how should I know whether a similar article could have been written about how "old thing Y is bad"? Surely humans making decisions about welfare are also error-prone, and much more resource-intensive, and the job of a journalist would be to make the comparison.

      I'm interested in facts, not opinions completely lacking in (and separated from) facts.

      • shadowgovt 1648 days ago
        It looks like this article is setting up a week of reporting, and I assume the facts will come in subsequent stories. It's a weird format, though; not one I'm accustomed to in a world of self-contained news reports.
    • dictum 1648 days ago
      Sometimes the desired neutrality, put into practice, is so dull that it fails to reach advocates for either position.

      In practice, a biased statement can bring better counterarguments: https://meta.wikimedia.org/wiki/Cunningham%27s_Law

      • benjaminjosephw 1648 days ago
        > "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."

        So true! My issue here isn't so much the bias as the poor reporting on what the reporter means by terms like "automating poverty". The differences between algorithms, AI, and the organizational processes behind them are significant to the discussion. My view is that tech reporting in the media tends to be so imprecise and unfocused that the quality of discussion suffers as a result.

      • im3w1l 1648 days ago
        That's only true where and when both sides are able to safely speak and be heard. Hence why it needs the qualifier "on the internet".

        In meatspace, and even on many parts of the internet, that just isn't true.

        If a newspaper advocates for the devil, that's the only side people will hear.

    • ForHackernews 1648 days ago
      > Are we really talking about a "flawed algorithm", for example, or simply the encoding of a badly designed government process?

      Isn't that the same thing? If you take a flawed algorithm haphazardly implemented through government process and reify it in code, that doesn't fix any of the problems with it, and it may well make the situation worse by removing flexibility or the possibility for human intervention.

      • benjaminjosephw 1648 days ago
        This is true, and I can see how technology may make these kinds of problems worse, but the underlying problem isn't the algorithm itself. The real problem is the process.

        These are two distinct things. Understanding how technology exacerbates the problem is one thing. Attacking technology as the source of the problem is another.

        • zAy0LfpBZLC8mAC 1648 days ago
          > but the underlying problem isn't the algorithm itself. The real problem is the process.

          So, the underlying problem is the algorithm, not the algorithm? That totally makes sense!

          Also, if there are two contributing factors to a problem, and removing either would solve the problem, it is completely useless to arbitrarily declare that one of them is "the real problem".

    • xboxnolifes 1648 days ago
      One person need not be both sides of the argument. One journalist can bring out as strong of an argument as possible from one side, while another does the other.
      • DarkWiiPlayer 1648 days ago
        There's no argument at all; it's just unproven opinions and a select few instances of systems failing.
    • 6gvONxR4sf7o 1647 days ago
      >Are we really talking about a "flawed algorithm", for example, or simply the encoding of a badly designed government process?

      There's a massive difference between people carrying out directions and machines carrying out the codification of those same directions. The more rigidly codified the system, the shittier life is for nonstandard cases. Machines are the ultimate rigid system.

      Consider how much shittier someone's life is when a corner case happens and there's no one to help them resolve it. E.g. https://gazette.com/news/born-in-the-usa-without-a-shred-of-...

    • kenny87 1648 days ago
      > We'll never have the quality of discussion that's needed while there are fear mongering reporters like this subverting that conversation to the level of luddism.

      We can, and it's actually happening. So-called "dangerous" journalism and "quality" discussion are not mutually exclusive. The article actually mentions one such discussion happening at the level of the UN: an inquiry headed by Special Rapporteur Philip Alston.

    • coding123 1648 days ago
      You're basically criticising the author and then taking credit for the idea that we need public discourse.
      • benjaminjosephw 1648 days ago
        I agree with the author that we need a public discourse. I'm pointing out that starting the discourse with such a biased and undisciplined approach is counter-productive.
  • iliketosleep 1648 days ago
    It's a bit like all the arbitrary account terminations we hear about with Google: a machine makes a decision, and the user has virtually no recourse. When governments apply this same concept to welfare, it does indeed create a dystopian scenario where people are left to starve because some algorithm got it wrong.

    In essence, it's about cost cutting. Governments seem to think that machines can replace humans in making decisions where there is nuance involved and the stakes are extremely high. It is frightening and should not be accepted. The furthest the automation should go is to flag irregularities for review. Instead, the machines are given far too much autonomy, with the robodebt collection being a scary example.

    • hcarvalhoalves 1647 days ago
      A human isn't necessarily less arbitrary; in fact, humans can be much worse.

      You (and the Guardian, it seems) are falling for the fallacy of the danger of automation - "it's acceptable for bad decisions to be made as long as it's been decided by a human".

      Fear mongering is not how you prove or disprove automation; you should pick a useful metric (e.g. are people starving?) and benchmark against that - the same way you would measure whether a department comprised of humans was doing the same job.

      • Majromax 1647 days ago
        > You (and the Guardian, it seems) are falling for the fallacy of the danger of automation - "it's acceptable for bad decisions to be made as long as it's been decided by a human"

        Not necessarily. Even bureaucratic systems tend to recognize that humans make imperfect decisions -- we're susceptible to everything from bribery to exhaustion. These systems then tend to not treat first decisions as final, and they allow an opportunity to appeal.

        But what happens when the decision is made by an algorithm, especially one that wasn't built to give an explicable reason?

        > Fear mongering is not how you prove or disprove automation; you should pick a useful metric (e.g. are people starving?)

        That's not a good metric.

        Suppose the algorithm were in fact perfect and nobody starved -- except you. For some reason, it couldn't recognize your ID (and only your ID). By the "are people starving" metric, the algorithm would be doing a fantastic job compared to an imperfect human system, but it would also be profoundly unjust.

        • hcarvalhoalves 1647 days ago
          > Even bureaucratic systems tend to recognize that humans make imperfect decisions -- we're susceptible to everything from bribery to exhaustion. These systems then tend to not treat first decisions as final, and they allow an opportunity to appeal.

          So the problem is not automation; it's lack of recourse.

          Lack of recourse is a real problem that already exists today. People do go unattended because they don't know how to navigate bureaucratic structures and don't have money to pay a lawyer. It also most affects the countries the Guardian article targets as "automating poverty".

          > For some reason, it couldn't recognize your ID (and only your ID). By the "are people starving" metric, the algorithm

          This is not an algorithmic issue.

          If you lost your ID, you wouldn't get your benefit even if you walked into a social security branch either.

          • Faark 1647 days ago
            > So the problem is not automation; it's lack of recourse.

            The problem is automation, if it exacerbates the lack of recourse. It means that, without care, adding automation would screw up most such systems. Hence articles like this, to raise awareness and make sure automation is done well. I'm disappointed by what a hard time many on HN seem to have accepting the need for caution. Especially with government, where you cannot just find or build a better alternative on the open market.

            > If you lost your ID, you wouldn't get your benefit even if you walked into a social security branch either.

            Is that actually the case in the US? The programs I've come in contact with (run by the church and heavily supported by the German government) certainly did not require one. I was never at a soup kitchen, but I would be surprised if they asked for IDs.

            • GreaterFool 1647 days ago
              I don't think you need to convince anyone here that automated systems fail. I think many of us have been affected by such systems: accounts blocked for no reason by some glitch, etc.

              The problem is not the automated systems. The problem is zero recourse.

              If nothing were automated, then we'd have to wait hours on the phone to get to an actual human to do something trivial. Or make an appointment, travel somewhere, and spend a few hours to get something simple done.

              In an automated world most of the time things work out.

              The situation becomes despicable when there's no one to complain to and no way to seek redress.

              In the tech world it has become pretty common to seek support by Twitter-shaming. Often there's no other route.

              Still, it's not the automation that is at fault. It's the greedy humans. They could save 80% of the cost while keeping everyone happy, but they'll choose to save 90% of the cost at the price of making many miserable.

              Can't change human nature?

            • jpindar 1647 days ago
              In the US, "social security branch" is not a soup kitchen. It's a branch office of the Social Security Administration, a government department which provides monetary benefits. They can't sign you up for benefits if you don't have an ID.

              However, the human beings who work there could give you advice on where and how to get an ID - which might differ depending on your individual circumstances. A computer probably wouldn't give you any more information than you could read on the internet, and if that info doesn't make sense or doesn't apply to you, you're going to have trouble finding better information.

      • kazagistar 1647 days ago
        It's a matter of incentives. Without automation, someone can be on the hook if things go wrong, so they have an incentive to be right, even if they are flawed and make lots of mistakes. With automation you spread responsibility out, and things aren't so clear.

        Automation can be better, more accurate, and in general kinder, but it can also be more profitable, efficient, uncaring, and cruel, and when that's pointed out, the people at fault can avoid blame far more easily.

    • Clubber 1647 days ago
      One of the problems with the nature of macroeconomic / statistical thinking is that it removes any humanity from the decision, because we focus on just the numbers. Anytime we discuss policy as citizens, we really need to keep in mind that every 1 in a number represents an actual individual with an actual family and life and life experiences.

      An example is raising the social security age from 65 to 67. How many lives did that simple change affect? What repercussions did it have on each individual's family? We don't really think about it; we just think about its effects on the bottom line.

      "One death is a tragedy; a million is a statistic" -Joseph Stalin (1879-1953).

  • DarkWiiPlayer 1648 days ago
    This is just fearmongering; misrepresenting and oversimplifying the truth to the point where all meaning is lost.

    - There's no distinction between the different types of systems involved.

    - Human problems are blamed on the system (a lack of human support isn't the program's fault)

    - Individual stories are used not to underline broader evidence, but are meant to be the evidence ("a man died in India, therefore all computers are evil").

    - There are constant generalizations. It's not "some algorithms", it's "the algorithms" that are being blamed. (Ctrl+F shows the word "some" appears just once in the text, FFS.)

    I don't know if the author is satisfied with this article; if they are, they should be fired. Misinformation like this is ammunition for those crying "fake news" to further destabilize the political landscape, which, in turn, harms the reputation of real journalism, the kind that our societies rely on to maintain a functioning democracy.

    • zAy0LfpBZLC8mAC 1648 days ago
      > - Human problems are blamed on the system (a lack of human support isn't the program's fault)

      Oh, you can sue programs now? I wasn't aware of that!

      I mean ... seriously? What are you even trying to argue there?!

  • wiglaf1979 1648 days ago
    Two books dive deep into this subject. Unlike this article, they have a bit more meat to their research. I would recommend giving them a read before getting your pitchforks out over this article alone. It's bad and needs to be fixed but a measured response instead of a purely reactionary response will just make things worse.

    Weapons of Math Destruction https://www.goodreads.com/book/show/28186015-weapons-of-math...

    Automating Inequality https://www.goodreads.com/en/book/show/34964830-automating-i...

    • bigwavedave 1648 days ago
      > a measured response instead of a purely reactionary response will just make things worse.

      I'm not sure I understand; would you clarify this for me? It's been my experience that pure reaction is usually bad for discourse, so I'd like to know more.

      • Pryde 1647 days ago
        Going from the context of the comment, I believe parent meant something along the lines of:

        "we need a measured response instead of a purely reactionary response which will just make things work"

        At least, that was my first impression; I could definitely be interpreting the comment incorrectly.

  • shadowgovt 1648 days ago
    There's an interesting assumption baked into this entire approach, which is that the human-evaluated system is better. I'll be interested to see if that's an artifact of this summary article or is reflected in the underlying reporting coming up, because it's wise to not leave that assumption unchallenged. FTA:

    > Instead of talking to a caseworker who personally assesses your needs, you now are channeled online where predictive analytics will assign you a future risk score and an algorithm decide your fate.

    If the caseworker is racist, this isn't a worse scenario... unless such racism has also been baked into the risk assessment algorithm, which is definitely a possibility. But at least from my personal experience, I trust our ability to identify and de-train racism from assessment networks more than I do our ability to consistently identify and hire non-racist caseworkers. Deprogramming racism out of humans is hard (and even if you succeed for one, you can't copy their state-vector into the brains of their peers).

    • chongli 1648 days ago
      I recall reading a story in the book Weapons of Math Destruction [1] about a system used to assess people's recidivism risk, which judges were relying on for sentencing hearings. The problem is that the system showed a clear racial disparity in its risk scores, yet at the same time it was more accurate.

      That means that if we try to tune the system to make it less racist, we'll be making it less accurate (the toy sketch below illustrates why). In essence, the system isn't really racist; it reflects the racism in society that is producing these outcomes. Ultimately, the problem is that putting someone in jail increases the likelihood that they, their friends, and their family will reoffend. It's a vicious cycle, and it doesn't appear to have any technical solution.
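
      A toy sketch of why (with made-up base rates, mine and not the book's): when two groups reoffend at different rates, the accuracy-maximizing rule flags the groups at different rates, and any rule forced to flag them equally scores worse.

        # Assumed for illustration only: 30% of group A reoffend, 60% of group B.
        base_rates = {"A": 0.30, "B": 0.60}

        def accuracy(flag):
            """Average accuracy of a per-group flag-everyone/flag-no-one
            rule, assuming equal group sizes."""
            total = 0.0
            for group, p in base_rates.items():
                # Flagging everyone is right for the p who reoffend;
                # flagging no one is right for the (1 - p) who don't.
                total += p if flag[group] else (1 - p)
            return total / len(base_rates)

        print(accuracy({"A": False, "B": True}))   # 0.65, but flags only group B
        print(accuracy({"A": False, "B": False}))  # 0.55, group-blind
        print(accuracy({"A": True, "B": True}))    # 0.45, group-blind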

      [1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction

      • Nasrudith 1647 days ago
        It brings to mind a sarcastic way to get perfect prediction of patient outcomes. Step one is to decapitate the patient.

        Predictability/accuracy isn't the important part except as a means to the end - the outcome.

      • vokep 1645 days ago
        That system was the result of asking the system one question, "How likely is this person to be arrested?", and taking the answer as an answer to an entirely different question: "How likely is this person to commit the crime again?"
    • fwip 1648 days ago
      One benefit is that repeated visits to a government office generally result in you meeting a different person, or the same person but in a good mood this time.

      Another thing about racism in humans is that it's easier to detect before it's codified into a technological black box than after.

    • KineticLensman 1647 days ago
      Compared with human case workers, algorithms apply any encoded incompetence and/or bias at potentially massive scale.
      • shadowgovt 1647 days ago
        An army of social workers raised in the same culture will on average do the same, and be much harder to correct once bias is identified.
        • Faark 1647 days ago
          > An army of social workers raised in the same culture will on average do the same,

          In a society where it is normalized, sure. But can you imagine what systems such a society would come up with? So far we are only talking about shitty automated systems accidentally producing unwanted outcomes by e.g. reflecting subtle / unwitting / intolerable issues.

          > and be much harder to correct once bias is identified

          At the end of the day, I'd expect most social workers to want to help and to do a good job of it. AI hasn't reached that level yet. So I'd expect a much better baseline from humans, with "fixing" outliers being much harder, yes.

  • chooseaname 1648 days ago
    This article is blaming technology. This is very dangerous because it diverts the blame from where it should lie: on the governments making bad decisions.
    • chongli 1648 days ago
      The whole point of switching to technology for systems like this is to block people from holding you accountable. It puts the burden of proof on the accuser to show that an inscrutable software black box is biased.

      It very much resembles the shift from human tech support operators to software-based robo operators. In effect, it’s a wall to keep you away.

      The apotheosis of this concept is Google, a company that goes to ridiculous lengths to stop people from being able to contact a human being that works there, unless you’ve paid them money for the privilege.

    • raxxorrax 1648 days ago
      Honestly, I don't think the article does that. But I don't think it is helpful to forgo the discussion because it might feel a bit alarmist.

      If you look at modern enterprises, digital technology is most often used in the most unimaginative way possible: for surveillance of workers. KPIs and rationalization are the keywords here.

      That has made working profoundly worse. Until we see more constructive deployments of information technology, I think the recoil at accusations like this should be more tempered.

      And yes, there is a certain subjugation to data. In the health care sector, nurses get a fixed number of minutes to be done with a patient, based on collected data that allegedly gave us the insight to make that determination.

      It should be the other way around. Managers should fear that digital technology surpasses their abilities to efficiently coordinate tasks and give nurses information about the most urgent cases. Digital technology has failed spectacularly to achieve any of that.

    • diffeomorphism 1648 days ago
      The article is saying the opposite: that diverting the blame like this is a big problem and that we need accountability.

      See the last few paragraphs.

    • danmg 1648 days ago
      "I was just following orders!"
  • 4ndr3vv 1647 days ago
    Many here seem to cite this article as being low on facts and high on emotion.

    It is worth noting that this is merely the introductory piece of a series of articles that the Guardian seems to be running this week [1].

    Its intention _is_ to be brief.

    [1]https://www.theguardian.com/technology/series/automating-pov...

  • idl3Y 1648 days ago
    This article misses the point; it misrepresents and oversimplifies. It's not denouncing the culprits, it's dancing to their tune. This is what they want: "blame the algorithm, forget the real culprits".
  • vkaku 1648 days ago
    It tells you that a wrongly applied system can cause damage.

    With respect to Aadhar in India, I know of the anti-pattern where deduplication cut off many undeserving people who were receiving 'welfare' payouts from the state.

    Is this to say that either of these approaches is right?

    No. Get the skunk out of politics, and such things won't happen.

    • firasd 1648 days ago
      I'm not sure what welfare payouts you're referring to; the Indian state mostly assists via subsidies rather than cash. Lately they have moved to direct benefits transfer (ie a payment into a bank account) for the cooking gas subsidy, which is an anti-corruption move meant to route around intermediaries who divert subsidized cylinders and sell them at market rate.

      The problem is that Aadhar (introduced initially for subsidies) has become a biometric ID system to link more and more of a person's life to this government ID. A few weeks ago a child died of rabies and the first excuse the hospital gave to delay treatment was that they wanted her Aadhar: https://timesofindia.indiatimes.com/city/agra/rabies-infecte...

      I am disappointed whenever I see Bill Gates mindlessly applauding Aadhar. It is easy to side with technocratic ideas from just reading headlines, but if a biometric surveillance state is such a great idea he should have advocated for it in the US first. (Another technocratic, authoritarian move he initially made some positive murmurs about was demonetization, which now, three years later, has proven to have had a disastrous effect on India's economy.)

      • kristianc 1648 days ago
        > I am disappointed whenever I see Bill Gates mindlessly applauding Aadhar. It is easy to side with technocratic ideas from just reading headlines, but if a biometric surveillance state is such a great idea he should have advocated for it in the US first.

        I agree with this. It’s not acceptable to use the developing world as a staging box for ideas and policies that are, at best, in beta and likely to have bugs.

        • whatshisface 1648 days ago
          India is a democracy that votes for its own laws, nobody is using it as a "staging box."
      • vkaku 1648 days ago
        Here's a direct example of what happened after Aadhar came into play:

        https://www.livemint.com/Money/aJ6VPyH94gNVwm8alEs87L/11-lak...

        So there were roughly 1.1m duplicate tax numbers, which were cut off from claiming tax savings of roughly $4,000+ a year.

        Like I said, it's not a question of what it is, but how it's used that matters.

        When people (inside and outside government) treat social security schemes and taxpayer money as profiteering schemes, bad things will happen.

  • golergka 1648 days ago
    The first six paragraphs offer no facts but are heavy with emotionally loaded statements. Even if the whole article after that is completely true, it's impossible not to perceive this writing style as manipulative propaganda.
    • efa 1648 days ago
      I hate to say it, but I agree. Flashy phrases like "automating poverty" or "weaponized through technology" just distract me and make me feel like the author is trying too hard to present the horror of technology. The article was very weak on content. For example, the anecdote about the man who died: he started getting thinner, and he died. Is there nothing else to add? Did he try to contact someone? For what period of time did he have no benefits?

      There is no surprise here. Every aspect of life will become more automated and reliant on technology. Of course anything involving human safety (healthcare, benefits) has to account for possible problems in the system. There has to be some type of support where you can talk to a human. This is something that should be debated. The author had plenty of provocative information without resorting to this writing style.

    • zwkrt 1648 days ago
      Reporting can be emotional and true. Journalism and activism can’t be separated.
      • dexen 1648 days ago
        The article belongs in an Opinion section, not in a (Technology) News section.

        >Journalism and activism can’t be separated.

        They should and they can. Either have a separate publication, or at the very least have a separate, clearly delineated "Opinion" section. Otherwise you run the risk of media trust becoming very low, and thus media losing its ability to guard the republic (or the democracy).

        Consider a similar assertion - "journalism and advertising can't be separated" - it's obviously false. Granted, there is some undue influence from advertisers to be expected. However this is considered a negative that should be avoided. And indeed the good practice is to have a "firewall" between the advertising department, and the news desk, in any publication.

        • kristianc 1648 days ago
          > They should and they can. Either have a separate publication, or at the very least have a separate, clearly delineated "Opinion" section. Otherwise you run the risk of media trust becoming very low, and thus media losing its ability to guard the republic (or the democracy).

          Practically it is impossible to separate the two. Even news reporting doesn’t usually come free of editorializing. There are places where you can go to get ‘just the facts ma’am’ but newspapers have never and will never be those places.

          • shadowgovt 1648 days ago
            I agree that the mere act of choosing to observe or report something is, in some sense, activism, at least in that it indicates the author thinks the topic has some merit. But it's a sliding scale.

            (1) And an author can identify words designed to elicit emotional response and work to minimize them in their writing if they want to focus on the facts vs. their interpretation of the facts.

            (2) Listen friend, only an idiot doesn't understand words and phrases have emotional tone, and an author chooses whether they want to piss you off or stay out of your emotions when they choose their words.

            ... these two sentences have the same conceptual content.

          • meerita 1648 days ago
            Just because someone doesn't do it doesn't mean you cannot separate them. There are plenty of objective publications out there, and of course, if you rely on gossip magazines and yellow newspapers, you may never get objective journalism.
          • raxxorrax 1648 days ago
            It is impossible to have perfect knowledge, but journalists and authors in general still write articles.

            If you make an effort to report neutrally, your readers will pick up on that.

            I agree that media companies, which are part of a large multi-billion-dollar industry, have their shortcomings. That makes trying to keep an open mind even more important for journalists. There is a reason why being independent is still a positive qualifier for journalists.

      • tonyedgecombe 1648 days ago
        In this case I suspect it's emotional and wrong.

        "In Illinois, the Guardian has found that state and federal governments have joined forces to demand that welfare recipients repay “overpayments” stretching back in some cases 30 years."

        This isn't technology, it's politicians and bureaucrats deciding policies. There is no reason the technology can't write off aged debts like this.

        The whole article stinks.

        • dictum 1648 days ago
          > This isn't technology, it's politicians and bureaucrats deciding policies.

          The underlying argument of critics of algorithms that make decisions, in government and in private companies, is that this technological apparatus becomes a justification in itself: if a mistake is made, blame the algorithm; if the algorithm says so, it must be right — we can't make exceptions.

          Over time, people and institutions begin to shape their own actions so they don't fall afoul of the criteria that can be reverse-engineered out of a black box decision-making system.

          Regarding "this isn't technology", this doesn't invalidate the criticisms: part of the argument is that, indeed, what gets called "technology" often shouldn't be simply described as technical tools and processes. They're often the opinions and goals of individuals (politicians, bureaucrats, and other wielders of power) transformed into code, data and other technical artifacts; sometimes this transformation merely hides the underlying sources of these opinions and goals.

      • golergka 1648 days ago
        They can and should be. Today's journalists becoming activists is one of the fundamental reasons for public distrust in the media.
        • commandlinefan 1648 days ago
          I'd go so far as to say they can be and they must be.
  • eecc 1648 days ago
    I, Daniel Blake

    watch it

  • impatientduck 1648 days ago
    We're not discriminating against black people; our algorithm just doesn't allow loans to anyone who buys things that black people happen to buy.

    Asshats.

    EDIT: In case anyone thinks I'm kidding: https://www.technologyreview.com/s/613274/facebook-algorithm...