20 comments

  • eighthnate 2438 days ago
    It is not possible for any AI to understand hate because hate is arbitrary and depends on your position. Hate is a reflection of history, propaganda, etc. It is an arbitrary human/societal invention.

    Hate isn't "science/math/etc". There isn't an algorithm that can objectively look at speech/data and say it is hate without causing a lot of friendly-fire problems.

    If a Jew says "I hate Nazis", is that hate? Or is it only hate if a Nazi says "I hate Jews"? Or if an atheist says "I hate Muslim Saudis" versus Muslim Saudis saying "I hate atheists" - are they both hate? Or does it depend on your culture/position?

    A simple unbiased AI would say they are all hate. And that's the problem. We would require humans to step in to decide what is hate and what is not, which introduces other problems. A Saudi YouTube "moderator" would say atheists are hateful and ban atheist content, while a European atheist might say Saudi Muslim speech is hateful and ban extremist Muslim content.

    And that's how free speech dies. By trying to please and accommodate everyone, we drop ourselves to the lowest common denominator.

    • etplayer 2438 days ago
      I don't think the particular issue is with deciding what hate is. Your example said that the AI would understand all of those as hate, and I think it would be correct. The issue is that humans have this idea that there are justified forms of hate, and whether that hate is considered justified depends on the time and place in history, as such, it is necessary to also have a lot of seemingly irrelevant data to make the decision.

      This ties in quite well to the idea that a society can only be understood through an analysis of its history - that is, what has happened in order to lead it up until this point, and that men are largely (though as Marx notes not completely) products of their time; this view is known as historicism.

      I see that science-types (my apologies if this is offensive to some) tend to see society as merely thousands of one-off interactions between people, occurring here and there, the only difference being what each individual in the transaction brings to the table. I think that in this context of hate especially, this is too reductionist and insufficient.

      Similarly, what ought to be tolerated in a society? Should intolerance, which threatens to bring down the tolerant society, be tolerated? How about the subversive element which attempts to move for the liberation of peoples? Arguably a democracy should have a right to subvert itself, if the power is truly vested in the people. As Marcuse said, through mass media the expression of this subversive element is being blocked, and it may require apparently un-democratic means to open.

      • marrs 2438 days ago

            I don't think the particular issue is with deciding what
            hate is. Your example said that the AI would understand
            all of those as hate, and I think it would be correct.
        
        What makes you so sure about that? What if the context is "I hate scientists...they keep proving me wrong". Is that a show of hatred or a tongue-in-cheek show of respect?

        And then there's all the stuff that people say that gets misunderstood by other people.

        And then there's all the gaming that's going to take place, where people realise they can silence their opponents by describing their opinions as being "hateful". Oh wait, that's happening already.

        • etplayer 2438 days ago
          > What makes you so sure about that? What if the context is "I hate scientists...they keep proving me wrong". Is that a show of hatred or a tongue-in-cheek show of respect?

          You are correct, I didn't consider this, so there are at least two problems with deciding what is hateful: (1) is the expression insincere? (2) if it is sincere, is the hate justified?

          This is of course assuming such a machine can be designed to obtain the correct meaning of words in every case, which I think is impossible or close to it. Language is ambiguous, some languages more than others.

          • Govindae 2438 days ago
            This is a general NLP problem. People use language in more complicated ways than direct literal encoding. In the general case, it's a very hard problem, but hate specifically may not be so difficult.

            People tend not to use phrases like "kill all the" in ambiguous ways.
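
            As a toy sketch of that idea - a phrase-pattern matcher, with a pattern and examples I made up, not anything Perspective actually does:

                import re

                # Naive sketch: flag "kill all (the) <group>" style phrases.
                THREAT = re.compile(r"\bkill\s+all\s+(?:the\s+)?(\w+)", re.IGNORECASE)

                def flag_threat(text):
                    """Return the targeted group if a threat phrase appears, else None."""
                    m = THREAT.search(text)
                    return m.group(1) if m else None

                print(flag_threat("kill all the mosquitoes"))  # "mosquitoes" - context-free false positive
                print(flag_threat("kill all the Nazis"))       # "Nazis" - the contested case below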

            • belovedeagle 2438 days ago
              However, a significant (not large, significant) proportion of Americans believe that the phrase "kill all the Nazis" is not hate because <reasons>, or at least is "justified hate", whatever that means.
    • sametmax 2438 days ago
      Hate does not "depend on your position". Hate is an emotional state, and it's the same for everybody.

      The symptoms of hate "depend on your position". So-called "hate speech" depends a lot on your position. But it is not hate. Actually, a lot of the things that are censored because they are categorized as hate are just inconvenient for some people, and nowhere near hate. As usual, words are used and distorted to serve agendas.

      You can recognize hate across cultures, across symptoms, because as a human being you have learned to detect the clues that hint at hate behind a behavior. It's not perfect, but the thing is, hate is rarely subtle, and it is quite plain in its effects.

      Hate is also always rooted in unhappiness. And symptoms of happiness are something you can learn to detect as well.

      So yes, while you can't have an objective way to measure hate, you can approximate the way humans evaluate it by measuring the context of happiness and assessing symptoms, on a global + local scale. It may very well give you a good picture of hate.

      • ikeyany 2438 days ago
        That picture will only be 'good' from your subjective frame of reference, or whichever frame of reference your AI's training data adheres to.
      • marcosdumay 2438 days ago
        > Hate also always root in unhappiness. And symptoms of happiness are something you can learn to detect as well.

        That is correct and everything... But why can't I read it without instantly imagining a "you seem to be unhappy, citizen; please come along to jail" kind of dystopia?

        • sametmax 2438 days ago
          Because it's a real risk if society mistakes potential threats for actual ones. Given the current tendency of trading freedom for security, it's a sane concern.
  • nhaehnle 2438 days ago
    Humans can't agree on what constitutes hate speech. I mean, there are general sentiments that most reasonable people agree on, but it gets hairy very quickly in the details.

    The real problem here is that by letting an AI decide what is hate speech and what isn't, we're adding a pretense of objectivity to a decision mechanism that is subjective. It adopts the subjective bias of its "teachers", but that is easily hidden from view. Which means that people who are judged unfairly will be less likely to get due process.

    And of course, this issue generalizes to a much wider range of AI, including applications that already exist (think credit ratings as the most basic example). That's a much more real danger of AI than killer robots.

  • meri_dian 2438 days ago
    Solutions to complex filtering and regulatory problems can usually be grouped into two categories of approaches: a 'top-down' centralization approach, or a 'bottom-up' distribution approach.

    Developing a system that will categorize what hate speech is and what it is not is an example of a top-down approach that centralizes the decision making process.

    Allowing individuals to decide what hate speech is for themselves and acting accordingly is a bottom-up approach that distributes decision making among all nodes of the system.

    The distributed, decentralized approach - the one that the modern internet employs today, where users decide for themselves what websites to frequent and who to interact with - is more appropriate for handling hate speech because it is much better at taking into account the nuance of specific situations and distinguishing what is hate from what is not hate. A centralized approach will either be too aggressive in its filtering or not aggressive enough.

    Human civilization is becoming more and more inclusive every year. I do not believe we need to curtail free speech in order to continue on the path towards a more inclusive and accepting world.

    A much more effective approach to eliminating hate - one that does not require us to relinquish the right to free speech - is to continue improving the living standards of all people, especially the poor. I believe that hate is ultimately a manifestation of fear. When the fear of economic marginalization diminishes, so too will the frustration and anger that derives from such marginalization.

  • sniglom 2438 days ago
    This is just scary; there's no way this will be unbiased.

    Sure, some stuff people write is just hate with the only motive to offend others.

    But there are also facts that can be viewed as offensive while still being true. Does that mean facts will be classified as hate speech and automatically hidden by an AI? Ouch.

    • MikkoFinell 2438 days ago
      Only facts that are deemed offensive to the political leaning of Google of course.

      Time to jump ship.

  • justadeveloper2 2438 days ago
    The whole thing makes me queasy - the idea that somebody is going to determine what to censor on the Internet can never be anything but a bad dream. We can suffer through the hate speech and counter it with rational arguments. But once censorship gets going, it is very hard to get back from there.
    • adventured 2438 days ago
      > But once censorship gets going, it is very hard to get back from there.

      There is no coming back from there. It's a global authoritarian wet dream. They've been chasing this goal for two decades now; it's why they invented the fake concept of hate speech in the first place. Specifically, the US is the only hold-out on what could be called near-absolute free speech. Once that kind of censorship power is handed to US authorities, the entire planet will instantly get darker, due to the vast scope of US influence on speech online.

  • CM30 2438 days ago
    I'm not surprised. It's much easier to 'identify' hate by the tone of a remark than by its actual content, and that seems to be what the system is doing. It's making the typical (often human) mistake of assuming the 'toxicity' of a remark can be determined by how politely it's worded rather than by what the person is actually saying.

    But that's obviously not the case. You can be a horrible person without raising the tone of your voice or swearing, and a perfectly nice one while swearing like a sailor in a loud and somewhat terse way. The validity of a critique isn't determined by how nicely it's written.

  • 5trokerac3 2438 days ago
    There's absolutely no way this technology will be used for unscrupulous purposes. /s
    • Spivak 2438 days ago
      You can be safe in the knowledge that the current state of ML for analyzing language in this manner is basically just a fancy keyword search. Any subtlety will completely trip it up.
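
      To make that concrete, here's a minimal sketch of what a "fancy keyword search" amounts to - the lexicon and weights below are invented for illustration:

          # Toy bag-of-words "toxicity" scorer; lexicon and weights are made up.
          LEXICON = {"hate": 0.8, "stupid": 0.6, "idiot": 0.7}

          def toxicity(text):
              # Word order and negation are invisible to a bag-of-words model.
              return sum(LEXICON.get(word, 0.0) for word in text.lower().split())

          print(toxicity("I hate you"))        # 0.8
          print(toxicity("I don't hate you"))  # 0.8 - same score, opposite meaning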
      • 5trokerac3 2438 days ago
        Most people are neither savvy enough to circumvent such a system nor interested in doing so.

        Keywords are more than enough to shape the direction of online conversation and create an echo chamber that is favorable to a single viewpoint.

    • wu-ikkyu 2438 days ago
      Is it not implicitly being done to maximize advertising revenue? I would say that in itself is unscrupulous.
      • 5trokerac3 2438 days ago
        I think one of the biggest revelations I've had is recognizing that all of these platforms are nothing but advertising and propaganda vehicles. Conversations are directed and, more importantly, promoted with such a heavy hand that they have the authenticity of a reality TV show.
  • kutkloon7 2438 days ago
    Well, maybe 'toxicity' is a stupid concept. Is the previous phrase, for example, toxic? The examples don't seem to distinguish between 'hateful' and 'not socially accepted'.

    Also, a lot of these depend so much on context that it's not even possible to classify them.

    "What's up, niggers?" Can be either classified as extremely racist (when a white supremacist uses this language, for example), or as colloquial language used in Afro-American culture.

    Same thing with calling people chinks, crackers, etc. It is all much more acceptable when you're using a derogatory term for your own race.

    "What's up, bitches?" Is about the same. Pretty much accepted in colloquial use, but not so much when it is used by a men to address women (this is not a complaint - I agree mostly that it is mostly a misogynistic term in this specific context).

    • liberte82 2437 days ago
      As a gay man, "Hey ladies" or "Hey girls" is common in our group. :)

      Hate is visceral and emotional and I think it's going to be difficult for AI to track it accurately. People will also find creative ways to trick it while still being very clear that they're being hateful. It's like that saying about the difference between art and pornography - you know it when you see it.

  • visarga 2438 days ago
    Old article (six months old). It has nothing to do with recent events.

    As for the AI problem of hate detection: it can be solved much better than the article suggests. The problem is quite similar to sentiment detection in online reviews, which has been studied extensively. If they collect a large enough dataset of hate speech, they can reuse the architecture.
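
    For what it's worth, a minimal sketch of that reuse, assuming you had a labeled corpus (the examples below are stand-ins, and tf-idf + a linear model is just one common sentiment architecture):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Stand-in data; a real system needs a large labeled hate-speech corpus.
        texts = ["you people are vermin", "great article, thanks",
                 "go back where you came from", "interesting point, well argued"]
        labels = [1, 0, 1, 0]  # 1 = hateful, 0 = not

        # Same pipeline commonly used for review sentiment, retargeted at hate.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
        model.fit(texts, labels)

        print(model.predict(["you people are wonderful"]))  # still context-blind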

  • jlebrech 2438 days ago
    It still hasn't been told which demographics can and can't be hated on yet.
    • visarga 2438 days ago
      Yes it has. It's all documented in the training dataset.
      • 5trokerac3 2438 days ago
        This two comment exchange is straight out of a dystopian novel. This is where we are.
        • Spivak 2438 days ago
          It's really not all that dystopian; anyone who even wanted to analyze hate speech would probably derive such a system, since it's difficult to define precisely.

          ML always contains the biases of the training set. It's not some magical unbiased 'objective' system. For language at least it seems to devolve to a fancy keyword search.

          • marrs 2438 days ago
            Anyone who thinks it's normal to want to analyse "hate speech" is already living the dystopian dream.

            10 years earlier, the people who were deeply concerned that such a vague and ill-defined term had just entered the legal lexicon were all but ignored.

            • jlebrech 2438 days ago
              The false positives are "Brazil"-esque.
          • jcims 2438 days ago
            Sure, but it's still essentially a supervised learning process with select (biased) humans doing the labeling.
  • RickJWagner 2439 days ago
    I wish there were such an engine, and it was 100% accurate.

    By the looks of the tested statements, there is bias even in the testing. (Politicians from one side of the aisle were cited, not from both. There are plenty of known racists across the spectrum to discuss, and lots of modern-day misogynists.)

    If we had a truly fair standard to go by, we'd all be better off.

    • MikkoFinell 2438 days ago
      How about not censoring anything, and letting the marketplace of ideas be free, would that be fair?
      • westmeal 2438 days ago
        But someone might get offended. Imagine how awful it would be to get someone offended!
        • MikkoFinell 2438 days ago
          The mere notion that my idea wasn't perfect is offensive to me. Please delete your comment.
  • bitL 2438 days ago
    All tools like Perspective would do is accelerate the creation of new slang words to replace the ones that cause it to flag articles. "Fake news" might become "feyk nyuz", etc. Then, once they start analyzing pronunciation similarities, another form of encoding will be invented by kids.
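
    The arms race is easy to sketch - the blocklist and respelling rules below are invented for illustration:

        BLOCKLIST = {"fake news"}

        def flagged(text):
            """Exact-match filter of the kind that respellings trivially evade."""
            return any(term in text.lower() for term in BLOCKLIST)

        def normalize(text):
            # Crude respelling table; real slang shifts faster than any fixed table.
            return text.lower().replace("feyk", "fake").replace("nyuz", "news")

        print(flagged("feyk nyuz"))             # False - respelling evades the filter
        print(flagged(normalize("feyk nyuz")))  # True  - until the next encoding appears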
  • xiphias 2439 days ago
    After the classifier works better, it can be plugged into people's credit ratings... I have seen something like this in Black Mirror:

    https://en.wikipedia.org/wiki/Nosedive

  • gldalmaso 2438 days ago
    People learn faster than machines learn.

    If people want to post hateful comments, they will find ways to circumvent any AI model that is thrown their way; they will create lingo that confuses the model but that everyone will quickly identify as hate speech.

    Also anyone can appropriate any symbolism to their own speech and AI will take a lot of attention and training to catch up, see Pepe the Frog (https://en.wikipedia.org/wiki/Pepe_the_Frog) as an example.

    • belovedeagle 2438 days ago
      > they will find ways to circumvent any AI model

      Not so; there's an easy and reasonable (/s) solution to the problem: only permit pre-approved speech. This can be done through AI and it will be difficult to circumvent.

  • creo 2438 days ago
    For me it looks like they mistakenly used the wrong comparison operator (like "<" instead of ">"). I'm more than sure that it's more complicated than that, but you know - we've all made that mistake. My programmer senses are tingling. (This is a joke, obviously.)
  • mschuster91 2438 days ago
    To quote Felix von Leitner (https://blog.fefe.de/?ts=a7666dad):

    > You cannot solve social problems by using technology.

    And when you do try to do so, it quickly gets nasty. Just look at the #Shadowban scandal on Twitter.

    You will need humans in the loop, and they need to be well-trained in order to actually recognize images such as the famous Vietnam napalm girl, which FB banned for "nudity" until a huge public shitstorm - and well-paid and well-"maintained" in psychological terms, given that they will see the nastiest content on the web... everything from child porn and gore to livestreams of rapes and murders.

    But, to be honest, this should not be done by the sites themselves - the best solution is an "ombudsman", an independent institution maintained by the state but financed by the social networks. This prevents "overcensoring" (i.e., moderators deleting legitimate content out of fear of being hit with costly penalties), and having the institution mandatorily financed by the "big players" prevents them from systematically understaffing it or exploiting the workers. It will also need a proper appeal process: both for content creators whose work has been inappropriately taken down, and for users who believe that the content in question is, in fact, illegitimate.

    Also, the sites must be regulated as public utilities, given their importance to people's communications. For example, my phone provider is not allowed to terminate my service unless I haven't paid my bills for more than three months - while e.g. Google can terminate my account (and thus all my emails and all my purchased applications/other content on the Play Store) because people abuse the YouTube flagging system to get accounts terminated. The same goes for Facebook and Twitter. There needs to be an independent, legal way of forcing them to provide service.

    • HarryHirsch 2438 days ago
      You must be from somewhere where they have a functioning government. Here in the US they don't trust the government to be accountable, consequently they farm out its responsibilities to a host of unaccountable private companies.
      • marcosdumay 2438 days ago
        That must be the longest-lived and most successful large-scale propaganda campaign in the world. Many countries (not just the US) are convinced that unaccountable entities can manage a monopoly with more accountability than their government.
        • slededit 2437 days ago
          It's more that third-party companies can be, and are, replaced over time, while the government always remains. Were Facebook and Google to go "too far", they could be replaced by other online venues. Replacing the US government would require violent revolution.

          The democratic system was supposed to solve this problem. However, while the man in "charge" comes and goes, the bureaucracy has been moving in only one direction.

          • mschuster91 2437 days ago
            > Replacing the US government would require violent revolution.

            Not necessarily - what Bannon championed was, basically, the deconstruction of the state. For example, Trump still hasn't filled hundreds of leadership positions, and a government shutdown isn't unlikely given the disconnect between the Trump administration and Congress, as well as the Democrats' opposition to anything Trump or the Republican Congress want to do. It's going to get interesting pretty soon.

  • haddr 2438 days ago
    This article is from February 2017.
  • DarkKomunalec 2438 days ago
    Suddenly, HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.
    • westmeal 2438 days ago
      Jesus christ no one got the reference? On HN?
      • duncan_bayne 2438 days ago
        It may take people a while to respond if AM is messing with their time sense again.
  • cheez 2438 days ago
    This is the future. Get with the groupthink and keep your dissenting thoughts private.
    • dang 2438 days ago
      "If you have a substantive point to make, make it thoughtfully; otherwise please don't comment until you do. Comments should become more civil and substantive, not less, as a topic gets more divisive."

      https://news.ycombinator.com/newsguidelines.html

      We detached this subthread from https://news.ycombinator.com/item?id=15063505 and marked it off-topic.

      • cheez 2438 days ago
        Thanks for keeping the S/N ratio up, even if it's me. Hopefully you can be replaced by a good AI at some point!
    • marrs 2438 days ago
      I really hope this comment was downvoted for reasons of irony :)
  • spoovy 2438 days ago
    If we all keep writing about how much we hate this thing, will it optimise to remove the hate and delete itself?