Ask HN: Would you find a browser extension that screens for toxic text useful?

Hello,

A couple of friends and I had the idea to make a browser extension that uses AI/NLP to screen comments on sites like HN, reddit, twitter, etc. for toxic or negative content (and warn you before you read it, if you have the extension turned on). But we are not sure there is much of a need there; would this be useful for regular HNers/redditors? Any chance moderators could chime in and let us know their take?

Thank you!

7 points | by andreyk 1240 days ago

14 comments

  • st1x7 1240 days ago
    Big no, I dislike everything about this - the assumed necessity of such a product or the implication that it does any good, the uncertainty of whether what I'm reading has been censored at any given time, some vaguely undefined measure of toxic/negative, the probabilistic outcome of an NLP classification model, my browsing history being either sent off somewhere with every click (or having to run the model locally)...

    This is a civil platform so I really struggle to express in writing how much I dislike this. However, based on experience there might be a number of consultancies that would pay a lot of money for someone to develop this (or get paid for it depending on the company). It's so deeply slimy that I suspect it has already happened in many of those places.

    • andreyk 1240 days ago
      Thanks for the feedback! For what it's worth, we do indeed aim to make it fully local, and you can set how sensitive it is, e.g. to screen only for overtly racist stuff, and have control over when it's on and transparency over anything it screens (it'd just be a warning layer). If nothing else it might help content moderators do their job with less negative impact on their mood, though maybe not...
  • young_unixer 1238 days ago
    I would not use a tool that worked at the semantic level, but I would probably consider using something that worked at the lexical level and gave me control over the words I want to ban.

    For example: an extension that automatically hides HN posts whose title includes words I have manually blacklisted. Not even necessarily for being "toxic", but uninteresting to me.
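
    The matching logic for that kind of lexical filter is small enough to sketch. The function and blacklist below are purely illustrative (nothing here exists in any shipped extension); it checks a post title against a user-supplied word list, case-insensitively and on whole words, so banning "ai" would not hide "RAID":

```javascript
// Hypothetical lexical filter: decide whether a title contains any
// blacklisted word. Matching is case-insensitive and whole-word only.
function titleIsBlacklisted(title, blacklist) {
  // Split the title into lowercase word tokens.
  const words = title.toLowerCase().match(/[a-z0-9']+/g) || [];
  const banned = new Set(blacklist.map((w) => w.toLowerCase()));
  return words.some((w) => banned.has(w));
}

// Example: hide crypto posts without touching anything else.
titleIsBlacklisted("Bitcoin hits new high", ["bitcoin", "nft"]); // → true
titleIsBlacklisted("Show HN: A toy compiler", ["bitcoin", "nft"]); // → false
```

    A content script could run this over each title row and hide matching posts; the blacklist itself would live in the extension's local settings.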

  • jfengel 1240 days ago
    Yeah.

    I'll be honest: I've taken myself off of social media. HN is about the only one I can stomach, barely. I appreciate having a source of brief distractions, and toxicity is exactly counter to the point. Even as it is I need to keep myself out of some areas. (Holy cow does discussion of dating bring out the sense of entitlement and misogyny.)

    I don't really want to depend on an AI to screen things, and if nothing else, it's a good reminder that I really should limit the amount of time I spend in the kind of vapid extemporanea that social sites bring. But I don't want to reduce it to zero, so I'd probably use a tool that made the vapid extemporanea less unpleasant.

  • paulz_ 1240 days ago
    The idea seems interesting to me. A model that detects click bait / vapid content would probably be even more compelling. In both cases it's hard to put a finger on what exactly you're training the model for.

    In any case - if you made it I would at least be interested enough to check it out and see what kind of content it blocked.

    As an aside I use firefox mobile which has pretty weak extension support. If the extension worked with firefox mobile that would be extra cool.

    • andreyk 1240 days ago
      Thanks for the feedback! Mobile firefox would be a bit tough, but we are aiming to make it work on all the major laptop/PC browsers.
  • mikecoles 1240 days ago
    Nope. What you or your algorithm may find 'toxic', I might find interesting or useful.
    • andreyk 1240 days ago
      Thanks for the feedback! We are building it on top of a model trained on a large dataset of Wikipedia edits/comments that editors found toxic or insulting. You can see more about the model here: https://github.com/tensorflow/tfjs-models/tree/master/toxici...

      So we are not really making up how to classify toxicity; rather, we are making the model usable via a browser extension.
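
      The tfjs toxicity model returns one prediction per label, each with a `match` flag computed against the threshold passed to `load()`. A warning layer on top of it could be as small as the sketch below; the prediction shape follows the tfjs-models toxicity README, while `shouldWarn` and the label choices are hypothetical illustration, not shipped code:

```javascript
// Sketch of a warning layer over the tfjs toxicity model's output.
// model.classify(sentences) resolves to an array shaped roughly like:
//   [{ label: "identity_attack",
//      results: [{ probabilities: ..., match: true | false | null }] }, ...]
// `match` is true when the probability clears the threshold passed to
// toxicity.load(threshold), and null when the model is unsure.
// shouldWarn() is a made-up helper, not part of the library.
function shouldWarn(predictions, enabledLabels) {
  return predictions.some(
    (p) =>
      enabledLabels.includes(p.label) &&
      p.results.some((r) => r.match === true)
  );
}

// A user screening only for overt insults might wire it up like this
// (assuming the @tensorflow-models/toxicity package is loaded):
//   const model = await toxicity.load(0.9);
//   const predictions = await model.classify([commentText]);
//   if (shouldWarn(predictions, ["insult", "identity_attack"])) {
//     showWarningOverlay(); // hypothetical UI hook
//   }
```

      Because the user picks which labels are enabled and the whole thing runs client-side, this stays a local, opt-in warning rather than a remote filter.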

      • mikecoles 1240 days ago
        I'm still not sure. I do wish you well in expanding the field. It's a difficult problem.

        Have you thought about personalizing the model with input from what a user has liked/upvoted on various forums? Even with that, it encourages bubbles and echo chambers though.

        I'm from a group that thrives on busting of um.. chops. If someone isn't jabbing you, that's when you should be concerned. Don't tell the governor, but a group was hanging out tonight and hairstyles, music choices, fatness, and politics were all fair game. The same goes for online groups, family, and work.

        Facebook and Twitter have tried this with the result being a total failure. Again, it's a very difficult problem and if your team is able to pull it off, the product is going to be worth more than a browser plugin. Best wishes and much respect for taking on a difficult task.

        • andreyk 1239 days ago
          Yes, personalization would be cool as a step after MVP!
  • kleer001 1238 days ago
    You do know that you're edging into censorship territory, right? I could see this falling into the hands of repressive regimes. You want that?

    How's your understanding of Moral Foundation theory?

    https://moralfoundations.org/

    And what do you think of the Liberty/oppression dimension?

    • andreyk 1237 days ago
      I understand the concern, but the idea is for the user to be fully in control, and not so much to hide stuff as to warn them about it (if they enable that in the first place).
      • kleer001 1237 days ago
        > I understand the concern,

        I don't think you do.

        > but the idea is for the user to be fully in control,

        That's how it starts. Good intentions.

        You know the term neoteny? Geeze.

  • kleer001 1239 days ago
    No.

    I'm an adult and can easily ignore things I don't want to read.

    IMHO this is an awful idea at every level of analysis.

  • bananapear 1240 days ago
    I would love some kind of extension which analyses the language used in an article and gives it some kind of quality score.

    Examples of certain words and phrases which frequently appear in the sort of (low-quality political clickbaity point-scoring nonsense) articles I don't want to see are:

    backlash, unacceptable, said on twitter, abhorrent, disgraceful, slammed, ought to be ashamed of him/herself, woke, snowflake, social media post, millennial, boomer, pariah, wave of protest, high horse, bandwagon, outrage

    • andreyk 1240 days ago
      Interesting idea! Thanks for the feedback.
  • probinso 1238 days ago
    I would reach out to people who spoke at xoxo conference
  • wolco2 1240 days ago
    No
  • sjg007 1240 days ago
    Build it and find out!
    • andreyk 1240 days ago
      We currently are :) but it's nice to have some feedback to see if we are solving a real problem
      • sjg007 1240 days ago
        I can see a bunch of use cases and pivots! I'd make it for Chrome so you could use it in the education market for example.
  • cupofcoffee 1239 days ago
    Absolutely not.