21 comments

  • ceejayoz 9 days ago
    > Darien was being investigated as of December in a theft investigation that had been initiated by Eiswert. Police say Darien had authorized a $1,916 payment to the school’s junior varsity basketball coach, who was also his roommate, under the pretense that he was an assistant girls soccer coach. He was not, school officials said. Eiswert determined that Darien had submitted the payment to the school payroll system, bypassing proper procedures. Darien had been notified of the investigation, police said.

    > Police say the clip was received by three teachers the night before it went viral. The first was Darien; a third said she received the email and then got a call from Darien and teacher Shaena Ravenell telling her to check her email. Ravenell told police that she had forwarded the email to a student’s cell phone, “who she knew would rapidly spread the message around various social media outlets and throughout the school,” and also sent it to the media and the NAACP, police said.

    So, in this case, clear motive, a prominent figure, and a suspicious chain of custody gave the cops a reason to dig in a bit, but not everyone's gonna be that dumb about it.

    The swatters who do this to random people are gonna have a field day with this.

    • TrainedMonkey 9 days ago
      Concur, the OPSEC is not great; there is an easy-to-follow insertion vector. Imagine instead if he had gotten a burner phone, made a new TikTok account, and posted this while tagging a few students.
    • colpabar 9 days ago
      > Ravenell told police that she had forwarded the email to a student’s cell phone, “who she knew would rapidly spread the message around various social media outlets and throughout the school,” and also sent it to the media and the NAACP, police said.

      That seems like a really terrible way to handle it.

      • ceejayoz 9 days ago
        It sounds very much like she was in on it; joint call with the perpetrator to the third teacher, subsequent resignation? Probably just not enough proof.
    • setgree 9 days ago
      That's true, but also the person allegedly used a school computer to research his crime:

      > Police wrote in charging documents that Darien had accessed the school’s network on multiple occasions in December and January searching for OpenAI tools, and used “Large Language Models” that practice “deep learning, which involves pulling in vast amounts of data from various sources on the internet, can recognize text inputted by the user, and produce conversational results.” They also connected Darien to an email account that had distributed the recording.

      One of our best defenses against crime is that criminals are often kind of dumb. Tyler Cowen makes the same point this morning regarding airport security: "a lot of criminals are simply some mix of stupid and incompetent or poor on execution" (https://marginalrevolution.com/marginalrevolution/2024/04/wh...).

      • sjducb 9 days ago
        The ones we catch are.
        • nojvek 9 days ago
          I wonder how many millionaires there are who committed a white-collar crime once, got their money, cleaned it through some business, and are now living off it, never to do it again, taking that secret to the grave.

          Not just white collar: many people go missing, never to be found. So I assume there are likely competent murderers who understand enough of biology and chemistry to not leave a forensic trail and not attract too much attention.

          • kmeisthax 8 days ago
            This has some interesting bounds on it. Smart criminals are harder to catch, but the thing about crime is that it rarely actually pays to commit crimes. Most organized crime works by scamming other people into committing the high-risk crimes for you. So at some point the smart criminals realize they're the mark and either become informants or move on to legitimate business.

            But just as petty crime has its bounds, so does legitimate business. This is why you have big tech companies that built their empires on lies and fraud. The justice system isn't optimized to catch this kind of crime, so it takes longer to get punished, and meanwhile every one of your competitors either learns to commit the same crimes or goes bankrupt. Furthermore the kinds of crimes you can commit once you've built a legitimate business are a lot more subtle and more lucrative.

            So you have a society with an honest core of people doing the actual productive work, while being scammed by incredibly corrupt CEOs, who themselves distract the honest core with fears of petty criminals so that the justice system continues to expend its budget on cheap wins against the common crook.

          • BizarroLand 8 days ago
            In America, across the board, you have roughly a 50% chance of getting away with any murder you commit, assuming, of course, that you don't directly implicate yourself somehow.

            While the rate at which murders are solved or "cleared" has been declining for decades, it dropped slightly below 50% in 2020, a new historic low. And several big cities, including Chicago, have seen the share of murder cases resulting in at least one arrest dip into the low-to-mid-30% range.

            https://www.npr.org/2023/04/29/1172775448/people-murder-unso...

          • fennecbutt 8 days ago
            Depending on what you consider criminal that's basically everyone with more than a million dollars.

            And it's also everyone who doesn't vote to actually enforce and improve tax laws and tax brackets.

          • graemep 8 days ago
            Richard Branson was caught evading tax - his family saved him from prosecution by paying it off.

            https://slate.com/business/2014/05/richard-branson-tax-fraud...

            If Harold Shipman had stopped with the first 100 murders or so he would probably never have been caught: https://www.britannica.com/biography/Harold-Shipman

          • dzhiurgis 8 days ago
            > understand enough of biology and chemistry to not leave a forensic trail

            Usually that just creates more evidence

          • PinkSheep 8 days ago
            Diplomats and other low tier consuls.
  • surfpel 9 days ago
    > Experts in detecting audio and video fakes told The Banner in March that there was overwhelming evidence the voice is AI-generated. They noted its flat tone, unusually clean background sounds and lack of consistent breathing sounds or pauses as hallmarks of AI. They also ran the audio through several different AI-detection techniques, which consistently concluded it was a fake, though they could not be 100% sure.

    All of these problems will be resolved. In which case most people either won’t raise the question of authenticity or won’t trust audio recordings to begin with.

    Elections at all levels are at risk along with accountability more generally.
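    The "unusually clean background sounds" hallmark from the quote can at least be illustrated with a toy heuristic: estimate the recording's noise floor from its quietest frames. This is only a rough sketch of the idea, not how the experts' detectors actually work:

    ```python
    import numpy as np

    def noise_floor_db(samples, rate, frame_ms=30):
        """Estimate a recording's noise floor (in dB) from its quietest frames.

        An unnaturally low floor can hint at synthetic audio, which often
        lacks real-world room noise. A heuristic only -- not proof.
        """
        frame = int(rate * frame_ms / 1000)
        n = len(samples) // frame
        frames = samples[: n * frame].reshape(n, frame)
        rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
        # Take the 5th-percentile frame energy as the "background" level.
        return 20 * np.log10(np.percentile(rms, 5))
    ```

    A digitally silent gap scores hundreds of dB below any real room recording, which is exactly the kind of tell the article says will be engineered away.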

    • londons_explore 9 days ago
      Amazingly, most people do seem to trust a simple screenshot of a conversation, even though it has been easy to fake those for 20+ years.

      A screenshot is considered far more trustworthy by non-tech people than "I spoke to Fred and would you believe it, he told me X and Y were doing Z behind the bike sheds!!!"

      • vkou 9 days ago
        A screenshot or a diary entry locks you into a snapshot of a single story.

        Someone saying something about what they talked about does not. You can say one thing today and another tomorrow.

        This is why the written word has magic that the spoken word does not. And the longer the append-only chain of written words goes, the more magic it has.

      • duxup 9 days ago
        I was following a local case where some inappropriate texts were sent.

        The evidence submitted with the legal case was a cell phone with the messages app open ... and it appeared to be photocopied. Then they scrolled up ... another photocopy, and on and on and on.

        Granted further investigation can be done, but it was amusing to see.

      • DonHopkins 9 days ago
        >"he told me X and Y were doing Z behind the bike sheds!!!"

        They were arguing about what color to paint them! How scandalous!!!

      • IshKebab 9 days ago
        Yeah because it takes significantly more effort to fake a screenshot than to just make stuff up. That's entirely logical.
        • ceejayoz 9 days ago
          It really doesn't.

          https://imgur.com/RNwCHSa

          • IshKebab 9 days ago
            Yeah but a) most people don't know that and b) that's still significantly more effort! We're comparing this to literally just saying something.
            • ceejayoz 9 days ago
              Perhaps we have differing definitions of “significant” and/or “effort”.
        • andrewstuart2 9 days ago
          I wouldn't call a quick search for "fake text message" which returns dozens of generators significantly more effort. Harder to do on the spot, maybe, but the effort is negligible.
          • 9question1 9 days ago
            You underestimate the laziness of most humans. To be clear, screenshots are still not trustworthy. But the "negligible" relatively more effort could matter in practice.

            Cryptography sometimes seems to rely on a stronger version of this, too. With enough computing power, you can brute-force a lot. Some authentication seems just expensive enough to break that only a nation-state actor would have the resources, and then rests on the assumption that the small subset of people who could put in that effort won't care enough to do anything worth being concerned about.

            This is also why, say, people lock their doors even if they have windows.
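            The "just expensive enough" intuition is easy to put numbers on. A rough sketch (the throughput figures below are hypothetical, purely for illustration):

            ```python
            def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
                """Worst-case time to brute-force a keyspace of 2**key_bits."""
                seconds = 2 ** key_bits / guesses_per_second
                return seconds / (365.25 * 24 * 3600)

            # A hobbyist rig vs. an (optimistic) nation-state cluster --
            # both made-up throughput numbers, for scale only.
            print(f"40-bit key  at 1e9  guess/s: {years_to_exhaust(40, 1e9):.2e} years")
            print(f"128-bit key at 1e15 guess/s: {years_to_exhaust(128, 1e15):.2e} years")
            ```

            A 40-bit key falls in minutes on commodity hardware, while a 128-bit key outlasts the universe even at absurd guess rates, which is the gap the door-lock analogy is gesturing at.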

            • londons_explore 9 days ago
              I think a good chunk of non-tech people in my social circles would be appalled that such sites exist, and assume that running them would be illegal.
              • BobaFloutist 9 days ago
                I mean you can buy lockpicks online and learn how to open almost any lock on YouTube, but we still lock our doors.
    • spuz 9 days ago
      We have to realise the privilege of living during a time when a single piece of evidence could allow us to draw such definitive conclusions. That wasn't the case until audio and video recordings were invented. We've grown used to shortcutting the standard investigative process of looking at the chain of custody and looking for corroborative evidence because it's easy to jump to a conclusion when it's almost certainly correct and it means we can do less work.

      We now have to adjust our expectations back to when we had to rely on multiple pieces of evidence from multiple sources and evaluating the trustworthiness of those sources before we get out the pitchforks.

      • barbariangrunge 9 days ago
        Well, in those olden days, pitchforks came out at inappropriate times fairly often, sometimes literally.
    • hnlmorg 8 days ago
      If the last few years have taught me anything, it’s that most people will believe what they want to believe regardless of how compelling the evidence is.

      You don’t need AI to falsify that evidence. An emotive image with a vague quote is often enough.

    • kragen 9 days ago
      google's wavenet solved all of those problems years ago
  • jsheard 9 days ago
    My condolences to all the podcasters and YouTubers who are probably going to get bombarded with extortion attempts after people use their hundreds of hours of public clean voice recordings to make a perfectly convincing voice clone, down to the tiniest quirks of pronunciation, cadence and so on. Who would have thought that would become an opsec risk.
    • okhugres 9 days ago
      > Who would have thought that would become an opsec risk.

      I did. So I’m sure many others too. They just likely met the same opposition that I did.

      Without defensive strategies today that allow those at risk to continue their risky behavior unencumbered, any warnings about tomorrow get drowned out by the demands of today.

    • qingcharles 8 days ago
      They're already being used extensively to make ads featuring content creators saying things they never said. They're not infrequent on TikTok. I guess TikTok will have to do far more verification on people who buy ads.
    • SuperNinKenDo 9 days ago
      It's really not going to matter that much; hundreds of hours vs. a couple of hours is probably not going to make a world of difference.

      Many professionals give talks that are recorded, whether lectures, promotional materials, Q&A, etc. People can be recorded surreptitiously, etc.

      I think the only way to have avoided this is to be so paranoid as to have never allowed anyone near you with a phone or other recording device.

      • pixl97 8 days ago
        >as to have never allowed anyone near you

        So what you're saying is this is unworkable. I mean decades ago private investigators were working with miniature digital recording devices, I can only imagine they have been reduced to sizes that are embeddable and undetectable in almost anything.

        • SuperNinKenDo 5 days ago
          Yes. I consider it totally unworkable sadly.
    • ChrisMarshallNY 8 days ago
      I believe that's already happening. I think that deepfake extortion is already a thing, and if someone is a popular influencer, then you know that they:

      A) Have money; and

      B) Really don't want even fake stuff getting out.

      For me, I couldn’t care less. I’ll upload it to YouTube. A mommy vlogger, on the other hand, might not want her squeaky-clean image soiled.

    • DontchaKnowit 9 days ago
      Honestly? The writing's been on the wall for at least a decade, if not longer. This shouldn't really be surprising to anyone with a large corpus of recorded material online.
      • JdeBP 8 days ago
        Far longer than a decade. We've known that this has been coming since the days of Max Headroom.

        The interesting thing is that in the Max Headroom days we thought it would come about because of scheming corporate interests, not because of movie audiences that had pressed for ever-better special effects, and fake Tom Cruises, all these years.

      • jsheard 9 days ago
        Maybe to people who were closely following the research, but I very much doubt it was on most peoples radar until maybe a year or so ago when ElevenLabs became available to the public and it started being used in viral memes of Trump and friends.
        • DontchaKnowit 9 days ago
          Yeah, I mean, fair enough. I'm no AI enthusiast and I knew about it a long time ago; however, I also spend a ton of time on the internet, read computer-related news, etc., so I suppose you're right.

          But I remember watching northernlion like 8 years ago and thinking "you could probably spoof this guy's voice really easily with a neural net".

  • causal 9 days ago
    Up next: Some enterprising startup founder's "AI-voice detector" landing people in prison because they can't keep up with ML advances but authorities trust them anyway.
    • dotnet00 9 days ago
      There have already been cases like professors failing entire classes for plagiarism without any critical thought because an AI claimed their work was AI generated.

      Those were reversed because they were particularly egregious, but I can completely see individual students getting falsely accused of using AI and treated as guilty until proven innocent, going totally under the radar because single incidents don't get much public scrutiny.

      • causal 9 days ago
        Considering the public's persistent faith in lie detectors for humans, I don't hold a lot of hope we will be wiser about lie detectors for computers.
        • ryandrake 9 days ago
          It's going to get worse before it gets better, if it ever does. I predict that within 10 years we'll have at least one jurisdiction experimenting with AI-generated evidence to help prosecutors get convictions. The prosecutor will push a button, the computer will spit out a sworn statement that the defendant is guilty, this will be admissible, and juries will convict based on that evidence alone.

          Drug dogs who "hit" on command are already used as probable cause generators. Breathalyzers are basically magic boxes that produce "evidence" of a crime. It's inevitable that we're going to keep using AI and computers to automate convictions.

          • causal 9 days ago
            This seemed implausible to me at first, then I remembered we've already had people submitting ChatGPT-hallucinated cases in court, so maybe not :(
            • qingcharles 8 days ago
              And that's the one we caught. 99% of the time people aren't checking citations in motions to the court. Nobody has that much time. Things are rushed. I'm sure it is happening a lot more than we think.

              I use GPT to help write wiki entries for a site I'm working on and I have to be really on my game as it will hallucinate facts. I know some have got to have slipped through.

    • qingcharles 8 days ago
      To introduce evidence at trial you need to "lay a foundation" which generally means you need someone with knowledge of the evidence to swear under oath to its veracity and provenance.
  • hdlothia 9 days ago
    I don't think we're ready for the consequences of this technology. My parents are immigrants and my first thought was this might get someone killed or locked up in the old country.

    These free generators need to include some kind of audio watermark or key to indicate they are ai imitations. At least raise the barrier for this kind of action to being able to run your own llm or something.

    • yjftsjthsd-h 9 days ago
      > These free generators need to include some kind of audio watermark or key to indicate they are ai imitations. At least raise the barrier for this kind of action to being able to run your own llm or something.

      It might be worth trying, but I'd bet that it's less than 6 months before running it locally means "download the app off the front page of your app store of choice".

      • rurp 9 days ago
        There could be a requirement that app store listed apps need to include some sort of audio watermark. While that wouldn't be perfect, since there will always be ways around it, this would still raise the barrier significantly and cut down on much of the abuse.

        Most criminals are lazy and/or not very tech savvy. Raising barriers and prosecuting the worst offenders cuts down on all sorts of malicious behavior that is technically feasible.

    • bonton89 9 days ago
      > At least raise the barrier for this kind of action to being able to run your own llm or something.

      I think that would result in the average person being less aware of the capabilities existing and therefore being less prepared to defend against it. It isn't like this would be a world law that was universally enforced anyway.

    • graemep 8 days ago
      > I don't think we're ready for the consequences of this technology.

      We are never ready for the consequences of technology. We adjust to it long after it has been invented.

    • snoman 9 days ago
      I don’t think we’re ready either but I also don’t see how you’d get ready without the pressing need to.

      That is to say: I think this was inevitable.

    • shombaboor 9 days ago
      I totally blame the companies and VCs for lack of foresight and ethics. They've effectively built a weapon without a safety.
      • jsheard 9 days ago
        It's the same pattern over and over - they develop a technology, acknowledge the risk of it being abused and the need for safeguards, but then realize that building in those safeguards will get in the way of turning it into a product and just YOLO release it into the wild anyway. The same thing happened with LLMs, which were deemed "too dangerous to release" due to the risk of producing a massive tidal wave of spam and propaganda, and yet here we are under a massive tidal wave of LLM spam and about to head into the first US election in the unrestricted LLM era. The very first paper on image generation diffusion models called out the risk of it being used for malicious purposes, such as deepfake nudes, and yet here we are in the era of one-click zero-effort deepfake nude generation services using that very technology.

        What's the point of considering potential abuses if you're just going to facilitate them regardless? If anything that's worse than not considering abuse at all, because it implies that you know what you've created will result in kids killing themselves after fake nudes of them spread around their school, or enable rampant fraud and extortion through voice cloning, but you believe that's just the price of progress.

        • freedomben 8 days ago
          What? Are you saying OpenAI, Google, Meta, etc did nothing to combat abuse and provide safeguards?

          If so, that is easily provably wrong, unless you think there's an enormous conspiracy involving thousands of people to pretend that they're working on it when they really aren't.

      • grugagag 9 days ago
        Maybe they’re supposed to be held responsible or liable?
        • hdlothia 9 days ago
          That's a good point, we might not even need new policies. I bet the detectives or a court subpoena could get records of internet history. People might just be able to sue whoever generated the deep fakes that caused damages.
    • thraway3837 9 days ago
      I'm really not sure why you're getting downvoted. It's almost as though HN readers are fully on the AI bandwagon and can't let anything bad be said about it. I assume this is the same crowd that also scoffs at any regulations in tech.
      • educasean 9 days ago
        You were right the first time: you really don't understand why HN readers are downvoting the OP comment. Yet that didn't stop you from spinning up this AI-bandwagon-riding, regulation-hating strawman to further entrench your hatred.
        • hdlothia 9 days ago
          If you're a downvoter, why? I would love to hear why you disagree with my comment.
          • aingisni_del 9 days ago
            That’s not what a straw man is (it’s two words btw).

            This is a known and often repeated trend in tech: make something under the guise of disruption, with no regard for safety or regulations. Complain that those things are a hindrance to progress and then spend billions and destroy lives permanently to eventually fix the problems when the regulators issue an ultimatum that professionals warned you about.

      • bcrosby95 9 days ago
        Commenting on downvotes is pointless, and it's even more pointless the earlier in a comment's life you do it.

        I've seen wild swings in comments before based upon the time of day. E.g. mid-day upvotes, late day downvotes. Or vice versa.

  • markhaslam 9 days ago
    Here is the audio clip in question: https://www.instagram.com/reel/C2NEEDrMo8_/
    • SuperNinKenDo 9 days ago
      Definitely some hallmarks of AI if you know what you're looking for. But I have to admit, that is more convincing than I would have expected. With a bit more manual intervention to adjust volume and balance, I might find it significantly more difficult to be certain.
    • zachmu 8 days ago
      The comments are full of people buying it completely
  • shombaboor 9 days ago
    What is the best argument for why this technology exists or should exist? It's fun? Live translation is the only one I can think of, and even then the benefits of a person's 'real' voice are scant.
    • MiguelHudnandez 9 days ago
      It's already actively in use to edit and clean up podcast recordings. This use case is basically identical to the malicious case, it's all about who's doing it and what their intent is. If it's fixing a word you stumbled on, or replacing an inaccurate quote with an accurate one, that's fine, but other things are problematic.

      Someone who is losing the ability to speak might want to have this tech so they can still have somewhat normal phone calls with their loved ones.

      I think the potential for abuse is pretty high with this tech but it's foolish to pretend we can keep it from being used.

    • breakpointalpha 9 days ago
      If I want to do a voiceover of a ten minute video, I can type out the transcript and produce a flawless one-take audio track. This saves hours of time finding a quiet place to record, saying the lines, having a good mic, doing post-recording cleanup to remove coughs or passing airplane noise, multiple takes because I goofed a word up, etc.

      I don’t know if this is the best use of the tech we’ve found yet, but it’s already a huge time saver.

      • nickthegreek 8 days ago
        I create internal training videos at my job, and that's my use case as well. Not many people want to have their voice used, and it's easier and way faster for me to have an AI produce a voice than to get someone to do it for me.
    • grugagag 9 days ago
      Scooping up from the middle class, one class of professions at a time. This is not just like how the internet disrupted 'everything' and moved things online; this is going to take all those livelihoods and suck them dry, filling up the technobros' coffers until there's nothing left. But there are upsides too: the potential for good applications is quite broad.
    • SuperNinKenDo 9 days ago
      Techno determinism. The belief that we can't choose not to develop a technology as a species, so various people decide they don't want the other guy to get it first. We all end up worse off. It's a kind of self-conscious, sometimes ideological, prisoner's dilemma.
    • IncreasePosts 9 days ago
      "it's cool"?
  • cbsmith 9 days ago
    Oh boy. This is the beginning, not the end.
  • educasean 9 days ago
    As these synthetic voice generations become less and less obviously detectable, I fear two things will happen.

    1. The obvious: There will be a lot of fake speeches floating around that spout fake news and hate-filled views.

    2. The less obvious: The prevalence of hatred and sensationalized rumors will embolden those who find themselves agreeing with the extreme views seemingly endorsed by some authority figure, and will add their support and authentic voices to the mix.

    Perhaps this is just an extension of what has already been happening with the internet. We will find ourselves more fragmented and divided than ever, filling our chambers with literal echos of synthetic voices.

    • FromOmelas 9 days ago
      3. A reversion to in-person interaction for anything important (exams, certifications, payments, loan applications, ...)

      Society benefited from a productivity gain by moving everything online, in a (relatively) high-trust environment. That is now becoming more expensive (due to higher % of frauds), or even infeasible.

      So, a drag on economic growth for years to come.

    • lancesells 9 days ago
      3. A distrust in everything leading to more sway in public opinion.

      4. Deniability of everything that was actually said. I think it's either Elon Musk or Donald Trump whose lawyers have argued in court/public that you can't be certain they actually said it.

  • sandspar 9 days ago
    The future will be stupider than we can even imagine.
  • waldrews 9 days ago
    I think the lesson here is obvious. Who among us hasn't been terrorized by PhysEd teachers, forced to run laps or climb things or something against our will? Athletic directors must be even worse. Ban athletic directors!
  • itqwertz 9 days ago
    We are rapidly advancing towards the logical conclusion of the post-modernist question, “What is truth?”

    Mobs will always remain dumb and quick to anger, so this will not be an isolated case.

    • barbariangrunge 9 days ago
      I feel like the post-modernist stance isn't to question what constitutes truth, as the ancient Greeks and epistemology nerds attempted, but rather to posit: there is no truth. I'm not a fan of that stance, except for fun in certain art forms, similar to how violence is fun in art but not in real life.
  • wpollock 9 days ago
    >Eiswert determined that Darien had submitted the payment to the school payroll system, bypassing proper procedures.

    How did he allegedly bypass payroll procedures? An athletics director should not have passwords to the school payroll system. I wonder if social engineering was used? In any case, their security procedures need an audit!

  • xhkkffbf 9 days ago
    Don't let anyone say that the teachers aren't keeping up with technology. Certainly not the gym teachers.
  • FrustratedMonky 9 days ago
    So it begins. If any joe blow ex-athletic director can do it, think about how widespread this could become.
  • Simulacra 8 days ago
    This will haunt that principal for years to come, the damage has already been done. Like false claims of rape, the internet will continue to keep the lie alive.
  • heavyarms 9 days ago
    There are lots of valid use cases for speech synthesis and text-to-speech technology, but only 1 or 2 valid/legal use cases for voice cloning that I can think of. Ignoring the moral and ethical questions, why would anybody devote time and resources to building a company around a very niche solution... one in which your customer churn rate is partially dependent on users not ending up in prison?

    edit: typo

  • endisneigh 9 days ago
    Imagine if he had been cleverer: sitting in an office, feigning a conversation with them, with kids nearby who would be “witnesses”. Monitor the principal so that the principal's alibi isn't solid, or somehow supports the fake.

    Wild times.

  • theogravity 8 days ago
    Wonder what happened to the victim, Eiswert, in the end? The damage was done, and I don't think it can ever be fully rolled back, unfortunately.
    • rpjt 8 days ago
      I wonder about this too. He's already guilty in the court of public opinion.
  • beaeglebeachh 9 days ago
    Wait till the kids figure out how to do it. Principal at my school tried to have me expelled, by lying saying I was inciting violence. Soon it will be easy for kids to turn the tables on these tyrannical administrators in their insular fiefdoms.
    • dvaun 9 days ago
      While your situation is unfortunate, you can’t paint a broad stroke and assume all administrators are “tyrannical”. Most of these people are simply trying to perform their jobs.
      • beaeglebeachh 9 days ago
        They extract their salary by evicting old ladies who can't afford their property taxes.

        They are no better than Al Capone.

        • TheFreim 9 days ago
          You think school principals are the ones evicting people? Or do you think school principals are just tasked with evicting old ladies in particular?
          • filoleg 9 days ago
            Yeah, I have no idea at all how school administrators are related to evicting old ladies and collecting property taxes either.
    • PhasmaFelis 9 days ago
      I'm sorry your principal was a dick, but it's weird how you're acting like "teenagers now can fraudulently incriminate anyone they want" is somehow a good thing.
      • beaeglebeachh 9 days ago
        I think it could be a good thing. When the tyrants have to face their own behavior, they'll be forced to raise the evidentiary standards high enough their own lies won't work.
        • jbullock35 9 days ago
          > When the tyrants have to face their own behavior, they'll be forced to raise the evidentiary standards high enough their own lies won't work.

          I don't think that tyrants work this way, at least not in America's educational system. They're very happy to employ double standards.

        • rurp 9 days ago
          That's not really how actual tyranny works. There is no requirement for equal treatment.