It's not unchecked free speech. Instead, it's unchecked curation by media and social media companies with the goal of engagement.
As long as media companies benefit most from people engaging with content, they will continue to promote information that is damaging to society. It may even be true information, but when the goal is engagement, its purpose will be to enrage and divide, because that's what's engaging.
Limiting speech will not cause this issue to go away. It's bigger than just misinformation. The core issue is the underlying system that values engagement over all things. That is, the advertising system.
Companies that make their money selling ads while providing content to engage have a perverse incentive to make society worse. This is the bad seed that needs removal.
This business model should be illegal. It's already trivially unethical.
>It's not unchecked free speech. Instead, it's unchecked curation by media and social media companies with the goal of engagement.
Unchecked free speech has always been an issue, which is why even in America, where free speech occupies one of the highest rankings of competing social virtues by dint of history, there are still a litany of narrow carve outs.
Do you lie to your business partners in the context of a transaction? We hit you with fraud, misrepresentation, or any number of torts. Do you threaten someone with bodily harm? Oh boy. Cyberbullying? Depending on your state, that might be a problem. Lying while under oath or to a federal agent? That's potentially a few years behind bars.
So yes, unrestricted 'say anything lmao' free speech does not exist. It has never existed. It will never exist.
Everyone who does us the disservice of trotting it out as if it does exist is creating a strawman which distracts us from a more honest conversation.
Which isn't 'should there be a line at all?' It's 'where should that line be drawn?'
Speech in freedom of speech refers to the communication of ideas and opinions, not all communication. The term "freedom of expression" is commonly used to clarify this. Lying to your partner is not a crime. Taking their money under those circumstances is the crime.
But again, this is just the status quo. Why is lying to someone to take their money a crime, but lying to someone to influence an election not? What makes money worth special protections, but voting not?
The real reason is that you cannot cleanly define promises when combined with other definitions, nor would you necessarily want to - surprisingly.
Say a politician promises to cut taxes or boost social spending and then a natural disaster strikes or an adversary attacks. Either war or repairs and relief consume time and money. Holding them to it would force suboptimal decisions.
Money however is fundamentally fungible and performs best when it flows. Fraud being legal would force far more caution and selectivity which would do vast systemic harm. Why invest if there is no guarantee that it isn't just a gift that may be paid back?
A dictatorship of politifact? No, tech firms are experimenting with that already. It results in things like tables of public data being classed as "disinformation".
The whole idea of fact checking is naive. Nobody is trustworthy enough to determine truth. Everyone who tries turns it into "truth is whatever powerful people say it is" (which is itself circular logic of course, but the people deciding on the meaning of truth are rarely all that bright).
Jokes aside, I think that democracies which are split up into executives, legislatives and judicatives are not well designed from an architectural standpoint.
What's missing is a society of technologically specialized committees that evaluate knowledge and truthfulness, and are able either to control the press or to control the legislative process.
So many laws have been created out of misinformation, stupidity, and resulting fear... So that generations to come are harmed by this shit. It's absurd.
In Germany, we technically have the "Rat der Wissenschaften", but it basically has no purpose. It just exists, with no power over any other institution.
The irony is that in Germany the only institution able to do anything about the misinformation cases was actually the Bundeskartellamt, whose job is to uncover illicit flows of money and perform financial audits of illegal syndicates.
The stealing is what makes it a crime, not the personal benefit. If you rob someone solely to the benefit of your favorite charity, you still robbed someone.
I would assume explicit actions like mislabeling ballots or actually changing someone's vote on them is a crime. Perhaps even tricking people about polling place locations or the party of candidates. But it's obviously very dangerous ground once you venture multiple degrees of freedom off into policing conspiracy theories or political ads, given the risks of abuse. It should be just as hard as to convict people for murder via such distant effects. Especially since in voting, people have access to alternative views, including yours.
But it's still fraud even if you don't steal anything. Lying on a job application is fraud, even if you ultimately do the job perfectly well.
So why is lying on a job application for the purpose of getting a job illegal, unless that job is an elected position? (note that lying on behalf of someone else to help them get a job is also fraud, so the same question could be asked of someone lying on behalf of or in support of a candidate)
the definition of fraud follows the laws, not logic. laws aren't uniform and crimes aren't equipotent, because they model a trade-off between societal, communal and individual damage, plus a great deal of unfairness from certain topics being propped up by politicians or special interest groups
In this thread we're discussing why the law is the way it is (with the subtext that, perhaps, we should modify it). Saying that "the law is this way because that's the way the law is" is circular. Which is what I was getting at when I said "But again, this is just the status quo" a few posts upthread.
To make the question more explicit: why should the non-legal, but supposedly fundamental, right to "freedom of expression" protect your ability to lie to me about a political candidate, but not a contract?
>Rape by deception is a situation in which the perpetrator obtains the victim's agreement to engage in sexual intercourse or other sex acts, but gains it by deception such as false statements or actions.
As the Wikipedia article illustrates, the scope is very narrow and depends on the country. A broad interpretation would have a significant impact.
A trivial example would be a divorce where one partner has been caught with a false statement or action. Any sexual intercourse at a date between the lie and the other person finding out would potentially be rape, since consent might have changed had the person been truthful. If both sides are cheating on each other, then we would be in the weird state where both were raping each other at the same time, as both would be using deception to obtain the victim's agreement before the act.
The vast majority of cases described in the Wikipedia article are ones where one party is asleep, which to me is not about deception at all but rather the state of the victim and their ability to consent. Further down the article, the California case is interesting but involves other crimes in connection with the act, which muddles the definition. Lastly, we have the Israeli one with the religious aspect, and I strongly doubt a similar case would be allowed in places where such religious aspects hold less weight. Applying the Israeli case consistently, a gold digger would similarly be raping their victim, since they too would have lied about their interest in a long-term relationship.
If you're intending to obtain some benefit or deprive them of something, but fail to clinch it and 'take their money' as you put it, the statement still triggers liability under most theories of fraud/misrepresentation. Generally some injury needs to be suffered, though.
So no, it's not the stealing. It's the speech with bad intention.
As a sidenote, fraud and misrepresentation generally have some of the most interesting rules regarding evidence. The law really drilled down into what's a sufficient record and what's not in respect of attribution of liability in this area.
No. I'm just not trying to write out half of a legal text. In the standard tort ones it's a requirement largely because of the historical requirements behind the remedies associated with the acts.
Where the offenses aren't linked to theories of compensation or restitution, but instead are based on punishment, they're less likely to require injury and more likely to look at it as a factor in determining the magnitude of the punishment.
The rules regarding fraud in connection with TARP funds, lies to federal agents, etc. all run on this line.
Tell a federal agent you aren't guilty of something when you are? That's a charge. There's no harm to the agent. No injury. Just the lie.
> unrestricted 'say anything lmao' free speech does not exist
I am baffled by everyone here talking about algorithmic platforms like Facebook and Twitter as if they are a bastion of free speech. They are designed to promote only the most engaging content (i.e. the most outrageous and, in some cases, literal misinformation meant to feed on your biases).
These services are not a free marketplace of ideas, they are companies which are designed to make money. The more eyeballs on screens, the more money they make, and it is not well reasoned discussion with a diverse representation of viewpoints that keeps people glued to their devices.
Consider how you respond to someone making an incorrect statement on Twitter. You reply to them, and by replying to them you amplify the audience of the incorrect tweet. The fact that your correction causes misinformation to spread isn't a bug, it is exactly how the platform was intended to work, because it causes "engagement."
"The solution is more free speech" doesn't work here because the platforms don't have free speech and never did to begin with. All people are asking is for those platforms which currently promote outrage above organic content to sort themselves out. Not any of these strawmen about creating a "bureau of truth" or "state censorship boards" and certainly not notions of making an American KGB which I think I saw mentioned somewhere.
I have no idea of the best way to do this. Treat the negative effects of algorithmic platforms as some sort of externality? Develop strict regulation on what types of methods to keep people hooked on social media are considered ethical? Similar to how drugs or gambling are already regulated? Just straight up ban this type of business model?
It's going to be one of the big questions of the next decade. Even something like a user generated flag where if enough people press it, a tweet has a big warning over it saying "this is a contested matter" or something would be better than the current situation.
>it is not well reasoned discussion with a diverse representation of viewpoints that keeps people glued to their devices
This sounds like Hacker News. Every platform has (or develops) its target audience, and we also can't blame the companies behind them for being "for profit". Ads are lucrative and relatively fair, even beneficial - e.g. small businesses can raise awareness of their offerings; others have pointed this out before. Over time things consolidate by themselves: "we" hang out on Hacker News and other, likely more proper, offerings. It's sad to see people "logging into facebook" and dragging down the discussion, or just being offensive out of stupidity. But you can't fix that. The discussion itself, and discussion about it (discussions about discussions - meta discussions), is important to make sure "things get exposed". Things consolidate with time: looking back on my last 10 years, there are countless communities where I left one last angry comment and then left for good. I am sure those communities are just small, irrelevant echo chambers, shrinking because even their loudest advocates get tired and subconsciously realize over time that something is wrong, while others stay. Still, I have the feeling that there are a lot of big, relevant communities that I still follow, and which are growing, which gives me hope that we are heading in the right direction.
> Do you lie to your business parters in the context of a transaction? We hit you with fraud, misrepresentation, or any number of torts. Do you threaten someone with bodily harm? Oh boy. Cyberbullying? Depending on your state, that might be a problem. Lying while under oath or to a federal agent? That's potentially a few years behind bars.
...and yet, ALL of those things happen routinely and without consequence on a COLOSSAL SCALE.
Folks on HN like to pretend that laws and courts function like clockwork or computer programs, executing flawlessly and automatically. They don't. People get away with stuff ALL THE TIME. And if it does ever end up in a court, "winning" is an iffy proposition even in the best possible circumstances.
In the case of political disinformation, the problem is even more intractable. Talking heads can lie their heads off to audiences of millions of people on incredibly powerful platforms. I don't just mean "lie", I mean deliberately say a demonstrable untruth with the intention of misleading others on serious matters and be completely unaccountable for it.
There's the recent example of Trump retweeting a completely fabricated story suggesting that Biden had members of Seal Team 6 who were involved in the OBL assassination "executed". Turns out the story was "launched" from Miami, Florida, at the "American Priority Conference" the week before. In a few short days, a disinformation campaign was able to fabricate a completely fake story, gain 3M YouTube views, and "induce" the President to retweet it to 85 million followers. The source story was then quickly pulled from YouTube, but everyone involved (Gary Franchi, Anna Khait, Nick Noe, and others) is totally free to do it all over again. They still have their Twitter, Facebook and YouTube platforms, RIGHT NOW, during a super tense time right before the election.
These platforms are very much "owned" platforms. They're not like newspapers, but they have the same responsibility to act with some level of good human judgement. Laws aren't ever going to be able to keep up with this crap.
I think one significant difference in these cases is access to alternative channels of information. For example, when buying a property you are highly reliant on the information you get from the vendor in making your judgement; you can hire a surveyor etc., but the owner-occupier is in a uniquely privileged position. The same often goes for a party to a business transaction, especially with respect to their future intentions. In political discourse, though, there are generally plenty of alternative sources of information and opinion.
Also contracts and personal relationships are between two specific parties with defined responsibilities, but voting is not as direct and specific as that.
>It's not unchecked free speech. Instead, it's unchecked curation by media and social media companies with the goal of engagement.
Try teaching non-elite undergrads sometime, and particularly assignments that require some sense of epistemology, and you'll discover that the vast majority of people have pretty poor personal epistemic hygiene—it's not much required in most people, most of the time, in most jobs.
> Try teaching non-elite undergrads sometime, and particularly assignments that require some sense of epistemology, and you'll discover that the vast majority of people have pretty poor personal epistemic hygiene—it's not much required in most people, most of the time, in most jobs.
I'm not sure what you mean by "non-elite undergrads" here.
Is it "undergrads from non-elite universities", or "undergrads from non-elite backgrounds", "B-student undergrads", or perhaps something else?
Netflix doesn't sell ads (and doesn't rely on user-generated content, so they can't really threaten democracy), but if you read interviews with executives, they also seem to be optimizing for maximum time spent on the service.
Apple, too, reportedly told developers it's increasingly looking for Apple Arcade games that will "keep users hooked" over a long period. 
Consequently, I'm not convinced removing ad-supported media will fix the problem of companies optimizing for engagement at all costs. Humans are stupid, and so we're more willing to pay for services where we spend lots of time, irrespective of the quality of that time.
The difference is that TV shows and video games are clearly entertainment, so you know what to expect. It’s possible for a fictional TV show or video game to manipulate you, but it’s harder because you know what you’re experiencing isn’t real.
Social media is more dangerous because it can warp your perception of reality: what is happening in the world, what people think and care about in your neighborhood and your country; what is a scientific fact and what is up for debate; which politician is trustworthy and which isn’t. There is no boundary to what aspect of your life Facebook or Google can manipulate. With an ad business model, they are incentivized to expand the scope of manipulation further and further, since their ability to influence you is literally what they sell.
That’s why I originally brought up Netflix, even though their domain is very different than Facebook. Netflix seems to have decided that the best way to optimize for renewals is to get subscribers to spend more time on the service overall. Which isn’t unlike Facebook et al doing the same for ad views.
Sure but they would not be infested with conspiracy theories and fake accounts because those would be worthless.
So instead of our parents being lobotomized by QAnon, they would still be inviting us to their Zynga farm game so they can unlock the pumpkin patch extension. Still scammy but less destructive to democratic institutions and free will as we know it.
The ad business model is not just about selling engagement, it’s about creating an ecosystem enabling the worst players to make the most money, while the platforms at the center keep their hands clean. Google started this tradition with malware search bars and shady affiliates; Facebook and Youtube just took it to the next level. None of that shit is profitable with subscriptions.
> Sure but they would not be infested with conspiracy theories and fake accounts because those would be worthless.
Why? How are those conspiracy theories connected to advertising?
They're not worthless insofar as they cause users to spend more time on Facebook. And Facebook would want to optimize for time spent on their site regardless of whether it was to keep users paying or to show them more ads.
Conspiracy theories emerge from the complicated and low-quality ecosystem that is required to monetize free content with ads. If people pay you a subscription, you don’t need to incentivize scammers to produce content that enrages people so they can see a viagra ad. Instead you can do what Apple and Netflix do: pay content creators for quality content, and compete on quality.
Is there a market for $5/month all-you-can-eat documentaries about lizards who secretly run the world and lay their eggs in vaccines? Sure, but it won’t be a $300B company capable of destabilizing entire countries.
The settling of the US involved the creation of many, many small-town newspapers. These weren't created at random but by the most influential families of the towns, to sell the town itself and to maintain the influence of the family. Radio and TV were free, like websites, from the start, of course (with paid cable coming much later). High-end magazines survived the rise of TV and radio because their audience was higher quality.
So the audience has been the product for a long, long time and paying or not paying made little difference.
I think the underlying challenge here might be that the most "successful" businesses often make us the most addicted. I'm curious how many of the very large companies, if any, don't rely on a business model of (borderline) addiction to their products.
Which includes...the New York Times. And all other media companies.
> This business model should be illegal. It's already trivially unethical.
By this criterion, pretty much all newspapers that have ever existed should be illegal. Also pretty much all TV channels. If this is a problem, it's been a problem since long before Facebook and Twitter.
Media - especially news media - whose real product is advertising has always had a perverse incentive structure: stir up society to get more eyeballs. Key to this is the insight that what keeps humans engaged most is angry righteousness and tribalism.
The difference is that media never had the tools that are now available to track, optimize for engagement, and also amplify that engagement with immediate social sharing.
With this toolbox, we've created something that can cause society to destroy itself - the stakes are really that high. It's not the intention of the media companies but it is the result.
Nothing in this world scares me more than these tools, created first by social media companies, and their capacity to destroy civilization.
It's a problem. Been one since the Spanish-American war.
Some of the biggest source of advertising revenues for news media organizations comes from defense companies. The New York Times pushed for the Iraq war, for example. They fired journalists who reported dovish news, too. Many others left to report as freelancers so they could paint more accurate pictures in smaller publications.
Similarly, MSNBC will bring on ex-Pentagon officials to "analyze" current events, but they don't bring in ex-State Department officials. There are likely patterns and incentives here. "Follow the money" as they like to say.
Newspapers and TV channels are curated and the means to being a publisher or advertiser on those platforms is extremely limited compared to what the internet allows. When everyone on the planet is an unchecked publisher and/or advertiser, the problem is clearly much worse.
I'd honestly have a hard time making a case that Fox News is less net bad for society than Facebook as of today. But Fox News feels like the "end-stage" version of whatever you'd call that particular flavor of classic corruption and human-brain-hijacking, whereas Facebook has been visibly improving its algorithmic behavior control abilities, has already invented new flavors of human-brain-hijacking, and seems to be getting better and better at making us worse every day.
We've learned how to handle the old style of human-brain-hijacking. Those lessons were written in blood and war. Facebook has few parallels, only distant rumblings of far worse lessons to be learned.
> I'd honestly have a hard time making a case that Fox News is less net bad for society than Facebook as of today.
If you were to put "any media outlet" in place of "Fox News", I would agree with you.
Part of the problem is that so many people think it's only the media outlets that say things they disagree with that are the problem. It isn't. It's all of them, no matter what side they're on or what point of view they're advancing. They will all lie and manipulate when it suits them.
I think the problem is the addition of "opinionated" content media outlets publish to generate outrage. E.g. IMO the NYT is the gold standard of journalism but a lot of their opinion pieces are not even fit to print.
> It's all of them, no matter what side they're on or what point of view they're advancing.
The premise to your statement is rather telling... why should a media outlet take a side or advance a point of view? It's just not necessary or (IMO) acceptable for the main sources of information to do so. Here in New Zealand we're blessed to still have trustworthy minimally biased media. Australian media sadly seems heavily Americanised.
All media is biased. It might not be as blatant as Fox but it's still there, in the tone the reporter takes, in the types of questions they ask, in the attitude of their responses to answers, in which parts they emphasize, etc...
I agree. I kind of regret putting the "minimally biased" in my reply. I think that media can be trustworthy despite a small amount of bias, as long as the audience does a little critical thinking. At high levels of bias, they start trying to mislead you.
Edit: because good journalism involves taking steps to avoid bias.
> I think the problem is the addition of "opinionated" content media outlets publish to generate outrage. E.g. IMO the NYT is the gold standard of journalism but a lot of their opinion pieces are not even fit to print
A lot of their journalism isn’t that great anymore either. (In the last 3-5 years, there has been a massive upheaval in the ranks as revenues have declined and experienced journalists have left.)
> They will all lie and manipulate when it suits them.
Absolutely. At the same time I think it makes sense to single out Fox News at this current moment. For several years now they have been a straight-up propaganda mouthpiece. They’re noticeably worse.
I’d call myself a Progressive, but FWIW as a habit I don’t watch any television news regularly - I read.
Point being I think it’s reasonable to say that you can’t say with a straight face that Fox News is - at this particular moment - equivalent to other networks.
Fox News isn’t designed to be even handed. “Fair and Balanced” is an inside joke. It’s designed to bring balance to a media industry that skews heavily left by offering the countervailing view on issues. (I do agree it’s gone downhill since Ailes left. Megyn Kelly has said as much.)
One thing that’s remarkable is that in the last 6 months, I’ve had Democrats confide to me that they’re harboring skepticism of the media because they noticed how much the media downplayed the violence of what was happening in their cities.
The whole "media is skewing left" claim is completely fabricated. If anything, most media has traditionally been moderate conservative. It is simply that part of the Republican party has moved so far to the right that Reagan would likely be kicked out today.
And this to a large degree has been driven by Fox News. It is also important to point out that they are different from most other news media in that they categorize most of their shows as entertainment, not news, which is why they cannot be held accountable for knowingly telling outright lies.
You can see the distinct leftward swing in coverage of the 2019 primary. The media was completely blindsided by the victory of Biden, and the implosion of progressive superstars like Warren and Harris.
The argument that the media is left-leaning seems to be based largely on the notion of "if you don't agree with everything I say and ask difficult questions, you must have a left bias". Have a look at this Shapiro interview https://m.youtube.com/watch?index=616&list=LLj44HgVlqL7Qgzft.... For some context: Andrew Neil, the journalist Shapiro accuses of left-wing bias, is an ultra-conservative. Anyone with some knowledge of the UK media would fall over laughing hearing this accusation.
I’m not convinced that the majority of “the media” skews to the left (although some subsets, like say, the NYT opinion page probably do). CNN however is definitely left leaning, at least in the era of Trump. I don’t remember them always being this left leaning, I feel like they saw the success of Fox News and decided they needed a foil on the left.
> To put it differently: you cannot understand what’s going on in the world just reading the NYT and watching CNN
Absolutely. At the same time, regardless of what other news you consume, all Fox News is good for is taking the temperature of the American paranoid right and/or your low-information voters with authoritarian sympathies. Or folks who just like the news babes.
Or of those who have a difficult time acknowledging uncomfortable truths. Or do not wish to say certain things out loud.
And before you admonish me - the issue is not a lack of broad-mindedness on my part, or a lack of desire to debate/discuss. The issue is that Fox News is propaganda pushing morally and ethically outrageous policies. They're not the only one, but they're by far - by far - the worst (at this time).
Propaganda is not balance. Fox News these days is propaganda. No news organization should be such. That is the truth, as unpleasant or difficult as it may be for some people to connect the dots to get to the point where they understand that.
Second. I'm pretty sure if you talk to people on the Left - the real Left, not the "center-neoliberal-not-far-right" which sorta is the placeholder for the Left in the United States (although I'm glad to see Progressives beginning to win elections), they'll disagree with you about the media bias of large media organizations.
I think that's part of what prompted the creation of The Intercept. I don't always agree w/them but they're clearly more of a Left publication.
Consider these issues:
- The Gulf War
- The invasion of Iraq (Gulf War 2)
- Until recently, police violence
- Until recently, climate change
And probably more. Plenty of the media outlets you named (and I think the NY Times does great investigative reporting) often have not been in alignment with the left on these issues, at least initially.
I am always down for discussion but not for obfuscation of facts, denial of the truth, or equivocation on the fundamentals of equality, respect for fellow people, and human rights. If we have to politely agree to disagree, so be it. But please consider what I say.
Oh. And it's important to distinguish between the tens of thousands of people protesting racist police violence since the murder of George Floyd, and the people who have been looting, whether out of rage, poverty, or simply opportunity.
There were lots more peaceful protestors.
In fact, that you kinda tried to imply that falsehood when talking about CNN (who probably did do something dumb) is exactly the kind of fact-distorting intellectually dishonest behavior that is so problematic. It's not specific to Fox, but it is endemic at Fox.
Lastly the recent rise in violence in some cities is worth talking about some other day, in some other conversation.
> Part of the problem is that so many people think it's only the media outlets that say things they disagree with that are the problem. It isn't. It's all of them, no matter what side they're on or what point of view they're advancing.
You can further reduce this to "all humans will lie and manipulate when it suits them". As a statement it's just as true, but it obfuscates the concept of scale: some people lie more, more deliberately and more frequently than others.
By declaring everyone an offender without further distinction, you are effectively excusing the behavior of the worst actors while minimizing the good faith efforts put forward by others.
Yeah, there’s a terrible conflict of interest between informing the public and selling advertising. There has been for a long time. It’s just worse now. We should figure out how to fix it rather than throw up our hands.
The problem is that moderation doesn't scale. Newspapers, TV, radio, film all evolved to have or to conform to some kind of standards and practices department, which played well with advertiser needs. The issue with contemporary media companies is that it's not practical to do that at Internet scale -- see all of the routine stories about moderation teams at Facebook, for instance.
Not to speak for GP, but I think what he's getting at is that the application of AI in this business model in order to drive engagement (for the purpose of selling ads) should be illegal. Or more generally, we need regulation around AI.
> The core issue is the underlying system that values engagement over all things ... the advertising system.
I don't think you're digging deep enough here. It's not the business model. It's the technology.
We've invented technology that can, to a significant degree, control people. It is as addictive as hard drugs and people will just keep coming back to their dealer for more and more digital crack.
But it's worse than heroin or crack cocaine. It gives the dealers not just the power to keep people coming back for more, but also far, far more control over not just what those people do, but what those people think.
The large companies (Google, Facebook, Twitter, etc) aren't in the business of search or social media. Nor are they in the business of advertisement. Their business model is selling complete control over people at the population level. At some point it will stop being a business model and start being a self sustaining model of raw power that strips us of our humanity.
We need to take a hard stance against all attempts at psychological manipulation via technology. A/B testing to see how changes affect behaviour should be seen as morally repugnant as selling crack cocaine to children.
> I don't think you're digging deep enough here. It's not the business model. It's the technology.
I don't think you're digging deep enough here. It's not the technology. It's the people. Meaning not just the people running companies, but all of us.
Technology is a tool. Any tool can be used for nefarious purposes, or for beneficial purposes. Which kind it gets used for depends on the people who use it, and on the people who are creating the environment in which the people using the tools operate.
I don't dispute that most of the people who are running large companies (not even limited to social media companies) are amoral and will do anything to make more money, including manipulating other people's thoughts and opinions. But such people can only thrive in an environment where there are lots of other people who are susceptible to their manipulation. There will always be nefarious people in the world. But there is no rule that says there have to be enough other people who are susceptible to them to enable them to thrive.
I don't disagree that people who make money through lies and manipulation should be punished. But I think we all need to take a step back and ask why such people are able to thrive in our society to this extent. At some point the rest of society needs to take some responsibility for not being susceptible to liars and crooks.
These are the same word games that got us tired arguments like "guns don't kill people, people kill people." Looking only at people as individuals limits you to seeing first-order effects. The insidious effects of social media come about because large scale algorithmic optimization has found high leverage points for influencing society. While that's ultimately a consequence of individual people, you can't solve many complicated problems using a lens that can't see higher-order effects.
People seem to be fundamentally incompatible with social media, much in the same way they're incompatible with bullet wounds and drug addictions. Blame-passing word games just get us farther from taking that truth and starting to fix the world.
> Looking only at people as individuals limits you to seeing first-order effects.
Where did I say "looking only at people as individuals"? My point was precisely the opposite: people are not just individuals. An individual person's failure to exercise common sense and critical thinking makes them susceptible to manipulation, and when that failure is aggregated over a large enough segment of society, it has higher-order effects that go way beyond the consequences for that individual person, because it creates an environment where nefarious people can thrive, which is bad for everyone.
Social media technology certainly makes that problem worse by giving the nefarious people more leverage. But you can't fix the problem by banning or restricting the technology; the nefarious people will always have the means to control how those rules get written so that they can continue doing what they want to do, just with different labels pasted over it to satisfy the letter of the rules. Just as has happened with past attempts to do the same thing.
> People seem to be fundamentally incompatible with social media
You are assuming that there is no way for anyone to use the tool of social media without being an addict. That is as false and pernicious an assumption as the corresponding assumptions in the case of guns and drugs, which you are also making.
With that false assumption taken away, your argument boils down to: since some people are incapable of using tools like social media, guns, and drugs responsibly, we have to ban, or at least impose draconian restrictions on, those technologies for all people. That kind of thinking is incompatible with a free society. In a free society, you penalize the people who can't act like responsible adults, not the people who can.
> In a free society, you penalize the people who can't act like responsible adults, not the people who can.
If the ills of social media are, as you say (and I also believe), caused by the higher-order effects of many individuals "failing to exercise common sense and critical thinking," then who are we to punish?
A phenomenon like QAnon might have been started by a single nefarious person, and amplified by a small group of misinformation lovers, but it's only because of social media's leverage that millions of people have had their moral framework and way of interacting with the world corrupted so heavily.
Punishing the irresponsible was a reasonable solution for all of history where the irresponsible had a reasonable amount of societal leverage. We're no longer there.
Perhaps banning social media isn't the way. But focusing on the humans that find leverage points in the system to amplify bad messages also doesn't solve the fact that the system is designed to maximize ordinary people's ability to create self-sustaining societal doom loops.
Banning guns might be overreach. But banning something that's the functional equivalent of distributing assault rifles to millions of toddlers seems like a necessity in a free society.
I didn't say "punish", I said "penalize". Responsible adults should not have to have their society ruined because irresponsible people are manipulating and other irresponsible people are being manipulated.
> it's only because of social media's leverage
But what causes the leverage? It isn't just social media; if a billion people read on Facebook that they should drink nail polish to immunize themselves against COVID-19 (to concoct a fictional, as far as I know, example), and they do it, they suffer the consequences, not me.
What causes the leverage is that we have continued to hand more and more power to governments in the name of "fixing" problems that governments cannot fix. The result is that capturing that government power is worth so much that nefarious people are willing to spend billions to do it. Social media gives those people more leverage, yes, but if that big gob of centralized power wasn't there in the first place, it wouldn't matter.
Your proposed solution doesn't fix that problem; it makes it worse, by giving government even more power. And that just means responsible people get more penalized for the behavior of irresponsible people, because you're giving the government more power to ban tools that responsible people can make responsible use of.
> the system is designed to maximize ordinary people's ability to create self-sustaining societal doom loops
No, the system is designed to maximize the amount of power that nefarious people can capture, by centralizing that power. The solution is to de-centralize that power so it isn't there to capture. Stop depending on government fiat to fix problems.
> How do we make the masses of people less susceptible to this kind of manipulation? Without using the tools of government.
I don't know how to solve the problem for everybody. I know how I solve it for myself: by using common sense and critical thinking, combined with a lot of background knowledge from a lot of different sources. But I don't know how to magically make everyone do that. And even people who do that won't always agree; many of the questions we would all like answers to do not have simple answers that everyone can agree on. People have different goals and values and they aren't always fully compatible, and we don't have a good understanding of many important problem domains.
What I do know is that the problem is unfixable with the tools of government. So we have no choice but to look for other, non-government ways of fixing it.
Of course it is the people. People are flawed. Any Greek philosopher from 3,000 years ago could have told you so. There's no point complaining about it because you are not going to ban people anytime soon.
What we can do is put soft and/or hard limits on what kind of behavior we are willing to tolerate in society. If a business model of providing "free" services and then subjecting your users to psychological manipulation for your own profit is harming us, then make that specific behavior illegal. That will deprive the abusers in the industry of ultimately amoral capital, personnel, and technological resources, which will be liberated and redeployed to more constructive projects.
> Their business model is selling complete control over people at the population level. At some point it will stop being a business model and start being a self sustaining model of raw power that strips us of our humanity.
Calling it "complete control" is absurd hyperbole and ignores that psychological manipulation isn't even remotely new. In fact, the goddamned politicians implicitly called upon to ban this are psychological manipulators already. So were televangelists, and every theocrat and strongman who suppressed dissent so thoroughly that his victims developed Stockholm syndrome.
It cannot strip us of humanity, although it would almost be a good thing if it could, because "humanity" in that context is a collection of bugs exploited for evil.
I agree with your diagnosis, but perhaps not your treatment plan. You seem to think that making the social networks’ business model illegal would not be a violation of free speech, but that the social networks limiting distribution of misinformation would be a violation of free speech. Am I interpreting you correctly?
No lucid assessment of “free speech” entertains a definition that includes “the use of machine learning to target messages to individuals in order to make money.” In the same way that me sitting outside your house, watching when you come and go to determine the best time to place a political flier on your door so you’ll see it when I want you to, isn't free speech, surveillance capitalism is not free speech.
So you can't use machine learning.... can you use psychology to craft your message? Can you do surveys and gather demographic data to choose where to target?
You obviously think there is a line where you can no longer use tools to craft your message, but I am not sure where you would draw it.
I feel like any argument you make against machine learning could also be used against something like the printing press. Couldn't people have said, "clearly, using a machine to make thousands of copies of your text is unfair and an abuse of free speech... you need to write your words by hand like the rest of us!"
I am sure where to draw it: at using machine learning. How did I draw this line? I saw the disparate impact machine learning has vs traditional media, by sheer measurement of friction. Traditional media requires intentionality on the part of the target: picking up a newspaper, turning on a television and tuning to a station. ML on social media exploits key weaknesses in human psychology combined with unprecedented data collection to nudge individuals to read/believe/buy. This is wrong, it’s clearly wrong, and should be illegal.
These sorts of slippery-slope arguments are necessary and important, but they should not paralyze us from taking action against it.
Like a lot of issues you'll likely need to choose a Schelling point (arbitrary cutoff).
Some of it is simple inconvenience. It is a lot harder to learn enough psychology, understand your audience, and so on to influence people than it is to have a program that scans every word they've put on their [social media account here] to decide how best to convince them. That alone would likely stop a good number of actors.
If only a small number of people had access to and were using the printing press, that might legitimately be an argument for the government in that country to ban it, since it gives them more power over others. Once it becomes common, non-secret technology, though, everyone has been amplified the same way.
Now, 'give everyone the ability to use machine learning to tailor their text to the audience' could be a route you could attempt, but I don't think it makes as much sense for machine learning, even if I could imagine a society that has that.
I take your meaning as, what we see on social media is not Free Speech, it is advertising subsidized Commercially Promoted Speech. That also means that preventing the promoted distribution is not limiting Free Speech (although as a private business they could do that with some liability as well).
Basically, the people complaining that their speech is being blocked were allowed to select the terms for debate based on their chosen meanings. Maybe the meaning of Free Speech as standing on a box in the town square needs to be taken back... but the Supreme Court has ruled that money is speech so it seems like that boat has sailed (for commercial and political reasons).
I think this has a lot to do with the platforms vs. publishers issue, and the Section 230 dustup we're about to see.
IMO any platform whose owners have enough editorial control for these engagement-hacking techniques to be useful (the ability to decide what gets seen, what doesn't, and who sees what) should be treated as a publisher, not a platform.
This doesn't have to kill online communities generally. As long as we can distinguish between editorial control and freedom of association (i.e. the ability to ban rule-breakers and people we don't like) I don't see why effective moderation wouldn't be possible.
You think billionaire magnates and state actors who fund a lot of this divisive content care about advertising profits? No, in fact they have the deepest pockets and will be the last ones standing regardless of what you do with twitter and such. They don't need twitter; its focus on engagement just makes their efforts more cost effective. Cut that off and they'll refocus on buying national and local TV networks or whatever else gets eyeballs and attention.
I think you are being a bit generous here. It isn't society doing the diagnosing. It's just a group of highly privileged individuals. It's one disinformation entity demanding privileged position in the disinformation space.
All state propagandists complain about disinformation, whether it is the BBC or Chinese propaganda organizations or Russian ones or whatever. They all claim to be worried about and to fight against disinformation, just like the New York Times. After all, if you are fighting disinformation, then you must not be party to disinformation. But we all know that's a lie.
> That is, the advertising system.
Nope. Disinformation/propaganda/yellow journalism/etc predates the modern age of advertisement. In the past, newspapers were funded by wealthy business people, politicians, or governments, and they were all created to further the aims of their creator via propaganda. Ask yourself who created the New York Times, and why.
> This business model should be illegal.
Nope. You are being misled by disinformation. It's not the business model or the profit generating mechanism. It's the nature of the business itself. News/media/etc exist to manipulate people. It was created to tell people how they should think or feel about events. To guide the herd.
Also, engagement was important long before social media.
Yes, currently the 1st Amendment only guarantees that no censorship comes from the government. We need a stronger guarantee: no censorship of any public discourse by any party. Not any kind, because even well-intentioned censorship causes problems! Using reddit as an example: suppose there is a subreddit A that discusses news and politics, and the mods ban racist comments. If you are a racist, will you change your racist ways if you find your post banned? No, of course not. You would just be indignant and find/make another subreddit B that is more tolerant of racism. Subreddit B is provided the same tools subreddit A uses for censoring racist posts, and abuses these tools to censor any voices that argue against racism. You can no longer be convinced to abandon racism because you are stuck in your comfort zone, and anyone who argues against racism is just an SJW or special snowflake in your eyes, whose opinions are automatically dismissed by your brain.
I come from a country where free speech was never really a thing, and I tell you, Americans, this: you don't value it enough.
Restrictions on free speech inevitably lead to some form of censorship, and any form of censorship inevitably leads to the population being subjected to some 'official' version of 'truth'. This 'truth', however, is carefully curated to coerce the population to act in a certain way.
So when Twitter or Facebook start 'fact checking' posts, you shouldn't say 'Twitter is a private company, 1st amendment doesn't apply to it', you should grab pitchforks and put their censorship efforts to rest.
Artists in the soviet era had to employ all kinds of allegories and humor to communicate their experience without being reprimanded by KGB censors. I'll take free speech over assigning any group the power to arbitrate what is considered "disinformation".
Agreed. Who snopes the Snopes? Snopes has been shown to be biased, since it is run by people who lean in one direction; who stops them from misinforming the public? What about when politicians are exposed and the media shields them? Are we just going to blindly trust a biased media source?
We need to allow the people to review all evidence and form conclusions from said evidence. I can't tell you how many times I have followed up on media articles only to find that the media just spat anything out for ads. Without all the facts, it is bias and speculation.
Well, ultimately it's a question of authority. I cannot, or have no time to, validate even the simplest statements about something non-controversial, e.g. physics, but I choose to trust scientists and ignore some guy on Youtube who claims relativity is a hoax.
Removing the video from youtube at the fact-checkers' behest would be problematic anyway; however, it would be fine if there were a disclaimer on it referring to a specific authority in physics whose views disagree with it.
Can't I believe both things? I believe that Free Speech is one of the most important rights we have, but I also believe that people utilize their free speech to mislead and do harm.
It would be nice to believe that "the solution is simply more free speech" and that the truth will win out in the end, but that is not fully backed up by the evidence. Lies can be carefully crafted to exploit the way human brains work, to mislead people into believing them... the truth is limited, because it has to be true.
So what do we do? I am not able to read the article, but it sounds like they are arguing for some limits on free speech. I share the concerns that people on here, like yourself, have about that. So what techniques can we use to stop misinformation that don't rely on limiting free speech? We can't just sit back and hope the truth will come through, we have too many examples in history to know that doesn't always happen.
If we just sit back and wait for the truth to win out, we might end up in a world where the people using free speech to mislead grab enough power to stop people from using the truth to fight their misinformation.
> Lies can be carefully crafted to exploit the way human brains work,
Well, sure. But the problem is the censors are also human. If crafty lies can deceive the average person, why do we believe that the authorities are any more immune?
I feel like so many anti-libertarian arguments follow this formula: "Regular people keep making this mistake. So let's just have the government helpfully stop them from making the mistake. Except, oh wait... the government is also made up of people who make mistakes." In other words, who watches the watchmen?
Even in a world filled with crafty lies, open speech is the obvious solution. It's much harder to deceive 300 million people than it is to deceive a small agency responsible for "arbitrating truth". Yeah, a lot of people will still screw up and get the wrong answer.
But when millions have access to open information, it's virtually guaranteed that at least some non-negligible subset will figure out the truth. A world with centrally managed opinion is not very robust, and likely to see truths die out completely.
I never said that the government should be the one to do something... I meant society needs to figure something out, because the evidence seems to be showing that a growing percentage of the population is being influenced by falsehoods.
I agree that a world with a centrally managed opinion is not robust and would see truth die out... my fear is that if we DONT do something about the current propaganda, our government will be taken over by people who don't believe in free speech. I want to do something, within the confines of preserving absolute free speech, to help combat that propaganda.
It isn't enough that some non-negligible percentage of the population knows the truth. Even if almost half the people can see through the lies, the other half could gain control and shut down the people who see through them.
> It's much harder to deceive 300 million people than it is to deceive a small agency responsible for "arbitrating truth".
In the US, you don't need to deceive 300M people. You just need to deceive enough people (voters, really) to get 270 electoral votes. In 2016, that turned out to be around 63M people.
I'm not arguing with your central point; I do agree that we don't want some central authority deciding what's true and what isn't. But 2016 showed that it was possible to deceive enough people with lies to elect someone who has not really delivered on any of his promises, and has actively hurt most of the people who voted for him. (I won't even get into the toxicity of his political platform as it's not necessary to do so.)
How do we actually combat this? "The solution to bad free speech is more free speech" did not work. I agree that "some non-negligible subset will figure out the truth"; in 2016 that was more than 65M people, but that was not enough. What other options do we have, that don't involve central fact-checking authorities, or, worse, active censorship? I really want to know what they are, because I agree that truth-arbiters and censors are unacceptable.
>> Lies can be carefully crafted to exploit the way human brains work,
> Well, sure. But the problem is the censors are also human. If crafty lies can deceive the average person, why do we believe that the authorities are any more immune?
> I feel like so many anti-libertarian arguments follow this formula: "Regular people keep making this mistake. So let's just have the government helpfully stop them from making the mistake. Except, oh wait... the government is also made up of people who make mistakes." In other words, who watches the watchmen?
If you can't design and build an iPhone from scratch, yourself, then why can Apple? If you personally can't write code that's nearly bug-free, then how can NASA?
Institutions are made up of people that make the same mistakes as the rest of us, but they can also have institutional practices that compensate and correct those mistakes. It's never perfect, but it's something an individual can't really do.
Society needs institutions whose job is to figure out what the truth is, and information dissemination channels that filter out lies and disinformation. Otherwise it'll be blinded. This work can't be mainly put on the shoulders of each individual, because they just don't have the bandwidth.
> But when millions have access to open information, it's virtually guaranteed that at least some non-negligible subset will figure out the truth.
That may not matter when the millions are robustly deceived by the lies.
The truth will probably win out, in the end, but the end might be one, ten, or a hundred years from now. If we're talking about election-influencing disinformation, that's too little, too late.
It was particularly bizarre when Facebook anointed Snopes as an official fact checker, and then Snopes proceeded to "fact check" posts by the Babylon Bee as "fake news". (The Babylon Bee is a humor web site and clearly labeled as such.)
I'm not sure they could be clearer, which is kinda the problem. I don't think these kind of satire sites should be illegal, or even necessarily removed from Facebook. But given how inevitable it seems to be that even reasonably well-informed people will get tricked, it's hard for me to look at them and see a net positive to society.
News stories from satire sites routinely get shared across social media with little context and, very importantly, without anyone clicking on them to check out the source. It may seem absurd to "fact check" them, but it really isn't, particularly ones that aren't very well-known to a wide audience -- which essentially means "anything but the Onion and even they get quoted straight-faced on occasion".
Babylon Bee and The Onion are exactly fake news. It's humor and satire and looks like news but isn't. What else would you call it?
Somehow "fake news" also started being applied to news that was sourced and thought to be true when reported, but turned out to be wrong when new information became available. And also, for some people, to any news that disagrees with their opinions.
How can you label something as "fake" if it doesn't even pretend to be real? If I buy a cheap wristwatch with a counterfeit Rolex logo then it's a fake. If I buy a cheap wristwatch from Timex then everyone knows it's not a Rolex and only an idiot would call it a "fake Rolex".
The Onion is available in newsprint, like a newspaper (or at least, it was). It has all the trappings of a newspaper, but nothing in it is real (except the ads). If that doesn't make it a fake newspaper, I don't know what does.
Weekly World News (is that still in print?) is maybe more fake, but they're both fake.
It's worth listening to the people from censorship-heavy countries. The US is better off for not having censorship. Free speech is messy. But it beats the alternative.
It's useful to read what the extremists have to say, if you read the extremists from both sides. You can still read Dabiq, ISIL's magazine, online. (That may have backfired. Their position was, it's a war to the death between our Islam and everybody else. The opposition agreed and crushed them.) It's worth reading what the gun rights people have to say. (Do not carry with a round in the chamber is good advice.) What the "defund the cops" people have to say. What the cop-rights people have to say. What QAnon has to say. What the "FEMA Death Train" people have to say. (Those big windowless railroad cars are car carriers. One believer followed one and put a video on Youtube, and was dismayed when it reached an unloading point and new cars came out.) What the "FEMA coffins" people were excited about. (That was a private storage yard for grave liners. Turned out FEMA doesn't stock coffins, just body bags, and not enough of them for the peak of the coronavirus epidemic.)
Within those extreme points you find the BLM people (who have had it with being shot and harassed), the white supremacists (who are mostly working-class guys who saw their way of life evaporate), the evangelicals (who right now are rudderless, having latched onto Trump, who represents their fears but not their values), and the Universal Basic Income people (that used to be called the "dole" in the UK). All have legitimate grievances. Within that perimeter lies reality. Those people are exploitable by politicians who don't have good solutions but can direct their anger.
That goes back a very long way. Read Shakespeare's version of Caesar's funeral oration.
> Universal Basic Income people (that used to be called the "dole" in the UK)
I'm much more ambivalent than I used to be about UBI, after seeing what large groups of sequestered, out-of-work, bored, and angry people are capable of doing, but just to clarify, UBI is not the same as the dole. The dole is only given to people who are unemployed, whereas UBI is given to everyone.
Probably worth keeping in mind that what you're seeing is sequestered, out-of-work, bored, angry people who still have to make ends meet in a hyper-capitalist society without some kind of guaranteed income.
In general, it's not people with time on their hands who pick up a brick and huck it through a store window; if it were, you'd see more millionaire looters. It's people who don't have anything to lose.
I think the downvotes are about your assessments on some of these issues.
But you’re absolutely right in that only by reading these varied sources can you approximately discern what’s really going on. Reading bs trains your bs detector. Only read bs and you won’t be able to distinguish it from reality
Putin, Xi etc. have heavy-handed controls over state media and Xi of course total control over social media. The US is not 'headed in this direction'.
'Free Speech' is the wrong banner for this discussion, as no-one's speech is threatened here - everyone is essentially free to say as they please publicly, and on their own web sites.
The Press is 'free' to say what they want, which includes a ton of bias, but that is what it is.
'Social Media' is a very new thing - it turns out it can be extensively influential and 'memes' from out of nowhere can grab narratives like dogs chasing each other through a crowd.
FB and Twitter are private platforms, concerned with content unlike AT&T, Amazon, email or network providers who are fully neutral - and Social Media has always been censoring content: if you attack individuals or threaten violence etc. you would have been banned 10 years ago.
The algorithms have always been favouring one thing or another using any variety of criteria, so the very nuanced question boils down to the nature of 'truthiness', the degree to which it can be assessed, and how much it can be used for algorithmic purpose - and especially, how could it be done 'fairly'.
I wonder if there are truly objective methods whereby people only ever see posts if they are shared directly by friends, implying a more 'direct' model, like email: if one party chooses to communicate with a specific other party, there's nary anything anyone should censor there.
Alternatively, one could contemplate anything with any political content whatsoever, to be in fact, political, and to be managed in accordance with the same rules for political spending etc. - but that might be altogether too much.
I honestly don't think this is an existential question to the point where industries need to be disrupted or transformed. We are probably 'somewhere near' reasonable at this point; with some modifications we may have a system we can live with.
Finally - it should be noted that 'free expression' doesn't in any way imply 'fair expression'. Like a free market, it can be leveraged and dominated by a few voices who want to 'take control'. There's no reason a few smart minds couldn't establish control over the narrative, whereupon we'd have to ask ourselves whether that's remotely good for society. As much as we sometimes loathe the press today, there are a lot of checks and balances; even with 'narrative-driven news', the MSM are beholden mostly to facts, and there's quite a lot of integrity in that system. A 'free for all' wouldn't have any such controls, and any kind of real truth could be lost.
Perhaps the best solution would be to just stop using social media as we do. I did, and don't miss it at all.
I find it very interesting (but understandable) that tech people, obsessed with data and informed decision-making, are essentially stating that "this time will be different" when arguing for speech restrictions. Despite all the decades of history of these sorts of things going sideways, somehow we're still optimistic that this time we'll get it right.
I find all the pearl-clutching on HN about "free speech" to be rather boring when these have always been heavily moderated platforms run by for-profit businesses that have always given special treatment to whatever advertisers were willing to pay the most. Nobody bats an eye when they're the ones profiting off of it.
I'm sorry if that's dismissive but it's extremely frustrating hearing these kinds of things in the context of social media companies, who have never had a problem with advertisers milking FOMO and gaslighting their entire audience with this stuff for years now. (you are inadequate if you don't buy this product, you are a loser if you don't share this article right now, you are an enemy of the state if you don't vote for this candidate, etc) And actually they love all that because it drives clicks!
The uncomfortable situation they find themselves in now is: At what point is the line drawn, where companies and special interests with millions to spend on ads can no longer lie and gaslight the public? It seems to have been easier for these companies to be the ones to dismiss this when the stakes weren't as high.
> I find all the pearl-clutching on HN about "free speech" to be rather boring when these have always been heavily moderated platforms run by for-profit businesses that have always given special treatment to whatever advertisers were willing to pay the most. Nobody bats an eye when they're the ones profiting off of it.
"You say you're in favor of free speech, yet when I run into your house screaming obscenities you try to kick me out. Interesting."
HN isn't the only discussion site. Reddit isn't the only discussion site. The big social media companies don't own the Internet, and iHeartMedia doesn't own radio. It's vital we have alternatives, and laws to ensure those alternatives exist can be discussed, but saying that people don't really value free speech when they prefer moderated discussion spaces doesn't track. It's trying to equate unlike things, and create a contradiction where there is none.
I thought the point was that it's easy to pretend to be an absolutist, but I find a lot of free speech advocates turn a blind eye to censorship and deception when it's not explicitly political, but just for financial gain.
And my point is that commercial censorship can be circumvented by going to a competitor, whereas political censorship can only be circumvented by becoming a political refugee. We need to stop monopolies, but that isn't much to do with free speech law, is it?
There are speech controls everywhere except on your own private website.
The MSM press, despite being narrative-driven, has tons of controls; they hold each other accountable in various ways and generally stay in the domain of facts.
Social media has always been censored. If you tried to attack people or incite violence on Twitter, you'd have been banned 10 years ago; nothing new about that.
YouTube has been dropping channels for a long time now, mostly anything that upsets advertisers, etc.
The issue isn't so much 'free speech' or 'censorship' in the classical sense. It would be if websites were being taken down, or if the White House, for example, forbade the MSM to talk about certain issues, or worse, took control of MSM outlets.
Well, too much free speech on Facebook allegedly caused the Rohingya genocide in Myanmar. Either that, or free speech sensationalized the role of Facebook in the genocide, to the point where Facebook no longer has free speech.
Or just stop using their services? I like how we've just sort of accepted that Facebook and Twitter are monopolies on the level of trans-governmental institutions, that there's nothing we can do about it, and that there are no better alternatives. There's no mandatory reason to have a Twitter or Facebook account; there are plenty of ways to connect with someone. And I'd happily trade online messaging for personal communication, especially if I'm concerned about censorship and privacy rights with respect to that institution.
I too grew up without free speech, in a dictatorship, and I believe fervently in free speech. However, I also believe that dictating to a company what speech it can or can't have on its platform is a violation of free speech in itself. It is their right to disagree with someone's opinion and to deny them access to their platform. If we disagree with that action, we punish with our eyeballs, or our wallets, or we build our own.
That these platforms are so huge is a problem for sure but doesn't automatically classify them as institutions owned by the public. There are other ways to deal with companies that get too big.
I came into the article expecting to hate it, but the author eventually kind of won me over. She doesn't think Americans should abandon free speech, but she does think we should approach it more like Germany and France.
> Germany and France have laws that are designed to prevent the widespread dissemination of hate speech and election-related disinformation. “Much of the recent authoritarian experience in Europe arose out of democracy itself,” explains Miguel Poiares Maduro [...] “The Nazis and others were originally elected. In Europe, there is historically an understanding that democracy needs to protect itself from anti-democratic ideas.”
Guess what the two most frequent grounds for site blocking and prosecution of dissidents in Russia are: Article 280 (calls to extremism) and Article 282 (actions aimed at inciting hatred or enmity, humiliation of human dignity).
So you say on VK (a Russian social network), "The corrupt mayor from the ruling party is a crook and must be prosecuted"... well, you are inciting hatred toward the ruling party. With luck you'll get a 2-year suspended sentence.
Yeah—if there's one misgiving I have with the article, it's that they never really address the question of "who makes the rules?" It sounds like one reason Germany and France have fewer problems is that everyone there is just... more responsible.
> Two days before [France's] national election, the Russians posted online thousands of emails from En Marche!, the party of Emmanuel Macron, who was running for president. France, like several other democracies, has a blackout law that bars news coverage of a campaign for the 24 hours before an election and on Election Day. But the emails were available several hours before the blackout began. They were fair game. Yet the French media did not cover them. Le Monde, a major French newspaper, explained that the hack had “the obvious purpose of undermining the integrity of the ballot.”
Can you imagine the American media completely ignoring a major leak because it had “the obvious purpose of undermining the integrity of the ballot"? I know I can't!
But also—that blackout law is an example of a measure that seems like utter common sense! Don't allow sudden bombshells to go off right before voting starts, when there isn't enough time to calm down and examine them reasonably. America should adopt something similar.
My overall takeaway from the article was that free speech can both protect and threaten democracies. Russia is certainly an example of what can go wrong when speech is restricted, but Hitler is an equally salient example of how misinformation can be used to seize power. The harsh reality is that democracy is extremely fragile and faces threats from both sides!
If a country's own citizens do the voting, I'm personally fine with anyone influencing them. If we postulate that people are responsible and reasonable enough to have a say in elections, it is an insult to assume that they are weak-minded fools who can be easily swayed by a hostile foreign government.
Insulting or not, we already have Exhibit A: the US 2016 Presidential election.
In all seriousness, I think you're framing it the wrong way. It's really: they are normal human beings with fallible intellect and emotions who can be swayed by the sophisticated propaganda campaigns of a hostile foreign government.
Yeah, the US 2016 presidential election. The losing side is so devastated by the loss that it still can't accept that it was their own compatriots who didn't elect their rather questionable candidate, so they are desperately grasping at any other explanation for why she lost.
Americans did the voting, not Putin. If $200k of ads did the job, well, Dems should have spent $201k to counter that.
Also, are you suggesting that any foreign power can cheaply puppeteer the feeble-minded US population? If so, maybe you need some form of authoritarian rulership to protect the people. Or else, I'm sure, even Iran and North Korea will find a few million bucks to install their own presidents.
It's not an insult; it's the other way around. It's naive to think that the general population acts intelligently with respect to information made available to them on an ad-hoc basis.
All nations media outlets are absolutely protected and controlled industries for this very reason.
Information can be wrong, totally misrepresented, or hyperbolic, and it can reach large numbers of individuals where contrasting information cannot.
The objective of a 'foreign actor' is not to 'inform' citizens; it's the opposite: to use 'information', possibly 'truthy', to manipulate elections toward their desired outcomes.
If a foreign state wants to release 'important information' to 'inform' citizens, then it can be released with enough time for that to be vetted, digested and disseminated properly.
More than half of the vote is based on emotional decision-making around specific subjects. Were the media open to influence by other parties, it would be possible to control electoral outcomes when margins are +/- just a few points, as they are in so many elections these days.
If Putin is behind the 'Hunter Biden leaks' - that's fine, the truth is the truth, but not 10 days before an election it isn't.
> that's fine, the truth is the truth, but not 10 days before an election it isn't.
Why not? It's like a romantic movie where the good guy crashes the wedding of a loved one at the last minute, and instead of "speak now or forever hold your peace", they tell him "but not within 10 days of the wedding!"
Trust me, living in Russia, I know a lot more about Putin than you do. However, the truth is the truth.
On an unrelated note: seeing firsthand how Putin's propaganda works on the internet, it is beyond laughable how much power and influence you Americans ascribe to Putin. His internet 'troll factory' is not a sophisticated tool for propaganda; it's a blunt instrument that simply floods every comment section with low-quality noise. Of all the money that was meant to influence the elections, 99% was, without any doubt, simply stolen. Trump's victory in 2016 is not a consequence of election meddling, but simply a coincidence.
As one prominent member of the Russian opposition said (btw, I know him personally, as in 'shaking hands' personally), "What is happening with 'the investigation into Russian interference' is not just a disgrace but a collective eclipse of the mind."
I agree with regard to hate speech (although "support" is too strong a term). What the Westboro Baptist Church does is abhorrent, but I'm in favor of their right to say those things (although I don't think private companies need to give them a platform).
But this article is primarily focused on "misinformation", and I think that's more complicated. It's one thing to legitimately believe something abhorrent, and to express that abhorrent view in "good faith." Knowingly spreading something you know to be false, however, is another matter—not entirely unlike engaging in fraud, or shouting fire in a crowded theater.
Nobody in the Westboro Baptist Church is causing mass protests and riots either, to put it in perspective. But because they are a nutjob right-wing Christian sect, the internet overlooks that, because it's not part of "their tribe."
Because being wrong doesn't deprive people of the ability to govern themselves. We should let them determine what is legal to say for the same reasons we let them determine what is legal, period. Speech is not magical in any way.
>How much harder would it be to fight for civil rights if that was deemed subversive or abhorrent, and subversive or abhorrent speech was prohibited?
If the population already deemed civil rights abhorrent, I'm not exactly sure the legality of it matters a great deal, tbh; it's not like a lot of civil rights protest was legal to begin with.
Oh cool! So you would've been okay with labeling abolitionism as hate speech? What about miscegenation?
In every era in history, people have always thought that "our era has finally got it right, we're not like those heartless savages of the past generations and we're not like those degenerates in the next generation".
I'm not really sure I follow these strange examples. How does one classify abolitionism as hate speech and get this past judges, journalists, elected officials and everyone else?
We have laws against Holocaust denial in Germany. If Angela Merkel tried tomorrow to use those laws to attack her political opponents, everyone would declare her mad, she wouldn't be re-elected, and she'd probably be sued. That's why rules can exist under the rule of law: you can't just do random crap with them.
This absurdist logic doesn't just apply to speech. Why have laws against riots? Obviously every protest could be declared a riot. Why have police at all? They can be tyrannical, etc. This is no argument.
Agreed. I can’t stand this rhetoric of “I can think of a way this reasonable and useful policy might possibly be abused so let’s throw the entire thing in the trash. I refuse to mitigate any harm or accede to any policy unless it’s perfect!”
Well... then free speech doesn't exist anywhere, because notwithstanding laws against libel, slander, false advertising, perjury etc, there's really no place in the world which doesn't have some form of societal taboos, standards of politeness and courtesy, or notions of acceptable behavior which will lead to consequences if transgressed.
Edit: I see you changed "If there's speech that you receive consequences for then it isn't free" to "If there's speech that you receive legal action for then it isn't free." I'm assuming you were just trying to disambiguate, but even by that more narrow definition freedom of speech is practically nonexistent.
Well, as long as we recognize marxism as morally equivalent to nazism and, while suppressing the openly far-right movements/publications/elected politicians, suppress the openly far-left ones.
...just kidding, I am a free speech absolutist. I feel like the left's sudden dislike of free speech is similar to the Trumpist dislike of the media in 2015: they know that they are extremists, and they know they are wrong.
We used to have free speech in Canada in the 1960s, but we removed it, creating an opening to start policing speech.
Why are Canadians so polite? Because you can go to prison for up to 2 years, though more often than not it's just compensation, AKA "compensation for injury to dignity, feelings and self-respect".
Hurting someone's feelings will send you to a non-judicial tribunal where you are not innocent until proven guilty: you must prove you didn't do it. It seems the only reliable way to avoid being charged is to threaten to take the case to the Supreme Court.
Worse yet, it has also created the 'you're a racist' tactic: to silence political opponents, exploiting their fear of being brought up on these charges, you just call them racist.
Coming from a country where free speech "was never really a thing" implies that your government either stops you from conveying X, retaliates, or is unwilling to protect you from another entity doing the same.
I find it strange that anyone in that situation would equate it with social media sites removing or flagging content published on their platform.
It's ironic that Emily Bazelon, the author of this essay, is the granddaughter of the late influential federal judge David Bazelon. Judge Bazelon was a well-known progressive and a well-known free speech proponent. Although his granddaughter shares his broadly progressive worldview, her position on this issue is different.
The evolution from David to Emily reflects that of the US left as a whole. Most people like to praise free speech as a theory. In practice, it is a tool for those who don't occupy the commanding heights of a culture to push back against those who do. When the New Left was ascendant in the 1960s and 1970s, promotion of free speech was an important component of its rise to cultural power. Now that its intellectual descendants occupy the commanding heights, they view it as a threat rather than an asset.
> In practice, it is a tool for those who don't occupy the commanding heights of a culture to push back against those who do.
But isn't the article arguing the exact opposite?
It warns that those in power now use the shield of "free speech" to push propaganda and lies in order to undermine truth and democracy and thereby attempt to preserve power. Which is not something anyone saw coming, and which deserves to be treated as a serious danger.
This is completely and utterly different from using free speech to promote civil rights, transparency, etc. The New Left wasn't weaponizing free speech to spread disinformation, so the two situations would appear to be completely distinct.
I don't know what to make of the idea of the powerless censoring the powerful to protect truth and democracy. If you can censor your opponents, you are by definition powerful. If you are worried about your opponents' use of the "shield of free speech" to protect themselves from you, you are worried about limits on your power.
> The New Left wasn't weaponizing free speech to spread disinformation
Yes, they were. They were advocating a way of organizing society that had already killed tens of millions of people and would go on to kill tens of millions more. And they were relying on their audience not being aware of the death toll.
One thing that occurred to me while I was listening to this article last night: I would not mind laws restricting obvious and provable lies, in circumstances where the individual knew they were lying. You would need to adopt an extremely high standard for prosecution, similar to libel: no suing someone for being wrong, or for having a strange belief, or for saying something that's misleading but has a shred of truth.
But, I don't think it's good for society that I can create a fake website purporting to be a way to vote online, and send it to members of an opposing political party to suppress their vote. If I know that I'm lying, there should be laws against that.
If we don't do anything, our problems are going to get so much worse as deep fakes improve and doctored footage becomes indistinguishable from reality.
The only way that argument makes sense is if one views their own political in-group as the arbiters of truth: never wrong, never taking the wrong side, never abusing power. If history has shown us anything, it's that humans with power are really bad at those things.
What comes across as so hypocritical is that the people calling for restrictions on free speech were the same ones who benefitted from free speech in the past as they sought to push back against the establishment.
How about no one should eat the same food group all the time? I get news from here (leftist), reddit (averaged to moderate when all my subs are taken into consideration), Google News (more leftist than not), and 9GAG/Imgur when they post political social stuff. Fairly well rounded.
> It warns that those in power now use the shield of "free speech" to push propaganda and lies in order to undermine truth and democracy and thereby attempt to preserve power.
Except with new media no one can really push information; it is mostly up to the recommendation algorithms to propagate messages, and they are optimized for creating more engagement and ad dollars. This singlehandedly damages democratic processes, which depend on healthy discourse, more than any questionable content put into them.
> now use the shield of "free speech" to push propaganda and lies in order to undermine truth
Is there any form of speech whose content you disagree with that doesn't "push propaganda" or undermine some definition of your truth? In other words, in this version of narrative warfare, every opposing narrative is already "propaganda", and every disagreement is "destroying democracy".
Except truth is hardly some out-there objectivity to which propagandistic words magically render our eyes blind. It is a process of dialectic: disagreements that can be integrated together to form a more comprehensive picture of our musings in an infinite problem space.
It is the false sense of certainty, including utopian ideas of its attainability through the likes of "fact checks" and judicious restriction of speech, that destroys this process, and with it any hope of integrating opposing worldviews without wanting to hurt those who hold them.
Except the metaphysics of truth has been debated for centuries, and things are more complicated than that. Truth on the matters most interesting to us is mostly transjective (neither objective nor subjective, but dependent on the interaction between the agent and the arena), because which truth you pursue is as important as what the thing is in itself, and the information we could seek about most things in themselves is inexhaustibly large.
“The Earth is spherically shaped” is not an “objective truth” if you are calculating the response time of a radio transmission from your satellite to a ground station up north; you need to get more precise than “a sphere”.
> Aren't "fact checks" part of the free discussion?
Certainly they are allowed, but they are posturing with an institutional authority they don’t have. A “fact check” is not a magical epistemology machine into which you can feed bleeding-edge scientific research and have it spew “truths” out.
They are attempts to monopolize the truth-finding process, to cheat their way out of the dialectic with more institutions. There is no board or organization of “fact checkers”, no standards, no transparency. It is a branding gimmick.
> “Earth is spherically shaped” is not an “objective truth” if you are calculating the response time of your radio transmission of your satellite over a ground station up north, you need to get more precise than a “sphere”.
> The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. "If I am the wisest man," said Socrates, "it is because I alone know that I know nothing." the implication was that I was very foolish because I was under the impression I knew a great deal.
> My answer to him was, "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
"Truth is hard, let's go Postmodern!" isn't a valid conclusion. It's an escape hatch, a refusal to make a best effort and, when it's about something that matters, a horribly dangerous practice which essentially cedes the field to the idiots who have no such scruples. "VACCINES BAD! VACCINES AUTISM!" is idiocy, it's dangerous, and it must be countered, not implicitly allowed to pass because what is "Facts" anyway, man?
> "Truth is hard, let's go Postmodern!" isn't a valid conclusion. It's an escape hatch, a refusal to make a best effort and, when it's about something that matters, a horribly dangerous practice which essentially cedes the field to the idiots who have no such scruples.
Postmodernism is not the only critique of Kant. Although you're fighting a strawman, I agree with your criticism of postmodernism as used today, but that is not what I was talking about.
I am indeed arguing for making the best effort, especially about things that really matter. And it turns out "this is the objective truth" is one of the other escape hatches, especially since most of the questions that really matter are far more complex than the shape of our planet. Ethics, meaning, policy, etc. all require an ongoing process of being less wrong, which works best through opponent processing. There is rarely a terminal value of truth for these questions; there will always be an ongoing tension, because the part of reality we are trying to capture is inexhaustibly complex. The people who wrote the religious texts thought they had captured and codified the algorithm for "the right way", but we now see how inadequate those texts can be for our 21st-century context. Any time someone thought they had the key algorithm for utopia, it turned out, sooner or later, to be one of the most evil machineries of our history.
> But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
I think that guy is the most wrong, because "what are we going to use this information for, what adjacent information that we haven't considered yet might be interesting, are we framing the question right" are the most critical parts of the query, and they go unasked. If we are talking about walking down the street, the flat-earth guy is just as correct as the sphere guy. Notice I am not saying his epistemology is correct, nor that he is entitled to his own opinion and we should be pluralistic. I am saying that as long as both guys only walk down the street, their differing opinions are completely inconsequential.
Now expand this to a question no one has a clear answer for, e.g. economic policy: some want it "sphere", some want it "flat". No doubt newspapers will be filled with people insisting "it is flat, this is the objective truth". That is where it's useful to say "hang on a second, we don't do objective truth here, we humbly pursue being less wrong; now dialogue with us, don't rhetorize your talking points, just honestly talk to us". Sadly, nowadays the mass of public discourse happens over channels that don't reward that, only rhetoric and likes.
Your whole position assumes that the other person has some kernel of truth to what they're saying. A flat Earth assumption can be right in some circumstances, but the "VACCINES CAUSE AUTISM" assumption is just insanity. It's wrong, and if your epistemology won't let you say it's wrong, your epistemology is broken, not vaccines.
> Your whole position assumes that the other person has some kernel of truth to what they're saying.
Exactly. They might have been fed garbage propositions that explain their garbage conclusions, but calling them insane, crazy, or <pick-your-favorite-slur> is just a cop-out from the discomfort of dialogue and co-existence, by way of dehumanizing them.
> It's wrong, and if your epistemology won't let you say it's wrong, your epistemology is broken, not vaccines.
I’d say on the contrary, “epistemologies” that demand purity and certainty (ideology is a better term for this) don’t tend to explain reality well and because of that they don’t survive in the long run. They are inherently maladaptive.
Existence itself is subjective. Reality itself is objective, but anything we touch has a hint of subjectivity present, inherent to our existence. The true falsehood is that there is an objective reality that we can perceive.
This is very similar to Mao imprisoning people for "threatening the revolution". The revolution is good when it serves them but not when it threatens their hold on power. They now want to ban free speech because it is against their power interests.
The commenter you are quoting is praising free speech. Yes, it is a good thing.
To belabor the point, what's ironic is that in the 1990s it was the right, 'the religious right', that wanted to censor free speech -- such as the speech of rap artists. Now it is the left that attacks free speech and wants to censor rap artists.
Previously it was the church that wanted to censor free speech. Now it is progressives, who 'f------ love science', who want to censor uncomfortable scientific results.
This quote from the article describes the dynamic in the US pretty accurately, in my opinion.
> The conspiracy theories, the lies, the distortions, the overwhelming amount of information, the anger encoded in it — these all serve to create chaos and confusion and make people, even nonpartisans, exhausted, skeptical and cynical about politics. The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up.
If that’s what awaits the future “heights of culture,” if that’s how you think people will ultimately get influence in the future, I don’t think there will be much of a culture left to have lofty opinions about freedom or speech.
Agreed. And I'm concerned about the future if Biden wins, as seems likely. Today many progressives recognize at least the practical value of free speech, because the alternative is Trump and McConnell deciding what you're allowed to say. But if Democrats take Congress and the presidency, it's going to be very tempting for them to try to shut down opposing views. And the first amendment may not be a shield; with control over the courts it can be reinterpreted into irrelevancy.
No part of the 60s New Left occupies the commanding heights outside of a few Ramparts Magazine born-again Catholic neocons. This is a fantasy of right-wingers who try to roll the US black civil rights struggle and working-class socialism into a big ball of University Judeo-Bolshevism.
edit: That gay people, black people, and women have to be considered now is not left-wing politics, which are about class. That people other than white men have a voice is a rational, liberal outgrowth of the civil rights movement. Other than that, mainstream politics moved aggressively to the right until the bottom fell out of the capitalist utopian theory in 2008.
Given how much the issue of free speech comes up here on HN -- especially regarding Twitter, Facebook, and politics -- I think this is a really important article for people to read.
It's long, but is extremely nuanced and shows that the issue is far more complex than just "the solution to offensive speech is more speech".
One key takeaway is in the middle:
> [Free speech is] a fundamentally optimistic vision: Good ideas win. The better argument will prove persuasive. There’s a countertradition, however. It’s alert to the ways in which demagogic leaders or movements can use propaganda, an older term that can be synonymous with disinformation. A crude authoritarian censors free speech. A clever one invokes it to play a trick, twisting facts to turn a mob on a subordinated group and, in the end, silence as well as endanger its members. Looking back at the rise of fascism and the Holocaust in her 1951 book “The Origins of Totalitarianism,” the political philosopher Hannah Arendt focused on the use of propaganda to “make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism.”
> In other words, good ideas do not necessarily triumph in the marketplace of ideas. “Free speech threatens democracy as much as it also provides for its flourishing,” the philosopher Jason Stanley and the linguist David Beaver argue in their forthcoming book, “The Politics of Language.”
But most of all the article shows that the history of free speech in the US is not simple at all -- and that our current view of it is very different from the period of 1949-1987 when broadcasters were subject to the "fairness doctrine", which I think most people today aren't even aware of.
> [Free speech is] a fundamentally optimistic vision: Good ideas win. The better argument will prove persuasive.
Wrong, wrong, wrong! Free speech is pessimistic as heck. It's saying that good ideas don't necessarily win, but if you allow authoritarians to engage in arbitrary censorship then bad ideas will always win. Censoring free speech is not just a "crude authoritarian's" move; it's what all authoritarians do whenever they feel they can get away with it. And censorship won't save you from propaganda either: there's a lot of crude propaganda in every totalitarian regime, happily coexisting with censorship of every other point of view!
Free speech is NOT a problem: it's a very real safeguard against critical threats. If you treat it like a problem, you haven't seriously engaged with the issue.
But he lived well before the age of the internet, and maybe, you know, just maybe, when the world changes we should adapt to it rather than dogmatically holding that what one person thought a couple of hundred years ago should be enshrined and never ever changed lest the sky fall down.
I completely agree that citing authority is not an argument. I only contend here that the GP made an inaccurate summary in dismissing Mill's argument; and that argument is only one chapter, not the whole book as you might think from the way it's cited. You might find it pretty on point: he wrote in Victorian times which were also a local maximum of social pressure to profess the correct thoughts. Also a time with unprecedented new media (though I don't recall that topic coming up in this chapter).
One of the problems with suppressing criticism of even things that are completely true and right, as Mill points out, is that people come to believe the true thing in only a hollow way. I've been thinking that may have happened to the doctrine of free speech itself in the U.S. -- becoming just a slogan kids learn to parrot in school. Hollow it enough and it's easily lost.
(Added: I found I could unflag your comment. I think the snark was undeserved but it wasn't nearly enough for flagging.)
I think you're well past that stage already. Plenty of people parrot the founders as though they had some kind of divine insight into what makes a good state, rather than seeing political systems as subsequent attempts to learn from past mistakes. It may very well be that 'free speech' should be sacrosanct, but the US is doing a piss-poor job of showing that that is the case.
Political systems come and they go, so far the 'free speech' countries are not doing remarkably better than the countries with some limitations on free speech and it would be good to recognize this and to see what can be learned from each other rather than to put dogma first.
Variation in laws and norms is a valuable teacher, and there's a lot of shallow faith in the civic religion, agreed on that much. I'm kind of too lazy to argue with you here about the rest.
The affordances of social media could be much better designed for preserving the memetic edge the more true ought to have over the more false; OTOH the way Twitter and Facebook are going about this attempt at reform is heavy-handed folly that's already backfiring. (http://www.islandone.org/Foresight/WebEnhance/HPEK1.html from the 1980s shows by example that it is possible to think ahead about societal consequences of the design of communications media. That particular paper focused on more scholarly publishing, but it's not like that's unimportant either.)
I think the problem is advertising. Advertising causes media to focus on the controversy because it is what drives engagement. Business-as-usual would not drive engagement nearly as much as outrage does.
Exactly. In fact, it doesn't even matter what comes out of it; there is simply no alternative to free speech that could work. Free speech at least can work, and that's more than any other concept has to offer.
But there's no guarantee that it will. When people argue that free discourse will eventually always end up at the "best" place (whatever they define "best" to be), it almost sounds like some kind of appeal to a deity.
I wonder if the argument you've highlighted is a good idea, or a particularly dangerous one. Maybe we should discuss it and find out. Or maybe we should call what you're saying 'propaganda' and ban it outright, since it might call for 'radical' and 'potentially destabilizing' 'changes to our democracy'.
It would be too bad if you could only present it as an opinion, in limited arenas far away from anyone who might disagree. I wonder who the next 'protected minority' will be, and what, exactly, they will need to be 'protected' from.
Aren't you glad we have freedom of speech, so that you can make this thought of yours known?
In practice, free speech is easy to lose. Say we start with equal free speech for both sides of some spectrum. As some ideas from one side take hold, they are selected for. After a while, the ideas of one side slightly overtake the ideas of the other, and then it's easy for that side to try to silence the other side, losing that freedom of speech.
It's kind of like that common saying that pure evil is the belief that there is nothing left to know. That's what losing free speech does. It's society thinking "our system right now is the best" and not bothering to try and improve things.
Since the point clearly went over your head or you just decided to give that snotty reply to a strawman you came up with: parent was saying good ideas don't win if demagogues can hold a monopoly on the marketplace of ideas in the same way that, say, the ISP industry can hold broadly anti-consumer policies that customers don't do anything about because it is locally not a free market.
To expand it further, I would argue stuff like Twitter and Facebook that optimize for engagement are already not free speech since they are designed for only the most outrageous (and likely false) content to flourish.
There's no straw man in the argument you're replying to. All that was said was "maybe the idea that things should be censored is a bad idea and should be censored." It is a very clever way to demonstrate the danger of what you're potentially prepared to embrace, and how it can be used against you.
Nobody can hold a monopoly on the marketplace of ideas. Nobody. That genie is out of the bottle.
No, it is a straw man. The idea that crazygringo highlighted was simply the notion that maybe the best ideas don't win in a free marketplace of ideas. The straw man presented was that you wouldn't be able to discuss this at all if there was no free speech.
But there are regulations on free speech and they are sensible. My understanding of speech regulations in the USA is that they primarily need to be focused on time, place and manner restrictions unless they fall into some exceptions that typically don't matter (edit: typically don't matter in political discussions).
A sensible manner restriction: you will generally be told to move along if you are shouting your message at the top of your lungs in a public place. Especially so if it is the middle of the night.
How you speak your message can be regulated in a more easily balanced way, although it is somewhat dangerous as well. This leads to potential place and manner restrictions online. A manner restriction can easily be content-neutral.
A manner restriction: you could maybe regulate the use of sock puppets or fake accounts.
Here you see it is a strawman argument to say we could not have a discussion regarding whether good ideas will win in a marketplace where sock puppets are banned.
No one cared about free speech until Trump was elected. Propaganda isn't anything new; it's used by demagogues to blame society's woes on [insert scapegoat here].
Rather than address society's woes (there are so many in the US), we're just told it's somehow different this time because it's digital. This just doesn't hold up when you see that this story has played out many times in the past; it's predictable to anyone paying attention.
I had to read your post and the GP post multiple times and I'm still confused. It doesn't seem like you're talking at all to what they said but are stating a very well worn argument for free speech, that is, if some speech is to be curtailed, then all speech is under threat of being curtailed.
But the GP was precisely talking about how even though many would say "we have freedom of speech", what that actually means in practice falls far short of the mark. We "have freedom of speech" in the sense that many people hold the ideology. But holding an ideology is very different from having that ideology realized.
> The point has been about whether more powerful figures can use their platform to drown out the free speech of others
The article isn't focusing on that aspect, though. It's suggesting that falsehoods and propaganda can flood the zone with so much shit that people are just tired of trying to sort through it to figure out what's actually true and what isn't. The modern (post-modern?) demagogue doesn't bother with curtailing free speech; they just put out many false narratives, like Putin does.
"The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up."
Not only is our view of free speech different today, but the US has a long history of censoring minority groups. Take the Hays Code, for instance, which made it de facto against the law for movies to portray gay men in a positive way, or "race mixing" at all. That was around until nearly the '70s. That's still within living memory.
Liberals fought against the Hays Code. And now they condone Twitter and Facebook on the same premise as those who defended the Hays Code: that it's private actors voluntarily policing speech that's harmful to society.
I think this argument proves too much. Clearly there is speech for which you support Twitter and Facebook suppression, and there is speech for which people here do not. We generally oppose any restraints that operate in the overt service of bigotry, and some of us don't oppose private restraints on unhinged conspiracy theories.
I also strongly object to the notion that liberals somehow own this, when clearly both sides of the spectrum instrumentalize speech and its suppression when it suits them.
> "I also strongly object to the notion that liberals somehow own this, when clearly both sides of the spectrum instrumentalize speech and its suppression when it suits them."
Both sides might do it, but it's against the tenets of Enlightenment liberalism to do so, so of course those claiming to be liberals justifiably get called out for it. If a person eats meat, they don't get to call themselves a vegan; if a person is okay with suppression of speech when it benefits them, they don't get to call themselves a liberal.
Respectfully, I think that's a silly argument. "Liberal" isn't a label any modern (20th or 21st century) liberal chose for themselves. By way of example: "enlightenment liberalism" doesn't tell us much about whether property taxes should fund schools or whether teachers should earn merit pay, but the term "liberal" or "conservative" strongly suggests what someone believes about those issues. It's about as persuasive as coming up with some definition of "conservative" that conservatives fail to meet.
In a discussion like this, about American policy, the right thing to do is just to accept the working definition Americans use; otherwise, all we'll do is argue about semantics, and the debate we're having over social media sites suppressing things isn't about semantics. It's substantive.
Expanding on what katbyte said, the Hays Code was adopted by every major studio in the US to replace state-run censors. They did this because of the same battle going on right now in social media: "if we self-regulate, then we won't need government regulation placed on us." When an entire industry agrees to follow the code, has enforcement options available, and does it under threat of government action if they don't comply, the only difference between it and a real law is the name. The consequences of ignoring it were that your funding was dropped, the offending scenes were removed from the film, or you were kicked out and blacklisted from the industry. They were apparently pretty strict about it. You can tell that by how stringently it was followed for decades.
And the big players had a stronger monopoly than seems possible today. Downloading obscure foreign movies isn't quite as easy as Netflix, but in 1960 what wasn't on TV or a few screens was just about unobtainable for almost everyone.
(I guess paying for these downloads runs into a similar situation, mastercard & friends choose to ban things they aren't legally required to.)
At the time, movies weren't protected by the First Amendment, so many states and cities had active censorship boards and there was discussion of federal regulation. The Hays code was an attempt to establish nationwide standards that would satisfy the censors.
> [Free speech is] a fundamentally optimistic vision: Good ideas win.
I disagree with this statement. Free speech has no say on who "wins". Free speech is based on the principle that everyone has a right to be vocal about what they believe, even if you disagree with it. The alternative is giving up the power to others to decide who can speak and for what reason, whether you agree with it or not.
Many people don't want to hear this, but if hypothetically the majority of a population believes in and talks nonsense, then that's what the population wants, and that population should live with the consequences. What we have here is not a political problem but a human problem, and involving politics as a band-aid has shown time and time again that those doing the censoring don't do it for the "common good". Corruption and self-interest are attractive to people in those positions, again, because they are humans, and what we have is a human problem, not a political problem.
The difference between the former and the latter manifestation of human problems is that in one a small group of imperfect humans makes the rules for the rest, whereas in the other no one decides who can say what, and people get the consequences they bring upon themselves.
> Good ideas win. The better argument will prove persuasive
We don't value free speech because "good ideas win"; there's nothing "optimistic" about it. We know that bad ideas also win, sometimes on the back of, or with the help of, said free speech (history has shown us that), so tying the idea of free speech to its perceived usefulness or effectiveness is a fallacy. That's not the reason we should value it. We should value free speech for its own sake.
I agree. You can extend this idea to the idea of freedom of religion - the goal isn't to have some Free Market of religion where the "good ones" win and the bad ones die off, it's just a fundamental human right. Same as speech.
> Looking back at the rise of fascism and the Holocaust in her 1951 book...
She learned the wrong lessons from her study of the rise of fascism. The correct lesson is 'if you botch the economy badly enough, voters will explore every option to try and get some relief'. Including voting in Nazis. The Nazis (with a parallel to Trump) are a sign of some large group feeling profound economic distress and being rather unhappy with politics as usual.
And there is a basic premise in the middle of that article you quote: that the author is morally pure enough to determine what is good and bad in the marketplace of ideas. That possibility was tested extensively in the 20th century. There isn't anyone who can do that. They tried lots of people; none of them worked out. If we create a method for anointing some truths reliable and some 'twisted', then it is going to become controlled by corrupt people and then do more harm than good. Free speech is by far the most reliable way of identifying bad ideas as bad.
>Free speech is by far the most reliable way of identifying bad ideas as bad.
... but you just spent an entire paragraph arguing that it's impossible for any person to distinguish between good and bad in the marketplace of ideas, and that all attempts inevitably do more harm than good.
How then would it be possible to identify bad ideas given freedom of speech? Surely any attempt to do so would suffer the same bad consequences.
There is a spectrum of reliability. A panel of experts deciding on truth or fakeness of news is on the bad end of the spectrum, everyone figuring out what they think is most likely true is further towards the good end of the spectrum.
But nowhere on the spectrum is good enough to uncritically trust news you read on the internet. That can't be achieved. There is a choice of misinformation: either that approved by an eventually corrupt panel of experts, or that approved by the opinions of People of Average Intelligence, or that approved by people you personally like.
Truth isn't the result of consensus, it's the result of knowledge. "everyone figuring out what they think is most likely true" leads to millions of people being propagandized into believing 5G towers cause coronavirus.
On average, the panel of experts is still going to be more reliable.
Perhaps I'm misinterpreting this, but I wonder if the author is suggesting ("dog whistling") that in our (US) current two party system, that only one side is guilty of it? (I guess my question is actually: I wonder how paragraphs like this are unpacked in the minds of readers, depending on their particular worldviews. Written English is so brutally flawed as a communication medium).
Or maybe this is the nuance but I'm not picking it up:
> It’s alert to the ways in which demagogic leaders or movements can use propaganda, an older term that can be synonymous with disinformation.
Trump (primarily, but certainly not solely).
> A crude authoritarian censors free speech.
"Crude authoritarian" typically unpacks to Trump in most people's minds, but it's mostly "the left" doing the censorship these days.
> A clever one invokes it to play a trick, twisting facts to turn a mob on a subordinated group and, in the end, silence as well as endanger its members.
I interpret this as "the left" again, but I wonder if that's what was intended?
> Looking back at the rise of fascism and the Holocaust in her 1951 book “The Origins of Totalitarianism,” the political philosopher Hannah Arendt focused on the use of propaganda to “make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism.”
This feels like it is pointed at Trump ("the rise of fascism", "the most fantastic statements"), and fair enough, but what many people don't realize is that both sides do this, regularly. What most people don't realize though, is that "if the next day they were given irrefutable proof of their falsehood" doesn't happen with mainstream falsehoods, because the disproofs aren't published in the mainstream. Why would they be? To find disproofs of many mainstream facts, one must read "alternative media" - but, due to a decade of skilful propaganda, these sources now intuitively evaluate in most people's minds as False by Definition.
Or perhaps the author is saying this only with respect to history, and I'm getting all worked up about nothing.
The biggest issue right now (other than the widespread lack of appreciation for the importance of truly free speech, and how biased and simplistic mainstream speech is) in my opinion is the lack of realization of the degree to which the ability to restrict free speech now lies in the hands of a very small set of elite companies, who are arguably not entirely politically unbiased. The number of people on HN who seem unable to distinguish between the broad general principle of free speech versus the first amendment is very concerning.
EDIT: Wow, the silent majority is fast on the gun today! But alas, the reasons shall forever remain a mystery...
Because you interpreted the neutral words of the article as left and right? It doesn't say what political views the crude authoritarian has.
You injected that in, and then claim it's the author dog whistling. (By the way, this is actually, literally "begging the question" - "It must be a dog whistle because I read it this way even though it doesn't use those words or make that claim at all!")
That's why you're being downvoted. There's a strong reason the author didn't use the words "Trump" or "left".
> You injected that in, and then claim it's the author dog whistling. (By the way, this is actually, literally "begging the question" - "It must be a dog whistle because I read it this way even though it doesn't use those words or make that claim at all!")
An unexpected take on my words.
What I actually did/said:
>> Perhaps I'm misinterpreting this, but I wonder if the author is suggesting ("dog whistling") that in our (US) current two party system, that only one side is guilty of it?
>> Or perhaps the author is saying this only with respect to history, and I'm getting all worked up about nothing.
If you look more closely, you may notice that I am very explicitly pointing out that I am speculating about what the author is getting at, rather than, as you (incorrectly) say I say:
- and then [claim it's (it is)] the author dog whistling
- It [must be] a dog whistle
- ...because [I read it this way] even though it doesn't use those words or make that claim at all (this one is rather ironic)
> There's a strong reason the author didn't use the words "Trump" or "left".
Assertions like this are suggestive of mind reading ability, and also that political rhetoric that does not say things explicitly does not exist.
Something else I said:
>> I guess my question is actually: I wonder how paragraphs like this are unpacked in the minds of readers, depending on their particular worldviews. Written English is so brutally flawed as a communication medium.
This "how...are unpacked in the mind of the readers" is interesting, and I think might explain why nearly every single comment I make gets downvoted. I've always thought that it was people not liking my political views, but I think what it actually is, is that people aren't able to read text literally. As I was writing my message, I was "reading between the lines", with suspicion, looking for rhetoric - but I was doing this with full conscious intention, and my statements were explicitly speculative - I was wondering if the author was implying (which often does indeed happen) the things I sensed. If a logical, unbiased person (if there is such a person anymore) considers the article and its writing techniques (I could easily excerpt several examples) in its entirety, my suspicion doesn't seem terribly inappropriate.
Your assessment (and I expect others) on the other hand, is that I have(!) read and said very specific things, that I haven't actually said/done. Not only did you not "respond to the strongest plausible interpretation of what someone says", you grossly misinterpreted it, and then asserted that misinterpretation as if it was a fact (because to you, it is just that).
Of course, these are very stressful times, the world is a complete gong show, and it's hard not to get at each other's throats - "these things happen" - so I'm not bent out of shape by it. Rather, I'm actually quite relieved, because I now feel quite a bit less confused and frustrated about the world around me, and how people within it behave.
The article literally starts out by pointing out that the left brainstormed ideas for what to do if Trump contests the results of the election, and that the right reacted to this brainstorming by arguing it's part of a broader pattern.
There is no way to disentangle the left vs right dynamic in the free speech discussion right now, which is probably why this always gets so heated. People are constantly trying to remove this dynamic because flat out stating one side is "for censorship" while the other isn't rubs up against much of the recent history surrounding free speech and it makes people feel really uncomfortable.
IMHO free speech needs accountability and massive increases in education resources to be effective.
Without accountability, the loudest voices can set what is true for many. This concentrates power in the already powerful, until new means of information dissemination are created that allow new voices. However, those too will need the same accountability or will fall to the same issues. You can see this with the rise of every new medium for information, from newspapers to books to movies and music to each new major social media platform.
Without education resources, there is a much lowered threshold for people accepting information provided to them as truth. We need to globally, but especially in America, increase education of the sciences and critical thinking. Maybe it's too late for entire generations today but the future can be very bright.
Unfortunately, the premise of the title will most likely provoke an emotional response from many people, to whom the concept of unfettered free speech is inalienable, and for whom even discussing the problematic side effects it can create is off-limits.
"Free speech" cannot be held to account for anything, only people can be held to account for speaking. Accountability to whom?
Every "whom" has an interest. Any way for any entity to hold any other to any artificial consequence for saying anything at all will assuredly be perverted. This is why free speech is held as an absolutist value. Any compromise will inevitably lead to abuse. There can be no legitimate mechanism to enforce this. There can be no criteria required to say whatever you want or learn whatever you want. Anything less than everyone having the right to say anything for any reason is an unacceptable compromise. You as a free individual cannot allow the normalization of speech policing to occur. If you do so you will inevitably be victimized.
Nobody said anything about prosecution for incorrect views. Neither I nor the article advocates for jail. Accountability can take many forms, and none of them need to be dire. Many can just be tantamount to better transparency, so we know who to hold accountable in the first place, along with a requirement for posting sources in political ads for further reading.
That's a fair response. I do get a bit excitable on this issue, because I consider the ability to express unpopular opinions to be one of the greatest sources of progress in history, from "what if the earth goes around the sun?" to "what if owning people is actually bad?" Yes, that means a lot of bad ideas are going to be out there, which is a price I firmly believe we should be willing to pay.
And I think the point stands that even benign-seeming regulations are going to be enforced by those in power, and are going to be particularly susceptible to selective application. For example, using hate speech or defamation laws to punish criticism of police officers: https://www.aclu.org/blog/free-speech/internet-speech/new-ha...
>Yes, that means a lot of bad ideas are going to be out there, which is a price I firmly believe we should be willing to pay.
This is an easy out, as it assumes bad ideas are just "out there" like a noble gas, not interacting with anything around them. But speech doesn't exist in a vacuum, it always has consequences, and very often exists explicitly to enact change or call to action.
Do you believe the effect of the spread and adoption of bad ideas is a price we should always be willing to pay?
For instance, a political party spreading lies and conspiracy theories about a religious and ethnic minority, or widespread disbelief in the legitimacy of modern science and medicine, or historical revisionism and the denial of genocides and atrocities?
It's easy enough to argue free speech as an absolute if your model is people giving speeches at Speaker's Corner or arguments over philosophical and political abstractions at a coffeeshop, but it seems less cut and dry, pun intended, when the machetes come out.
And regarding your point about regulations being abused, yes, all regulations and laws can be abused. But abuse is a known problem and remedies exist for it - such as the lawsuit being filed by the ACLU in the link you posted.
> "Do you believe the effect of the spread and adoption of bad ideas is a price we should always be willing to pay?"
You mean like freedom of religion and religious tolerance, including for atheism? A few centuries ago, effectively 100% of the population was religious and such beliefs were anathema. Should society have suppressed the spread and adoption of that "bad idea" and remained religion-bound instead of becoming secular?
To endorse suppression of "bad ideas" is conservatism in a nutshell: The ideas we have today are surely right; anything that differs from them is dangerous and needs to be suppressed.
Do you believe the effect of the spread and adoption of bad ideas is a price we should always be willing to pay?
I would say that we should "always" allow bad ideas in the same sense that we should "always" avoid torture. In the sense that you can construct hypotheticals where not doing so would lead to catastrophic results, but it's necessary to have a very strong presumption in favor because anything else has a high likelihood of abuse.
a political party spreading lies and conspiracy theories
Like the allegations of Russian collusion?
widespread disbelief in the legitimacy of modern science and medicine
Like Kamala Harris saying not to take a vaccine if Trump recommends it?
all regulations and laws can be abused
Yes, and I would claim that this is even more of a problem for laws about speech, because they're inherently highly subjective. According to some, citing FBI crime statistics can be "hate speech".
The person gave examples of partisan things that could happen, as an illustration of how arguments in favor of censorship could be abused.
That's the point. You seem like you want to ban "lies and conspiracies", and I will point out that this same justification could be used to do horrible things, such as banning discussion of Russian conspiracies.
That is why the person you are responding to said that "they're inherently highly subjective".
Also, I would like to point out that you seemed like you were dog whistling towards certain partisan ideas. Because although you spoke in generalities, you were clearly hinting at generalities that are often used to accuse certain political groups.
You might have good intentions here, but you also have a setup for a motte-and-bailey argument. It starts with "surely the enforcement would be benign", but who holds the powerful to account when it actually comes time to rein the powerful in?
Maybe they'll merely be blacklisted from lucrative or prestigious positions in government or industry?
Surely the same could be said for your argument though where you're taking it to an authoritarian extreme?
The opposite of free speech isn't regulated speech for everyone, like some dystopian future. It could be as simple as requirements for more transparency in certain types of speech, like political ads, and in what qualifies as news versus opinion.
One of the major points of the article is that the spirit of free speech has been corrupted because it has been co-opted by political and corporate parties that can drown out the voices of the people, with no trail back to who's presenting this version of reality to you.
We can regulate categories of media separately from individual rights. Many countries do this successfully without being authoritarian regimes.
* Having to delineate between news and opinion more clearly. Fox News is a high-profile example, but many news sources do this. The intermixing of news and opinion, without clearly defining what is opinion, leads to the spread of opinion as truth.
* Encouraging social media sites to require that political ads be transparent about who funds them, and make that clear to the viewer.
* Requiring political ads to provide sources for their claims.
* Bringing back some form of the fairness doctrine to prevent hyper-partisan news.
Accountability to me means there should be transparency for the consumer of news about where the content is coming from, differentiating news from opinion, and providing them with direct means to learn more.
A large part of the article is about how there's no easy paper trail for the lay person to verify their source.
I would like to take the emotion and deceit out of politics and news. Free speech is great, but it hasn't scaled well with the unfettered ability for deceit to be part of it. There should be solutions to provide more free speech while allowing people to curtail the inherent deceit that it also allows, by arming them with the mental tools to do so.
Those are good starts. I'd add that one of the biggest problems with free speech currently is that we allow actors to speak and publish content when we don't even know whether they are real citizens of the US. And we allow people to speak and publish content at scales that are unprecedented.
I think this is a big issue that somehow people don't talk about as much. It isn't free speech that's so much being attacked, more so we are trying to define the framework of free speech on the internet.
There is one perspective that says the internet is like anything else, and so free speech applies the same. Except the internet isn't like anything else. It hides people behind anonymity; it allows actors that aren't American, or even real, to present themselves as if they were and speak to US affairs. It makes it trivial to disseminate false claims to a greater audience at almost no cost. And the platforms of information built around it have designed automated systems to bury high-quality discourse and instead promote sensational headlines.
To me, this is similar to problems of spam, phishing, fraud and abuse that say Amazon faces on its store front, like counterfeit, fake reviews, bot accounts, money laundering, etc. Democracy on the internet faces similar problems of fraud and abuse. And just like Amazon needs a framework to mitigate those fraud and abuses, our democracy needs it as well.
I really like those. Verifiability and paper trail seem very important.
I’d recommend not using the term accountability if these are the things you mean. Generally it has the connotation of being responsible for outcomes.
I also think that there is possibly a false dichotomy between free speech and news here. As if regulating news is somehow the same thing as curtailing free speech. I’m not sure it needs to be so stark.
If we introduced some or all of these regulations, we could simply attach them to the use of ‘news’ as a product description. I.e. If you don’t do these things, you can’t sell a product called news.
This would be similar to not being able to call yourself a medical doctor or police officer without the right credentials.
As far as I can see, free speech is a red herring. We should preserve it strongly, but also introduce a category of media product that has certain guarantees or protections.
The main point of the article is that the free speech laws in this country have clouded the divide between free speech for individuals, and the rights for media companies and political parties to spread misinformation as they see fit.
What you're proposing in your last paragraph is what the article is proposing too. The issue is that under the guise of an absolutist take of free speech, we've now put all free speech under the same banner.
To apply protections to the categories of media would require rethinking what free speech means and who it applies to.
IMHO free speech is great. It just needs certain measures around transparency and corporate lobbying to prevent it being brought down with the bad actors
I actually don’t see the article proposing what I am proposing. For the most part it is a good analysis, but I don’t see much about defining what ‘news’ is.
The New York Times isn’t the fountain of garbage that Fox News can be, but neither is it any more principled. It’s just usually better.
They have a vested interest in this being about social media rather than news, and the article focuses on tech rather than news when it comes to solutions. I didn’t see where they addressed the issue of what to do about Fox News, for example.
It's not just underfunded in rural America. It's underfunded in urban America also.
The world is a smaller place than you think. You could get on a plane and be in one of those re-education camps tomorrow, if they allowed foreigners in. It's a real thing that is happening now. In the name of education. Because those people are expressing beliefs that the authorities don't condone.
Rural America is where people who have different beliefs from you are. Which is really what this is about. Otherwise you would have included all of the urban and suburban education underfunding.
First, TCF isn't non-partisan; it's a progressive institution.
More importantly, TCF's "underfunding" metrics assume the conclusion of the debate you're having here. They're generated via correlation of expenditure to 3rd and 5th grade reading and math scores. But if your argument is that increased school funding is an inefficient or even ineffective way to increase 3rd and 5th grade reading and math scores (or, more subtly, if you think those narrow measures aren't durable through 12th grade) then the TCF study isn't really a rebuttal to that argument at all, is it?
It seems like the author is bending over backwards to advocate _some_ censorship over unfettered free speech.
Yet, I don't see free speech as being the issue here at all. The truth is that when a significant percentage of the population holds regressive views, they'll seek out and amplify people who pander to those views to the exclusion of all others.
This has gotten more irritating for the moderate mainstream with the advent of internet, but the root of the problem always is the presence of a core critical mass of the electorate that espouses those views.
I really hope people take this article seriously and not just knee jerk react to the idea that unobstructed free speech is totally perfect. The idea that there should be no consequences for spewing vitriol and using lies to back it up is insane. It's been demonstrated time and time again that people can't fact check every single statement and this fact is used by malicious actors to manipulate people to their ends.
It's also really strange because for every other system we have, we accept legislatively enforced boundaries. There are constraints around the manufacturing of drugs, food products, vehicles, etc. We have regulations around pollution.
Even for abstract ideas, we socially agree on norms. Nobody would think it's acceptable to just call somebody the n word (or some other derogatory word) in the workplace. Healthy companies tend to get rid of 'brilliant jerks' who yell or bully other employees, despite their technical competence. Why? Because we understand that this attitude creates a corrosive environment that is fundamentally not beneficial for the good of the group. We even understand this in personal relationships. We break up with people who lie to us and manipulate us.
How do you arbitrate truth? What if someone starts spreading the idea that gravity does not exist and that the reason you are stuck to the earth is that the ground is accelerating upward towards you at 9.8 m/s^2?
Who gets to decide whether lies or truth is being spread here? I don't think anyone is qualified to make that call.
Sure, those are examples, but what about people like Rush Limbaugh? Michael Richards should be in trouble for what he did, and he suffered the consequences, but somebody like Rush Limbaugh has an active listener base and essentially lies about every group under the sun with no consequences. His Wikipedia page lists all his views, many of which are just abhorrent and factually incorrect (he doesn't believe that we damaged the ozone layer; this is a non-political example).
It doesn't seem like there were consequences for any of the people who spread lies (in Pizzagate people were arrested, but not the spreaders of the rumors).
Imo, we should be intolerant of the intolerant, as they say. I have nothing against obscene language, pornographic ideas, dark humor, etc. But speech has influence, and the moment speech is used to infringe on the rights of people, it should really be stamped out.
I've seen some of the names you've named, and I don't like or follow those individuals, but I assume you don't like what they have to say.
So as a thought experiment, ask yourself, is there anyone you like and follow that you believe is feeding you misinformation? Or are you the only person that is not basing your worldview on misleading information (or, maybe not alone, but others very much like you)?
If the answer is no (and how could it be otherwise) then you start to see the problem: where is the practical distinction between people spreading misinformation and people simply disagreeing with you? In every disagreement over fact, one person is wrong. One person is basing their view on some flawed premise or information. How is banning misinformation any different, tangibly, than banning dissent?
I don't like what they have to say because it's inherently not fact-based. I am totally susceptible to bias and I do not claim otherwise, but there are certain ground rules we can set. Rush Limbaugh always starts off his rants and problematic messages with straight-up incorrect statements. As in, statements that are verifiably false. He then constructs arguments around these false ideas. The issue is, to the average listener it's difficult to know that those ideas are false (how are you going to check after working 40+ hours a week doing manual labor, commuting, cooking, taking care of a kid, etc.?).
And now guess what: Oklahoma, for example, is experiencing increasing covid cases, and local officials are now advocating for lockdowns, which is something scientists were advocating from the start. He was clearly wrong, yet people DO listen to him. I know because I've lived with people who do and who think he knows everything.
We even have a president who dog whistled a terror plot against the governor of Michigan. Trump is a white supremacist and white supremacism is inherently a non fact based ideological stance, yet it leads to stuff like the terror plot.
I am advocating for clamping down on purposeful disinformation. America itself knows how powerful purposeful disinformation is, given its experiences with it during the cold war. What we should probably do is punish purposeful disinformation that is intended to harm a specific group or the general public, or that benefits an entity at the expense of the public (tobacco companies and polluters come to mind here; lots of disinformation regarding health effects). It's generally very difficult to prove things like hate crimes (which spreading disinformation to harm a group is), and even a case like the Michael Richards incident wouldn't involve punitive action. He was angry because he was bombing his set and said something wrong in a moment of frustration. He might truly be racist (or not; who knows other than him), but he doesn't seem to be actively working to advance his life at the expense of others'. He didn't actively work to disenfranchise a group of people, unlike people like David Duke, who actively works to disenfranchise specific groups and has never suffered the consequences of it in America (he went to jail for tax fraud, but not for his awful actions, funnily enough).
Well, you didn't answer my questions, which is OK; you appear to be partisan, which is also OK. What you're advocating I very much disagree with. Even though I'm not sympathetic to any of the names you named, I think it is very short-sighted to advocate restricting speech of any kind, and it will inevitably lead to victimization of more people than any speech could. Also, in particular, I think classifying any sort of speech as a hate crime, as you have done, is irresponsible at the very least.
> We even have a president who dog whistled a terror plot against the governor of Michigan. Trump is a white supremacist and white supremacism is inherently a non fact based ideological stance, yet it leads to stuff like the terror plot.
It wasn’t a white supremacist plot. It was a bunch of anarchists (notice the flag in the background) who think Trump is a tyrant, who hate the police and everything related to the government, and one of whom even attended a BLM rally.
> One of the alleged plotters, 23-year-old Daniel Harris, attended a Black Lives Matter protest in June, telling the Oakland County Times he was upset about the killing of George Floyd and police violence.
A good example of how every side has its disinformation.
If we look at the issue from the other side, the problem with censorship is the central idea of a civilizing mission on the belief that government and only government can really artfully determine who ought to speak to the masses in the interest of the expansion of knowledge.
The misguided belief in the superiority of government wisdom about who should speak to many has happened before. With radio it was unlicensed radio stations that corrupted youths with rebellious thoughts and strange music. With TV and movies it was commercial publishers corrupting youths with violence and nudity. Now with the Internet it is bubbles that corrupt youths with disinformation and falsehoods.
The enthusiasm for radio censorship is dead. TV censorship died relatively recently. Internet censorship, however, is starting to gain popularity, but I strongly suspect it will crash in a few decades just like radio and TV censorship did. Future people will look back on the misguided belief in the superiority of a handful of companies and wonder how it ever could gain popularity.
There are not a lot of causes that I'm willing to die for, but freedom of speech is one of them. It is the most fundamental human right from which all other human rights are derived. The moment you lose your absolute right to question authority is the moment you lose the ability to protect your other rights.
That's ironic coming from the NYT, with their heavily biased front page. Even people who agree 100% with the NYT's point of view should be offended by it. I'm more interested in facts than confirmation bias of what I believe. I don't consider them a reliable source of news.
Note that an article can be disinformation even if all the information in the article is correct - by simply cherry-picking which facts to present. An example from several years ago: "Half of All Corporations Paid No Income Tax" which was factually correct. Another fact omitted from the many articles on this was "half of all corporations lost money". Income tax is not owed when losing money.
What we sign up for is the burden and responsibility to educate ourselves and thoroughly, skeptically vet information ourselves. Interestingly, we have this burden regardless, and censoring information only increases that burden and makes it more difficult to verify information.
'Educating myself' is not that hard; the much harder problem is getting the truth out to those who have already been sold on propaganda. Facts rarely make people change their minds. As they say, you can only lead a horse to water.
1. What should be the role, if any, of governments in censorship? The US supposedly has a rule of "none". But there are exceptions. Some kinds of porno. Threats against individuals. Incitement to riot. All of which can be over-used to chill speech.
2. What should be the role of private companies? This is tied in with the role of monopolies. If there were 50 Facebook or Twitter like services, none with more than 20% market share, this would not be a problem. Maybe this is an antitrust problem.
3. Should anonymous speech be as protected as thoroughly as non-anonymous speech? The US has a tradition of protecting anonymous speech. This is partly because the Federalist Papers, and "Common Sense", were published under pseudonyms. Much of the problems with "fake news", from whatever direction, come from concealed sources. Spam, in all media, comes from anonymous sources. Of course, if you publish under your own name, you may face harassment and reprisals.
I disagree. I read it, and the article brings up the usual 'platforming' and fake-news arguments which are unconvincing. Benjamin Franklin himself published plenty of fake news, but the benefits have always and likely will always outweigh the risks.
> Somewhere along the way, the conservative majority has lost sight of an essential point: The purpose of free speech is to further democratic participation.
Is that actually true? It seems critical to her argument, but this statement was presented without much evidence (just a quote from a law professor) - but is that actually the purpose of the first amendment?
Apparently the ideas in the first amendment were influenced by the enlightenment thinkers philosophies from Europe such as John Locke and Cesare Beccaria. It's said that Thomas Jefferson influenced James Madison heavily as well in having him come up with the bill of rights.
So you can already read up on those people and their philosophies to get a better idea of the concepts behind freedom of speech.
Similarly, about 200 ideas were proposed by the various states to be included in it, and Madison selected 10 from those ideas. It can be interesting to see the ones that weren't selected. And there were influences from the states' bills of rights, the English Bill of Rights, and the Magna Carta as well.
For Freedom of Speech I found:
> He studied at Princeton where a great focus was placed on speech and debate. He also studied the Greeks, who are known for valuing freedom of speech, too—that was the premise of Socrates' and/or Plato's work.
This was his first draft for it:
> The civil rights of none shall be abridged on account of religious belief or worship, nor shall any national religion be established, nor shall the full and equal rights of conscience be in any manner, or on any pretext, infringed. The people shall not be deprived or abridged of their right to speak, to write, or to publish their sentiments; and the freedom of the press, as one of the great bulwarks of liberty, shall be inviolable. The people shall not be restrained from peaceably assembling and consulting for their common good; nor from applying to the Legislature by petitions, or remonstrances, for redress of their grievances.
Various dictionaries say:
> Free speech means the free and public expression of opinions without censorship, interference and restraint by the government
> Free press means the right of individuals to express themselves through publication and dissemination of information, ideas and opinions without interference, constraint or prosecution by the government.
And there was a legislative case where Madison argued against the narrower common-law interpretation of what is considered speech. So at least we believe it includes a broader set of speech than that.
The main issue that's unclear, and might have been left unclear on purpose, is that the constitution says "the freedom of speech", emphasis on "the". The presence of the article "the" suggests it refers to the freedom of speech that already existed at the time. And debates about how wide or narrow that was at the time still linger.
> Similarly, about 200 ideas were proposed by the various states to be included in it, and Madison selected 10 from those ideas.
I don't know where you got this, but I wanted to let you know it is very much incorrect. The bill of rights was originally submitted to congress as 17 articles, passed by congress as 12 articles, 11 of which have been ratified by the states. 10 ratified in 1789 and one, the 27th amendment, in 1992.
As to whether 200 ideas were mulled over, I don't know, I'm sure a lot was discussed. There were a lot of sources for all of the ideas that went into the constitution, and British law at the time was probably the biggest influence.
> When in 1789, he outlined 12 amendments, it was after reviewing over 200 ideas proposed by different state conventions. Out of these, ultimately 10 were selected, edited, and finally accepted as the Bill of Rights.
What I find interesting about that is that it's presented as an originalist argument, by a presumably progressive writer, against the conservative majority on the Supreme Court. Usually it's the other way around, and conservatives are amenable to originalist arguments.
That's interesting, since most founding fathers were progressive in their own right. And most ideas related to the founding of the USA revolved around the ideas of the enlightenment, which were all progressive and liberal in nature, in fact they pretty much coined both terms.
The promise of the internet and social media was that it would broaden people's horizons. That has certainly failed so far, but I wonder if it is at all possible to create a social media platform that drives genuine, deep interaction.
I think it would have to be based on curated, pseudo-anonymous conversations with some strong ground rules.
Information is cheap. As recently as 20 years ago, the best source of general knowledge was a set of encyclopedias, at home if you could afford it, or at a library if you could not. Wikipedia is far from perfect, but it is generally as good as encyclopedias were, and far more broad. You could tell a similar story about the news, and many other things.
What does it mean when information is cheap? I'm not an expert, but I think it means that propaganda is cheap too. I wish people were better able to recognize propaganda and brain washing.
The truth does not battle falsehoods with facts, but with emotions. Hate, anger, fear, and distrust are used to prepare the soil for the seeds of propaganda.
It seems like the brain is hardwired to find 'interesting' things, like counterfactuals. Conspiracy theories are the ultimate counterfactual. I admit that I find conspiracy theories to be interesting as well; but I use them to inform my opinion about motivation, not facts.
> "In the name of Allah, the most gracious, the most merciful, (...) to Macron, the leader of the infidels: I executed one of your hellhounds who dared to belittle Muhammad. Calm his fellow human beings before a harsh punishment is inflicted on you."
The core issue is actually that we allow companies to harvest hyper-individualized information about you, so they can design algorithms that maximize engagement by exploiting our psychology, and thus harvest our attention and clicks...
Literally if we just started to pay for stuff we’d see this problem improved. The consequences of the current model is that the internet has become the modern equivalent of a welfare state, in the sense that our experiences are sponsored by our wealthy corporate benefactors. Clearly not a good recipe for civic engagement.
So, as consumers we should pay for things if we want to have nice stuff, and businesses simply shouldn’t be able to follow literally every person in the world around as they stroll around the internet, writing down and sharing what they’re looking at. That could be done by creating liability around data collection/protection, as in the case of HIPAA, or by outright banning the technology that enables ad tech at scale. I’m sure people here have much better ideas than this; I’m just offering an ethos.
Also, the internet itself should be treated as a public utility. The sad thing is that by not paying, we’re still paying, just in less direct ways, since the ad tech that sponsors our internet lives still has to get its money from somewhere...
I'm worried that this situation is going to get a great deal worse. A flurry of disinformation and online noise is somewhat manageable so long as it is coming from humans, who may be paid by a state or an enterprise to drive engagement with controversial topics.
It's going to get a million times worse when this gets automated.
Right now the quantity of disinformation is practically limited by the human capacity to post stuff online, but GPT-3 style automation will remove this limitation, and we will live in a world of all noise, no signal. Our knee-jerk adherence to "free speech" is opening up a dangerous scenario where we are more vulnerable than anyone else to what is effectively a DDOS on our minds.
I explored this topic in more detail in an essay of mine, where I lay out that a world of Anonymity + GPT-3 can undermine free speech by drowning out real human views and opinions with machine-generated, human-looking propaganda. https://jayriverlong.github.io/2020/07/24/gpt3.html
Interesting essay Jay. The top comment in this thread is by throwaway13337. You see this when people have a hot take they want to post but aren't ready to own it in front of their peers. I'd bet on average comments from accounts with identifiable personal info are much more tame and civil. But if everyone is too scared to talk about something for fear of getting cancelled then anonymity is the natural answer.
Maybe in the future, if something is written by a bot, it will need to be disclosed that a real human did not 100% write it. I mean, I just looked at your blog and your twitter, and I feel like a devious enough actor could use AI to create just as genuine an online presence as you have, even down to the profile pic.
I actually wanted to raise this discussion on HN and created a thread, but it didn't get any traction. Maybe it could be a future topic for you.
If things continue the way they are we'll all be paranoid about everything being fake, like an internet Truman show. The sad part is you know it's fake and there's nothing you can do about it. So basically we have to go back to meatspace for genuine human interaction which is not the absolute worst I guess as long as there's not a deadly pandemic going on.
I think there's a solution in pseudonymity: it has the liberating effects of anonymity, while providing some accountability that there's a real human behind the digital face.
For example, suppose there's some repository of hashes of Personal IDs. If you want to sign up somewhere, you have to submit a Personal ID that corresponds to such a hash -- the Personal ID doesn't get stored anywhere, but it authenticates that you're a real human. (Obviously this is a naive solution and something more clever would be necessary, but it's just an example.)
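As a minimal sketch of that naive hash-registry idea (all names here are illustrative, not any real system's API): a registry keeps only one-way hashes of Personal IDs, and sign-up checks membership without storing the raw ID. A real scheme would also need salting plus blind signatures or zero-knowledge proofs so the registry operator can't link activity back to an identity.

```python
import hashlib

# Hypothetical registry holding only hashes of Personal IDs.
# Note: hashing a low-entropy ID without a salt is brute-forceable;
# this is just the commenter's toy example, not a secure design.
REGISTERED_HASHES = set()

def hash_id(personal_id: str) -> str:
    # One-way hash: the raw Personal ID itself is never stored.
    return hashlib.sha256(personal_id.encode()).hexdigest()

def register(personal_id: str) -> None:
    # Done once, e.g. by whoever issues the Personal IDs.
    REGISTERED_HASHES.add(hash_id(personal_id))

def sign_up(personal_id: str) -> bool:
    # Authenticates "a real, registered human" without revealing
    # which one, so long as the hash isn't reused as a username.
    return hash_id(personal_id) in REGISTERED_HASHES
```

The point of the sketch is only the shape of the guarantee: the site learns a yes/no answer about humanity, not an identity.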
"The age of disinformation" is the crucial fulcrum on which the entire premise rests. There is not more disinformation today than there was in days past. In fact, it is easier to check and see if something is true (not using self professed fact checkers and the priesthood of truth and fact, but by looking around and using your mental faculties) than it was when I was a teenager.
How many of you are old enough to remember bullshit like "if you ask a cop if they're a cop, they have to tell you" or "you heard about the kid whose intestines got sucked out of his butthole in a swimming pool"? I remember people telling you things and you just went with it. It used to be very easy to lie. People made stuff up all the time, and repeated things that someone else made up that they believed.
To use the European context and Nazi Germany example, there is a big difference between then and now. That time was very much like my adolescence was with regard to access to information. Disinformation can spread wildly in an environment where people cannot verify it.
Currently, I could probably get the equivalent of a university education in any field for free if I were so inclined, just by searching around on the internet. We do not live in an age of disinformation; we live in an age of unbelievable information availability. Power structures and hierarchies once existed to enable the dissemination of information, and the ones that currently exist are beginning to show their cracks, simply because information can be effortlessly obtained at minuscule cost. I honestly believe we are living through social upheaval as a direct result of this revolution, and we haven't even seen the beginning. Making the majority literate was very, very hard. Making the next few generations well educated is something so easy it is probably best to just let people do it for themselves. I truly believe we will not need formal education within 100 years; people will just want to learn things and learn them at a whim.
I don't usually like to attribute motive, but I find it suspicious that it is the very institutions and organizations hammering away at this premise that stand to lose the most from this information revolution we are beginning to undergo. It is very convenient when the old entities with a grip on what people know and don't know happen to be the ones telling you not to trust your own judgment and defer to their own.
It's not just the "right to free speech" anymore. It's now also the "right to have your speech broadcasted to a selection of people in a way that depends on what they might think is relevant to them." So complicated stuff.
Traditional media companies have a motive for social media to implement prohibitive self policing measures. Youtube recently caved and gave media companies an extra boost in their trending algorithm and exceptions to expletive language, which made vloggers upset: https://www.bbc.com/news/technology-44279189
Traditional media tends to stick to three big excuses to implement these measures: 'think of the children', terrorism, and cyberbullying.
It also includes things which are technically true but lack context, or aren't relevant to the policies or decisions we want.
"George Floyd was a drug using criminal" is true, and therefore becomes a talking point in conservative circles to justify the police brutality he suffered, even though it's irrelevant to why others are upset at his treatment.
I think it's a more useful framework because half truths are way more dangerous than outright lies.
I don’t agree with your perspective about relevance. Details are an important part of journalism. You might not find some detail salient along some particular line of argument, but I still want to know.
I think you're making an interesting point that warrants a little further exploration: "disinformation" is nothing more than you using information available to you to determine what you think is true that the people who disagree with you would not like you to consider in your assessment. What the whole "control disinformation" argument boils down to, in my opinion, is an attempt to stifle people's own faculty of reason and deliberation by restricting their access to information. Who is to decide what information is relevant to you? Why is it that some people want to do that because you did not come to the conclusion they think you ought to?
One bright spot is the rise of some independents on YouTube and other social media. I find some far more reasoned and informed than most journalists. I think they are on the positive end of the smeared bell curve created by technology. They gain wider reach and viewers benefit from better voices.
> Facts and transparency are the intended pillars of the modern First Amendment
I don’t think we can ascribe intentions that easily. Speech is also the way people process their emotions, clarify their thoughts, get into dialogue with other people, and form a dialectic to discover the truth. If people can’t fight with their words, they will fight with their swords, because they will lose hope of integrating their world-view with others’.
I am also bothered by how easily the word "facts" gets thrown around. We are all exploring an infinite problem space with the limited knowledge and processing power we have. This has two consequences: a) no one can have an epistemological monopoly on truth, and b) one can definitely deceive themselves and others with factual information that doesn’t give the complete picture.
Which means, there is no terminal place for reality, and we always need to be in the process of integrating new data, including the perspective of others.
Author bends over backwards to ultimately manufacture consent for censorship, but the problem has never been about misinformation. It is not even about the volume of content to sift through in the new media. It is the new platforms themselves, with their perverse incentives.
When we read Twitter, we think we are interacting with people. But just like fish not seeing the water, we don’t see the medium, or the fact that the largest entity we are interacting with is the AI curating the content we see for one and only one goal: engagement.
Behind the curtain, the biggest applied psychology experiment is being run by some of the best-paid PhDs in the world; your every interaction is processed through the lens of every heuristic you have, and their failure modes (fallacies, biases) are exploited in the name of engagement.
The result, of course, is that we are going crazy, because no one, not even the most self-disciplined stoic, can defend against prolonged exposure to this machine and preserve their rationality.
So to assume the answer to this madness is to restrict speech is not only futile but also harmful. It is empowering the filtering role these platforms play, and will definitely drive us crazier.
Instead I propose this solution: for the marketplace of ideas to be complete, it now has to incorporate a marketplace for the AIs that curate those ideas. Make it mandatory for any sufficiently large content curator to offer alternative recommendation engines users can pick from. Then watch people choosing ones that don't enrage them at the expense of engagement, that reward curious, honest intellectual discourse, that don't reward impulsivity, and so on.
Those recommendation engines are the hidden parties in our speech, constantly whispering “have you heard what so-and-so said” and carrying around the most enraging but engaging stuff, because they can make a few cents showing you ads every time you let them speak. We need to make and pick ones that have the best interest of people in mind, not just shareholders’.
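The "marketplace of curation engines" idea above is essentially a plugin interface: the platform supplies the content, and the user picks which ranking function runs over it. A minimal sketch of what that contract could look like, with entirely hypothetical names (`Post`, `Recommender`, the two example engines) invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Protocol


@dataclass
class Post:
    id: str
    author: str
    published: datetime
    engagement: float  # platform-measured clicks, replies, dwell time, etc.


class Recommender(Protocol):
    """Hypothetical interface a platform could expose so users can
    swap in the curation engine of their choice."""

    def rank(self, posts: list[Post]) -> list[Post]: ...


class Chronological:
    """Newest first; ignores engagement entirely."""

    def rank(self, posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p.published, reverse=True)


class EngagementMaximizer:
    """The status quo: whatever provokes the most interaction wins."""

    def rank(self, posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p.engagement, reverse=True)
```

The point of the sketch is that both engines consume identical data; the difference between an enraging feed and a calm one is purely which `rank` the user has selected.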
> Instead I propose this solution: for the marketplace of ideas to be complete, it now has to incorporate a marketplace for the AIs that curate those ideas.
I don't know about this. Interesting proposal. I don't normally take a cynical view, but humans are emotional creatures, otherwise these dishonest mechanisms of generating engagement wouldn't work. In such a system it is likely that the most emotionally motivating algorithms would win.
Beyond that, why does one company need to make multiple algorithms? Why do we need a law to enforce this? Why don't we just have any company able to make any product they like and let people decide?
We have that now of course. And the problem I laid out with this is plainly obvious on the internet today. People are picking the algorithm that generates in them the most emotional reaction. But I suggest that these algorithms fail in the marketplace by a different mechanism: they inevitably lead to the state we are in now, where people become disillusioned with the control of information flow (and other problems associated with the curation algorithms), and people move to other websites with other algorithms.
In the past couple of years we have seen an explosion of alternatives, some echo chambers, some failed but genuine attempts, some sizeable competitors, some in the pipe. There is a large and fast growing diaspora from the big websites. Essentially what I'm saying is, what you want is happening now without any intervention, and everything is as it should be.
> Beyond that, why does one company need to make multiple algorithms? Why do we need a law to enforce this?
I meant they would be required to open their content for licensing.
The market we have today is for services that are actually bundles of things: YouTube is both video infrastructure and a recommendation engine. Twitter is infrastructure for textual content, a network of people, and a recommendation engine built on both.
It is hard to say whether they are chosen for their recommendation engines or for their content/network. The latter is very prone to monopolistic dynamics; the former, in my opinion, is a victim of that power-law distribution.
I don't want to be as engaged as possible with Twitter, but Twitter does. If a new "virtuous recommendation engine" promised it was there not for maximum money but for the maximum wellbeing and sensemaking of its users, I might switch to it, ask my relatives to switch to it, force my kids to switch to it.
Well, there are things like Mastodon, for example, which serve the first two purposes (content and networking) without any real tampering with recommendations. I use it; it is very popular. UI-wise it is almost identical; UX-wise it has some unique behaviors that come with very powerful advantages, IMO. It also doesn't have any design features built for maximum engagement.
For the most part, I'd say people have found recommendation engines to be nuisances at best, tools of control to many. The idea of the network effect seems to suggest that the largest draw of these sites is the content. Of course, the recommendation engine has its effect on the perceived content, and its possibly addictive nature has an effect on the available content, so the interaction there is still fuzzy. As another comment here pointed out, people see the algorithm the same way a fish sees the water it is in. They only see content; in their mind the site is the content, and they don't see an engine.
My idea of a "maximum wellbeing" algo for recommendations is a chronological timeline. It is up to me to engage with what I want to engage with. Maybe that isn't ideal for applications similar to YouTube. I've seen a pretty simple ranking algorithm used in a project called Lemmy (FOSS federated community oriented link aggregator) that I think is phenomenal.
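The "pretty simple ranking algorithm" family that Lemmy and similar aggregators use is a time-decayed score: net votes dampened by a logarithm, divided by age raised to a gravity exponent. This is a sketch of that general shape, not Lemmy's exact formula; the constants here are illustrative assumptions:

```python
import math
from datetime import datetime, timezone


def hot_rank(score: int, published: datetime, gravity: float = 1.8) -> float:
    """Decay a post's net vote score by its age, so fresh posts can
    outrank stale high-score ones. Constants are illustrative, not
    Lemmy's actual tuning."""
    hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    # log dampens runaway vote counts; the +2 offsets avoid log(0)
    # and division by zero for brand-new posts
    return math.log(max(2, score + 2)) / (hours + 2) ** gravity
```

Sorting a feed by `hot_rank` descending gives the familiar front-page behavior; a purely chronological timeline is the degenerate case where the score term is ignored altogether.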
> In the 2010 Citizens United decision, the court’s conservative majority opened the door to allowing corporations (and unions) to spend unlimited amounts on political advocacy, as long as they donated to interest groups and political-action committees rather than to campaigns.
Unsurprising to see Emily Bazelon deeply distort Citizens United. That case wasn’t about “unlimited money.” The entity that was prosecuted was a small 501(c)(4) non-profit (the same category as the NAACP or ACLU), which made a movie critical of Hillary Clinton too close to the election. The government’s lawyer admitted that, under the government’s theory of the First Amendment, nothing would preclude the government from prosecuting a corporation that spent corporate treasury money to publish a book: https://www.ifs.org/blog/citizens-united-its-all-about-the-b....
In fact, had Citizens United gone the other way, the Trump administration would be able to tell Google and Twitter they can’t use corporate funds to moderate political content. There is a reason the ACLU was on the conservative side in that case.
That the media has for a decade created a Twilight-Zone reality to make the public believe that Citizens United was about money and not a political movie should call into question the article’s whole premise. Do you really trust these gatekeepers of “the Truth” to wield the government’s power to police speech?
That's the thing. The people railing against disinformation are responsible for as much of it as anyone else. They have no moral authority to put themselves in charge of everyone else's speech. Their illiberal desire to control others' speech should be terrifying to anyone who cares about the future of western civilization. Many people are cheering for such control because they think it will only be applied to the "bad people"; it's breathtakingly shortsighted.
The wikipedia summary (https://en.wikipedia.org/wiki/Citizens_United_v._FEC) is pretty good. The government can't prohibit anyone (people, corporations, unions, etc) from making political speech, so long as it's not directed by the candidate. So I can't donate a billion dollars to a candidate's campaign, but I can run a billion dollars of ads supporting him (or against his opponent), so long as I don't receive direction/instruction from the candidate. Or, in this case, make and advertise a movie. The question you have to ask yourself is whether the alternative would be worse.
That wasn’t the “outcome” of Citizens United. The outcome was that the government can’t punish a small non-profit from making a movie critical of a candidate within 90 days of an election.
You can read whatever you want into the implications of the rule articulated in the case. Folks on the other side can say that, if the decision had gone the other way, nothing would stop the government from regulating how the New York Times uses its corporate funds.
But it’s important to look at the actual case, legally because it is really the only thing that’s decided, but practically because it is an actual conduct the government decided to prosecute. The government didn’t think the law just covers billion dollar advocacy campaigns. It prosecuted a small non-profit entity for publishing a movie. It’s not a “parade of horribles” hypothetical consequence, it actually happened. The “alternative” would be allowing the government to prosecute this non-profit for publishing a political movie.
Maybe outcome is not the right word then...but I was trying to capture the concept of downstream consequences. In the same way that you could say that an "outcome" (or whatever word is better) of Title VII of the Civil Rights Act is that employers cannot fire workers for being gay or transgender, even though those intentions weren't even on the radar when the law was written. On the third hand, it seems wrong to call Citizens United an "outcome" when it's more like a clarification of an existing right, as opposed to the granting of a new one.
I also don't disagree with the court's ruling, I was just trying to phrase my response in as neutral a way as possible.
I know this isn’t what you meant, but social media changed quite a bit between 2008 and 2016. In 2008 both twitter and Facebook were in their infancy. If anybody called them “great democratizers” it was probably more about their promise than their results.
I kinda disagree with her characterization of the whole "coup attempt" thing, which she presents as "totally fabricated", "without evidence", etc. Which is true - some people sitting in a room wargaming isn't a coup or a plot to take over the country in and of itself. It's not a crime to do that. But the steelman "extremist" (her word) argument is that the publication of the WaPo essay (and Hillary Clinton saying Biden "should not concede under any circumstances", and this essay itself, etc) represents the elite using free speech to establish a Schelling point, or common knowledge, of how to dispute the election. By now anyone with any political awareness knows, and knows that anyone-with-any-political-awareness knows, that the election results won't be known for weeks or months after the election. And that mail-in ballots are going to decide the election. And that Trump is suppressing votes. It's a kind of manufacturing of consent for the intellectual elite who hate Trump. But to them this sort of thing is totally invisible - that kind of speech is good and important for democracy, because it establishes a collective template for coordination-free cooperation to ensure Trump is not reelected. It's the other kind of speech that is bad.
This is another "we have to kill free speech to save it" spiel. I don't find these arguments convincing and the use of Hannah Arendt is particularly egregious. Hannah Arendt was not (and never claimed to be) a philosopher. She was a political theorist who tried, in her own words, to "look at politics with eyes unclouded by philosophy".
> Looking back at the rise of fascism and the Holocaust in her 1951 book “The Origins of Totalitarianism,” the political philosopher Hannah Arendt focused on the use of propaganda to “make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism.”
The author uses this quote to stand for "in other words, good ideas do not necessarily triumph in the marketplace of ideas". But that's not at all what Arendt was saying. She believed that, in the end, the truth will come out:
> Under normal circumstances the liar is defeated by reality, for which there is no substitute; no matter how large the tissue of falsehood that an experienced liar has to offer, it will never be large enough, even if he enlists the help of computers, to cover the immensity of factuality. The liar, who may get away with any number of single falsehoods, will find it impossible to get away with lying on principle. This is one of the lessons that could be learned from the totalitarian experiments and the totalitarian rulers' frightening confidence in the power of lying--in their ability, for instance, to rewrite history again and again to adapt the past to the "political line" of the present moment or to eliminate data that did not fit their ideology. Thus, in a socialist economy, they would deny that unemployment existed, the unemployed person simply becoming a non-person
> The result of such experiments when undertaken by those in possession of the means of violence are terrible enough, but lasting deception is not among them.
The power required to eradicate the truth, she writes, "would amount to omnipotence". Hitler, Stalin, and other totalitarian leaders undermined their own governments in their attempt to destroy the truth and rule through terror. For Arendt, all political power flows from consensus. She believed, contrary to Mao's quip about "political power growing from the barrel of a gun," that violence and coercion cannot create political power but can only undermine it.
That sounds optimistic to me. By encouraging the citizenry to simultaneously believe everything and nothing, these regimes undermined themselves.
The article sees strong defense of free speech as naivete:
> It’s a fundamentally optimistic vision: Good ideas win. The better argument will prove persuasive.
I don't agree with this interpretation of modern liberalism. The modern liberal position on free speech is not optimistic, it's resigned. The argument is not "everything that rises must converge" but, rather, we are stupid, limited beings, so what right have we to interfere with other stupid, limited beings? This position amounts to a recognition of the limitations of human rationality, not a celebration of it.
It has been personally sad to me to witness the left turning on free speech. When I was a kid, it was the lefties who defended free speech. It was Chomsky who said "with regard to freedom of speech there are basically two positions: you defend it vigorously for views you hate, or you reject it and prefer Stalinist/fascist standards". Now that I'm a little older, I realize that, mostly, people just defend free speech when they stand to benefit from it (i.e. when they aren't in the position to censor).
These days the left has a lot of cultural power and many on the left are bowing to the temptation to use that power to censor the opposition.
Sometime around 2014-2016, it seems like the progressive left collectively realized that they were losing certain battles in the information war on the Internet, and opted to pursue censorship, deplatforming, and outright violence (e.g., "punch nazis") instead.
That's not to say that they were losing every battle, but the fact that they were losing some battles--that some portions of their narrative were just too easy to falsify in an era where any 4channer could grab some data from the DoJ website and make a memeable infographic--that was enough to cause the turn against free speech:
> Yet key parts of the intersectional narrative are not borne out by data. It is now a standard trope, implanted in freshmen summer reading lists through the works of Ta-Nehisi Coates and others, that whites pose a severe, if not mortal, threat to blacks. That may have once been true, but it is no longer so today. Just this month, the Bureau of Justice Statistics released its 2018 survey of criminal victimization. According to the study, there were 593,598 interracial violent victimizations (excluding homicide) between blacks and whites last year, including white-on-black and black-on-white attacks. Blacks committed 537,204 of those interracial felonies, or 90 percent, and whites committed 56,394 of them, or less than 10 percent. That ratio is becoming more skewed, despite the Democratic claim of Trump-inspired white violence. In 2012-13, blacks committed 85 percent of all interracial victimizations between blacks and whites; whites committed 15 percent. From 2015 to 2018, the total number of white victims and the incidence of white victimization have grown as well.
I should also point out that the NYTimes ironically has its own well-documented history of disinformation: famously downplaying both the Holocaust and Stalin's purges as they occurred.