One reason Facebook keeps failing to increase "meaningful social interaction" could be that it is trying to sell ads at the same time.
It seems like it would be easy to simply exclude "reshares" from users' news feeds. They could offer: "sort by date" and "only posts posted/shared directly by my friends". Personally I think that would remove much of the toxicity. Secondly, simply ban news sites from the platform.
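As a sketch of what such a filter could look like (the field names and data model here are hypothetical, invented purely for illustration, and reflect nothing about Facebook's actual systems):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record; fields are illustrative only.
@dataclass
class Post:
    author: str
    created: datetime
    is_reshare: bool  # True if the post merely reshares someone else's content

def friends_only_feed(posts, friends):
    """Keep only original posts authored by friends, newest first."""
    direct = [p for p in posts if p.author in friends and not p.is_reshare]
    return sorted(direct, key=lambda p: p.created, reverse=True)
```

The point of the sketch is how little logic is involved: a membership check, a reshare flag, and a chronological sort, with no engagement-based ranking anywhere.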
Facebook is trying to fix an algorithm that can't possibly work, because it has two opposing jobs: increase time spent on the platform, while at the same time downplaying the content people engage with the most.
I think another factor is that they’re trying to quickly turn a massive ship that has, for years, self-selected users intent on heading in the direction of outrage and toxicity. The people who are the most “engaged” with the platform are the people who get a dopamine rush from arguing and baiting and instigating. The nice people who just want to see what their friends are up to have either left or significantly scaled back their use.
I think that makes any algorithmic “fix” much harder. Or perhaps even impossible.
I think the solutions you’ve suggested would have worked had they been implemented five years ago. I’m not so sure they would work today.
Anecdotally this is what I've been seeing over the years. Most of my friends and acquaintances seem to have left Facebook behind and moved to Insta. The people that stayed on FB are mostly those who like to express strong opinions and engage in pointless discussions.
Living in a country of 1.5 million people, it often feels devastating how big a part of political/intellectual discussion in our (tiny) language actually takes place on Facebook. I have worked in journalism, so maybe it's sour grapes in a way, but: most of the great writers are there. All the politicians are there. Our prime minister first announces crucial things on her FB wall, etc.
In short, if you want to participate in the public sphere of our tiny society, which still speaks its own tiny language, you'd better have a FB account. An account provided by an American Megacorp, that is. This actually feels really strange.
I've been more or less off FB for 5-6 years, occasionally feeling lonely and intellectually isolated. I would love to see things like the Fediverse/Mastodon or tilde.town take off for speakers of our language, but I'm having doubts whether this could ever happen.
For the lingua francas, there will always be a considerable chunk of alternative culture and great, bright discussion outside dominant platforms (howdy, HN!). For smaller cultures, this relationship status is surely complicated. FB is just so damn big.
I feel you can apply the maxim 'in politics nothing happens by accident' to Facebook despite their earnest attempts to claim unintended consequences. They are a highly important component of our divide and rule political era on multiple levels and sadly are doing an excellent job.
True. In the early 2010s, I used to ponder whether journalism was actually dying -- meaning that, now that we have social media, traditional gatekeeper journalism is simply not needed any more. During the past decade, it has become obvious that as long as a social network is also a megacorporation, this idea won't work. We still need the better parts of traditional journalism to check what's going on behind the curtains of profit-driven social networks. Really messy, but also truly interesting times for journalism, for sure.
The huge challenge for journalism is the reality that professionals are almost always paid to promote the agenda of their employer and advertisers.
Independent, investigative reporting is very rare in corporate media these days. Many (Chris Hedges, ex NYT foreign correspondent, is a good example) set up as independents in the golden era of blogging prior to the social media/smartphone tsunami.
I would argue people such as Matt Taibbi, Matt Stoller, Michael Tracey, and Chris Hedges in the US are the true gatekeepers right now, based on my understanding of the planet. These people appear to survive by publishing books and keeping up a strong social media presence critiquing the platforms that host them, and with companies like Substack paying them to promote their platforms.
Pointless to some, I'm sure, but if they're that engaged it's meaningful to them. Viewed through that lens, then attempting to externally manipulate and cajole their conversations was never going to have a positive outcome or even impact on the rest of the userbase.
It feels like this is a core point that's somehow got lost here.
There's a lot of assumptions in these discussions that connections and sharing are only good and useful if the results look a certain way. People sharing dog photos: good. Sharing approved mainstream news stories: good. People posting personal politics: bad. People disagreeing with each other: also bad. Etc. But it's a purely subjective take. Maybe our society needs to talk about politics way more than it does. Historically, instead of talking there's often just been fighting, and everyone agrees that talking is far healthier.
Facebook's original working assumption was that connecting people and letting them communicate and share is inherently a valuable thing. Is that actually wrong? Clearly, there is enormous demand to discuss politics that was unleashed by the internet, whereas before then only a small number of people had the access to present their opinions to those around them via opt-in channels like TV, the press, etc. Other people had to make do with phone calls. Something once restricted to a very small group is now available to all. Unsurprisingly, the sort of people who feel ill when they see the results are those whose views are the same as those who previously had a monopoly on group communication of various kinds. The ones most engaged are the ones who feel the media ignore their worldviews.
It's pretty dangerous to just say, well, if people are arguing about stuff that the establishment wouldn't let them argue about before, then it's "toxic". It's certainly risky to blame social media as the cause rather than merely a symptom of disagreements that would be there anyway. Far better to have "toxic" social media than a revolution!
Indeed, the medium can alter the message. For example, supposedly any extra lag in voice communication makes participants think the other person is angry.
We also see groups that never had close proximity now interacting directly and finding that they disagree pretty strongly on fundamentals.
So there may be ways to facilitate the same discussions, but more productively and humanely. And it simply might not be possible for people with diametrically opposed beliefs to do this -- there may need to be a game of a few layers of telephone, or at least an active moderator as in a formal debate, to help translate meaning and intent across the epistemological divide.
I don't know if it's a problem with education but more likely, deeply held implicit assumptions that aren't being articulated.
Most political disagreements fall into this category. E.g. arguments about COVID stuff are usually rooted in differences in assumptions about things like the trustworthiness of the establishment, the corrupting strength of profit, whether events are best explained as conspiracy or cockup, etc. Because they're sort of all-encompassing and vague people don't or can't easily spell them out, leading to the "talking at" not "talking with" problem. But that isn't to do with Facebook. It's inherent to humanity.
I see the same on WhatsApp too. Family members blindly forwarding whatever outrage meme of the moment lands in their message queue. Especially for family outside the US who use WhatsApp as their primary way to communicate with groups.
At this point the exact posts are just shared around platforms. Tweets posted on HN. Tweets posted to Reddit. Facebook memes on Instagram. TikTok posts to Instagram and Reddit. Doesn't matter what platform you go on, the same fodder ends up everywhere.
Well, that and rationalization. Everyone's subject to rationalization.
You can say 'maintaining the status quo because fear and outrage boost engagement'…
…and they can think 'we represent the social engagement of ALL people, how dare you tell us not to support genocide fanciers when our data shows that's a solid 27% of our actual platform'…
…and then they can say 'we are trying to turn the massive ship, Senator' and tell each other 'stay the course, we have a responsibility to ALL our users', and the whole time, engagement will still be king.
People don't always know why they're doing what they're doing, and even if they do, they won't always tell you.
Right on. I've seen that the posts/posters in many local fora in the UK have two defining characteristics: they are judgemental, and they are nosy in equal measure.
I put the recent increase in this down to people's familiarity with the technology, having grown up with it -- those people only 10/15 years older have seen life before and after the revolution, and behave differently. Specifically, when local fora started to make an appearance, they were a bit more diffident in their postings.
 (I don't want to start splitting hairs on which might be greater. :-))
I forgot about the incessant hand-wringing and virtue signalling posts (typically made after someone reports a crime such as a street robbery), and these are also made from behind anonymous usernames. (any social capital created anonymously like this is useless in real life, so the motivation must be for some other reason such as entertainment or personal satisfaction)
Note, this is not me passing judgement on people -- quite the opposite -- the whole point is that they can't and wouldn't be like this in real life, but the medium of the forum allows for it, which makes this cocktail of usage characteristics on these fora a recipe for quite strange reading, over time.
Not really: ads have a budget of attention within an envelope that Engagement (the team whose work is described here) is trying to maximise. There are detailed estimates of how much ads take away, but that is minimised and kept isolated from the conversation completely.
There are many issues, some addressed there; some more nuanced (figuring out what is a good post or comment is genuinely hard with crude metrics, because clickbait and flamewars look very much like compelling content); some that I can’t mention publicly.
There are non-scalable ideas for solutions (my PhD was around complex networks, so check Jure Leskovec’s research for ideas like isolating cliques and defining them as meaningful or not). One idea that I’ve floated less often than I should, and that many others have raised too, though rarely articulated well around this problem of sorting good from bad content to inform the News Feed algorithm: have secret negative and positive reactions per post and comment.
The fact that Likes are public makes them performative, both very compelling (it’s who you want your friends to see in you) and counter-productive. But giving people the ability to say "look, I will comment on every flame bait that I see, because I can’t help myself, but please, remove it from my feed" would have helped argue against what is still a very linear model of weighting actions that are uniformly seen as "engagement".
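The "linear model of weighting actions" being criticised can be sketched as follows. Everything here is invented for illustration (signal names, weights, the private "hide this" reaction); it is a toy of the idea, not Facebook's actual ranking:

```python
# Toy linear ranking model. Weights and signal names are hypothetical.
WEIGHTS = {
    "like": 1.0,
    "comment": 2.0,
    "reshare": 3.0,
    "hidden_dislike": -4.0,  # private "remove this from my feed" reaction
}

def rank_score(signals):
    """signals: dict mapping signal name -> count for one post.

    In a uniform-engagement model every action adds to the score;
    a secret negative reaction lets compulsive engagement count against
    a post instead of for it.
    """
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in signals.items())
```

Under this sketch, a flame-bait post that draws ten comments but five private dislikes can score no higher than silence, which is exactly the lever the uniform model lacks.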
Source: I was the DS looking into those questions in 2015.
While I'm sure there are some people at fb genuinely trying to "fix" toxicity, overall the issue is that they're not trying to fix it. They want to do the minimum possible to protect their image / bottom line, they don't care if it actually works. Actually fixing toxicity is one way to improve their image, but appearing to try can be good enough.
It's really difficult to make that kind of change from the rank and file. Once revenue is involved, it's amazing how nervous people get and nobody wants to approve some change that might lose as little as fractions of a percent of the money stream. There has to be a clear, explicit mandate from leadership that acknowledges there will be a revenue hit, and absolves any project members from bad outcomes / or rewards them for participation. I.e. it's hopeless :-)
I think all of the problems they have are earned. They've got, what, 3 billion users? How many 9s (99.99999) effective does the solution need to be to wrangle edge cases? Moderation is an eternal issue.
Yup, which points out the fundamental problem: Facebook is a product of its incentives. Its incentive is to maximize engagement. Why? Because more engagement equates to more ad revenue. The externalities of this behavior do not align with what society seeks. Therefore, if we wish to fundamentally transform Facebook, then we must change the incentives. The incentives are not going to change on their own. Enough time has passed to demonstrate that the market will not correct for this. This can be deemed a market failure. This means some intervention by the state must be made.

I propose that offering the platform for free should be illegal. Instead, the company should be forced to charge customers a recurring fee, sufficient to cover operating costs, and to no longer obtain revenue through advertising. This, I believe, will shift the incentives to produce the behavior we as a society seek. Facebook will no longer optimize for engagement, as that no longer drives revenue, and must instead compete on something else, like quality of experience on the platform. Of course, such a policy would have to be applied across the board for all social networks, so as not to undermine Facebook's existence with a competitor offering the same service for free.
True. There is nothing that would prevent a software company within Sweden, let's say, from opening a free Facebook competitor. Given the reality, such a law as I proposed would be seen as unfair from Facebook's perspective. In many ways, I would say I agree with them. To remedy this, the law could be constructed so that it is only illegal within the confines of the United States. In other words, a paid subscription is mandatory within the United States, while it can be offered for free for anyone outside.
This is my take as well. It's the business model itself that's toxic, or at least predatory, as the necessary outcome of it is to lean as hard as possible on known frailties of human behavior and psychology in order to cultivate advertising placement and revenue.
There's no tinkering around the edges of that control loop that's going to fix the problem. What we observe as the deleterious effects on society are in fact systematic (as in, "design failures") faults in the thesis of Facebook's business model.
> Facebook is trying to fix an algorithm that can't possibly work, because it has two opposing jobs: increase time spent on the platform, while at the same time downplaying the content people engage with the most.
Would it be fair to say they're trying to increase time spent by existing users on the site? Does Facebook have such a huge number of users there's no growth from building a better product? I personally think the product is awful.
I have an account, but I only log in often enough to keep it active. If I had a nice, reverse chronological feed of things posted (not shared) by my friends I would read it.
Maybe the subset of people like me is too small to worry about, but there's definitely some of us where the current algorithms work against engagement.
>How many users does Facebook have? With roughly 2.89 billion monthly active users as of the second quarter of 2021
>MAU: A registered active user who logged in and visited Facebook through the website, mobile app, or Messenger application in the last 30 days as of the date of measurement.
It would be physically possible to have better engagement, yes. But they're already the most widely used website/software platform/media outlet in the history of humanity. Given how much regulatory attention they're attracting, you could make a solid argument as a PM that it would be better for the company to reduce MAUs and spike user logins. From how Congress is talking, they're not far from just ordering Facebook to be shut down, which will reduce MAU to 0.
The Facebook Integrity team agrees with you, Zuck just doesn't want to give up the cash.
"Early tests showed how reducing [downstream MSI- the likelihood of creating long chains of reshares] of the algorithm for civic and health information helped reduce the proliferation of false content. Facebook made the change for those categories in the spring of 2020.
When Ms. Stepanov presented Mr. Zuckerberg with the integrity team's proposal to expand that change beyond civic and health content -- and a few countries such as Ethiopia and Myanmar where changes were already being made -- Mr. Zuckerberg said he didn't want to pursue it if it reduced user engagement, according to the documents."
The article makes it seem like people at Facebook have ideas on how to potentially thread the needle, but Zuckerberg (and probably a handful of other senior executives) keep putting the kibosh on it.
We know there are some people with pretty idiosyncratic political beliefs and unsavory connections on Facebook's board and within its senior executive leadership. It's not hard to imagine that maybe they like it being an engine for stoking outrage and perpetuating disinformation for reasons unrelated to the financials or for any intelligible design/engineering logic. It doesn't even necessarily have to be malicious. They could literally just be blinkered by a weird ideological belief in "absolute" free-speech.
But I think big tech didn't use their power years ago when advertisers started demanding their ads don't appear next to X, where X is the offensive thing/position/etc.
Big tech has power but it folded and immediately started policing when this happened. Instead of a united front each of them caved to advertisers and now the content on each platform must be kosher according to the ethics that the platforms' rulers have.
Big tech never had that power versus advertisers. There's no possible scenario where major mass market advertisers like Coca-Cola or Toyota would ever tolerate having their ads displayed on highly offensive content, like say a Facebook group for white supremacists. If the social media companies hadn't imposed stricter content censorship then the advertisers would have simply dropped those channels to protect their brand images.
> Data scientists on that integrity team—whose job is to improve the quality and trustworthiness of content on the platform—worked on a number of potential changes to curb the tendency of the overhauled algorithm to reward outrage and lies. Mr. Zuckerberg resisted some of the proposed fixes, the documents show, because he was worried they might hurt the company’s other objective—making users engage more with Facebook.
Delete your facebook account. Zuck and co. are poisoning their users for profit.
Am I in the super-minority if I say I've unfollowed about 80% of my FB friends? I've never seen a political or outrage post in the last 3 years since I just unfollowed everyone but my closest friends and family.
I unfollowed everybody, joined a bunch of local special interest groups and met a ton of new people who lived in the same city. Facebook is actually an incredible tool to connect with other people.
But the more I saw how FB behaved, the less I wanted to be involved in any way with their business. I ended up deleting my FB account, which even now 3 years later I feel was a huge loss for me. But I just can't do FB anymore knowing how they're driving outrage and political/societal issues in the name of "user engagement".
I wish there were some alternative, but there isn't. It genuinely feels like I've cut myself off from modern city life without it and it frustrates me so much.
It is supposed to make you feel that way. That is on FB's agenda to achieve, in order to bind you to their (dis-)services.
Real friends will not require you to use FB and sacrifice your privacy to communicate with them. You might feel a loss, but there are alternative ways of communicating. Get as many friends as possible over to messengers, which are not owned by FB. Give people a call every now and then. Write them SMS, write them e-mail. All of that is better than communication on FB services. Real friends will initiate as well. If you are not on FB, they will have to contact you through other means. If you are not worth the "trouble" or effort in their eyes, then good riddance, you do not need them.
> If you are not worth the "trouble" or effort in their eyes, then good riddance, you do not need them.
Relationships are a spectrum, and your friends move around on that spectrum over time. Social media gives you the ability to maintain some kind of connection to people even though maybe at a certain point in time, those friends are distant from you on the spectrum. But that can change at any moment and having that line of communication makes it that much easier to reconnect at any given moment.
Sending a text or calling someone randomly is not always easy to do, especially if it's someone you haven't spoken to in a while. Whereas commenting on a post or replying to a story has much less friction and can result in reconnecting with people you otherwise wouldn't.
I guess you missed the part about meeting new people? I'm in contact with people I know over other apps. But there's no replacement for that discoverability of local groups and events or facilitating the meeting with new people.
Many years ago you would similarly be excluded from some social stuff if you didn't smoke. That is one of the best business models around: tying, or better yet locking in, people's socializing to your product. Humans, being social animals, have a very hard time getting out of such a trap.
As someone in that cohort who was also there for the "glory" days of early FB (when you had to have a college email to sign up), I have to agree that at this point the idea of making friends on FB weirds me out. How? Why?
FB is what my racist parents use to circle jerk conspiracy theories with their friends in Cabo. Who in their right mind wants that anymore? (This is a rhetorical question, I understand there are still tons of people using FB, presumably because of inertia).
It's not like I'd make friends on FB itself. For example, there'd be an impromptu meeting of random strangers in town for whatever event and you'd make friends through that. This was only a few years ago.
Or some woman randomly posted that she was going to the movies in like half an hour and would anybody be interested in joining? We became friends from that and ended up doing a lot of things together, and I know I never would've met her otherwise because of how our lives didn't overlap at all before meeting her.
FB can enable a lot of cool meetings with new people that I haven't been able to find anywhere else.
FB is, in the end, a tool. It's what you make of it. If your circle of contacts is tied up in conspiracy Trumpian bullshit, then sure, FB is a cesspool. But I was a part of plenty of special interest groups filled with incredibly friendly and helpful people. It's like a completely different world.
> As someone in that cohort who was also there for the "glory" days of early FB (when you had to have a college email to sign up), I have to agree that at this point the idea of making friends on FB weirds me out. How? Why?
I was a freshman when Facebook was introduced and required a .edu email. I've used it on and off since the beginning. I've made lots of friends there - both local and in other states/countries. It usually comes down to interactions in groups for niche interests. Why is this so striking to you?
I just don't know people that still use Facebook with any regularity. I understand selection bias is at play (I'm inclined to hang out with people like myself, and I think Facebook is a flaming trash-pit).
I made friends on Facebook years ago. I felt fundamental changes in both the user-base and use-cases that I don't find enjoyable. I assume most people under the age of 40 feel similarly, and those that don't aren't the type of person I'd want to be around.
I'll rephrase: It's not that I assume nobody is making friends on Facebook in this day and age, just that I'm less likely to want to befriend them myself and I was applying this personal assumption unilaterally.
I completely understand avoiding Facebook; I haven't deleted my account yet, but I've been off it for the past year and a half. I also understand it not being your choice for a people meeting platform. But there is such a variety of users there - I think it's a bit silly for you to think that there's no one there you'd be willing to befriend on Facebook.
I specifically said "less likely". Less is not always "none". I may be willing to befriend .00001% of their userbase today compared to .0001% 10 years ago.
Let's bring this back: Of course there are interesting, worthwhile people on Facebook (presumably you are, and I'm taking time to talk to you). But who cares?
If they're in communities that I'm actively engaging, then I'll find them through those channels. If they're not...I don't care. There are billions of people I will never interact with. And that's fine.
I don't want Facebook. The cons outweigh the pros. This doesn't mean there aren't any pros. I realize now that other people still like Facebook for purposes I don't have.
I ended up in the same place, by a different route.
Because I spend too much time on HN, I was convinced that the correct answer was to delete my FB account. So I did.
But I really missed out on some family interactions, and after about a year I caved and created a new account. But I have been super careful about who is allowed to be my friend on FB. And for the few people I care about who still insist on posting political nonsense, I muted them. My feed on Facebook is basically inert, I only see posts about family things, pictures of grandkids, that kind of stuff. No politics at all. I'm glad to have it back.
I am more careful now to appreciate that HN commenters lean heavily toward the extremes, and do more due diligence in my own decision making to ensure I'm being sensible, not reactionary, when I follow advice that originates here.
I did that too. Until I realized most of the group that remained overlapped with friends/family I communicated with in Telegram, WhatsApp and SMS chats. So there was no point to keeping a Facebook account.
Addition/edit: Furthermore I closed my account because I truly and sincerely believe that mass social media has a more negative impact on society than addiction, untreated mental illness, and all forms of abuse (child, domestic, etc) put together.
That was my experience as well. Facebook was just reduced to people I talked to on other channels every week, so I ended up not logging on to Facebook anyway. When GDPR rolled in and Facebook asked to accept the updated terms and conditions I just clicked "no" and Facebook told me that I'd have to close my account.
> Am I in the super-minority if I say I've unfollowed about 80% of my FB friends? I've never seen a political or outrage post in the last 3 years since I just unfollowed everyone but my closest friends and family.
Yes, but when I started unfollowing people who were oversharing political or outrage posts, that included my family, at which point I thought: why bother using it at all?
Probably? I know that you're capable of doing that, but most of our friends and family log onto Facebook to zone out, not to curate their feed. Their feeds will continue to be full of garbage, not even counting the ads that show up that often target people for political ends.
I would much rather Facebook removed link posts entirely, and also provided an easy toggle option to disable shared content in the feed.
I have several hundred friends added (I think that's average), but I've never needed to do this because none of them seem to be interested in politics. Even the ones that have studied it at university only mention it occasionally and very briefly. However, my parents' friends seem to comment on every viral politics thing they can find.
Is this any more helpful than saying "throw away your iPhone" or "uninstall Windows"? For the record, I fully agree that Facebook is poisoned, but frankly this is true of any company with a board of shareholders. The vast majority of people just don't care, like how we failed to get America to care about recycling. Habits, vices and consumerism are what drive this machine, even the VC one that funds the very website we use to discuss this stuff. I reckon the majority of users are just fat and happy, with no real incentive to leave. That's how TikTok and Twitter have stayed successful, though it's often at the cost of the user.
It's more like saying "don't smoke." You'll live a longer healthier life if you choose not to engage with this particular vice. We managed to get many Americans to give up cigarettes, and even created a market for cessation products like Nicorette. So I do believe there is a market solution here but it needs to be coupled with the sort of public education I don't have a lot of hope for.
Excerpts from one of the leaked documents. Disturbing:
> Research conducted in the EU reveals that political parties "feel strongly that the change to the algorithm has forced them to skew negative in their communications on Facebook, with the downstream effect of leading them into more extreme policy positions." For example, in Poland, "one party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative and 20% positive, explicitly as a function of the change to the algorithm… Many parties, including those that have shifted strongly to the negative, worry about the long-term effects on democracy." We have heard similar feedback from parties in India and Taiwan.
> News publishers, too, are concerned about the incentives MSI created. We received direct feedback from BuzzFeed CEO Jonah Peretti that his team feels "MSI ranking isn’t actually rewarding content that drives meaningful social interactions. They feel pressure to make bad content or underperform." Peretti related that "when we create meaningful content, it doesn’t get rewarded," but more sensationalist and divisive content (such as "fad/junky science", "extremely disturbing news", "gross" images, and content exploiting racial divisions) is more successful.
> For example, in Poland, "one party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative and 20% positive, explicitly as a function of the change to the algorithm… Many parties, including those that have shifted strongly to the negative, worry about the long-term effects on democracy."
Not only democracy but, eventually, world peace. There are an increasing number of countries turning toward authoritarianism and the effects here could not only polarize political discourse, but also international relations.
> the change to the algorithm has forced them to skew negative in their communications on Facebook
I suspect that one of the reasons for this is that negative reactions to posts (like anger) actually have a positive effect on engagement. Facebook never really implemented any kind of downvote like other platforms have. Downvotes tend to suppress content that people don't want to see.
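The difference between the two regimes can be made concrete with a toy comparison. All field names and numbers below are invented for illustration; this models the idea, not any platform's real ranking:

```python
# Two toy ranking rules. Under a pure engagement metric, an angry
# reaction counts in the same direction as a like; a downvote-style
# signal instead subtracts, suppressing content people don't want to see.

def engagement_rank(post):
    """Every reaction counts as engagement, even anger."""
    return post["likes"] + post["angry"] + post["comments"]

def downvote_rank(post):
    """A downvote subtracts, so disliked content sinks."""
    return post["likes"] + post["comments"] - post["downvotes"]
```

With plausible toy numbers, an outrage post full of angry reactions and argument threads tops the engagement ranking while sinking to the bottom of the downvote ranking, which is the suppression effect the comment describes.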
It all comes down to every PM's favorite metric: engagement.
When users get into a heated flamewar, that's high-engagement. So you want to do everything you can to ensure that the most possible users get sucked into flamewars. I feel like this should be banned, but it's unclear what the best approach would be. Mandating chronological feed order?
> it's unclear what the best approach would be. Mandating chronological feed order?
I'd use the laws around tobacco as a framework.
Ban the opening of social media accounts for children under 13; restrict it for adolescents. Ban marketing for social media over a certain size (the caveat to preserve the threat of new entrants). Require warning labels; for minors, the warning should be attested to by a parent or guardian.
Require public disclosure of any sorting, filtering and moderation algorithms and policies. Require a public comment period for modifications to the foregoing. Require open access for researchers, where Facebook retains data control but lets researchers query it (subject to oversight).
Let users sue social media platforms for damage they intentionally or negligently cause. Create a fund, financed by a fraction of social media companies' revenue, that subsidizes mental-health services for people harmed by their social media use.
Don't the constant flamewars push people away from Facebook, thereby reducing engagement in the long run? I quit Facebook several years ago because I honestly just didn't like the experience. It was miserable and I didn't enjoy going to it any longer.
I have the phone numbers of my closest friends and family, why wouldn't I just call or text when I want to talk to them?
Facebook is following the same playbook that a lot of mobile games are doing. They don't really care about user attrition because most users are low-value. If you're just checking Facebook once or twice a day, you're probably not worth the server time cost to Facebook. What they really are trying to do is target high-engagement users ("whales" in mobile game developer slang) and make the service more addictive for them.
The problem with this analogy is that when you lose a lot of users, you lose the network effect. So even if a user is low-value in terms of their actual engagement, they bring value by luring others in.
This is completely anecdotal but at least in my friends circle in the US a lot of people have stopped using Facebook (Messenger is different) or they use it for very specific purposes like Marketplace, specific events, or major life updates. None of those are really engagement-driving activities. It seems like most of Facebook's growth is coming from developing nations and older generations who are just now finding out about it or haven't bothered moving on to other forms of social media.
So basically "How much daily misery can we put our users through algorithmically before they break down and uninstall"? Why does anyone who is sufficiently aware of this still use Facebook? Not even as an ethical question, why use something that makes your life miserable?
This reminds me of Jonathan Haidt's "The Righteous Mind". One of the key points in the book is that "righteousness binds and blinds": the "binds" part is that there's a social bond between people who believe the same thing, and the "blinds" part is that righteousness will blind you to certain truths.
I think flamewars are like that. You gotta leave a comment otherwise you feel your side loses out.
'The long run' looks like it's going to play out over three generations, not unlike Big Tobacco.
Not only can you get rich in that sort of time frame, you can retire and let your kids take over. The grandkids will be proper monsters who don't work at all, so they will probably cash out before the lawsuits get teeth.
Chronological feed wouldn't help. Friend of a friend pushes "bad stuff". Friend comments to tell him it's stupid. They get into argument.
Each of your friend's comments would still give FB a "chronological" reason to bump this post to the top of your feed, enticing you to get into the mix and increase the viral engagement of the original "bad stuff".
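To illustrate the point, here is a toy sketch (nothing to do with Facebook's actual code; all the names and numbers are invented): a feed that is "chronological" in the sense of sorting by latest activity still bumps an old flamewar post back to the top every time someone comments on it.

```python
# Toy model of a "chronological" feed that sorts by most recent activity.
# All names here are invented for illustration; this is not Facebook's code.

def rank_by_last_activity(posts):
    """Newest activity first -- one common reading of 'chronological'."""
    return sorted(posts, key=lambda p: p["last_activity"], reverse=True)

feed = [
    {"id": "vacation_photos", "posted": 100, "last_activity": 100},
    {"id": "bad_stuff",       "posted": 10,  "last_activity": 10},
]

# A flamewar erupts in the comments of the old "bad stuff" post,
# refreshing its activity timestamp with every new comment:
for comment_time in (120, 130, 140):
    feed[1]["last_activity"] = comment_time

ranked = rank_by_last_activity(feed)
print([p["id"] for p in ranked])
# The old post now tops the feed purely on "chronological" grounds.
```

Sorting strictly by the original `posted` timestamp instead of `last_activity` would avoid the bump, which is presumably what proponents of a true chronological feed actually have in mind.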
The big-tent, everyone-welcome sites are toxic because people don't interact as closely in real life with people from different groups. We group for a reason. Within those groups, common understanding of certain ideas is the connection. With that connection you let down your guard; you can be this side of yourself freely. In a big crowd you have to wear all faces to relate to everyone.
Anecdotally, I've seen far more polemic and vitriolic posts on so-called alt-tech social media sites compared to even Twitter and Facebook, which are horrendous in this regard. Sites like Gab, Voat, Ruqqus, etc seem to be filled with the absolute most toxic, angry, and hateful discourse imaginable.
I've noticed that sites built around something other than "just chatting" tend to do better: many hobby-specific forums, HN, hobbyist subreddits, etc. all seem healthier discourse-wise when focused on something specific, but I'm not quite sure why that would be the case.
Niche communities are easier for a small handful of volunteers or employees to moderate. There's simply less shit to clean up. Compare that to Facebook, Google, etc., all of whom have to hire armies of moderators who burn out in a span of months. The attrition rate is very high, and there are just too many moderators for consistent platform policy enforcement.
Furthermore, if someone is just there to be toxic, you have an easy justification to shut them down: "I'm sorry, but this is an underwater basketweaving Discord, please stop spamming #offtopic with why you think we should call in the National Guard to fight the war on Christmas." Many small communities have very strict regulation of political speech purely because it causes most of their problems.
Alt-tech is trying to collect all of the toxic bullshit in one place, so obviously they're going to be way more toxic despite their small size. It's also possible for niche communities to have their own toxicity problems, too. It's more that large communities are just inherently more difficult to govern.
My recommendation for making Facebook a healthier place: unfollow everyone and everything. There are scripts that will automate this process for you.
Once you have a completely blank slate Facebook feed, you're free. You can proactively look up your friends' activity if you want to, but otherwise you just get the good parts of facebook (like event invites and a chat app) and great meme groups like "This cat is CHONKY" and "Foods with threatening auras."
I can absolutely confirm this; I fell down an unfortunate rabbit hole of posting about politics on Reddit (fairly mainstream politics, but politics nonetheless). There's nothing quite like the dopamine rush of righteous anger, and mixing that with the disinhibition of just being another pseudonym, it's quite easy to understand why "enragement is engagement" is such a powerful and insidious tool to drive people's use of social media. I was a shitehawk and I'm not afraid to admit it.
I'd never advocate something as obviously ridiculous as requiring ID to post on social media or the other sort of rubbish people outside of the tech industry come up with to deal with this problem, though I'd also point out that quitting Reddit did wonders for my mental health and generally made me less of an arse. Facebook strikes me as even worse, even more tuned towards pissing you off and preying on that human need to have the last word.
As I recall, when Facebook was first becoming a thing, a major narrative was that the quality of discourse and level of civility would be so much better than anonymous platforms - after all, they're your friends, of course they're gonna be on their best behavior. Seems that didn't happen, at least partially because Facebook figured out that civility is boring, doesn't get clicks.
In a way one's politics has usurped one's religion. People are getting upset at others having differing political opinions the way they get upset at others having different religious beliefs. And they hold their beliefs as sacrosanct. Their identities are inextricably linked to being a Democrat/Republican/etc. It's profoundly sad.
That's what I called it just before I deleted Twitter and Facebook.
I made a dummy Twitter account to be logged into, that has no friends, just to be able to see links people post. There's no such thing as a dummy Facebook account and I've not looked at anything on Facebook since. The only account there is their shadow account on me, something Twitter has no interest in doing, and which Facebook must do so it can accurately assign accounts to every living human and avoid dummy/sockpuppet accounts.
For sure, but it is also that outrage is hugely profitable. We need a digital advertising tax on revenue. By making all these internet companies so profitable, we've given them perverse incentives to make it worse.
Good luck supporting a $trillion valuation on the basis of a "healthier place".
The business of social media platforms is to maximize profit by monetizing their user base. Friedman instructed MBA students that they must pursue this diligently, using any and all legal tricks in the book.
Yes, sure, as good managers they need to protect long-term shareholder value. They should, in general, want to strike the right balance that will be maximally extractive without incurring uneconomic fines or unduly damaging the reputation of the platform. With all the top brains on the payroll, this is an optimisation problem that seems easy to crack.
For as long as society thinks this business model is kosher, there is really nothing to be done about it. If we want to ever turn a page and start worrying about more serious problems, we simply need to move on to open-source, decentralized media.
Yes, it is my view too that solving the "social media" fiasco is a prerequisite for (or at least a major facilitator of) tackling all the real and urgent social problems.
What I meant to say is that it is pointless to wait for Facebook to fix itself, as that cannot be done in the universe these folks are operating in.
It would have to be either:
i) a heavy (Chinese-style) regulatory hammer descending on them (0% chance in the oligopoly-captured US political system), or
ii) any decent person (anyone who thinks there is such a thing as society, has any affinity with humanism and its values, and respects and empowers people as persons) simply abandons them and takes the discussion elsewhere. The critical mass for seeding viable alternatives is much smaller than the bulk of average users.
I always feel like people never discuss the elephant in the room, which is that whoever you are, your feed is dictated by the people and pages you have chosen to follow. It doesn't matter what the platform is. If you give people the flexibility to craft their own feed, and they want to craft a feed full of hate and lies and nonsense, how can we ever stop them?
Meanwhile, my feed has zero politics and is comprised almost entirely of retro video game stuff, plus a few local NYC groups. I kind of love it!
1) I set my location to the country Chad so I don't see popular trends that aren't very interesting, or serve as sources of outrage.
2) I use the word-based content filters to omit annoying politicians or topics from my feed. So much more peaceful now that I don't have to see people's outrage at the latest political enfant terrible.
3) I primarily follow historians who make fantastic long threads, film critics I respect, and movie/art channels. No crazy politics stuff. Few of my friends are on it, so I don't have to see their stuff either.
If I want the news, I just read a few major newspapers. It works well enough for me!
It used to be fun when it was just my friends. Sure, once in a while you'd get that annoying person who’s trying to live a completely different life on social media, but overall it was funny memes, shitposting with friends, checking in at places. Then Facebook implemented the ultimate evil, the suggestion feature, and brought in news articles, and everything went to hell.
You make Facebook a healthier place by reducing connection and reducing velocity. The things that Facebook will absolutely never be willing to do.
You reduce connection and velocity by ending feeds that show you most of everything, and switch to making people manually seek out what they want to get updates on. People will overwhelmingly be inclined to seek out updates and connections that they get enjoyment from, rather than having agitation/reaction/trigger porn shoveled on their heads all day long.
Tear down that wall.
This would ultimately be labeled as encouraging echo chambers / bubbles. People are typically happiest in their own bubbles, and certainly the majority of the time. That was true 20 years ago. It was true 50 years ago. It will always be true.
If you make a very far left liberal type watch nothing but FoxNews as their information source, they will not enjoy it. That person is going to be happiest, by a dramatic margin, in their own bubble, swimming in their own worldview. That isn't a defect of humanity, it's perfectly ok. It's healthy to enjoy swimming in your own beliefs. People will not all agree with each other, what matters is civility, not agreement.
Stop being psychotic authoritarians and trying to force people to exist in a way they do not want to, an unnatural way. People want to live in their echo chambers, whatever that chamber happens to be. It's healthy to prefer one's own echo chamber the majority of the time (and yes, it's occasionally also good to step outside of that echo chamber). Stop trying to control them, let them live as they see fit (so long as they are not attempting to harm others), as they are happiest.
Stop pushing connections that should not exist. Not all people should be connected. Not all people will get along. Not all people will like each other. Not all people will agree on ideology/beliefs/whatever. That's perfectly ok, it's best that some people are apart, including friends from the distant past who may not like each other in the present. Facebook does the opposite and attempts to drive everyone to be connected, which was more or less their original mission. That's a mission that can only cause strife. A thousand friends? Bullshit. That's fake. That's 950 people that will just largely be annoying at best, with feeds you'll derive very little value from (or negative value). It's obvious what the outcome inherently has to be in quasi forcing so many fake connections.
> switch to making people manually seek out what they want to get updates on
Twitter lists have been my go-to solution for a while now. Yes, I still get spammed by 'promoted tweets' but it's a small price to pay for keeping tabs on my interests and also: the interests of others (which is the concerning one). Keeping tabs on the very people you most despise. What is the saying: `Keep your friends close, but your enemies closer`? I do that all day on Twitter.
Facebook is fundamentally a dead platform, as far as I can tell. I've got over 500 "friends" and I see almost nothing on the news feed, to the point where it remains static aside from the shifting ads, for days at a time.
There's a chilling effect at some level with the way that Facebook has a de facto "real name" policy, combined with the fact that it's your real-life family and friends and normies that you are connected with. Only the most disagreeable bother posting outside of the realm of baby pictures and recipes and small-business self-promotion.
Facebook agrees with you, per the article. The article places a lot of the blame on the changes to the algorithms in January 2018, which at the time Zuck portrayed as a sacrifice: "Now, I want to be clear: by making these changes, I expect the time people spend on Facebook and some measures of engagement will go down," he wrote on Facebook. "But I also expect the time you do spend on Facebook will be more valuable. And if we do the right thing, I believe that will be good for our community and our business over the long term too."
"Facebook training videos and internal memos show another reason for the change -- the company's growing concern about a decline in user engagement, which typically refers to actions like commenting on or sharing posts. Comments, likes and reshares declined through 2017, while "original broadcast" posts -- the paragraph and photo a person might post when a dog dies -- continued a yearslong decline that no intervention seemed able to stop, according to the internal memos. The fear was that eventually users might stop using Facebook altogether."
So they made changes to the algorithm and ended up making things worse, by emphasizing politics, hate-clicks, and arguments. This shows the limits of algorithms: there is a reason that actual news organizations rely on humans to decide on what information to present to you, not some machine learning black-box.
Facebook lost at being a social platform long ago by serving ads and trying to be a social media platform and a news media platform at the same time. It just does not work.
That seems to be the big issue with people trying to create a social platform. You can't have a social platform that also has ads and news going around like wildfire. Ads ruin the experience and make it feel less "social", and as for news, well, just look at Facebook and Twitter and what news sharing has done there.
People want to be able to chat and have a feed among their friends and family. Take out all the business, celebrity pages and all other pages. Just have a friends list, chat and a feed.
I look back on the AIM days and how simple it was. It was just a friends list, chats and chatrooms based on topics from what I can remember. There weren't any algorithms altering anything. There wasn't all this extra stuff like following or friending businesses, magazines/newspapers, celebrities or anything else.
I have no idea why a social platform today needs all this extra garbage like businesses, celebs etc that has nothing to do with being a social platform. I think at that point it just becomes a hub for information and not a place to socialize, chat and share among your friends and family.
People communicate to say things that they think are important. We should not be surprised when a medium that allows more broadcast-level communication becomes dominated by those things rather than "how was your day?"
Ads are a canard. The earliest non-ad-driven social networks: USENET, mailing lists etc, had the same problems.
USENET had an entirely different set of users than today's anonymous trolls, which minimized flame-fests. Sure, disagreement drove much conversation on the groups, but it was generally topic-driven and constructive, targeting ideas much more than people. Vitriol back then was the exception rather than the rule. Those groups (aside from alt.*) where denizens crossed the line often added moderation, which usually tamped down the flames.
No, I think things are MUCH worse now than USENET ever was.
Usenet moderation debates? Probably before that. Perhaps the "editors" columns of society journals.
If I recall, there were some laws about what could be printed shortly after printing presses began to proliferate that went beyond "don't say anything bad about lawmakers or the church" and ventured into early libel/slander by attempting to say "don't be controversial". Which is pretty much what everyone is wishing for now, when they're demanding various forms of censorship and filtering performed pre-consumer.
The amount of hatred on facebook is staggering. I just group text or call when it comes to friends and family. A lot of us are on discord these days because we want to communicate without all of the noise or drama.
Again and again, user-controlled "knobs" for content presentation/filtering would solve many "platform content" issues, but platform providers consistently remove user control over content moderation instead of simply providing appropriate controls.
UPDATE: Yea, the point about how those with the most influence on the platform are getting the least moderation was very compelling. So much of the problem really is just how secretive the whole thing is. I understand the nuance, but timely independent (and validated) review seems necessary for high profile cases.
>Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
This is why The Daily Wire and other such outrage aggregators are consistently on Facebook's Top 10.
Not coincidentally, it's also been the source of fuel for the anti-vaccine movement that is resulting in overflowing ICUs and well over a thousand deaths per day.
Masks, social distancing, and vaccines are political issues, and FB's feed that prioritizes "engagement" (or enragement) causes false medical information to float to the top and get the most comments/shares.
FB got more engagement but in the process became the main outlet for medical misinformation.
And don't get me started on how Twitter consistently pushes those "influencers" on the left to the top of the trending feed. "Duty to Warn", "Palmer Report", "Gravel Institute", "BrooklynDad", and others. In many ways I feel like those smug hot-takes are even more corrosive than what the Daily Wire does.
For sure, and as much as I loathe Ben Shapiro's smugness, and by extension, his "news" outlet, I'd be remiss if I didn't also point out how it's happening elsewhere from the other side of the aisle, too.
>the proliferation and promotion of medical misinformation.
I can see that working both ways. The anti-vax folks are never going to believe the 'get vaccinated or you'll die' sort of rhetoric. Better to use real numbers. There's been enough time and cases to make good estimates.
Do masks work very well? What kinds of masks? Where do people actually get COVID (home? bars? schools? hiking in the forest?). What are the actual results from non-vax drugs or treatments? How long are the vaccinations likely to be useful? Did early large-scale vaccination simply cause forced evolution of variants?
There's a shroud of mystery throughout this situation with a need by some people to simply shout down to the masses. None of this is that complicated and medical leadership is some mixture of secretive and incompetent, and I'm not sure what the strongest tendency is.
> Do masks work very well? What kinds of masks? Where do people actually get COVID (home? bars? schools? hiking in the forest?). What are the actual results from non-vax drugs or treatments? How long are the vaccinations likely to be useful? Did early large-scale vaccination simply cause forced evolution of variants?
These questions are always asked in bad faith, if they're even asked at all rather than just taking the word of right-wing talking heads at face value. Anti-vaxxers and COVID deniers don't care what the science says and willfully ignore peer-reviewed studies.
I'd say not. I'd genuinely like to know if a cloth bandana has any value, if grocery stores are dangerous, is the mass of people actually better off for an early and hard vaccine regime. It's partly intellectual interest and partly an attempt to guide my own behavior.
Having said that, everyone is too invested in their theories to be skeptical.
Better than nothing, but there are better options.
> if grocery stores are dangerous
I don't think it's possible to make a concrete answer to that question. The "danger" of any given place is going to just be a placement on a spectrum based on the airflow and how crowded it is. Where the line is to define something as "dangerous" is going to be pretty subjective. It's like showing a white-to-black gradient and asking where "dark grey" ends and "grey" begins.
But in your other post, you said:
> What are the actual results from non-vax drugs or treatments?
This is one where there is very clear scientific consensus, yet people ignore it. Take Ivermectin, of course, where it's been shown that while it does somehow kill COVID in vitro, the dosage needed to make it work in vivo is higher than the maximum recommended dose.
> How long are the vaccinations likely to be useful?
At the time of the invention of the vaccine, this was unknown, but I've always failed to see how it would be a valid criticism against the vaccine enough to choose to skip the vaccine entirely. Whether it would require a booster after 6 months or if it lasted a lifetime, they were proven to significantly reduce hospitalizations and death from COVID.
> Did early large-scale vaccination simply cause forced evolution of variants?
Anyone with a basic knowledge of microbiology acknowledges how quickly bacteria and viruses can evolve (hence why doctors always plead with patients to complete their antibiotic prescriptions and not stop taking them once symptoms subside!). Evolution of variants was inevitable. But if we had achieved 90%+ vaccination rates quickly enough, we could have mostly eradicated it before a variant could have taken hold.
And we totally could have. It's really disheartening to see the graph of COVID cases and deaths this year. From January to July, the number was plummeting as vaccinations were being distributed, but some time around June/July, we reached the point where everyone who wanted a vaccine had one, and it was only the anti-vaxxers that were holding out. Eventually, Delta happened, and cases/deaths have been skyrocketing since the beginning of August.
Of course, over 90% of the people being hospitalized or dying from COVID now are unvaccinated (there's even a subreddit dedicated to anti-vaxxers dying of COVID, https://old.reddit.com/r/HermanCainAward), but that still isn't convincing them.
At this point, I'm convinced absolutely nothing will convince an anti-vaxxer. Data doesn't matter to them. Peer-reviewed studies don't matter to them. They think the CDC and FDA are corrupt and everything they spout is a lie. If they get COVID, they'll talk about (but not follow through with) suing the hospital for malpractice when the hospital doesn't give them Ivermectin because Joe Rogan told them Ivermectin cures COVID.
I'd love to talk to the folks that work at the outrage porn outlets. Are they true believers doing whatever it takes? Actually think they print news? Just in it for the money?
How honed is the writing, and how is that done? How do advertisers (the point of the exercise, after all) respond? How has it all changed over time? How do people become writers, and how much of it is done programmatically?
It's a small thing, but something I'm struck by is how little international news is out there anymore.
Exactly, you can probably add Washington Post, CNN and The New York Times to the list. If not for the continuous lies and “outrage engineering” by mainstream media BLM riots wouldn’t have happened and hundreds of small businesses wouldn't have been destroyed in 2020, not to mention all the innocent people injured and killed for fake political cause.
Facebook and other social media are definitely partly responsible for the outrage culture and divisiveness plaguing the country.
But I assume it’s a difficult problem for algorithms to detect political agenda, as it seems to pervade every aspect of news these days.
Here is the interesting bit: "Mr. Zuckerberg announced he was changing Facebook product managers’ goal from helping people find relevant content to helping them interact more with friends and family."
What this did not take into account -- and I suppose it is easy to see in hindsight -- is that the Buzzfeed social justice stuff is proselytized by people you know. We have all heard of various people "going no contact" with "toxic" (as in "masculinity") people on Facebook and other social media. When people get into social justice, they constantly hammer the people they know about it, doing "call-outs" and "checking privilege." This does not happen to strangers; it happens to people they know ... just like a recent convert to a religion can become an irritant with their newfound beliefs and their attempts to work them into conversation on almost any pretext.
Here is another type of example for you: check your various feeds about COVID. One group is "they are killing us with their refusal to vaccinate" and the other is "they are killing us with these vaccines." Nobody is convincing anybody else, it is just yelling across the aisle. More sharing between people you know just means more yelling.
Facebook focused on the easy part -- who, upon which they already have a firm grasp -- rather than the hard part, what, namely trying to automate some kind of machine understanding of content. It's the what which is divisive and angry, not the who. My friend's wife shares photos of her garden and her chicken, fine. But she goes on and on and on about COVID, about four-fifths of her posts are about it. I'm already vaccinated, I am tired of hearing about it. And the people who are not vaccinated by now are not going to be convinced by yelling at them constantly, but it sure feels good to yell about the bad people, whoever they are.
We saw this all before with the various Christian fears about D&D in the 1980s and the Satanic panics. It's just a new flavor of people yelling at those nearby: friends and family. Now it is mediated by Facebook, Instagram, and the like.
Respectfully I disagree. The opinion section is fundamentally different (and damaging) to the news reporting, and shouldn't exist. But opinion content is many times more popular than straight news, it seems people like being told what to think.
WSJ's editorial pages have a long and storied history of supporting conservative positions that are utterly disconnected from reality, for example on climate change or the Iraq war. These are not trivial matters that we should quickly forgive them over.
Is anyone else finding this WSJ reporting a bit overwrought? They got access to some internal documents and decided to make hay while the sun shines, but most of it is frankly not very interesting. WSJ pretty clearly have an axe to grind and are building a narrative in an unbalanced way.
I’m very tempted to agree in general, and I’ve seen a lot of bad takes that cured my Gell-Mann amnesia, but on this particular take, I think they are sensible. I would know, because all this was my job in 2015. It’s partial because it’s based on contained leaks, and there are a couple of inaccuracies, but not many more than what your teammates would make.
I think the reason why it adds up is that it’s starting from a good point of view: Why is that job hard? There seem to be no easy fixes -- why? What are the people trying to fix it struggling with?
That’s universally a good question to ask anyone, and a great way to get a nuanced point of view -- far better than the usual outrage at “Why is Facebook so terrible?!”, which is best described as scapegoating, to make an obvious reference to Thiel’s PhD advisor.