> “Our work is not fake news,” Filippo Menczer, a professor of informatics and computer science at Indiana University and a coauthor of the study, told BuzzFeed News. “In the moment that we found the error, we immediately contacted the editors to retract it. It was our own initiative. We were not trying to trick anyone. This is how science works.”
>> we immediately contacted the editors to retract it.
And that's how the most effective fake news works. A tiny fraction of the people who see (and spread) the original article will ever see the retraction. And even fewer will spread it.
That’s true, but it’s an unavoidable consequence of research publication. You don’t produce new science and hold onto it for years waiting to see if someone challenges your non-published result.
On the other hand, I think fake news is deliberately engineered to achieve that effect. So unless this group was deliberately producing fake conclusions, I personally wouldn’t call this fake news.
I'm not even talking about research publication. I'm talking about today's mainstream press. The algorithm there is simple: issue a bombastic, click grabbing "story", let people generate traffic (and ad revenue) for a few days. Then either change the article, or delete it, or issue a milquetoast "retraction".
A team I ran at one of my past jobs was interviewed by NYTimes once. What was published was extremely editorialized to drive clicks, and as a result bore little resemblance to what people actually said. We didn't even bother asking for a retraction, but after that incident I've been extremely distrustful of basically everything I read in the media, no matter the source, unless I see direct, unedited evidence. And even then I'm distrustful if evidence appears to be taken out of context, which it is at least 90% of the time. I just wish the "journalists" would stop killing their own profession, and start behaving like adults. The only prominent voice I trust these days is Glenn Greenwald.
I don't know about that. I would say most people know that the anti-vaccine research was retracted. And especially if you search for info, you are more likely to read about the fact that it was wrong than not. If the impact is large enough there will be a maybe-not-equal, but opposite reaction.
They weren't that careful. It needs to be standard to require "independent" (as far as that is possible) replication of all studies at least once before they are taken seriously. If that means publishing half the current number of original research papers, that is fine.
This simple practice would take care of all sorts of issues (bugs, fraud, bias, random fluctuations).
Well, the way research is usually run, they look for "statistical significance" and try to have "power" of ~80% to find it.[1] The power is controlled by the sample size, which is a big determinant of how much is spent on the study, so it is chosen to be the lowest acceptable value.
Anyway, my point is if there is an 80% chance of getting a result and you run two trials with those same odds, then only about 64% of the studies will replicate with the correct result (in R):
p <- 0.8  # power: chance each study detects the true effect

# Simulate 1e5 pairs of independent studies of the same effect
res <- t(replicate(1e5, sample(0:1, 2, prob = c(1 - p, p), replace = TRUE)))

# Row sum: 0 = replicated wrong result
#          1 = non-replicated (conflicting) result
#          2 = replicated correct result
table(rowSums(res))
#     0     1     2
#  3944 32135 63921
So, the standard practice is designed to generate a large percentage of conflicting results.
[1] 80% is the target, but in practice it is probably optimistic.
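Those simulated counts line up with the closed-form binomial arithmetic. A quick cross-check (sketched here in Python rather than R, purely for illustration):

```python
# Each independent trial detects the true effect with probability p (the power).
p = 0.8

# Outcomes for a pair of trials of the same effect:
both_replicate = p * p          # both detect it: replicated correct result
conflict = 2 * p * (1 - p)      # exactly one detects it: non-replicated result
both_miss = (1 - p) ** 2        # neither detects it: replicated wrong result

print(round(both_miss, 2), round(conflict, 2), round(both_replicate, 2))
# 0.04 0.32 0.64 -- matching the simulated frequencies above
```

Note that lowering the power only increases the share of conflicting pairs, up to a maximum of 50% at p = 0.5.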
How do you independently replicate research that hasn't been published yet?
And when it's finally published, do you publish both papers and credit both teams? If not what's the incentive for the replicators to consider a non (yet) influential work?
Both projects are funded simultaneously as part of the same grant. If it is worth doing once, it is worth replicating. Vice versa: If it isn't worth funding the replication then the project isn't worth doing.
Obviously pre-registration is required if both groups will follow the same plan.
If not enough is known to do that, then there needs to be a pilot study done to figure it out. Basically what is being published now is a bunch of pilot studies.
He never said replicate before publish. He said replicate before taking seriously.
Of course, it’s not in the hands of the author that they be taken seriously. Essentially what op is saying is the collective community needs to be more scrutinizing, a statement laced with many levels of irony.
Honestly after a group publishes it’s necessarily someone else’s responsibility to reproduce. So publish quickly, and hope it gets enough attention that a reproduction comes quickly.
Honestly this seems like more of a media issue. They take a story and run with it and don’t care if it’s right.
I think the title of this article is a bit unfair; pretty much all of our understanding of science is going to be wrong at some level at some point in time, either due to mistakes in the experimentation, or in this case a software bug; good scientists realize this and issue corrections or retract the paper. Doesn't "fake news" imply dishonesty?
I really don’t believe it’s fair to call journal articles or letters published in peer-reviewed academic journals “fake news” if they turn out to be wrong, or if the authors retract the paper. Part of the larger process of scrutinizing scientific research is the interaction among scientists, even with competing ideas, in such publications. Where I think we cross into “fake news” territory is when, immediately upon publication in a journal, a university PR department pushes out a release to all the media, often with WILD claims not representative of the research itself. Or when publications such as Scientific American pick up the paper and push out a poor layperson’s analysis without any critical review.
Not necessarily, which is why I've always disliked the term, even apart from how it's used so often as to be meaningless rhetoric. Plenty of people might take "fake" to mean false, regardless of intent. And I would argue that "fake news" can also be interpreted to mean that something mundane/previously known is falsely insinuated to be shocking "news" e.g. "Barack Obama discovered to have lived with single white teen mom", which is technically true (his mother was white and living alone at the time of his birth), but the intent of the headline is obviously to imply scandal.
That said, I don't think it's normal for a "software bug" to result in such an egregious error as to require a retraction of the study. I'd like to know why the peer review process failed to detect the problem.
> That said, I don't think it's normal for a "software bug" to result in such an egregious error as to require a retraction of the study. I'd like to know why the peer review process failed to detect the problem.
Only because most papers never get the work checked at that level. Software bugs causing errors in studies are extremely common. There's a reason a tiny, tiny fraction of researchers share their code without kicking and screaming, and it's not because they'll be scooped on their next paper if they do. (I'm slightly jaded here.)
In what field? I work in research and people love to share their software and have you cite their tools in your papers. Software has no chance of being popular or vetted enough to become a standard without sharing it.
I think the answer is that peer review is essentially broken for finding specific technical errors, and that most errors like this are never acknowledged and don’t lead to retractions, even when they are found which they often are not.
> That said, I don't think it's normal for a "software bug" to result in such an egregious error as to require a retraction of the study. I'd like to know why the peer review process failed to detect the problem.
A Bug in FMRI Software Could Invalidate 15 Years of Brain Research [1]
That is one good example, and there are countless more. Most people realize how the existence of “fake news” raises the question of what constitutes “real news”, but it’s become frighteningly apparent that very few acknowledge that trust requires more than mutual responsibility. Most importantly, it requires universal forgiveness and a right to interrogate faith. The acceptance of existing “fake news” is not a good sign.
I don't know, I might push back here. While fake news can certainly be interpreted as news that is false, that is not how it is usually used. There is definitely an insinuation, and while the vanilla reading of the definitions of "fake" and "news" don't contain this insinuation, it exists nonetheless.
That being said, as a writer I might not be able to resist pointing out the irony that the paper on fake news was fake news :). I really can't fault the author.
Fake news is defined as news the speaker doesn’t like and wishes to discredit, so it has nothing to do with the news itself, but rather with the speaker.
That is certainly not the definition of fake news. Fake news is news that is reporting on things which are not factual as though they are. What you are talking about is simply a dishonest speaker, which obviously has everything to do with the speaker.
I think it’s pretty cool that computing science is a unique exception to this rule. Because computing semantics were constructed from first principles, they can be completely understood in a way that things we didn’t construct, like genes, can’t be. Computing scientists are just people so of course they can make errors, but with diligence those errors can be made extremely rare. Knuth is a living existence proof for this claim.
Errors are common in mathematics in practice, though, and even expected to a degree because errors in a deep theorem or proof can be extremely subtle. A complex monograph might contain dozens of proofs and lemmas, all of which are completely interdependent so one mistake can be a major problem requiring significant revisions.
Part of the purpose of publishing in the first place is to see if other people can find mistakes in your logic. This is particularly the case at the real frontiers of science and theoretical mathematics, where truly new ground is being broken. Computer-aided theorem provers promise to improve this, but they are currently fairly specialized tools with limited use.
There are certainly some examples. However a lot of older mathematics was actually quite sloppy formally speaking.
It was the demands of early primitive and unreliable computing hardware that forced developers to focus on provable software correctness: it was the best way to find hardware bugs! Dijkstra has a great monograph on developing early interrupt handling correctly. More people should read it, since as anyone who has worked with Unix signals knows, it's easy to get wrong in subtle and nasty ways.
Edit: Also to speak to the pure math point, Dijkstra's official title after graduating was "Mathematical Engineer."
I'm aware, but I'm more talking about theoretical computer science than practical software and systems engineering. Methods for formal verification and design of software are of course interesting, but quite different from formal verification of theory. What I wanted to point out is that even a very formal field (advanced, pure math) is still plagued by very mundane and very human logical errors. Further, these errors are not easily spotted by machine, at least not with current technology.
Regarding Dijkstra, I personally consider TCS to be a branch of applied math or maybe applied logic. What most programmers do in practice, however, is more akin to engineering (or I'd say, is a type of engineering).
I've been skeptical of the news for far longer than this recent "fake news" fad. So to me it is great that people are finally getting skeptical.
Just read the news on any political topic to see they have been getting away with reporting rumors from "unnamed officials" as news for decades, with no one even recording the track record of these anonymous sources. So we have no way to figure out if they are reliable or not.
It reminds me of when wikipedia became a thing and no one trusted it since "anyone could edit it". I was like "you should be skeptical of the regular encyclopedia too."
You can be cheeky and still take the matter seriously. Just 'cause the paper describes (or, used to describe?) a serious problem, doesn't mean one has to tread carefully around such an obvious foundation for situational humor.
I don't believe that is true. I've witnessed too much satire of that sort become reality, both offline and online. Of course, I can't speak to the rate at which it is harmful.
I believe you're exaggerating the threat a laugh at the "fake-newsness" of a scientific study about fake news can bring.
One can't account for everyone, but there's no conversation to be had, ever, about *anything*, if you automatically presume that the other person is not at least somewhat serious about the subject matter and does not care about it at least a little bit.
I'm sure grown men and women will not lose their capacity for critical thinking after a laugh. If anything, the humor's role is to relieve tension. Maybe it would enhance the conversation.
Not that there is any, at the moment. The paper got retracted, and we know why, and we know the reason wasn't falsification but adherence to the ethics of science. The way I see it, there are other subjects around the news that are worth considering – for example, as someone in the comments has already mentioned, why the bug wasn't noticed earlier.
I believe we have very different personal experiences with this sort of thing. I'm jealous of you! I am constantly surprised at how many people are very much not serious about a subject -- even when they think they are -- because they are more interested in winning than learning.
I am, of course, guilty of this all the time. The nuanced difference is when the individual no longer cares about correcting that vice (I know because, well, they've said so).
You're right, I am exaggerating the danger. I will assert, however, that the danger is nonzero and deserves more than dismissal. Mostly I urge thought about how the weight of these sorts of jokes can subtly affect the perceptions of people rather than insisting on the triviality of a single event.
This is so deliciously meta... news about the spread of fake news appears to itself be fake news, and the spread of said fake news displays all the properties described in the paper.
Wrong science is not "fake news". A retraction by an author isn't a refutation by an antagonist. The conflation of the two, however, is definitely on the spectrum.
They messed up and told everyone so. That's pretty much the opposite of propaganda falsehoods.
Given that retractions have less publicity than their initial publication, I don't see why an unscrupulous entity (talking in the hypothetical and not about the authors) can't use a publication-followed-by-a-retraction to smear something/someone while also trying to maintain legitimacy.
The authors get some points for issuing a retraction, but they lose some for not doing proper due diligence first. Especially when they likely published their paper to cash in on the zeitgeist.
I think the key point here is that nobody knows precisely what fake news is, so we all create our own, often incompatible, definitions. This probably goes back to the source of the word. The term's usage seems to have been organized by mainstream media outlets to attack smaller sites. If that sounds controversial, check out the trends on the term [1]. It was a basically nonexistent term, in spite of what it means having been a thing for many years, which then spiked to ubiquity everywhere in a matter of days. That reeks of collusion.
And so does its demise. The term backfired and began to be used against mainstream sites when they ran articles that were 'factually challenged'. And though 'fake news' has hardly faded, now that the big players have almost entirely stopped using the term - as quickly as they had chosen to start using it in the first place - it's trending back to 0. You even now have articles such as CNN suggesting we "ban the term 'fake news'" [2], a WaPo columnist suggesting, "It’s time to retire the tainted term 'fake news.'", and so on [3].
The moral of the story being just yet another telling of Frankenstein. 'Fake news' as a term probably did well in focus tests, but once the term was used in the wild it took on a life of its own leading those that created it to desire nothing other than its extermination.
That wired article points to an October 12th article[0] from the Washington Post as first using the term. The term doesn't seem to have been popularized until November 12th, when it takes off[1]. That's a full month later, but given the timing I think it's obvious why people suddenly started talking about it. Trump doesn't seem to have tweeted the term until December 10th[2]. At this point however I think he basically owns it. The British Government agrees with me that the term should be avoided[3]. It seems clear to me that no matter who started it, the word is in essence a propaganda term for 'propaganda.'
The "news" was the "science journalism", not the "science". Science journalism can often be "fake news"... and a refutation by an antagonist isn't necessary for "fake news".
Many of the original news stories about the paper were the "fake news".
Really? Prop-a-gan-da. It's spelled exactly how it sounds.
The term "fake news" is borderline infantile. Maybe we should just redirect en.wikipedia.org to simple.wikipedia.org. Hey, it's simpler right? In fact, let's phase out English from public discourse and all restrict ourselves to Ogden's Basic English. We'll have to rename "fake news" though, 'fake' isn't on that list. It's now "false news."
This vocabulary level and overall comprehension of the situation are suitable for a child or teen making an old-timey trollface meme about the scenario, not a data scientist.
Anyone else here not even bother reading about new psychology findings? Ever since learning about the "replication crisis" and seeing how much popular psychology has fallen victim to it (e.g. priming, ego depletion, power poses, implicit bias, stereotype threat), I've stopped paying attention.
I wonder if news organizations will bring in rules not to report on them until there's been a number of successful replications. They just mislead people.
It's confirmation bias. When something confirms their biases, most people are not going to actively try to refute it. This is one big argument for avoiding ideological homogeneity in academia. Homogeneity reduces the quality of work, since ideas that are not properly supported are less challenged than they would be if they went against the biases of other researchers.
Consider the replication crisis in psychology as another embodiment of this. This was all really started when a replication study attempted to replicate a number of impactful studies from top tier psychology journals. It turns out that 64% of all studies, including 74% of social psychology studies, could not be replicated. That means if somebody actually tried to replicate any given study, they'd more likely than not find it was dodgy. But nobody did this, because there was not much motivation to do it. Refuting others' studies hurts them, likely creates enemies for you, and doesn't do all that much for your own career.
It's a messed-up system that basically needs ideological heterogeneity to create the motivation needed to ensure good quality. But such heterogeneity is practically nonexistent in many soft sciences nowadays, and to some degree the problem is even starting to seep into the hard sciences.
It takes a lot of work to comb through other people’s work in detail. Even revisiting old papers that you yourself wrote can have you scratching your head and reaching for your notes from the time.
Maybe the authors had some problems using these methods for another more recent project, and found the error. Retraction doesn’t mean ‘it’s wrong please pretend this never happened,’ they will pull the paper as it contains an error, fix it, then resubmit it.
Seems exactly how researchers should behave in this case? Putting the search for truth ahead of potential reputation hits. In my eyes at least, this makes me more likely to trust them, as I know they're standing behind their work.
Now the question is how widely the retraction will spread. How many of the news sites that posted about this paper in the past will post about the retraction as well?
That's a huge issue with finding credible media sources nowadays. Depressingly few will admit they're wrong/update an incorrect story, and those that do will barely advertise that.
The notice as it appears in the original journal can be found here (though it lacks the context about how this study was popularly shared/written about): https://www.nature.com/articles/s41562-018-0507-0
Much of the discussion on this HN article focuses on different perspectives on the term itself, and what it means. Its contemporary usage primarily indicates websites which purport to be local news organizations but instead write fabricated articles with specific deceptive intent. Unfortunately, some have taken to hijacking the term to describe "articles/publications that I choose not to believe, or that I have come to believe are not credible."
I think it's this ambiguity (depending on the speaker) that causes some distaste for this term. But I think we should instead choose to embrace the former meaning and reject the latter. Or if absolutely, positively necessary, consider the latter to be the true meaning and come up with a new term to describe the former.
I don't think we need to throw up our hands in the face of this lexical challenge. To do so would forfeit all debates to some epistemological quandary.
“websites which purport to be local news organizations which instead write fabricated articles with specific deceptive intent”
Great summary. My perception is that was the original usage, and I am okay with it. I wouldn’t say it’s the common usage now since the term was hijacked by the president to mean the latter definition, and his reach and influence beats that of those using the term in the original sense. I’m not sure if it can be rescued at this point, so I prefer the solution of ditching it.
The phrase lost any realistic meaning after it was hijacked by Trump. This was his intent, since it originally meant the complete horseshit that propagandists were publishing to support Trump, spread by targeted Facebook news articles.
I don’t expect to hear the term much by the time we don’t hear about him much.
Where do you envision that term being applied exactly?
It would make sense if the term was only leveled against specific news that is not based on openly verifiable fact. However, some individuals use it to refer to entire news organizations, seemingly ones that oppose their views, while not applying it to other organizations that clearly have the same or lower standards for verifiability.
https://www.buzzfeednews.com/article/stephaniemlee/fake-news...
And really, what's better? 400 studies that don't replicate, or 200 that do?
[1]: https://www.sciencealert.com/a-bug-in-fmri-software-could-in...
/fāk/
adjective
1. not genuine; counterfeit.
"fake designer clothing"
synonyms: forgery, counterfeit, copy, sham, fraud, hoax, imitation, mock-up, dummy, reproduction, lookalike, likeness;
But it is true that it is a difficult battle to fight.
[1] - https://trends.google.com/trends/explore?date=all&geo=US&q=f...
[2] - https://edition.cnn.com/2017/11/26/opinions/fake-news-and-di...
[3] - https://www.wired.com/2017/02/internet-made-fake-news-thing-...
[0] https://www.washingtonpost.com/news/the-intersect/wp/2016/10...
[1] https://trends.google.com/trends/explore?date=2016-09-01%202...
[2] https://twitter.com/realDonaldTrump/status/80758863287799808...
[3] https://www.telegraph.co.uk/technology/2018/10/22/government...
What once was a real media outlet arguing to ban words they don’t like... hmm
(Source appears hugged to death.)
It wasn't clear from the RetractionWatch page that the authors think that at least some form of the paper is still worth republishing.