Can we reduce "fake news" with anonymous group shaming?

I have many thoughts on the problem of "fake news" (which is to say, deliberately constructed false reports designed to spread and deceive) and the way it propagates through social media. This hot topic, seen as one of the largest threats to democracy ever to arise -- especially when combined with automated microtargeting of political propaganda -- has people clamouring for solutions.

Some of the proposed solutions are problematic in their own right: appointing social networks to be the arbiters of what is real and what is fake; censorship executed by web sites, by governments, or both; rules similar to the "false news" law in Canada, which was ruled unconstitutional after it was used to convict a Holocaust denier. (See R v Zundel.)

Anonymous shaming

Here I propose an alternative in the form of semi-anonymous shaming. If we can create social consequences for spreading fake news, we may be able to reduce it. Of course, if a friend posts some fake news online, you may be inclined (as I often am) to call them out on it. But that is not typical behaviour; most people are afraid of damaging a friendship that way.

Perhaps we can craft a way to call friends out for posting fake news that makes them feel some shame in forwarding it, but which can still be done in friendly company.

At a first level, networks could include a tool so that when you see what you believe to be fake news, you could tag it. Those upstream who forwarded the story could then, after some number of such tags, be told, "5 of your friends want to point out to you that this story is fake." Additional lines of comment could be attached without attribution.
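To make the mechanics concrete, here is a minimal sketch (in Python) of how that threshold-based anonymous tagging might work. Everything in it is a hypothetical illustration, not any real network's API: the names tag_as_fake and notify, and the TAG_THRESHOLD value, are all assumptions.

```python
from collections import defaultdict

TAG_THRESHOLD = 5  # hypothetical: tags required before the forwarder is told

# (story_id, forwarder_id) -> set of friends who tagged that forward as fake
tags = defaultdict(set)

def notify(user_id, message):
    # Stand-in for a real delivery channel (feed banner, private message, ...)
    print(f"[to {user_id}] {message}")

def tag_as_fake(story_id, forwarder_id, tagger_id):
    """Record one friend's anonymous 'this looks fake' tag on a forwarded story."""
    key = (story_id, forwarder_id)
    tags[key].add(tagger_id)  # a set, so repeated tags by one friend don't stack
    if len(tags[key]) == TAG_THRESHOLD:
        # Fires exactly once, and reports only the count, never the names.
        notify(forwarder_id,
               f"{TAG_THRESHOLD} of your friends want to point out to you "
               "that this story is fake.")
```

The design point is that the forwarder only ever sees a count; the identities of the taggers never leave the system.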

Social networks love to have people tag content, though usually positively, and not anonymously. Everybody on Facebook is constantly clicking "Like." All the variations of the Like button, even "Sad," are positive feedback, and all are identified. Some networks allow both upvoting and downvoting. The presence of downvoting is controversial: it is often misused (people downvote things they disagree with, not things they think are poorly written or incorrect), but it is usually anonymous. Some have proposed tallying upvotes and downvotes independently.

I will emphasize again that what is proposed is primarily a method of communication among friends, not with random strangers. Upvoting and reporting systems on public postings are common, but also often gamed. Robots could not easily participate in this unless you friend them.

Encouraging Retraction

The use of shame could go further. Ideally, somebody who discovers they have made a mistake should not be shamed too much, because we want them to issue a retraction and pass it down the line. Nobody likes issuing retractions; even large professional news organizations that consider it their journalistic duty do them reluctantly.

A good system could encourage retraction, and in particular make retraction less embarrassing. For example, if more than one person forwarded the same item of fake news to a given person, that person could simply be informed, "Recently, several of your friends included this story in your feed. Some of them now report they have learned the story was seriously erroneous or even fake. They extend their apologies and wish to make sure you know the story is fake."

I want to duplicate the dynamic of somebody saying, "Sorry, my bad" and others responding "It's cool" so that errors are corrected but friendship is not damaged.

Of course, if you got a fake story from only one person, you will know who issued the apology, but even so it will feel less direct and embarrassing than a personal, public retraction.
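Continuing the hypothetical sketch from above (and reusing its notify stand-in), the retraction notices might be assembled like this, with the wording chosen by how many sources a recipient had:

```python
from collections import defaultdict

def send_retraction_notices(story_id, forwards, retractors):
    """forwards: (forwarder_id, recipient_id) pairs for one story.
    retractors: ids of forwarders who have since retracted it.
    A real notice would link story_id; the sketch omits that for brevity."""
    by_recipient = defaultdict(set)
    for forwarder, recipient in forwards:
        by_recipient[recipient].add(forwarder)
    for recipient, senders in by_recipient.items():
        if not (senders & retractors):
            continue  # none of this recipient's sources have retracted
        if len(senders) > 1:
            # Several friends sent it, so the notice can name nobody.
            notify(recipient,
                   "Recently, several of your friends included this story in "
                   "your feed. Some of them now report they have learned the "
                   "story was seriously erroneous or even fake. They extend "
                   "their apologies and wish to make sure you know.")
        else:
            # One source: attribution is implicit, but the apology still
            # arrives less directly than a personal, public retraction.
            notify(recipient,
                   "A friend who shared this story now reports it was "
                   "seriously erroneous or even fake, and apologizes.")
```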

The social site could also add language to make people feel better about retractions. Throw in lines like, "As you probably know, false stories show up all the time in social feeds, and people decide to retract them thousands of times a day. We hope you appreciate that your contact has had the courage and sense of right to correct this unfortunately common mistake that people make online."

The ultimate goal, of course, is that people, not wanting to retract or get called out, will think a little bit more before posting fake news.

This can also be combined with plans at some social networks to build tools that detect fake news after the fact. If a story has been confirmed to be fake news, through crowd wisdom or other sources, this can trigger encouragement for retraction by the original poster and forwarders, or encouragement to recipients to investigate and call out the source on their BS.
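As a sketch of that hand-off, again with entirely hypothetical names and reusing the notify stand-in from above, the confirmation event could simply walk the forwarding chain:

```python
def on_story_confirmed_fake(story_id, chain):
    """chain: user ids from the original poster through each forwarder,
    called once crowd wisdom or another source marks the story as fake."""
    poster, forwarders = chain[0], chain[1:]
    notify(poster,
           "This story you posted has been widely reported as fake. "
           "Would you like to retract it and let your friends know?")
    for forwarder in forwarders:
        notify(forwarder,
               "A story you forwarded has been widely reported as fake. "
               "You can retract your forward, or ask the person who sent "
               "it to you about it.")
```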

Unintended Consequences

Naturally, any system like this has a danger. Some will use it on stories that they simply disagree with rather than just stories that are outright deceptions. Experimentation would be needed to figure out the best ways to limit that, or even to allow counter-claims. Of course, since it's just opinion between friends, little is harmed if you claim that news you disagree with, or news with minor errors, is fake news (as Donald Trump often does), unless those reports can trigger other things, like flagging articles for review.

Comments

I am a huge fan of the "inoculating" game getbadnews.com, which, according to this research, effectively immunizes people against propaganda techniques by making them role-play the evil use of those techniques to manipulate others. If this research can be confirmed, let's make it part of the curriculum.

I must say I like it. Of course, the problem is that hoax news is often spread by people who do not think they are gullible or falling for tricks. Nobody likes to think that, so nobody will, on their own, set out to "train myself to not be gullible" until they have been burned by their own gullibility.

Thus you first need something like the use of shame I propose: to let people know that they got tricked, that there is some shame in that (but not too much), and that a good way to avoid it in the future might be to play a game like this.

Or, of course, to consider putting this sort of training into school curricula and job requirements, with companies saying, "we won't hire you unless you have been trained in the ways of propaganda." I have always felt that critical thinking, and understanding how people lie to you, are among the most important skills our education programs should boost. But that will take generations, unless employers join in.

Some will use it on stories that they simply disagree with rather than just stories that are outright deceptions.
This is the real problem. I frequent many blogs, but many comment sections are afflicted by up/down votes based purely on ideology rather than the merits of an argument.
This potentially makes reality even more susceptible to the whims of political fashion than it already is.

If your friends want to tell you that you are bullshitting when they merely disagree, it's not great, but it's not too terrible.

But the form could offer some choices:

Would you like to tell your friend that this article is:

  • A hoax, designed to trick us
  • Highly biased, but not a hoax
  • Something you disagree with strongly

We could also add positive choices, but most social networks already have tools for that, though they are not anonymous.
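As a sketch of how those choices could plug into the earlier example (FlagReason and handle_flag are hypothetical names, and tag_as_fake and notify come from the sketch above):

```python
from enum import Enum

class FlagReason(Enum):
    HOAX = "A hoax, designed to trick us"
    BIASED = "Highly biased, but not a hoax"
    DISAGREE = "Something you disagree with strongly"

def handle_flag(story_id, forwarder_id, tagger_id, reason):
    if reason is FlagReason.HOAX:
        # Only hoax flags count toward the shaming threshold.
        tag_as_fake(story_id, forwarder_id, tagger_id)
    else:
        # The softer categories are relayed as plain anonymous feedback.
        notify(forwarder_id, f"A friend marked this story as: {reason.value}")
```

This way a flag for mere disagreement never feeds the "your friends say this is fake" counter.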

It's an interesting idea. I can't really tell whether it would work or not, but I would also worry about people using it as a dislike button instead. Maybe offering the other options could address that, though.

But it's also not always that straightforward to tell what is true and what is not (well, sometimes it is, but there is a fairly large grey area in between). For example, someone might say "robocars are safe." If you compare statistics with human drivers it appears so, but at the same time there have been accidents where people have been killed, so it really depends on what you mean by "safe" here.

And it's not only because natural language is imprecise. For example, I might say that air pollution (per unit of energy) from wood stoves is much more dangerous than pollution from nuclear power plants. A lot of people would think that isn't true, although it's my understanding that it is. I would expect some to anonymously flag that as fake if they could, and they would probably genuinely believe it was, but they wouldn't have checked the facts behind it (and it's not that easy to verify either).

So I think there is a risk of people moderating to strengthen what the majority of their friends believe rather than what is actually true (i.e. strengthening the filter bubbles).

(And looking at what otherwise reasonable people say about, e.g., different diet trends, I think a lot of people are really bad at telling truth from BS (i.e. judged against the available scientific evidence), even people who are supposedly doing that professionally, such as journalists and academics.)

My goal is not to decide what is true and what is not (which is hard) but to help discourage the spreading of malicious hoaxes, which is to say lies. Whether something is a lie is more a question of fact than whether it is true. Of course it is not a bright line. Things that are wrong but were written in earnest are bad, but they are not the same class of problem.

I actually am proposing something that's a bit like a dislike button, but with clarity that it means "hoax" rather than "dislike." (The best approach may be to offer both as choices.) The key difference is that all the forwarder learns is that some people flagged their item, not who, so it doesn't sting as much.

And indeed, people would flag your wood stove claim, so there does need to be more than one way to flag a story, making people clear on the different things they might be expressing.
