Sunday, November 13, 2016

The Role of Facebook and Social Media in the Election of 2016 (edited)


This much is true:  "The post falsely claiming that the Pope endorsed Trump has more than 868,000 Facebook shares, while the story debunking it has 33,000."  And it may have had a significant effect on Trump's triumph, claims Cliff Kuang in this FastCoDesign post.  He's mistaken, however, when he asserts that this problem is a design flaw in Facebook specifically or social media in general.  Blaming Facebook for the impact of fake news on society is like blaming the post office or the phone company for the effect of gossip transmitted by mail or by telephone.  Unless, of course, we consider the scale of "sharing" afforded by social media...

Modern web and mobile experiences make it easier than ever to create and consume social content... but they also make it harder and harder to understand the relationships between sources of information, and virtually impossible to easily confirm the source and integrity of more and more content on the Internet.  While this is true in general, he's got it wrong in this case:  this is not really a Facebook problem or even a Facebook design problem.  He's not thinking clearly about who actually creates social media and why.  If we understand more about what the web is, what Facebook is, who owns what, and who pays for it all, it's pretty clear that everything functions rather well, at least with respect to its design intent.  It's just that neither the web nor social media is actually DESIGNED to deliver reliable and verifiable content.  This is a publishing problem which happens to include design, not a design problem per se and certainly not a technology problem. (1)

First of all, let's distinguish between phony content on a website and a web "hyperlink" connecting content between sites.  Within a website, publishers, editors, and writers are responsible for content.  Theoretically at least, they develop a reputation with readers based on experience, which can be linked to a recognized brand, a font, or a layout.  But beyond individualized reputation and visual branding, there is not even a way for the Times or Wired or the Huffington Post or the New York Post to communicate editorial policies in a consistent, verifiable way.  There is no such thing as an "Underwriters Laboratories" or "Good Housekeeping" seal of approval for journalism in print or in digital formats.  And there's certainly no way at all for a random site such as my own science blog to quickly and effortlessly communicate some kind of editorial standard vis-à-vis fake news or unsubstantiated claims.

Second, hyperlinks present another level of challenge, a distinct problem.  What can be done about links to bad data?  The "anchor" end of each link in this post, for example, is created by me, the author of this post.  The "target" of the link, however, is beyond my control.  It's my responsibility to create context for the link on the anchor end that establishes an expectation on the part of the reader, a kind of promise from me to you.  But what's on the other end of the link is the responsibility of the authors, editors, administrators, and ultimately owners of the other website.  They will ultimately meet the expectations I create and deliver on my promise.  Or not.  Again, beyond what you get with a brand reputation, there are no standards that add reliable, qualitative, editorial context to a link, any more than there are for conventional citations in print media.

And third, when we discuss either content or links in the context of Facebook and social media, we really need to differentiate between social media PLATFORMS (which contain no content at all) and the content hosted on those platforms, whether it's user-generated or paid-for advertisements.  At first glance, anyway, it might seem pretty easy to prohibit ads that present themselves as news.  Consider, for example, the story of the author who has admitted to creating enticing fake news in order to drive traffic from Facebook to sites outside Facebook where he could generate ad revenue.  His name is Paul Horner, and apparently he has generated about $10k a month for several years by placing fake news as Facebook ads!  But what about "legitimate" sources of news without serious editorial constraints and processes that include fact checking?  Would Facebook block "entrepreneurs" like Horner while allowing the Huffington Post or Breitbart or FOX News to advertise their content, much of it deceptive if not blatantly false?  Seems discriminatory to me...

And then there is the problem of user-generated content.  Even if the most celebrated examples of abuse were introduced by ads, there is no doubt that the damage of these news stories is done in the social network itself by sharing.  The purveyors of "fake" news operating on a truly global scale on social media are USERS of social media, not the social media platforms themselves.  They are guilty of posting user-generated links to the fake news sites, not of authoring the phony content themselves.  By design, social media platforms cannot easily regulate or even provide values-based editorial filters on user-generated content without interfering with their user-centric, consumer mission.

Perhaps the real issue is the magnitude of the effect rather than the nature of the "fake" news.  In other words, maybe some lies are simply too egregious to allow on such a powerful platform?  Academics, for example, are among those concerned by the power, scale, and impact of social media.  This New York Times article begins with the same false Pope Francis endorsement of Donald Trump.  "A fake story ... was shared almost a million times, likely visible to tens of millions [yet] its correction was barely heard.  Of course Facebook had significant influence in this last election’s outcome," said Zeynep Tufekci, an associate professor at the University of North Carolina who studies social media and its impact.

In this same article, it's apparent that even Facebook employees and executives seem vexed:
Some employees are worried about the spread of racist and so-called alt-right memes across the network, according to interviews with 10 current and former Facebook employees. Others are asking whether they contributed to a “filter bubble” among users who largely interact with people who share the same beliefs.
Even more are reassessing Facebook’s role as a media company and wondering how to stop the distribution of false information. Some employees have been galvanized to send suggestions to product managers on how to improve Facebook’s powerful news feed: the streams of status updates, articles, photos and videos that users typically spend the most time interacting with.
Zuckerberg and stockholders who are clear on the Facebook mission as an advertising-funded communication platform see it differently, however.
Chris Cox, a senior vice president of product and one of Mr. Zuckerberg’s top lieutenants, has long described Facebook as an unbiased and blank canvas to give people a voice....
In May, the company grappled with accusations that politically biased employees were censoring some conservative stories and websites in Facebook’s Trending Topics section, a part of the site that shows the most talked-about stories and issues on Facebook. Facebook later laid off the Trending Topics team. 
In September, Facebook came under fire for removing a Pulitzer Prize-winning photo of a naked 9-year-old girl, Phan Thi Kim Phuc, as she fled napalm bombs during the Vietnam War. The social network took down the photo for violating its nudity standards, even though the picture was an illustration of the horrors of war rather than child pornography. 
Both those incidents seemed to worsen a problem of fake news circulating on Facebook. The Trending Topics episode paralyzed Facebook’s willingness to make any serious changes to its products that might compromise the perception of its objectivity, employees said. The “napalm girl” incident reminded many insiders at Facebook of the company’s often tone-deaf approach to nuanced situations. 
In this article entitled "Donald Trump Won Because of Facebook", Max Read argues that "Facebook enabled a Trump victory" specifically because of "its inability (or refusal) to address the problem of hoax or fake news."  Max does an excellent job framing this:
To some extent I’m using “Facebook” here as a stand-in for the half-dozen large and influential message boards and social-media platforms where Americans now congregate to discuss politics, but Facebook’s size, reach, wealth, and power make it effectively the only one that matters.  ... [Besides scale, the] most obvious way in which Facebook enabled a Trump victory has been its inability (or refusal) to address the problem of hoax or fake news. Fake news is not a problem unique to Facebook, but Facebook’s enormous audience, and the mechanisms of distribution on which the site relies — i.e., the emotionally charged activity of sharing, and the show-me-more-like-this feedback loop of the news feed algorithm — makes it the only site to support a genuinely lucrative market in which shady publishers arbitrage traffic by enticing people off of Facebook and onto ad-festooned websites, using stories that are alternately made up, incorrect, exaggerated beyond all relationship to truth, or all three.
It's ironic that a substantial fraction of the fake news that is widely shared on social media is there by design.  And in addition to deceiving a lot of voters (or reinforcing false beliefs), the creators of this fake news are siphoning off potentially significant amounts of advertising revenue from Facebook for themselves.  Why does Facebook allow this at all?

It seems to me that this question gets to the heart of the matter.  Facebook wants to make it both OPEN and EASY to post links to 3rd-party content because that is actually EASIER for their users than creating content of their own.  Meanwhile, 3rd-party content providers from the Huffington Post to Breitbart make it really, really easy to create compelling Facebook links.  And user-followers find it really compelling to click on them and re-share them; the shorter and more superficial they are, the better.  These fake stories are designed to attract attention, "likes," and re-sharing among friends.  And the more attention they garner, the more they will be propagated by Facebook's feed algorithm because, in fact, they encourage users to pass hours on the site.  Despite the ad revenue that it loses to 3rd-party sites, Facebook apparently generates even more for itself.

This feedback loop operates by design:  it's great for Facebook AND the completely independent ad networks that attract our attention with fake news.  These are precisely the design goals behind the system that make it easy to share a lot of compelling but ultimately low-quality content.  And that's the real reason a LOT of the 3rd party content that will be most compelling to read and share will turn out to be misleading exaggerations at best, and completely erroneous or bogus at worst.
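The feedback loop described above can be sketched as a toy simulation.  To be clear, this is a hypothetical illustration, not Facebook's actual (proprietary) news feed algorithm: stories that already have shares get ranked higher, higher rank means more exposure, and more exposure of an emotionally "shareable" story produces still more shares.

```python
# Toy model of an engagement-driven feed.  Each round, the feed ranks
# stories by accumulated shares; top-ranked stories get more exposure,
# and exposure converts to new shares in proportion to how "shareable"
# (emotionally charged) the story is.  Purely illustrative numbers.

def run_feed(stories, rounds=5):
    """stories: dict mapping name -> (share_probability, initial_shares)."""
    shares = {name: s for name, (_, s) in stories.items()}
    for _ in range(rounds):
        # Rank by current share count: most-shared story is shown most.
        ranked = sorted(shares, key=shares.get, reverse=True)
        for position, name in enumerate(ranked):
            exposure = 1000 // (position + 1)   # top slots get more views
            p = stories[name][0]                # how "shareable" the story is
            shares[name] += int(exposure * p)   # views convert to shares
    return shares

feeds = run_feed({
    "sensational-hoax": (0.30, 10),   # emotionally charged, highly shareable
    "sober-debunking":  (0.05, 10),   # accurate but dull
})
```

Starting from the same ten shares, the hoax ends up an order of magnitude ahead of the debunking, because rank and shareability compound round after round, which is roughly the 868,000-versus-33,000 pattern from the Pope story.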

So what's the answer?

We know that Facebook is an important source of news for a lot of people.  But although Facebook hosts the content and links, in fact Facebook is NOT the publisher of this phony news.  Instead, it serves as a platform for user-generated and ad-generated social media, which includes user- and ad-generated links to bogus 3rd-party content.  These links take readers beyond Facebook to other websites where neither Facebook nor the users who have created the links have a voice in editorial policies or control over the content.

I'd like to see Facebook make a bona fide attempt to block the most blatant ads pretending to be news, which are, in fact, pure ad-network bait siphoning off traffic from Facebook.  And like anti-spam measures in email, perhaps Facebook could warn users who share links to "suspicious" sites but still allow them to do so, along with a clear statement explaining its policy.
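The "warn but allow" idea is mechanically simple, something like a spam filter's domain check.  Here is a minimal sketch; the blocklist contents and the function are my own illustration (the two example domains are real fake-news sites from 2016, including the abcnews.com.co lookalike), not any actual Facebook API:

```python
# Sketch of a "warn but allow" link check: compare a shared link's
# domain against a (hypothetical) blocklist of known fake-news sites,
# and attach a warning instead of blocking the share outright.

from urllib.parse import urlparse

# Illustrative blocklist; in practice this would be a curated, updated feed.
SUSPICIOUS_DOMAINS = {"abcnews.com.co", "denverguardian.com"}

def share_warning(url):
    """Return a warning string if the link's domain is suspicious, else None."""
    host = urlparse(url).netloc.lower()
    flagged = host in SUSPICIOUS_DOMAINS or any(
        host.endswith("." + d) for d in SUSPICIOUS_DOMAINS
    )
    if flagged:
        return ("This link points to a site flagged for fabricated news. "
                "Share anyway?")
    return None
```

The design choice mirrors email anti-spam: the user keeps the final say, so the platform stays a neutral carrier while still giving readers the editorial context the open web lacks.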

Ultimately, however, I'm afraid there is no substitute for education, diligence, and skepticism on the part of social media users.

(1) Print and digital content are protected by the First Amendment to the Constitution as a form of speech.  Publishers, editors, and authors are responsible for the veracity of their words, not the manufacturing and distribution technologies that connect them to readers.

5 comments:

Gavino said...

In a funny way, this FB meme is true. And to deny it is like saying, guns don't kill people...

Rumor, innuendo and falsehood have been effective motivators for many centuries before FB. Just like people killed people back in the day, before guns.

Thing is, you can kill a LOT MORE people with guns, a lot more efficiently.

Not to mention nukes... (Oh - did I just mention them...?)

A shout is more effective than a whisper. A megaphone more effective than a shout. A microphone more effective than a megaphone. YouTube more effective than a microphone...

I know a handful of people who didn't vote for ANYONE this time around. As far as I know, there were no things on FB that spread that... Imagine if there were...

FB played a role in AMPLIFYING rumor, innuendo, falsehood, simply because it's so good at amplification.

The good news is that leaders, so to speak, make their decisions irrespective of that stuff.

That's also the bad news...

GV

Stephen Quatrano said...

What the heck can Facebook do about fake news that's published on FOX, for example? Or CNN? If the audience is large enough and the frame is sufficiently "establishment", does that render what they publish news?

Stephen Quatrano said...

Turns out that even the NYTimes has this problem with their on-line ads.

The Times, like any conventional publisher of what we've come to know as 'news' has an editorial process. Obviously, they sometimes make mistakes. But when they do, they also publish RETRACTIONS. Reader comments are also curated.

But what about ads? What does the NYTimes do to prevent purveyors of fake news from hijacking their print platform? Even the ads are curated. Paid advertisements that look like news stories must carry the warning to the reader, "PAID ADVERTISEMENT" so they can tell the difference.

Not surprisingly, however, in the on-line version of the New York Times, they are using some of the same advertising platforms as every other website... and therefore cannot curate every ad the same way they would have in print.

This is pretty interesting. We need to think about this some more...

Stephen Quatrano said...

Here's a pretty good list of things to consider when you encounter 'fake news' from the Huffington Post.

Although it IS a good list, it's not likely to help most people who fall for it because they're not thinking critically in the first place. And even if you KNOW about this list, even if you are proud of your critical thinking skills, even if you regularly surf Snopes, even then you are SOMETIMES susceptible to 'fake news.'

Stephen Quatrano said...

Here is some actual investigative reporting from the NY Times on the 'fake news' story. See what you can do when you have actual investigative journalists, editors, and the will to get to the bottom of a story? Amazing. This article shows how it is really working. I need to write about this too...