This piece is part of Gizmodo’s ongoing effort to make the Facebook Papers available to the public. See the full directory of documents here.
After Donald Trump’s 2016 victory, Facebook faced a tsunami of allegations blaming its core product, News Feed, for filling the brains of its users with false information. Scrutiny sharply intensified among lawmakers long convinced of the company’s duty to stamp out state-affiliated trolls and other malicious groups set on eroding electoral trust. Mark Zuckerberg, whose platform received $81 million for ads on behalf of the two White House contenders, defensively belittled his own creation: Within days of the results, he told a tech conference crowd it would be “pretty crazy” to assume Facebook—the world’s largest communications platform—was capable of influencing voters “in any way.” Ten months later, Zuckerberg would claim to “regret” those words.
Soon after the off-the-cuff dismissal, however, sources inside Facebook told Gizmodo that political meddling had been a major concern at the company for the better part of a year. High-level discussions about its approach to false news, disinformation, and other activities aimed at manipulating voters had been routinely held, the current and former employees said, speaking anonymously out of fear of retaliation. One source with direct knowledge of the discussions recalled a potential update that employees believed would reduce the flow of “fake or hoax news stories.” A slew of right-wing pages had been flagged by Facebook’s algorithmic moderation system for habitually spreading falsehoods, they said. Afraid of upsetting conservatives, however, Facebook had shelved the update, and many decisions around the election were “caught up in that,” the source said.
Facebook officials refused to confirm or deny the existence of the update, or to even acknowledge that a disproportionate amount of the misinformation arose from one side of the political spectrum. Instead, a spokesperson said, “We did not build and withhold any News Feed changes based on their potential impact on any one political party.”
Facebook asserted at the time that accusations of political bias had no influence whatsoever over its decision-making. But one need look no further than the leaked statements of its own employees for evidence to the contrary.
Today, Gizmodo is publishing our third batch of the Facebook Papers—documents that shine a light on, among other things, the company’s reluctance to take action against confirmed sources of misinformation. Employees whose work appears in the papers repeatedly attribute decisions like the shelved News Feed update to the fear that the company would be portrayed as favoring certain publications, some of which its own users had rated more informative. In particular, two of the papers describe accusations of liberal bias from Republican leaders as weighing heavily over proposed changes designed to minimize the lies being promoted into people’s feeds.
An internal post dated August 2019 briefly describes Facebook’s decision to kill a News Feed update purportedly designed to prioritize “high quality” news. In this case, Facebook had gathered the data used to gauge the trustworthiness of news sources by polling users. The company decided not to reduce the flow of “low quality” news in order to stave off charges from “some quarters” of “perceived anti-conservative bias,” according to the post.
Asked about the discrepancy between the company’s prior claims and the once-confidential testimony of its own employees, a Facebook spokesperson declined to comment.
In the same 2019 document, Facebook employees estimated that the company had only taken action against “approximately 2% of the hate speech on the platform,” while concluding that misinformation often isn’t caught, if it’s caught at all, “until after it has gotten a lot of distribution.” The most “impactful abusive accounts,” the document says, persistently evade moderation. While employees generally have “considerable leeway” when it comes to making decisions that affect “a wide range of content,” the author writes, “policy concerns become significantly higher” when politics enter the frame.
The documents take on new relevance in the political climate of 2022. Attempts by social media companies to minimize the spread of election-related hoaxes and false news have spurred Republican leaders in several states to pursue new laws around content moderation. Laws authored in Texas and Florida have been framed as attempts by legislatures to protect users from being punished for holding unpopular political opinions. Last month, researchers at MIT and Yale debuted a paper finding that Republicans on Twitter had, in fact, been “much more likely” to face suspension. At the same time, their analysis showed Republican users had posted misinformation ahead of the election at a rate “substantially” higher than their Democratic counterparts. “Thus,” the researchers wrote, “policies aimed at fighting misinformation in a nonpartisan way could have easily” explained the discrepancy.
Another Facebook document, dated August 2020, speaks to the influence of its public relations department over a policy already in effect, aimed at limiting political content more broadly across News Feed. The author reveals that “internal employees” had started “expressing discomfort” over the way in which this policy was imposed—specifically, that Facebook was continuing to inject content into people’s feeds stemming from pages pushing “highly partisan content.”
A fix proposed by Facebook’s news team appears to have been shot down by both its PR department and the News Feed’s own policy team. The concerns about fixing the problem stemmed, the document’s author wrote, from anticipated accusations of bias brought by “certain political entities.”
Months prior to the 2020 election, amid a concerted Republican campaign accusing the company of censorship, Facebook began throttling the level of political content surfacing across the platform. Not all of these changes were announced. A press release from Sept. 2020 described such measures as banning political ads the week before Election Day and adding labels to posts trying to “delegitimize” the results. But no reference was made to restricting political content across the platform more generally. That wasn’t disclosed until a week before the election, seemingly a consequence of Mark Zuckerberg being grilled under oath.
These quieter efforts at restricting political chatter appear to be tied to concerns about users quitting the platform over constant partisan bickering. A confidential report—published by Gizmodo this year—had warned in January 2020 that many users had begun associating Facebook with “feelings of exhaustion, discouragement, stress, and anger.” Many blamed the platform for “strained relationships” and for costing them lifelong friends. The company quoted one user as saying Facebook had “severely harmed” many of their friendships; a consequence, they said, of “how vicious, hateful, and biased” people’s posts had become.
The Facebook Papers comprise tens of thousands of leaked pages describing, often in granular detail, how Facebook’s moderation systems really operate. Some of these documents, in fact, are so specific that security experts have warned against making them public, afraid bad actors will learn secrets to evading detection while breaking rules in place for good reason. The records were first provided to Congress last year by Frances Haugen, a Facebook product manager-turned-whistleblower, and later obtained by hundreds of journalists, including those at Gizmodo. Haugen offered blistering testimony to Congress about Facebook’s harms in October 2021. In our first drop, we shared 28 files related to the 2020 election and the Jan. 6 attack on the U.S. Capitol. In our second, 37 files. Gizmodo has partnered with a group of independent experts to review, redact, and publish the documents responsibly.
Not one of the documents, however, points to any calculated conspiracy aimed at censoring conservative voices. Far from forcing Facebook to shun partisanship, the company’s biggest accusers on Capitol Hill seem only to have achieved the opposite.
May 10, 2022: Ranking and News Feed documents
How Much of News Feed is Good (or Bad) for the World?
Results from the first-ever Good for the World survey, which asks a wide swath of Facebook users what content on Facebook they think benefits or harms the world. Users were largely aligned on what was “bad/very bad.” What those users feel is “good/very good,” on the other hand, varies wildly and, according to Facebook’s researchers, “frequently doesn’t align with Facebook’s integrity/quality standards.” Some examples of the dreck US respondents thought was good for the world:
- QAnon and the “Save Our Children” movement
- A post blaming Kenosha shooting victim Jacob Blake for being shot by the cops
- A “borderline” nude picture featuring a topless woman with body paint of an American eagle and the word “Trump” covering her breasts. She’s also wearing a fancy cowboy hat.
- A clip from a violent boxing match.
- Super lazy engagement bait, e.g. a stock picture of a bridge captioned “Would you walk across for $15 million?”
- A video compilation of crocodiles devouring other animals.
Political Party Response to ‘18 Algorithm Change
A post describing how politicians in the EU feel pigeonholed into posting more inflammatory, clickbaity content in response to the company’s ongoing reliance on meaningful social interactions (MSI), the engagement metric News Feed’s ranking has optimized for since 2018. “From their perspective, they’re stuck in an inescapable cycle of negative campaigning by the incentive structures of the platform.”
Demoting On Integrity Signals Is Not Enough
A note explaining why the “integrity signals” the company relies on are insufficient to fully address the political harms in people’s feeds. Apparently “hate speech” is much harder to quantify in people’s feeds than nudity/porn (N&P) or graphic violence. “According to the best estimates I have seen, we only take action against approximately 2% of the hate speech on the platform,” one document reads. Misinformation “is often not caught until after it’s gotten a lot of distribution,” if it’s caught at all.
The Surprising Consequences to Sessions and MSI Caused by Turning Off Video Autoplay on News Feed
In an effort to boost original broadcast posts (OBPs) on Facebook, one researcher tried turning off autoplay videos in News Feed. The researcher notes that the numbers suggest a “significant” amount of video watch time on users’ feeds is due to autoplay, and that while turning off autoplay on News Feed globally might be good for user wellbeing in the abstract, it would be “difficult” to implement at scale because of the immediate blows to metrics like MSI and the loss of longer sessions.
You Should Query Non-Feed VPVs as a Real-Time Signal (You Probably Aren’t!)
A doc describing some ways to rank user eyeballs (viewport views, or VPVs) on in-feed content vs. off-feed content. Facebook counts an impression when a piece of content is fully in a person’s line of sight in the News Feed. The document notes that 78% of content internally flagged as “misinfo” gets its eyeballs from the News Feed.
Opt-In Dogfooding for Power User Experience [Focused on Hard News]
An invitation to a beta “News Feed recommendations experience” designed to push hard political news and trusted publishers to the top.
Showing More Informative Links in News Feed
An internal announcement about an upcoming change meant to “reduce low quality links in News Feed.”
Goal report: Apr 30, 2019
A document listing some of the goals and progress from members of the News Feed team. Some standouts:
- Notifications are driving sessions among US/Canada users
- Reshares are starting to create more user sessions
- Feed ranking changes “continue to cannibalize Stories”
- Global sessions “continue to trend behind forecasts,” particularly among users in the US and Canada
Distribution and Personalization
Announcing some launches and various experiments on News Feed from the integrity team. Launches include:
- A new system to demote content with a high predicted rate of anger reactions and a high likelihood that users perceive it as “bad for the world”
- A new clickbait detection system launched in 6 languages
- The removal of 4 different signals used to detect “low quality pages” being pushed into people’s recommendations, since they were “no longer capturing problematic content.” This makes 400,000 pages newly eligible for recommendation.
- Climate science experts are now upranked in search in response to users’ climate science inquiries
Integrity Audit—Context, Update and Next Steps
Feed Enforcement Team Mission and Principles
Two 2019 documents detailing ongoing work on an integrity audit for News Feed.
One-Go Summary Post for Recent Goaling and Goal Metric Changes for News Feed
A post from early 2021 announcing “goal changes” for metrics in the News Feed.
News Feed UXR Quarterly Insights Roundup
A collection of various internal studies and reports about News Feed that were conducted during Q1 2021.
Is News Feed Becoming More or Less Valuable to People?
A 2018 study that tries to suss out whether News Feed has become more or less valuable to users over time. The writer notes that the company “is down on Feed satisfaction,” and those surveyed reported a “significant decline” in the number of posts from people important to them.
What Happens If We Delete Ranked Feed?
An experiment in which researchers switched some users’ feeds from ranked to chronological. As it turns out: a chronological feed resulted in users seeing more content, as measured by VPVs, and ad revenue jumped in turn. The writer hypothesizes this is because people are frustratedly scrolling through their feeds and seeing more ads as a result. Another theory: less compelling content on your News Feed means fewer reasons to click away from it to profiles and pages.
Friends Rank Feed Experiment
OP describes an experiment called “The News Feed Game,” in which 10 Facebook users (and their friends) are split into two rooms and tasked with ordering printouts of the top ten posts in both their own and their friends’ News Feeds. Turns out, they’re pretty darn good at it!
Feed Composition and Integrity
An experiment that sought to discourage people from sharing hateful and violating content by changing the composition of their News Feeds.