
Exclusive: Facebook Opens Up About False News

News Feed, the algorithm that powers the core of Facebook, resembles a giant irrigation system for the world’s information. Working properly, it nourishes all the crops that different people like to eat. Sometimes, though, it gets diverted entirely to sugar plantations while the wheat fields and almond trees die. Or it gets polluted because Russian trolls and Macedonian teens toss in LSD tablets and dead raccoons.

For years, the workings of News Feed were rather opaque. The company as a whole was shrouded in secrecy. Little about the algorithms got explained, and employees were fired for speaking out of turn to the press. Now Facebook is everywhere. Mark Zuckerberg has been testifying to the European Parliament via livestream, taking hard questions from reporters, and giving tech support to the Senate. Senior executives are tweeting. The company is running ads during the NBA playoffs.

In that spirit, Facebook is today making three important announcements on false news, which WIRED got an early and exclusive look at. In addition, WIRED was able to sit down for a wide-ranging conversation with eight generally press-shy product managers and engineers who work on News Feed to ask detailed questions about the workings of the canals, dams, and rivers that they manage.

The first new announcement: Facebook will soon issue a request for proposals from academics eager to study false news on the platform. Researchers who are accepted will get data and money; the public will get, ideally, elusive answers to how much false news actually exists and how much it matters. The second announcement is the launch of a public education campaign that will utilize the top of Facebook’s homepage, perhaps the most valuable real estate on the internet. Users will be taught what false news is and how they can stop its spread. Facebook knows it is at war, and it wants to teach the populace how to join its side of the fight. The third announcement—and the one the company seems most excited about—is the release of a nearly 12-minute video called “Facing Facts,” a title that suggests both the topic and the repentant tone.

The film, which is embedded at the bottom of this post, stars the product and engineering managers who are combating false news, and was directed by Morgan Neville, who won an Academy Award for 20 Feet from Stardom. That documentary was about backup singers, and this one essentially is too. It’s a rare look at the people who run News Feed: the nerds you’ve never heard of who run perhaps the most powerful algorithm in the world. In Stardom, Neville told the story through close-up interviews and B-roll of his protagonists shaking their hips on stage. This one is told through close-up interviews and B-roll of his protagonists staring pensively at their screens.

In many ways, News Feed is Facebook: It’s an algorithm composed of thousands of factors that determines whether you see baby pictures, white papers, shitposts, or Russian agitprop. Facebook typically guards information about it the way the Army guards Fort Knox. This makes any information about it valuable, which makes the film itself valuable. And right from the start, Neville signals that he’s not going to merely scoop out a bowl of peppermint propaganda. The opening music is slightly ominous, leading into the voice of John Dickerson, of CBS News, intoning about the bogus stories that flourished on the platform during the 2016 election. Critical news headlines blare, and Facebook employees, one carrying a skateboard and one a New Yorker tote, move methodically up the stairs into headquarters.

‘Is there a silver bullet? There isn’t.’

EDUARDO ARIÑO DE LA RUBIA

The message is clear: Facebook knows it screwed up, and it wants us all to know it knows it screwed up. The company is confessing and asking for redemption. “It was a really difficult and painful thing,” intones Adam Mosseri, who ran News Feed until recently, when he moved over to run product at Instagram. “But I think the scrutiny was fundamentally a helpful thing.”

After the apology, the film moves into exposition. The product and engineering teams explain the importance of fighting false news and some of the complexities of that task. Viewers are taken on a tour of Facebook’s offices, where everyone seems to work hard and where there’s a giant mural of Alan Turing made of dominoes. At least nine times during the film, different employees scratch their chins.

Oddly, the most clarifying and energizing moments in “Facing Facts” involve whiteboards. There’s a spot three and a half minutes in when Eduardo Ariño de la Rubia, a data science manager for News Feed, draws a grid with X and Y axes. He’s charismatic and friendly, and he explains that posts on Facebook can be broken into four categories, based on the intent of the author and the truthfulness of the content: innocent and false; innocent and true; devious and false; devious and true. It’s the latter category—including examples of cherry-picked statistics—that might be the most vexing.

A few minutes later, Dan Zigmond—author of the book Buddha’s Diet, incidentally—explains the triptych through which troublesome posts are countered: remove, reduce, inform. Terrible things that violate Facebook’s Terms of Service are removed. Clickbait is reduced. If a story appears fishy to fact-checkers, readers are informed. Perhaps they will be shown related stories, or more information on the publisher. It’s like a parent who doesn’t take the cigarettes away but who drops down a booklet on lung cancer and then stops taking them to the drug store. Zigmond’s whiteboard philosophy is also at the core of a Hard Questions blog post Facebook published today.
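To make Zigmond’s triptych concrete, here is a minimal sketch of how that kind of tiering could be expressed in code. The field names, the clickbait threshold, and the triage function are invented for illustration; Facebook has not described its actual signals or enforcement logic.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"   # violates Facebook's Terms of Service: take the post down
    REDUCE = "reduce"   # clickbait or spam: rank it lower in News Feed
    INFORM = "inform"   # disputed by fact-checkers: show context to readers

def triage(post: dict):
    """Map a post's signals to the remove/reduce/inform triptych.

    `post` is a hypothetical dict of flags; the real signals and
    thresholds are internal to Facebook and not described here.
    """
    if post.get("violates_terms_of_service"):
        return Action.REMOVE
    if post.get("clickbait_score", 0.0) > 0.8:
        return Action.REDUCE
    if post.get("disputed_by_fact_checkers"):
        return Action.INFORM
    return None  # leave the post alone

print(triage({"clickbait_score": 0.93}))  # Action.REDUCE
```

The point of the tiering is that only the worst material is deleted outright; the rest stays on the platform but is either demoted or wrapped in context.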

The central message of the film is that Facebook really does care profoundly about false news. The company was slow to realize the pollution building up in News Feed, but now it is committed to cleaning it up. Not only does Facebook care, it’s got young, dedicated people who are on it. They’re smart, too. John Hegeman, who now runs News Feed, helped build the Vickrey-Clark-Groves auction system for Facebook advertising, which has turned it into one of the most profitable businesses of all time.

The question for Facebook, though, is no longer whether it cares. The question is whether the problem can be solved. News Feed has been tuned, for years, to maximize our attention and in many ways our outrage. The same features that incentivized publishers to create clickbait are the ones that let false news fly. News Feed has been nourishing the sugar plantations for a decade. Can it really help grow kale, or even apples?

To try to get at this question, on Monday, I visited with the nine stars of the film, who sat around a rectangular table in a Facebook conference room and explained the complexities of their work. (A transcript of the conversation can be read here.) The company has made all sorts of announcements since December 2016 about its fight against false news. It has partnered with fact-checkers, limited the ability of false news sites to make money off their schlock, and created machine-learning systems for combatting clickbait. And so I began the interview by asking what had mattered most.

The answer, it seems, is both simple and complex. The simple part is that Facebook has found that just strictly applying its rules—”blocking and tackling,” Hegeman calls it—has knocked many purveyors of false news off the platform. The people who spread malarkey also often set up fake accounts or break basic community standards. It’s like a city police force that cracks down on the drug trade by arresting people for loitering.

In the long run, though, Facebook knows that complex machine-learning systems are the best tool. To truly stop false news, you need to find false news, and you need machines to do that because there aren’t enough humans around. And so Facebook has begun integrating systems—used by Instagram in its efforts to battle meanness—based on human-curated datasets and a machine-learning product called DeepText.

Here’s how it works. Humans, perhaps hundreds of them, go through tens or hundreds of thousands of posts identifying and classifying clickbait—”Facebook left me in a room with nine engineers and you’ll never believe what happened next.” This headline is clickbait; this one is not. Eventually, Facebook unleashes its machine-learning algorithms on the data the humans have sorted. The algorithms learn the word patterns that humans consider clickbait, and they learn to analyze the social connections of the accounts that post it. Eventually, with enough data, enough training, and enough tweaking, the machine-learning system should become as accurate as the people who trained it—and a heck of a lot faster.
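As a rough sketch of that label-then-train loop, here is what it might look like with an off-the-shelf text classifier. The scikit-learn pipeline below stands in for Facebook’s internal tooling (the article mentions DeepText but does not describe its interface), and the headlines and labels are invented for illustration.

```python
# Toy version of the human-label-then-train loop described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: humans label examples (1 = clickbait, 0 = not clickbait).
headlines = [
    "Facebook left me in a room with nine engineers and you'll never believe what happened next",
    "You won't believe what this celebrity did on vacation",
    "Federal Reserve raises interest rates by a quarter point",
    "City council approves budget for new transit line",
]
labels = [1, 1, 0, 0]

# Step 2: the model learns which word patterns correlate with the labels.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Step 3: once trained on enough labeled data, the classifier scores new
# headlines far faster than the human raters who produced the labels.
print(model.predict_proba(["Ten secrets doctors don't want you to know"])[:, 1])
```

A real system would also fold in signals about the accounts doing the posting, as the engineers note, but the basic shape is the same: human judgment in, automated judgment out.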

In addition to identifying clickbait, the company has used the system to try to identify false news. This problem is harder: For one thing, it’s not as simple as analyzing a single, discrete chunk of text, like a headline. For another, as Tessa Lyons, a product manager helping to oversee the project, explained in our interview, truth is harder to define than clickbait. So Facebook has created a database of all the stories flagged by the fact-checking organizations it has partnered with since late 2016. It then combines this data with other signals, including reader comments, to try to train the model. The system also looks for duplication, because, as Lyons says, “the only thing cheaper than creating fake news is copying fake news.” Facebook does not, I was told in the interview, actually read the content of the article and try to verify it. That is surely a project for another day.
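The duplication signal Lyons mentions can be illustrated with a generic near-duplicate check; nothing here reflects Facebook’s actual matching method. This sketch compares word shingles of a new headline against a hypothetical store of stories already debunked by fact-checkers.

```python
# Rough sketch of the "copying fake news" signal: flag new text that is
# a near-duplicate of a story fact-checkers have already debunked.
# Shingling + Jaccard similarity is a generic technique, used here only
# for illustration.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles in a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets, from 0.0 to 1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical store of fact-checker-flagged stories.
debunked = ["President Trump orders the execution of five turkeys pardoned by Obama"]
debunked_shingles = [shingles(s) for s in debunked]

def looks_like_known_hoax(candidate: str, threshold: float = 0.6) -> bool:
    cand = shingles(candidate)
    return any(jaccard(cand, d) >= threshold for d in debunked_shingles)

print(looks_like_known_hoax(
    "Trump orders the execution of five turkeys pardoned by Obama"))  # True
```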

Interestingly, the Facebook employees explained, all clickbait and false news is treated the same, no matter the domain. Consider these three stories that have spread on the platform in the past year.

“Morgue employee cremated by mistake while taking a nap.”

“President Trump orders the execution of five turkeys pardoned by Obama.”

“Trump sends in the feds—Sanctuary City Leaders Arrested.”

The first is harmless; the second involves politics, but it’s mostly harmless. (In fact it’s rather funny.) The third could scare real people and bring protesters into the streets. Facebook could, theoretically, deal with each of these kinds of false news differently. But according to the News Feed employees I spoke with, it does not. All headlines pass through the same system and are evaluated the same way. In fact, all three of these examples seem to have gotten through and started to spread.

Why doesn’t Facebook give political news strict scrutiny? In part, Lyons said, because stopping the trivial stories helps the company stop the important ones. Mosseri added that weighting different categories of misinformation differently might be something that the company considers later. “But with this type of integrity work I think it’s important to get the basics done well, make real strong progress there, and then you can become more sophisticated,” he said.

Behind all this, though, is the larger question. Is it better to keep adding new systems on top of the core algorithm that powers News Feed? Or might it be better to radically change News Feed itself?

I pushed Mosseri on this question. News Feed is based on hundreds, or perhaps thousands, of factors, and as anyone who has run a public page knows, the algorithm rewards outrage. A story titled “Donald Trump is a trainwreck on artificial intelligence” will spread on Facebook. A story titled “Donald Trump’s administration begins to study artificial intelligence” will go nowhere. Both stories could be true, and the first headline isn’t clickbait. But it pulls on our emotions. For years, News Feed—like the tabloids—has heavily rewarded this kind of story, in part because the ranking leaned on simple factors that correlate with outrage and immediate emotional reactions.

Now, according to Mosseri, the algorithm is starting to take into account more serious factors that correlate with a story’s quality, not just its emotional tug. In our interview, he pointed out that the algorithm now gives less value to “lighter weight interactions like clicks and likes.” In turn, it is putting more priority on “heavier weight things like how long do we think you’re going to watch a video for? Or how long do we think you’re going to read an article for? Or how informative do you think you’d say this article is if we asked you?” News Feed, in a new world, might give more value to a well-read, informative piece about Trump and artificial intelligence, instead of just a screed.
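In spirit, the shift Mosseri describes amounts to re-weighting a scoring function: lighter interactions get smaller weights, heavier signals get larger ones. The toy score below uses invented signal names and weights; the real News Feed model combines hundreds or thousands of factors.

```python
# Toy ranking score in the spirit of Mosseri's description: lighter
# interactions (predicted clicks, likes) are down-weighted relative to
# heavier signals (predicted read time, survey-based informativeness).
# All names and weights are invented for illustration.
WEIGHTS = {
    "predicted_click": 0.5,
    "predicted_like": 1.0,
    "predicted_read_seconds": 0.05,   # per second of predicted reading
    "predicted_informative": 4.0,     # "would you say this is informative?"
}

def rank_score(signals: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in signals.items())

outrage_screed = {"predicted_click": 0.9, "predicted_like": 0.6,
                  "predicted_read_seconds": 15, "predicted_informative": 0.2}
sober_analysis = {"predicted_click": 0.4, "predicted_like": 0.2,
                  "predicted_read_seconds": 120, "predicted_informative": 0.8}

print(rank_score(outrage_screed))   # 2.6
print(rank_score(sober_analysis))   # 9.6
```

Under weights like these, the well-read, informative piece outscores the screed even though the screed gets more clicks.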

‘Two billion people around the world are counting on us to fix this.’

DAN ZIGMOND

Perhaps the most existential question for Facebook is whether the nature of its business inexorably helps the spread of false news. Facebook makes money by selling targeted ads, which means it needs to know how to target people. It gathers as much data as it can about each of its users. This data can, in turn, be used by advertisers to find and target potential fans who will be receptive to their message. That’s useful if an advertiser like Pampers wants to sell diapers only to the parents of newborns. It’s not great if the advertiser is a fake-news purveyor who wants to find gullible people who can spread his message. In a podcast with Bloomberg, Cyrus Massoumi, who created a site called Mr. Conservative, which spread all kinds of false news during the 2016 election, explained his modus operandi. “There’s a user interface facebook.com/ads/manager and you create ads and then you create an image and advert, so let’s say, for example, an image of Obama. And it will say ‘Like if you think Obama is the worst president ever.’ Or, for Trump, ‘Like if you think Trump should be impeached.’ And then you pay a price for those fans, and then you retain them.”

In response to a question about this, Ariño de la Rubia noted that the company does go after any page it suspects of publishing false news. Massoumi, for example, now says he can’t make any money from the platform. “Is there a silver bullet?” Ariño de la Rubia asked. “There isn’t. It’s adversarial, and misinformation can come from any place that humans touch and humans can touch lots of places.”

Pushed on the related question of the possibility of shutting down political Groups into which users have put themselves, Mosseri noted that it would indeed stop some of the spread of false news. But, he said, “you’re also going to reduce a whole bunch of healthy civic discourse. And now you’re really destroying more value than problems that you’re avoiding.”

Should Facebook be cheered for its efforts? Of course. Transparency is good, and the scrutiny from journalists and academics (or at least most academics) will be good. But to some close analysts of the company, it’s important to note that this is all coming a little late. “We don’t applaud Jack Daniels for putting warning labels about drinking while pregnant. And we don’t cheer GM for putting seat belts and airbags in their cars,” says Ben Scott, a senior adviser to the Open Technology Institute at the New America Foundation. “We’re glad they do, but it goes with the territory of running those kinds of businesses.”

Ultimately, the most important question for Facebook is how well all these changes work. Do the rivers and streams get clean enough that they feel safe to swim in? Facebook knows that it has removed a lot of claptrap from the platform. But what will happen in the American elections this fall? What will happen in the Mexican elections this summer?

Most importantly, what will happen as the problem gets more complex? False news is only going to get more complicated, as it moves from text to images to video to virtual reality to, one day, maybe, computer-brain interfaces. Facebook knows this, which is why the company is working so hard on the problem and talking so much. “Two billion people around the world are counting on us to fix this,” Zigmond said.

Source: WIRED