Facebook Quietly Makes a Big Admission
Back in February, Facebook announced a little experiment. It would reduce the amount of political content shown to a subset of users in a few countries, including the US, and then ask them about the experience. "Our goal is to preserve the ability for people to find and interact with political content on Facebook, while respecting each person's appetite for it at the top of their News Feed," Aastha Gupta, a product management director, explained in a blog post.
On Tuesday morning, the company provided an update. The survey results are in, and they suggest that users appreciate seeing political stuff less often in their feeds. Now Facebook intends to repeat the experiment in more countries and is teasing "further expansions in the coming months." Depoliticizing people's feeds makes sense for a company that is perpetually in hot water for its alleged impact on politics. The move, after all, was first announced just a month after Donald Trump supporters stormed the US Capitol, an episode that some people, including elected officials, sought to blame Facebook for. The change could end up having major ripple effects for political groups and media organizations that have gotten used to relying on Facebook for distribution.
The most significant part of Facebook's announcement, however, has nothing to do with politics at all.
The basic premise of any AI-driven social media feed (think Facebook, Instagram, Twitter, TikTok, YouTube) is that you don't need to tell it what you want to see. Just by observing what you like, share, comment on, or simply linger over, the algorithm learns what kind of material catches your interest and keeps you on the platform. Then it shows you more stuff like that.
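To make that concrete, here is a minimal, hypothetical sketch of engagement-based ranking. The signal names and weights are illustrative assumptions for the sake of the example, not Facebook's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    # Probabilities that this particular user will take each action,
    # as predicted by some upstream model watching past behavior.
    p_like: float
    p_comment: float
    p_share: float
    expected_dwell_seconds: float

def engagement_score(post: Post) -> float:
    """Collapse the predicted interactions into one ranking score."""
    return (
        1.0 * post.p_like
        + 2.0 * post.p_comment                # comments weighted more than likes
        + 3.0 * post.p_share                  # shares weighted most of all
        + 0.05 * post.expected_dwell_seconds  # lingering counts too
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is just the candidate posts sorted by predicted engagement:
    # whatever you are most likely to react to floats to the top.
    return sorted(candidates, key=engagement_score, reverse=True)
```

Nothing in a loop like that asks whether you wanted to see a post, only whether you are likely to react to it.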
In one sense, this design feature gives social media companies and their apologists a convenient defense against critique: If certain stuff is going big on a platform, that's because it's what users like. If you have a problem with that, perhaps your problem is with the users.
And yet, at the same time, optimizing for engagement is at the heart of many of the criticisms of social platforms. An algorithm that's too focused on engagement might push users toward content that is highly engaging but of low social value. It might feed them a diet of posts that are ever more engaging because they are ever more extreme. And it might encourage the viral proliferation of material that's false or harmful, because the system selects first for what will trigger engagement, rather than for what ought to be seen. The list of ills associated with engagement-first design helps explain why neither Mark Zuckerberg, Jack Dorsey, nor Sundar Pichai would admit during a March congressional hearing that the platforms under their control are built that way at all. Zuckerberg insisted that "meaningful social interactions" are Facebook's true goal. "Engagement," he said, "is only a sign that if we deliver that value, then it will be natural that people use our services more."
In a different context, however, Zuckerberg has acknowledged that things might not be so simple. In a 2018 post explaining why Facebook suppresses "borderline" posts that push up to the edge of the platform's rules without breaking them, he wrote, "no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average, even when they tell us afterward they don't like the content." But that observation seems to have been confined to the question of how to enforce Facebook's policies around banned content, rather than prompting a broader rethink of its ranking algorithm.
That's why the company's latest announcement is quietly such a big deal. It marks perhaps the most explicit recognition to date by a major platform that "what people engage with" is not always synonymous with "what people value," and that this phenomenon is not limited to stuff that threatens to violate a platform's rules, like pornography or hate speech.
The new blog post, as with all Facebook announcements, is pretty vague, but it's possible to read between the lines. "We've also learned that some engagement signals can better indicate what posts people find more valuable than others," Gupta writes. "Based on that feedback, we're gradually expanding some tests to put less emphasis on signals such as how likely someone is to comment on or share political content." Translation: Just because someone comments on something, or even shares it, doesn't mean it's what they would prefer to see in their timeline. "At the same time, we're putting more emphasis on new signals such as how likely people are to provide us with negative feedback on posts about political topics and current events when we rank those types of posts in their News Feed." Translation: If you want to know what people like, ask them. The answers may differ from what a machine learning algorithm learns by silently monitoring their behavior.
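Read literally, that suggests a re-weighting along the following lines. This is a hypothetical sketch of the idea, with made-up signal names, weights, and a notional political-content classifier; Facebook has not disclosed how it actually does this.

```python
def adjusted_score(
    p_like: float,               # predicted chance the user likes the post
    p_comment: float,            # predicted chance the user comments
    p_share: float,              # predicted chance the user shares
    p_negative_feedback: float,  # predicted chance the user says "don't show me this"
    is_political: bool,          # output of some political-content classifier
) -> float:
    # For political posts, lean less on predicted comments and shares...
    comment_weight = 0.5 if is_political else 2.0
    share_weight = 0.5 if is_political else 3.0
    score = 1.0 * p_like + comment_weight * p_comment + share_weight * p_share
    # ...and subtract a penalty driven by the survey-style signal:
    # how likely this user is to tell Facebook they didn't want the post.
    if is_political:
        score -= 5.0 * p_negative_feedback
    return score
```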
This is pretty obvious to anyone who has ever used social media. When I scroll Facebook and see the latest rant from my one anti-vaccine contact, I can't help but read it in horror. Facebook registers that fact and makes sure to push that guy's next post to the top of my News Feed the next time I open the app. What the AI doesn't understand is that I feel worse after reading those posts and would much prefer not to see them in the first place. (I finally, belatedly, muted the account in question.) The same goes for Twitter, where I routinely allow myself to be enraged by tweets before recognizing that I'm wasting time doing something that makes me miserable. It's a bit like food, actually: Place a bowl of Doritos in front of me, and I will eat them, then regret doing so. Ask me what I want to eat first, and I'll probably request something I can feel better about. Impulsive, addictive behavior doesn't necessarily reflect our "true" preferences.
As with any policy announcement from Facebook, the real question is how it will be implemented, and given the company's lackluster track record on transparency, we may never get clear answers. (Very basic question: What counts as "political"?) It would be good, in theory, if social media companies began taking the divide between engagement and what users value more seriously, and not just for political content. Perhaps Facebook's latest announcement will mark a shift in that direction. But it's also possible that Facebook is behaving opportunistically, using some vague research findings as an excuse to lower its own political risk profile rather than to improve users' experience, and will refuse to apply the lesson more broadly. Nicole Bonoff, a researcher at Twitter, suggested as much, and argued that Facebook's data may not be reliable. "User surveys, which tend to ask ungrounded hypotheticals about 'politics,' elicit negative responses," she tweeted. "This is due to a combination of social desirability bias, differing definitions of politics & stereotypes about politics on social media."
So the effects of the new policy remain to be determined. There's a difference, after all, between what someone says and what they do. At least Facebook appears to have learned that lesson.