Welcome to the inaugural issue of Recoding Tech’s newsletter. As our civic discourse and democratic institutions begin to buckle under a deluge of digital disinformation, and our data is amassed and sold to the highest bidder for purposes we cannot anticipate, let alone consent to, the most pressing question of our time is not whether Big Tech’s code and business practices are causing harm, but what we are going to do about it.
Each new issue of this newsletter rounds up the most relevant news, research, and policy actions from the previous month and provides analysis to help policymakers understand the impacts of Big Tech and what solutions can protect the public interest and sustain democracy in the digital age.
To receive the monthly newsletter directly in your email inbox, you can sign up below.
In this month’s issue:
Featured topic: Disinformation and the 2020 U.S. presidential election
Launch of beta version of Recoding Tech’s resource library
Monthly roundup of research, government policy, and news
Highlights from the latest research
Highlights from the latest policy actions
Highlights from the latest news
Featured topic
Disinformation and the 2020 U.S. presidential election
Former Vice President Joe Biden won the 2020 U.S. presidential election in November and is now the president-elect. President Trump, however, has yet to concede the race and is continuing to pursue legal options to challenge the results. Despite no documented evidence of widespread voter fraud, President Trump and his allies are using disinformation-driven campaigns, like “Stop the Steal” on Facebook, to rally supporters and cast doubt on the validity of the electoral process. In turn, unsubstantiated accusations of voter fraud have rapidly proliferated and spread across all major social media platforms. These actions have jeopardized the longstanding tradition of a peaceful transition of power in the U.S., which the country and democracies around the world rely on for legitimacy and stability.
For their part, social media companies (including Facebook, Twitter, and Google) took a number of extraordinary measures to dampen the spread and impact of election-related disinformation in the weeks before and after the election. These included adding labels to tweets and posts as a means to fact-check or direct users to authoritative news sources about the election results. In the days after the election, Facebook took down numerous groups associated with the “Stop the Steal” campaign and tweaked its news feed to highlight more mainstream news sources. Both Facebook and Twitter also took steps to limit circulation of the New York Post’s Hunter Biden story in October. And Google’s YouTube prohibited content that encouraged election interference and removed ads from videos with false information about the election results.
All three companies also made changes to their political advertising policies, with Twitter deciding to no longer run political ads and Google prohibiting micro-targeting and temporarily suspending election-related ads after the polls closed. Facebook enacted a policy to stop accepting political ads on October 20, and also temporarily suspended political ads in the period after the election. In addition, both Facebook and Twitter experimented with changes to limit sharing, including limiting the number of forwards on Facebook’s Messenger service and suspending recommendations for political and social issue groups. Twitter also turned off recommendations in all user timelines and, in a small number of cases, prevented tweets from being liked or retweeted.
Unfortunately, these collective efforts appear to have done relatively little to diminish the scale and reach of election-related disinformation across social media. According to internal analyses, Facebook’s labels on Trump’s posts only “decrease[d] their reshares by ~8%.” Twitter’s results were somewhat better: the company announced it “saw an estimated 29% decrease in Quote Tweets . . . due in part to a prompt that warned people prior to sharing.”
Despite these decreases, however, disinformation writ large saw increased engagement across both platforms. According to analysis from Kornbluh and Goodman, outlets that repeatedly publish false election-related content (such as OAN) “had slightly higher average Facebook interactions (likes, comments, shares) per article during election week than during the prior three months.” By comparison, high-credibility outlets (such as AP News), “had slightly lower interactions per article during election week, with a 6 percent drop.”
The results were considerably worse on Twitter, where “[f]alse-content outlets saw a 68 percent jump in shares (original posts or retweets) per article” and “[h]igh-credibility outlets saw a 5 percent decrease.” On YouTube, false-content outlets increased their likes per video “by 98 percent, while high-credibility ones increased theirs by 16 percent.”
These startling findings are only preliminary. Researchers and journalists will continue to develop more detailed assessments and analysis over the coming months and years. But the U.S. 2020 election has already revealed a key takeaway: disinformation -- when viewed through a partisan lens -- is very much in the eye of the beholder. This simple fact complicates efforts by social media and platform companies to act as “neutral” arbiters of truth. This difficulty is of course filled with irony given that social media is also actively contributing to the widening of partisan differences as its content algorithms shape radically different worldviews for each side.
Whether social media is the cause of the current political crisis in the U.S. is up for some debate among researchers and thinkers. A recent paper from the Berkman Klein Center makes the case that in the run-up to the election, social media played only a secondary role in perpetuating the incorrect belief among Americans that mail-in voting would increase voter fraud. Instead, the paper claims that the successful disinformation campaign “was an elite-driven, mass-media led process.” The authors argue President Trump and mass media outlets like Fox News “were far more influential in spreading false beliefs than Russian trolls or Facebook clickbait artists.”
This might indeed be the case. Yet in an elite-driven, mass media world built on newspapers and television reporting (i.e., one without Facebook), would Trump’s disinformation campaign have really reached so many so effectively? If so, would the audience have been as receptive had they not already been inundated with messages of distrust for democratic governance and institutions? In today’s increasingly online world, the messenger hardly seems to matter: social media can be exploited by any number of domestic and foreign actors to undermine democracy and to target and cultivate an audience of billions, who increasingly believe disinformation to be true despite considerable efforts by mass media to debunk it.
Facebook and the other large platform companies took their most aggressive actions to date to limit the spread of disinformation during the 2020 election. Yet in doing so they left intact the very recommendation algorithms and ad technologies that currently drive the world’s information disorder -- knowing full well that keeping such systems in place would necessarily make their efforts far less effective. Why? Because these systems also serve as the foundation of Big Tech’s wildly profitable business model -- a model reliant on attention maximization, invasive data collection, exploitative and manipulative advertising, and unprecedented levels of scale. A model that not only enables the spread of disinformation, but actively prospers from it. Until this root problem is addressed, the American electoral experience in 2020 may become the norm for future elections in democracies around the world.
Sources
Kornbluh and Goodman, “How Well Did Twitter, Facebook, and YouTube Handle Election Misinformation?”
Molla, “Social Media Is Making a Bad Political Situation Worse.”
Launch of beta version of Recoding Tech’s resource library
Recoding Tech is currently building a library of resources and knowledge to help policymakers better understand the harms resulting from Big Tech’s code and business models, as well as what types of oversight, regulations, and laws can create better outcomes for democracies and societies.
A beta version, available at recoding.tech, was launched in December. The project is still in the process of aggregating a large universe of research and analysis from journalists, academics, policy thinkers, and governments. On the site, Recoding Tech will also be rolling out more in-depth summaries and overviews of topic areas during the next several months to provide the most relevant and up-to-date understanding of what is going wrong, why it is happening, and what solutions can help the internet rebuild its democratic promise. At this early stage, user feedback on how to make the information as accessible and useful as possible is highly valued. Please do not hesitate to reach out to hello@recoding.tech with any questions or suggestions.
Monthly roundup of research, policy, and news
Highlights from the latest research
Policy Brief: Containing Misinformation, Bolstering Democracy
A new policy brief from Sascha Meinrath, Palmer Chair in Telecommunications at Penn State University, outlines the regulatory actions needed to address the misinformation harms stemming from digital media platforms. The brief (Containing Misinformation, Bolstering Democracy: A proposed intervention framework for digital media oversight) focuses on specific steps regulators can take and proposes a five-tier intervention framework to help establish a transparency and auditing regime to combat the problem. In doing so, the paper identifies four primary powers that regulators will need to exercise to effectively curb misinformation:
Access: Unconstrained, comprehensive access to the data and metadata (timestamps, active content algorithms, originating and disseminating users, etc.) specific to the misinformation being audited;
Audit: The establishment of audit authorities empowered to define a “harm threshold” (a quantitative and qualitative measure of exposure to harmful content across a user group or demographic that constitutes a violation of the law; see the illustrative sketch after this list);
Implement: The creation of measures that companies must implement -- be they design, delivery, or terms of use changes -- in order to mitigate those actual and potential harms; and,
Intervene: Meaningful interventions that disincentivize platforms from continuing to propagate misinformation and other harmful content.
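As a rough illustration of what a quantitative “harm threshold” audit might involve, the sketch below computes the share of each demographic group exposed to harmful content and compares it against a threshold. The record format, field names, and the 5 percent threshold are hypothetical assumptions for illustration only; the brief does not prescribe a specific calculation.

```python
from collections import defaultdict

# Hypothetical audit records: (user_id, demographic_group, saw_harmful_content)
exposure_log = [
    ("u1", "18-24", True),
    ("u2", "18-24", False),
    ("u3", "18-24", False),
    ("u4", "65+", True),
    ("u5", "65+", True),
]

HARM_THRESHOLD = 0.05  # illustrative: more than 5% of a group exposed triggers review

def exposure_rates(log):
    """Return the share of audited users in each demographic group exposed to harmful content."""
    seen, exposed = defaultdict(set), defaultdict(set)
    for user_id, group, harmful in log:
        seen[group].add(user_id)
        if harmful:
            exposed[group].add(user_id)
    return {group: len(exposed[group]) / len(seen[group]) for group in seen}

for group, rate in exposure_rates(exposure_log).items():
    status = "exceeds threshold" if rate > HARM_THRESHOLD else "within threshold"
    print(f"{group}: {rate:.0%} of audited users exposed ({status})")
```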
The full brief is available here.
Report: #Tech2021 - Ideas for Digital Democracy
A new report from the German Marshall Fund, #Tech2021 - Ideas for Digital Democracy, offers strategic, turnkey reforms from experts for how the U.S. government can leverage technology to ensure individuals and society thrive in the midst of rapid change. Key proposals in the report include:
Goodman, “Building Civic Infrastructure for the 21st Century.”
Palfrey, “Advancing Digital Trust with Privacy Rules and Accountability.”
Kornbluh, “Protecting Democracy and Public Health from Online Disinformation.”
The full report is available here.
Highlights from the latest policy actions
UK: New competition regime for tech giants
In the United Kingdom, a dedicated Digital Markets Unit (DMU) will be set up to introduce and enforce a new code to govern the behavior of platforms that currently dominate the market, such as Google and Facebook. The code will seek to provide consumers with more control over their data, help small businesses thrive, and ensure news outlets are not forced out of business by their bigger rivals.
The proposed DMU comes in response to a prior study by the UK competition authority into online platforms and digital advertising, which found that Google and Facebook currently enjoy a dominant position in digital advertising that is unlikely to diminish through market forces. The study also found evidence that the lack of competition in digital advertising markets leads to harms for consumers and businesses, including reduced innovation and quality, higher prices for goods and services, a lack of consumer control, and broader societal harms.
Other policy actions by governments
Satariano, “Amazon Charged With Antitrust Violations by European Regulators.”
India opens competition investigation into Google’s app store practices
Highlights from the latest news
Political ads
Facebook demands NYU researchers discontinue transparency initiative on political ads
According to the Knight First Amendment Institute at Columbia University, in late October:
…two New York University researchers, Laura Edelson and Damon McCoy, received a letter from Facebook demanding that they discontinue use of a research tool crucial to understanding political ads on the platform. The letter threatens further action if the researchers do not comply by November 30.
Edelson and McCoy’s research relies on Ad Observer, a browser plug-in they and others created that allows Facebook users to voluntarily share with the researchers information about the political ads shown to them on the platform. The tool enables researchers and journalists to more accurately document and follow trends in Facebook political advertising via a public-facing site, AdObservatory.org.
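As a rough, hypothetical illustration of the kind of analysis such voluntarily shared ad records enable, the sketch below counts which advertisers appear most often in opted-in users’ feeds. The record format, field names, and values are invented; this is not Ad Observer’s or Ad Observatory’s actual code or data.

```python
from collections import Counter

# Hypothetical records voluntarily shared by opted-in users (invented data).
shared_ads = [
    {"advertiser": "Example PAC", "topic": "election"},
    {"advertiser": "Example PAC", "topic": "election"},
    {"advertiser": "Acme Issues Group", "topic": "healthcare"},
]

def top_advertisers(records, n=10):
    """Count how often each advertiser appears across opted-in users' shared ads."""
    return Counter(record["advertiser"] for record in records).most_common(n)

print(top_advertisers(shared_ads))
# [('Example PAC', 2), ('Acme Issues Group', 1)]
```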
Analysis
Platforms like Facebook have often argued that they are committed to bolstering transparency as a means to mitigate many of the harms resulting from digital ads. But researchers and advocates have found that the transparency initiatives actually implemented by platform companies are often plagued with inconsistencies and uneven rule enforcement. Worse, as the Ad Observer case demonstrates, if Facebook or other platforms are ever displeased with third-party transparency efforts, they can simply change their terms of service to prohibit the use of such tools and then use computer fraud regulations and other related laws to threaten legal action against civil society and journalistic projects.
To be fair, however, in this specific instance Facebook may have some legitimate concerns regarding user privacy. If the Ad Observer extension collects or obtains access to the personal data not just of the user who opts in but also of that user’s friends on the platform, then declining to oppose the tool’s use could put Facebook at risk of violating its consent decree with the U.S. Federal Trade Commission (established in the wake of the Cambridge Analytica scandal). Unfortunately, in the absence of specific and enforceable transparency rules put in place by governments, such conflicts will continue to arise, as Facebook can pit privacy concerns against the public interest in an effort to minimize the data it shares with the public and policymakers.
Facebook does not enforce its rules against disinformation for Trump allies
A November 2020 Washington Post investigation found that prominent associates of President Trump and members of conservative groups "frequently crossed the boundaries set forth by Facebook about the repeated sharing of misinformation."
From a pro-Trump super PAC to the president’s eldest son, however, these users have received few penalties, according to an examination of several months of posts and ad spending, as well as internal company documents. In certain cases, their accounts have been protected against more severe enforcement because of concern about the perception of anti-conservative bias, said current and former Facebook employees, who spoke on the condition of anonymity because of the matter’s sensitivity.
Analysis
In their report Dead Reckoning: Navigating Content Moderation After Fake News, Caplan et al. argue: "Whether moderation is conducted by humans enforcing guidelines or by algorithms programmed by human values, discretion is an artifact of human decision-making which is reified by the choices platforms make to curate content." Due to free speech concerns, moderation decisions will therefore always be contentious -- and, as the Washington Post investigation demonstrates, never fully separated from politics (leading to uneven and unfair enforcement). Such decisions happen all the time in traditional media, where news outlets can also provide coverage-specific context or counter-views. But Facebook is so dominant in the social media sphere that its moderation decisions inherently have a disproportionate impact on the public’s understanding of fact vs. fiction. Thus, although policymakers may never be able to regulate away the subjective nature of moderation decisions, they can choose to diminish the impact of those decisions by working to disaggregate the power of companies like Facebook in order to create a more competitive marketplace for public discourse now and in the future.
Content moderation
Posts calling attention to attacks on protesters in Nigeria erroneously labeled as false by Facebook’s content moderation system
In October, Facebook flagged as false Instagram posts containing images and other information calling attention to an attack by members of the Nigerian army on protesters in the city of Lagos. The flagged posts included the hashtag #EndSARS, a reference to the Federal Special Anti-Robbery Squad (a tactical unit of the Nigerian Police Force). In flagging the content, Facebook's automated system assumed the #EndSARS posts were related to posts that had previously been identified as false about SARS (the acronym for severe acute respiratory syndrome, a respiratory illness closely related to COVID-19). Facebook later removed the incorrect labels and apologized for the error.
Analysis
Platforms like Facebook are increasingly dependent on automated systems to flag disinformation and other dangerous content. Unfortunately, these automated techniques often struggle to take into account cultural context. Their inability to adapt to the dynamic nature of civic discourse and current events routinely leads to over-enforcement that ends up penalizing legitimate content. Yet under-enforcement is also problematic. False content that Facebook’s automated algorithms fail to label or remove can be assumed to be true by users -- thereby undermining user trust and the overall usefulness of fact-checking efforts.
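As a simplified illustration of this failure mode (and not a depiction of Facebook’s actual system), the sketch below shows how a naive keyword rule aimed at SARS-related health misinformation would also catch the unrelated #EndSARS protest hashtag.

```python
# Hypothetical keyword rule targeting SARS/COVID health misinformation.
FLAGGED_KEYWORDS = {"sars"}

def naive_flag(post: str) -> bool:
    """Flag a post if any flagged keyword appears anywhere in its text."""
    text = post.lower()
    return any(keyword in text for keyword in FLAGGED_KEYWORDS)

print(naive_flag("New claim that SARS was engineered"))       # True -- intended match
print(naive_flag("#EndSARS protesters attacked in Lagos"))    # True -- false positive
print(naive_flag("Election results certified in Georgia"))    # False
```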
In November, Facebook announced new changes to its moderation approach, noting that it would be using machine learning to better prioritize for review offending content identified by its automated systems. (It is unclear at this time if the change will impact how content is flagged, or whether content flagged by automated systems will go through human review before any public-facing action is taken to label, de-prioritize, or remove such content.) Still, short of reviewing each post with a human moderator before flagging, deleting, or taking other moderation actions, mistakes will continue to occur -- even with the most sophisticated “AI” techniques.
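One plausible reading of the announced change, sketched below, is a scoring-and-ranking step that surfaces the likeliest harms to human reviewers first. The scoring function, weights, and values are illustrative assumptions; Facebook has not published how its prioritization actually works.

```python
import heapq

def priority_score(predicted_severity: float, predicted_reach: int) -> float:
    """Combine hypothetical model outputs into a single review priority (higher = review sooner)."""
    return predicted_severity * predicted_reach

# (post_id, predicted severity 0-1, predicted reach in views) -- all values invented
flagged_posts = [("post_a", 0.9, 50_000), ("post_b", 0.4, 200), ("post_c", 0.7, 5_000)]

review_queue = []
for post_id, severity, reach in flagged_posts:
    # heapq is a min-heap, so negate the score to pop the highest-priority post first
    heapq.heappush(review_queue, (-priority_score(severity, reach), post_id))

while review_queue:
    neg_score, post_id = heapq.heappop(review_queue)
    print(post_id, "priority:", -neg_score)
```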
Even though Facebook employs some 15,000 moderators around the world, reliance on human moderators does not scale in the same manner as automated algorithms. Unless regulators demand greater accountability from platforms, Facebook and others will continue to rely on less costly and less disruptive solutions -- stopping short of the kind of measures that might actually reduce the spread and amplification of false content on social media in a meaningful way.
Sources
Ilori, “Facebook’s Content Moderation Errors Are Costing Africa Too Much.”
Vincent, “Facebook Is Now Using AI to Sort Content for Quicker Moderation.”
Other news highlights
Lopez, “Guns, Drugs and Viral Content.”
Heilweil, “Parler, the ‘Free Speech’ Twitter Wannabe, Explained.”
About us
Recoding Tech is a Reset-supported initiative. Reset is engaged in programmatic work on technology and democracy. Reset seeks to change the way the internet enables the spread of news and information so that it serves the public good over corporate and political interests — ensuring tech companies once again work for democracy rather than against it.