Welcome to the second issue of Recoding Tech’s monthly newsletter. Each new issue highlights the most important research from academia and civil society, government policy, and news/commentary from the previous month to help policymakers and advocates understand the impacts of Big Tech and what solutions can protect the public interest and democracy in the digital age.
In this month’s issue:
Featured topic: Why should Facebook be broken up?
Breaking up Big Tech, an idea that started with a small handful of thinkers, has now become part of the policy conversation in the U.S. At the end of 2020, several antitrust lawsuits were filed against Big Tech for illegal monopolization, including a suit by the Federal Trade Commission against Facebook and another against the company by a coalition of 48 state attorneys general. Both lawsuits ask the court to consider requiring divestitures by Facebook to address the company’s anti-competitive behavior. Here’s why breaking up the company matters for addressing disinformation and its threat to our democracies.
Monthly highlights of research, government policy, and news/commentary
Academic & civil society research: Infodemic in a pandemic and the threat of social and digital media manipulation by foreign actors.
Government policy: FTC examining data use by Big Tech, Google facing new antitrust suits from 48 states, and Europe moves forward with rewrite of rules for Big Tech.
News & commentary: A terrible year for disinformation and privacy, Twitter and Facebook walk back 2020 election changes, social media renews pledge to combat COVID-19 vaccine disinformation (but it may be too late), and Google’s massive conflict of interest in supporting AI ethics research.
Featured topic
Why should Facebook be broken up?
Scrutiny of social media companies like Facebook, Twitter, and YouTube is reaching a fever pitch in the wake of the recent attack on the U.S. Capitol Building by a pro-Trump mob. As the general public and policymakers around the world attempt to grapple with the attack’s implications, authorities in the U.S. may now finally be compelled to translate talk into concrete policy action.
As discussed in last month’s newsletter, President Trump and his allies used disinformation-driven campaigns on Facebook and Twitter to cast doubt on the validity of the 2020 U.S. presidential election. Thanks to campaigns like “Stop the Steal,” unsubstantiated accusations of voter fraud rapidly proliferated and spread across all major social media platforms — culminating as a rallying cry for the tragic events of January 6, 2021.
In response to mounting public and political pressure, as well as the continuing threat posed by Trump’s rhetoric, Facebook and Twitter have banned the president for inciting violence. The removal of a sitting president from the Internet’s most consequential mediums is an unfortunate precedent. But while some Republicans decried the moves as censorship of conservative voices, Facebook and Twitter are not required to be neutral conduits for speech under U.S. law. Though some have called for platform neutrality (something that would require regulating these companies as public utilities), policymakers and others legitimately concerned about Big Tech’s power to dictate the terms of speech have another option.
The debate over breaking up Big Tech
At the end of 2020, a string of antitrust lawsuits were filed against Facebook and Google, including a suit by the Federal Trade Commission (FTC) against Facebook for illegal monopolization. The move was a surprising development given the FTC’s prior blessing of the company’s acquisitions of Instagram and WhatsApp, but it appears to be part of a serious federal and state effort to rein in the power of Big Tech through the use of the nation’s antitrust laws. On the same day the FTC filed its case, 48 state attorneys general filed their own antitrust suit against Facebook, “alleging that the company has and continues today to illegally stifle competition to protect its monopoly power.”
Notably, both lawsuits ask the Court to consider mandating divestitures by Facebook to address the company’s anti-competitive behavior. Such requests may mark a turning point in the policy response to Big Tech. Breaking up these companies started as the position of a small group of advocates and thinkers, but it has now become a key part of the policy conversation among both state and federal officials in the U.S.
Yet breaking up companies like Facebook as a means to address competition issues and stem the spread of disinformation is a debated position even among Big Tech’s critics. E.U. Commissioner Margrethe Vestager, who oversaw the prosecution of several competition cases against Google and Facebook, has previously expressed skepticism over the appeal of divestiture, comparing Big Tech to the Hydra of Greek mythology: “When you chopped off one head, 1, 2, or 7 came up. So there is a risk that you don't solve the problem, you just have many more problems."
Vestager’s concern is a fair one given the potential challenges of enforcing complex regulations on a more diffuse marketplace, as opposed to policing a small handful of tech titans. But such a concern must be balanced against the newfound reality that the unprecedented size and power of Big Tech directly threatens our democracies.
In 2020, Facebook (including WhatsApp and Instagram) had over 2.7 billion monthly active users, including 410 million in Europe and 255 million in the U.S. and Canada. For perspective, Europe’s estimated population is 743 million and the combined population of the U.S. and Canada is about 366 million. Operating at that level of exposure renders Facebook’s voice and views as powerful as those of any government (in an interview last year, former U.S. Secretary of State Hillary Clinton compared high-level discussions with Facebook executives to “negotiating with a foreign power”). Yet unlike elected governments, the company’s policies and decision-making remain completely unaccountable to the public.
As evidenced earlier this month, the threats posed by Facebook are no longer theoretical. And due to the company’s size, the potential damage of every moderation decision and algorithmic recommendation is now magnified at an unprecedented global scale. In the words of The Atlantic’s executive editor Adrienne LaFrance, “At megascale, [Facebook’s] algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.” Requiring the company to spin off Instagram and WhatsApp may be the only way to dramatically reduce Facebook’s audience and therefore the associated risk posed by each of the company’s actions and algorithms.
Still, breaking up Facebook would be no panacea for restoring the democratic promise of the Internet or fixing the ills of surveillance capitalism. Nor would it completely remove the need for regulation to address competition harms and protect the public’s privacy. Yet it would also not stand in the way of achieving those aims nor make it harder for policymakers to address other problems.
The antitrust case for breaking up Facebook
Both Facebook suits are rooted in U.S. antitrust law and should not be viewed in the same vein as President Trump’s self-interested tantrums concerning the repeal of Section 230. The cases rely on express documentation of anti-competitive behaviors and harms, including internal company emails and documents demonstrating that Facebook viewed both Instagram and WhatsApp as threats to its business model prior to acquisition — despite telling regulators reviewing the deals the exact opposite.
These anti-competitive acquisitions and actions subsequently enabled Facebook to obtain and then exercise monopoly power over its users, resulting in the degradation of user privacy due to a lack of viable alternatives in the marketplace. The state attorneys general complaint also points to the platform’s “increased ad load,” or the ability of Facebook to show users more and more ads, as evidence of its monopoly power, given the fact that the company “predicts that an increase in ad load will decrease user engagement, and recognizes that consumers generally do not want to see ads.”
The State AG complaint further argues that Facebook’s dominance also causes harm to advertisers, including an “inability to audit Facebook’s reporting metrics, unreliable metrics due to Facebook error, and the prevalence of fake accounts.” Further, despite advertiser concern about “misinformation and violent or otherwise objectionable content," the company has yet to “provide advertisers with meaningful ways to ensure that ads are distanced from content that could harm a brand’s reputation.” Indeed, Facebook barely blinked in response to an organized boycott by major advertisers during the summer of 2020 that demanded the company address disinformation and other harmful content.
Still, despite this and other evidence, the Facebook case (along with the antitrust suits filed against Google) may nonetheless run into a brick wall in the U.S. courts. Unlike other jurisdictions, including Europe, where regulators can pursue enforcement actions that can then be challenged in the courts, competition authorities in the U.S. must make their case to a judge, who then decides on the merits of the case and any appropriate remedies. Winning is no easy task. And recent precedents such as Ohio v. American Express Co. have made antitrust cases even more difficult. Critics are also quick to point out that breaking up these companies would be nearly unprecedented.
But as the events of January 6 confirmed, these are unprecedented times. The scale and reach of the Standard Oil or AT&T monopolies, both of which were broken up by U.S. regulators at the height of their powers, look quaint by comparison to the global scale and reach of Facebook and Google. If the antitrust cases against Facebook fail, then the U.S. may need new legislation to address Big Tech through structural remedies. In turn, European competition authorities should be willing to build upon the work of U.S. enforcers and forge ahead with their own efforts.
Only by working together will governments around the world be able to adequately address the current crisis in democracy — a global crisis driven by tech titans too big to govern. It’s time to break them up.
State of New York et al. v. Facebook Inc. United States District Court for the District of Columbia, December 9, 2020.
Federal Trade Commission v. Facebook, Inc. In the United States District Court for the District of Columbia, December 9, 2020.
Monthly highlights of research, policy, and news/commentary
Academic and civil society research
Infodemic in a pandemic
The COVID-19 infodemic continues alongside the viral pandemic, as documented by several recent studies. A new paper from Project Ainita at the Oxford Internet Institute examines infrastructural support for controversial COVID-19 websites, while another paper from researchers at the Los Alamos National Laboratory in the U.S. sheds light on how COVID-19 misinformation spreads on Twitter. Two additional academic papers address misinformation interventions, including their effectiveness in the Latinx community.
Profiting from the Pandemic: Moderating COVID-19 Lockdown Protest, Scam, and Health Disinformation Websites. Au, Yung, and Philip N. Howard. Project Ainita: Oxford Internet Institute, December 2, 2020.
Thought I’d Share First: An Analysis of COVID-19 Conspiracy Theories and Misinformation Spread on Twitter. Dax Gerts et al. ArXiv:2012.07729 [Cs, Stat], December 14, 2020.
Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact. Christine Geeng et al. ArXiv:2012.11055 [Cs], December 20, 2020.
COVID-19: Contextualizing Misinformation Flows in a US Latinx Border Community (Media and Communication During COVID-19). Arthur D. Soto-Vásquez et al. Howard Journal of Communications, December 23, 2020.
The threat of social and digital media manipulation by foreign actors
Several papers were released in December examining the use of disinformation by foreign actors to influence elections. A report from the Center for European Policy Analysis reviews the actions taken by democracies in the U.S. and Europe to counter state-sponsored disinformation, making recommendations for how a future U.S. presidential administration could better respond. Three case studies were released by the NATO Strategic Communications Centre of Excellence investigating the ability of foreign actors to engage in social media manipulation. Two additional papers examined the use of digital ads by foreign governments and actors, including a paper from the Alliance for Securing Democracy examining policy proposals to address media manipulation through digital advertising.
Democratic Offense Against Disinformation. Polyakova, Alina, and Daniel Fried. Center for European Policy Analysis, December 2, 2020.
Social Media Manipulation Report 2020: How Social Media Companies Are Failing to Combat Inauthentic Behavior Online. NATO Strategic Communications Centre of Excellence, December 21, 2020.
Information Laundering in Germany. NATO Strategic Communications Centre of Excellence, December 14, 2020.
Information Laundering in the Nordic-Baltic Region. NATO Strategic Communications Centre of Excellence, December 10, 2020.
The Weaponized Web: Levers in the Digital Advertising Ecosystem. Dipayan Ghosh, et al. Alliance for Securing Democracy, December 17, 2020.
Facebook Ad Engagement in the Russian Active Measures Campaign of 2016. Mirela Silva, et al. ArXiv:2012.11690 [Cs], December 23, 2020.
Government policy
FTC examining data use by Big Tech
In an effort to address the harms of Big Tech, the Federal Trade Commission issued orders in December to nine social media and video streaming companies “requiring them to provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.”
FTC Issues Orders to Nine Social Media and Video Streaming Services Seeking Data About How They Collect, Use, and Present Information. December 14, 2020.
Google facing new antitrust suits from 48 states
In addition to filing an antitrust case against Facebook, state attorneys general filed two cases against Google in December (following in the footsteps of a suit filed by the Department of Justice in October 2020). These include a suit led by the state of Colorado and joined by 38 other states, and a second complaint led by the state of Texas and joined by nine other state AGs. Interestingly, according to reporting from the Wall Street Journal (citing an unredacted version of the Texas-led complaint), there is evidence that Google and Facebook agreed to “cooperate and assist one another” if they ever faced an antitrust investigation.
Ten State AGs File Lawsuit Against Google for Anti-competitive Practices and Deceptive Misrepresentations. December 16, 2020.
Colorado leads multistate lawsuit seeking to end Google’s illegal monopoly in search market. December 17, 2020.
Europe moves forward with rewrite of rules for Big Tech
In Europe, the European Commission proposed a comprehensive set of new rules for all digital services, including social media, online marketplaces, and other online platforms that operate in the European Union: the Digital Services Act and the Digital Markets Act. A helpful summary of the proposed rule changes is available here. The Acts will now go to the European Parliament to discuss adoption through the legislative process.
European Commission proposes new rules for digital platforms. December 15, 2020.
News & commentary
2020 was a terrible year for disinformation and privacy
In-depth pieces from Vox and BuzzFeed News examine the impacts of disinformation in the U.S. in 2020 and society’s general inability to address the problem in an effective manner. An essay from Lawfare looks at the burgeoning use of private firms to conduct outsourced disinformation campaigns by governments in the Middle East to target their rivals and discredit dissidents. A separate Lawfare piece analyzes the ability and limitations of Canadian law to target disinformation. Finally, a Vox article laments the lack of privacy laws even as more and more of daily life was pushed online by the COVID-19 pandemic.
Why we’re posting about misinformation more than ever. Rebecca Heilweil. Recode, December 23, 2020.
In 2020, Disinformation Broke The US. Jane Lytvynenko. BuzzFeed News, December 6, 2020.
Outsourcing Disinformation. Shelby Grossman and Khadeja Ramali. Lawfare, December 13, 2020.
Is Canadian Law Better Equipped to Handle Disinformation? Eva Gaumond. Lawfare, December 11, 2020.
The Year We Gave up on Privacy. Sara Morrison. Vox, December 23, 2020.
Twitter and Facebook walk back 2020 election changes
Another major story during the month of December was the decision by platform companies to walk back several of the changes they made during the 2020 U.S. presidential election to limit the spread of disinformation. Among the changes, Twitter eliminated a prompt that encouraged users to write something when retweeting, and Facebook discontinued its prioritization of authoritative news sources in its newsfeed algorithm.
Twitter is returning retweets to the way they used to be. Jay Peters. The Verge, December 16, 2020.
Twitter repeals retweet roadblocks, Facebook follows suit. Jim Salter. ArsTechnica, December 17, 2020.
Social media renews pledge to combat COVID-19 vaccine disinformation (but it may be too late)
In December, Twitter and Facebook made renewed pledges to combat COVID-19 vaccine disinformation on their platforms via updates to their respective policies. But as COVID-19 vaccines begin to roll out across the world, Vox documents that social media platforms may already be losing the fight on vaccine disinformation. In turn, a New York Times article reveals that those spreading disinformation about COVID-19 are often the same set of groups and individuals pushing election disinformation in the U.S.
COVID-19: Our approach to misleading vaccine information. Twitter, December 16, 2020.
Keeping People Safe and Informed About the Coronavirus. Facebook, December 16, 2020.
Social media companies are already losing the vaccine misinformation fight. Rebecca Heilweil. Vox, December 19, 2020.
From Voter Fraud to Vaccine Lies: Misinformation Peddlers Shift Gears. Davey Alba and Sheera Frenkel. New York Times, December 17, 2020.
Google’s massive conflict of interest in supporting AI ethics research
Google ignited controversy when it fired Timnit Gebru after the prominent AI and ethics researcher co-authored a paper critical of the company’s AI language model. Additional reporting from Reuters confirmed that the company is now tightening its review of similar research, indicating that the tech titan’s benevolence toward AI-related ethics research extends only to research that does not risk jeopardizing the company’s bottom line.
We read the paper that forced Timnit Gebru out of Google. Here’s what it says. Karen Hao. MIT Technology Review, December 4, 2020.
Google told its scientists to 'strike a positive tone' in AI research - documents. Paresh Dave and Jeffrey Dastin. Reuters, December 23, 2020.
About us
Recoding Tech is a Reset supported initiative. Reset is engaged in programmatic work on technology and democracy. It seeks to change the way the internet enables the spread of news and information so that it serves the public good over corporate and political interests — ensuring tech companies once again work for democracy rather than against it.