Newsletter (issue no. 6) - What to know about the American Data Privacy and Protection Act
August 2022
Welcome back to a new issue of Recoding Tech’s newsletter. Together with the website, each recoding.tech newsletter offers relevant analyses and highlights of the policy discussions, research, and news from the past month that are reshaping the rules for Big Tech.
In this month’s issue:
Featured topic
What to know about the American Data Privacy and Protection Act (ADPPA)
Updates on how governments are recoding tech
New privacy bill introduced in the U.S. Congress
Two bills to protect children from Big Tech move forward in U.S. Senate
U.S. Senate discusses revised bill to compel Facebook and Google to negotiate with news outlets
E.U.’s Digital Markets Act and Digital Services Act in the final stages of approval
Highlights from new research, policy papers, and news/commentary
Big Tech is still failing to stop Russian propaganda
Disinformation is being produced and spread in new ways
Why we should be more concerned about how to govern AI than whether it can become sentient
Featured topic
What to know about the American Data Privacy and Protection Act (ADPPA)
H.R. 8152, the “American Data Privacy and Protection Act” (ADPPA), is a bipartisan bill that would create a comprehensive federal consumer privacy framework governing how companies across different industries handle personal data and information. Overall, the bill represents important progress. The U.S. currently has no general federal privacy law that gives users rights over how their data and information are used. The Federal Trade Commission (FTC), whose mandate includes consumer protection, has the authority to bring legal action against companies only in cases where a company acts in a “deceptive” or “unfair” manner, and its ability to define more comprehensive rules to protect user privacy is less certain. Only a handful of U.S. states, including Illinois, California, Virginia, Colorado, Utah, and Connecticut, have passed laws that provide equivalent or more rigorous privacy protections than those proposed in the ADPPA.
We take a look at what the bill gets right, what aspects of it may be problematic, and how it could be improved to better protect the public’s privacy. We will update the analysis as the bill is revised and discussed.
Updates on how governments are recoding tech
Recoding.tech is tracking existing and proposed laws and regulations, along with government investigations and litigation from across the U.S. and Europe. Here are the new additions and updates for July. You can view all the actions being tracked on recoding.tech using our law & regulation tracker.
United States. Except for a handful of recent state laws, the U.S. government has lagged behind the E.U. and other governments on regulations to protect the public’s privacy. That might begin to change. A new bipartisan privacy bill, the American Data Privacy and Protection Act, was approved by the House of Representatives Energy and Commerce Committee in a 53-2 vote. The legislation is far from perfect, but it has received support from public interest groups, including the Lawyers’ Committee and EPIC. Despite its promise, passing the bill will be difficult given Big Tech’s lobbying efforts, the limited time to bring it to a floor vote before the midterm elections, and the need for parallel action in the U.S. Senate.
The bill would increase the regulatory tools available to the Federal Trade Commission, even as the agency begins its own effort to potentially expand its privacy and data protection rules now that it has a full slate of Commissioners. The agency opened an extensive rulemaking proceeding into harmful commercial surveillance and lax data security practices. The proceeding will solicit public comment over the next several months.
Several other relevant bills are also gaining traction in the U.S. Congress. The Kids Online Safety Act and the Children and Teens' Online Privacy Protection Act were approved by the Senate Committee on Commerce, Science, and Transportation. The latter bill would update COPPA, the existing children’s privacy law, by extending protections to children up to age 16. The Kids Online Safety Act is a more ambitious bill that would establish a duty of care requiring companies to act in the best interests of minors who use their products or services.
Finally, a revised draft of the Journalism Competition and Preservation Act of 2021 is being circulated that seeks to replicate the News Bargaining Code enacted in Australia last year. A blog post from EFF is highly critical of the revised bill. A similar legislative effort in Canada is also underway. An article on NiemanLab provides a helpful discussion of Canada’s Online News Act and offers recommendations to improve it.
European Union. The Digital Markets Act (DMA) and Digital Services Act (DSA) continue to progress towards enactment. The Council of the E.U. gave its final approval to the text of the DMA in July. After being signed by the President of the European Parliament and the President of the Council, it will be published in the Official Journal of the European Union and will start to apply six months later. The latest text of the DMA is available here. The DSA was adopted by the European Parliament in July and is expected to be adopted by the Council in September. The latest text of the DSA is available here.
Highlights from research, policy papers, and news/commentary
Recoding Tech curates a collection of academic and civil society research, policy papers, investigative journalism, and op-eds. These articles illuminate what’s wrong with Big Tech’s platforms and business models, debate policy options that could address the problems, and make recommendations for government action. Below are some highlights, organized by topic. You can explore the entire collection in our library.
Big Tech is still failing to stop Russian propaganda. Stories in the Washington Post and AP discuss how Russian disinformation is evading lackluster efforts by social media companies to block its spread, including the role played by Russian embassies. Relatedly, a new report from Tracking Exposed finds that TikTok’s algorithms recommend content from Russian accounts that the company says it banned. Back in April, Recoding.tech highlighted several of these concerns. Finally, ProPublica finds that Google shared “potentially sensitive user data with a sanctioned Russian ad tech company owned by Russia’s largest state bank.”
Big Tech Tried to Quash Russian Propaganda. Russia Found Loopholes — Washington Post
Russian Disinformation Spreading in New Ways despite Bans — AP NEWS
Shadow-Promotion: TikTok’s Algorithmic Recommendation of Banned Content in Russia — Tracking Exposed
Google Allowed a Sanctioned Russian Ad Company to Harvest User Data for Months — ProPublica
Disinformation is being produced and spread in new ways. One major challenge to limiting the spread and impact of disinformation on social media and the internet is that how disinformation is made and shared keeps shifting. An article in the Journal of Democracy discusses a shift in disinformation campaigns away from social media bots and towards more complex efforts that combine coordinated human users, such as influencers, with artificial intelligence software. Wired investigates an entity that hires and coordinates influencers to sway public opinion. Relatedly, a report from the RAND Corporation reviews how bad actors can use AI to produce disinformation through deepfake videos, voice cloning, deepfake images, and generative text. The Media Manipulation Casebook documents how a TikTok hoax propagated across social media and fueled a media panic. Finally, an article in the International Journal of Communication discusses why most reposters on social media are not concerned with the accuracy of the information they share.
Digital Propaganda: The Power of Influencers — Journal of Democracy
Meet the Lobbyist Next Door — Wired
Artificial Intelligence, Deepfakes, and Disinformation: A Primer — RAND Corporation
Slap a Teacher: From TikTok Hoax to Media-Fueled Panic — Media Manipulation Casebook
Sharing Truths About the Self: Theorizing News Reposting on Social Media — International Journal of Communication
Why we should be more concerned about how to govern AI than whether it can become sentient. In July, a Google employee claimed that the company’s AI chatbot system, LaMDA, was sentient. Most AI experts critiqued the assertion. Moreover, as Paul Romer and Noah Giansiracusa argue, the most significant concern for AI is not sentience, but how AI’s development is “dominated by the technology giants, in which secret knowledge conveys power and profit to the few.” A related article discusses how AI can exacerbate or reduce healthcare disparities depending on the type of training data it uses. Experts at the Stanford Cyber Policy Center and the Institute for Innovation and Public Purpose discuss a policy framework for governing AI in the public interest. Finally, a paper from the UCLA School of Law makes the case for the Federal Trade Commission’s role in addressing unfair uses of AI.
Commentary: The Problems with AI Go Beyond Sentience — Barron’s
Artificial Intelligence Exacerbates and Mitigates Racial Bias in Health Care — The Journalist’s Resource
Governing Artificial Intelligence in the Public Interest — Stanford Cyber Policy Center
Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law — UCLA School of Law
Other noteworthy research. An article in the Journal of Law, Market & Innovation examines the legal issues and inconsistencies around protections for gig workers in Italy. A survey of young people in Europe found that 93.8% use social media to get information, with over 30% indicating they “follow news media and journalists on social media to get information…” and another 30% agreeing they “get informed about news on social media without specifically seeking it out.” The Institute for Strategic Dialogue investigated mis- and disinformation ahead of the 2022 French presidential elections. A report from the Tech Transparency Project finds that Facebook monetized searches for white supremacist groups, auto-generated 24 pages for these groups when a user listed them as an interest or employer in their profile, and recommended other extremist or hateful content to users visiting one of the groups’ pages. Finally, two papers examine privacy laws in relation to data governance and competition, while a third investigates data brokers.
The Uber Case and Gig-Individuals against the Backdrop of the Gig-Economy: Dilemmas between Labour Law and Techno-Law — Journal of Law, Market & Innovation
Mind the Gap! Journalism on Social Media and News Consumption Among Young Audiences — International Journal of Communication
A France Divided by the Pandemic: The Disinformation Ecosystem Leading up to the 2022 Elections — Institute for Strategic Dialogue
Facebook Profits from White Supremacist Groups — Tech Transparency Project
Data Governance: A Tale of Three Subjects — Journal of Law, Market & Innovation
Competition, Privacy, and Justifications: Invoking Privacy to Justify Abusive Conduct under Article 102 TFEU — Journal of Law, Market & Innovation
The Untamed and Discreet Role of Data Brokers in Surveillance Capitalism: A Transnational and Interdisciplinary Overview — Internet Policy Review
About us
Recoding Tech is a Reset-supported initiative. Reset is engaged in programmatic work on technology and democracy. It seeks to change the way the internet enables the spread of news and information so that it serves the public good over corporate and political interests — ensuring tech companies once again work for democracy rather than against it.