Google’s AI answers could come back to bite it


Happy Wednesday! It is with a heavy heart that I must announce that the orcas are at it again. Send news tips to: will.oremus@washpost.com.

Below: Congress is considering a flurry of AI bills. First:

Google’s AI answers could come back to bite it

Using Google to search for information on other websites is old hat. Now the company is boasting that it can answer users’ questions directly, thanks to artificial intelligence.

Starting this week for U.S. users, Google will respond to many types of search queries with a new feature called “AI Overviews,” which will appear above the traditional list of search results. At its annual developer conference Tuesday, the company’s executives touted the technology as ready for prime time. They showed off some impressive ways to use it, like taking a video of a malfunctioning record player with your phone and asking the Google app, “Why is this happening?” In the demo, Google explained that a record player’s tonearm “may move freely if it’s unbalanced,” before providing suggestions for how to fix it.


This is the future of search that AI enthusiasts have been touting — and web publishers have been fearing — since ChatGPT debuted 18 months ago. But it’s not without its risks, both to users and to Google itself.

For years, Section 230 of the Communications Decency Act has shielded Google from liability for linking users to bad, harmful or illegal information. But legal experts say that shield probably won’t apply when its AI answers search questions directly.

“As we all know, generative AIs hallucinate,” said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. “So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information,” rather than just the distributor of it. 
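To make that mechanism concrete, here is a minimal, purely illustrative sketch of how a search engine might feed retrieved webpages to a generative model and return a summary. Nothing in it reflects Google’s actual system; the SourcePage class, the summarize_with_sources function and the model callable are hypothetical stand-ins. The point is that the generated answer is never checked against the source text, so any error originates with the generator rather than the pages it cites.

```python
# Illustrative sketch only -- not Google's implementation. SourcePage,
# summarize_with_sources and the `model` callable are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class SourcePage:
    url: str
    text: str


def summarize_with_sources(query: str, pages: list[SourcePage], model) -> dict:
    """Ask a generative model to answer `query` using only the retrieved pages.

    Nothing here verifies the generated answer against the page text, so any
    factual error is introduced by the generator, not by the cited sources.
    """
    context = "\n\n".join(f"[{i}] {p.url}\n{p.text}" for i, p in enumerate(pages))
    prompt = (
        "Answer the question using only the numbered sources below, "
        "citing them by number.\n"
        f"Question: {query}\n\nSources:\n{context}"
    )
    answer = model(prompt)  # any text-generation API; assumed to return a string
    return {"answer": answer, "source_urls": [p.url for p in pages]}
```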


The scenario is not entirely hypothetical. Already, OpenAI is facing a lawsuit from a conservative radio host in Georgia who says ChatGPT falsely claimed that he embezzled money from a gun rights group. 

Google has allowed users to opt in to experimental AI search results in the past, but incorporating them into its core search engine means they’ll be relied upon by a far wider audience. And while the word “hallucination” may make factual missteps seem like an aberration, they’re actually quite common. 

In fact, Google’s AI made a potentially costly error in one of the company’s own demonstration videos on Tuesday. 

In the clip, a user took a video of his film camera and asked Google, “Why is the lever not moving all the way?” Google first listed a number of potential reasons an advance lever might be stuck, then offered a list of solutions that included opening the camera’s back door and removing the film. But as the Verge’s Nilay Patel pointed out, removing the film might be “the worst thing you can do in this situation,” because it will expose and ruin all your photos.


Google might not be quaking at the prospect of getting sued by an amateur photographer. But Grimmelmann said he can envision plenty of other scenarios in which an AI giving bad answers could have serious consequences. For example: 

  • What are qualifying child-care expenses for my taxes?
  • Is this mushroom safe to eat?
  • How can I fix the buzzing sound in my guitar amplifier?
  • Can I take Benadryl while breastfeeding?

To what extent Google can actually be held liable may depend in part on how it attributes its answers. 

U.S. courts have generally dismissed lawsuits, or shielded Google from them, when the company’s AI tools accurately summarize or quote snippets from third-party websites while properly attributing the information to those sites and linking to them, Grimmelmann said. But if the AI falsely summarizes a website or gives a false answer without direct attribution, Google could be in more trouble. In that case, its best defense may be to portray its answers as mere suggestions rather than actionable information.
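For illustration only, here is one way to picture the difference between an attributed answer and a bare one. The Claim and AttributedAnswer structures below are hypothetical, not anything Google has described; they simply show an answer that carries a source link for each statement, so every claim can be traced back to the third-party page it summarizes, versus free-floating text with no provenance.

```python
# Hypothetical illustration of attributed vs. unattributed answers;
# not based on any disclosed Google data structure.
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str
    source_url: str  # the page the claim was drawn from


@dataclass
class AttributedAnswer:
    query: str
    claims: list[Claim] = field(default_factory=list)

    def render(self) -> str:
        """Render the answer with an inline link per statement, keeping each
        claim traceable to the third-party site it summarizes."""
        return " ".join(f"{c.text} (source: {c.source_url})" for c in self.claims)


# An unattributed answer, by contrast, is just free-floating text:
unattributed = "Open the camera back and remove the film."
```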


Google, which did not respond to a request for comment Tuesday, isn’t the only company replacing search with AI. Microsoft’s Bing uses the company’s Copilot AI to generate answers to many queries. And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot. 

Whether it would be a good thing or a bad one for tech firms to face liability for their AI answers depends on your priorities.

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn’t extend Section 230 to cover AI tools. 

“As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors,” he predicted. “It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate.” 


But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be “a really good outcome.” 

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. 

On Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has “outlived its usefulness.” 


The tech industry trade group NetChoice fired back on Monday that scrapping Section 230 would “decimate small tech” and “discourage free speech online.”

Our top tabs

Congress is considering a flurry of AI bills

Senate committees are weighing a bevy of artificial intelligence bills, starting with bipartisan legislation that aims to curb the impact of deepfakes on elections, my colleague Cat Zakrzewski reports. 

The Rules Committee, chaired by Sen. Amy Klobuchar (D-Minn.), will consider three bipartisan AI elections bills today. The Protect Elections from Deceptive AI Act would ban the distribution of AI-generated videos or photos of federal candidates when they are part of an attempt to fundraise or influence an election. Another bill, the Transparency in Elections Act, would require political advertisements to disclose when they are AI-generated. 


Ahead of the Wednesday session, more than 50 election officials and social media reform advocates signed a letter from the nonprofit Issue One calling for the swift enactment of laws addressing the impact of generative AI in elections. The letter, which The Technology 202 viewed ahead of its publication Wednesday, warns that tools for generating synthetic media have “the potential to turbocharge preexisting election interference tactics” even as such media becomes harder to detect.

This week, the House Foreign Affairs Committee will weigh a bill that would update export controls to prevent foreign adversaries from exploiting AI. Next week, the Senate Commerce Committee will mark up several other bills, including one that would expand researcher access to AI computing infrastructure, according to Axios.

Government scanner

TikTok creators sue U.S. government over potential ban (By Taylor Lorenz and Drew Harwell)


YouTube to block Hong Kong protest anthem videos after court order (Reuters)

How Modi and the BJP turned WhatsApp into an election-winning machine (Rest of World)

Elon Musk ordered to testify again in US SEC probe of Twitter takeover (Reuters)

Hill happenings

Senators unveil plan to regulate AI, as companies race ahead (By Cat Zakrzewski)

Inside the industry

Google pitches its vision for AI everywhere, from search to your phone (By Gerrit De Vynck and Danielle Abril)

OpenAI co-founder Ilya Sutskever leaves the company (By Nitasha Tiku and Gerrit De Vynck)

Google adds ‘web’ filter to only show text-based links in search results (Search Engine Land)

Threads finally starts its own fact-checking program (TechCrunch)

Competition watch

Comcast to launch Peacock, Netflix and Apple TV bundle at a ‘vastly reduced price’ (Variety)

Trending

Trump gets $1 million from Silicon Valley donor who once gave to Democrats (By Elizabeth Dwoskin and Maeve Reston)

Daybook

  • Bloomberg Government hosts an event, “Newsmaker Breakfast: The Future of Defense,” today at 8:30 a.m.
  • The Senate Rules Committee holds a markup on three bills related to AI’s role in elections, today at 10 a.m.
  • ITIF’s Center for Data Innovation hosts a webinar, “How Can Policymakers Address AI Voice-Cloning Scams?”, today at noon.
  • The Senate Intelligence Committee holds a hearing to examine an update on foreign threats to the 2024 elections, today at 2:30 p.m.
  • The House Oversight Committee holds a hearing on countering the cyberthreat from China, Wednesday at 4 p.m.
  • The House Foreign Affairs Committee holds a markup on various measures, including one to control exports of AI technologies to foreign adversaries, Thursday at 10 a.m.
  • The Information Technology and Innovation Foundation hosts a webinar, “Social Media and the First Amendment,” Thursday at noon.

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!
