OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters


OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has mostly played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

The bill, SB 3444, would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs like OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in behavior on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as the harm wasn’t intentional and the lab published its reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise new safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader position of Silicon Valley in recent years, which has largely argued that it’s paramount for AI legislation not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a manner that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
