US laws governing artificial intelligence prove elusive, but there may be hope
Can the US regulate AI in a meaningful way? It’s not entirely clear yet. Policymakers have made progress in recent months, but they’ve also faced setbacks that illustrate how hard it is to pass laws imposing guardrails on the technology.

In March, Tennessee became the first state to protect voice actors from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, some of which require companies to disclose details about their AI training.

But the US still has no federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to face major hurdles.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed wide-ranging safety and transparency requirements on companies developing AI. Another California bill, targeting distributors of AI deepfakes on social media, was put on hold this fall pending the outcome of a lawsuit.

But there are reasons for optimism, according to Jessica Newman, co-director of the AI Policy Center at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal laws were not written with AI in mind but still apply to it, such as anti-discrimination and consumer protection legislation.

“We often hear that the US is sort of the ‘Wild West’ compared to what’s going on in the EU,” Newman said. “But I think that’s exaggerated and the reality is more nuanced than that.”

Newman pointed out that the Federal Trade Commission has forced companies that surreptitiously collected data to delete their AI models, and is investigating whether AI startups’ sales to big tech companies violate antitrust regulations. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal, and has proposed a rule requiring the disclosure of AI-generated content in political ads.

President Joe Biden has also sought to put certain AI rules on the books. Roughly a year ago, Biden signed an AI executive order supporting the voluntary reporting and benchmarking practices that many AI companies have since chosen to implement.

One outcome of that executive order was the US AI Safety Institute (AISI), a federal body that studies risks in AI systems. Operating within the National Institute of Standards and Technology, AISI has research partnerships with major AI labs such as OpenAI and Anthropic.

Yet AISI could be wound down by a simple repeal of Biden’s executive order. In October, a coalition of more than 60 organizations called on Congress to pass legislation codifying AISI before the end of the year.

“I think we all, as Americans, have a common interest in reducing the potential downsides of technology,” said AISI director Elizabeth Kelly, who attended the panel.

So is there hope for comprehensive AI regulation in the US? The failure of SB 1047, which Newman described as a “light touch” bill shaped by industry input, is less than encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile technologists like Meta’s chief AI scientist, Yann LeCun.

Even so, Wiener, another Disrupt panelist, said he wouldn’t have drafted the bill any differently, and he is confident that broad AI regulation will eventually prevail.

“I think this sets the stage for future efforts,” he said. “Hopefully we can do something that brings more people together, because all the major labs have already acknowledged that the risks (of AI) are real and that we want to test for them.”

Indeed, just last week Anthropic warned of an AI disaster if governments don’t implement regulation within the next 18 months.

Opponents have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “completely uninformed” and “not competent” to regulate the real dangers of AI. And Microsoft and Andreessen Horowitz published a joint statement rallying against AI regulations that could threaten their financial interests.

Newman, however, argues that pressure to unify the growing state-by-state patchwork of AI rules will ultimately yield a stronger legislative solution. Rather than converging on a single regulatory model, state policymakers have introduced nearly 700 pieces of AI legislation this year alone.

“My sense is that companies don’t want a patchwork regulatory environment where every state is different,” Newman said, “and I think there will be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”

TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.