Sam Altman, CEO of OpenAI, appeared to make a complete about-face in just two days on European artificial intelligence regulation. First he threatened to halt the company’s activities in Europe if regulation went too far; now he maintains the company has “no plans to leave.”
The Financial Times said that Altman expressed his worries about the European Union’s AI Act to reporters in London on Wednesday. The AI Act is scheduled to be finalized in 2024.
Altman reportedly stated that “the details really matter,” adding that the company would make an effort to comply but would stop doing business in Europe if it could not.
The legislation was initially designed for “high-risk” uses of AI, such as in medical equipment, hiring, and loan decisions. It may be the first of its kind in AI governance.
With generative AI booming, lawmakers have proposed broader rules: the creators of large machine learning systems, such as the large language models behind chatbots like OpenAI’s ChatGPT and Google’s Bard, would have to provide summaries of any copyrighted material used as training data and declare any content generated by AI.
A little less than 48 hours after his original remarks about possibly ending operations, Altman tweeted about a “very productive week of conversations in Europe about how to best regulate AI,” adding that the OpenAI team is “excited to continue to operate here and of course have no plans to leave.” The latest proposal for the EU’s AI Act will be negotiated over the course of the next year between the European Commission and member states, according to the Financial Times.
Along with the other heads of AI firms DeepMind and Anthropic, he also met with British Prime Minister Rishi Sunak to talk about the risks of the technology, from disinformation to “existential threats” to national security. He also discussed the voluntary actions and regulations needed to manage those risks.
In March, a large number of tech leaders and CEOs, including Elon Musk, signed an open letter warning that AI systems pose a threat to humans and urging a slowdown in their development. Experts have voiced concerns that AI technology could threaten the existence of human civilization.
However, according to PM Sunak, AI has the potential to “positively transform humanity” and “deliver better outcomes for the British public, with emerging opportunities in a range of areas to improve public services”.
According to Tim O’Reilly, founder of O’Reilly Media and a Silicon Valley veteran, mandating transparency and creating regulatory institutions to enforce accountability would be the ideal place to start.
“AI fearmongering, when combined with its regulatory complexity, could lead to analysis paralysis,” he said.
He argued that companies developing advanced AI must collaborate on a comprehensive set of metrics that can be reported frequently and reliably to regulators and the public, along with a procedure for updating those metrics as new best practices emerge.