Leading AI companies, including OpenAI, Alphabet, and Meta Platforms, have made voluntary commitments to the White House to implement safety measures such as watermarking AI-generated content.
“These commitments are a promising step forward, but we have a lot more work to do together,” said US President Joe Biden.
Speaking at a White House event, Biden addressed growing concerns that artificial intelligence could be exploited for disruptive purposes, saying that “we must be clear-eyed and vigilant about the threats from emerging technologies” to US democracy.
The firms, which include Anthropic, Inflection, Amazon.com, and OpenAI partner Microsoft, have agreed to rigorously test systems before releasing them, as well as share knowledge on how to decrease risks and invest in cybersecurity.
The agreement is considered a victory for the Biden administration’s effort to regulate a technology that has seen a surge in investment and consumer popularity.
“We welcome the president’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public,” Microsoft wrote in a blog post.
Since generative AI, which uses data to create new content such as ChatGPT’s human-sounding text, surged in popularity this year, lawmakers around the world have begun looking at ways to mitigate the technology’s risks to national security and the economy.
In terms of artificial intelligence regulation, the United States lags behind the European Union. In June, EU lawmakers agreed on a set of draft rules that would require systems such as ChatGPT to disclose AI-generated content, help distinguish so-called deep-fake images from real ones, and provide safeguards against illicit content.
In June, US Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to advance and provide safeguards for artificial intelligence. Congress is also drafting legislation that would require political advertisements to disclose whether AI was used to create their imagery.
Biden, who met with executives from the seven companies at the White House on Friday, also said that he is working on an executive order and bipartisan legislation on artificial intelligence.
“We’ll see more technological change in the next 10 years, or even in the next few years, than we have in the previous 50. That has been an astonishing revelation to me, to be honest,” Biden added.
As part of the effort, the seven companies agreed to develop a mechanism to “watermark” all forms of content, from text, images, and audio to AI-generated video, so that consumers can tell when the technology has been used.
This watermark, embedded technically in the content itself, is intended to make it easier for users to spot deep-fake images or audio that may, for example, depict violence that never occurred, enable more convincing fraud, or alter a photo of a politician to cast the person in an unfavorable light.
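The companies have not published the technical details of their watermarking schemes. As a purely conceptual illustration of the general idea, the sketch below hides a hypothetical “AI-GENERATED” tag in the least significant bits of an image’s pixel data and reads it back; the tag name, functions, and approach are assumptions for illustration only, and real provenance watermarks are designed to survive compression and editing, which this toy example would not.

```python
# Conceptual sketch only: a toy "watermark" hidden in pixel data.
# Not the companies' actual schemes, which are not publicly documented.
import numpy as np

TAG = "AI-GENERATED"  # hypothetical marker string

def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide `tag` in the least significant bits of the first len(tag)*8 values."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Recover a `length`-character tag from the least significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

if __name__ == "__main__":
    # Stand-in for an AI-generated image: random 64x64 RGB pixels.
    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(read_watermark(marked))  # -> AI-GENERATED
```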
The firms also promised to prioritize user privacy as AI advances and to ensure that the technology is free of bias and is not used to discriminate against vulnerable groups.
Other pledges include developing AI tools to address scientific challenges such as medical research and climate change mitigation.