President Joe Biden has unveiled an executive order to control artificial intelligence to ensure the tech cannot be transformed into a weapon.
The order, unveiled Monday, will require developers like Microsoft, OpenAI and Google to conduct safety tests and submit results before launching models for the public.
These results will be analyzed by federal agencies, including Homeland Security, to address threats to critical infrastructure and chemical, biological, radiological, nuclear and cybersecurity risks.
Biden believes the government was late to address the dangers of social media, and now US youth are grappling with related mental health issues.
Monday's executive order is an 'urgent' move to rein in the technology before it warps basic notions of truth with false images, deepens racial and social inequalities, provides a tool to scammers and criminals, and is used for warfare.
Approximately 15 leading companies have agreed to implement voluntary AI safety commitments, but the executive order aims to provide concrete regulation for the technology's development.
Several companies, including Microsoft and OpenAI, have also testified in front of Congress, where they were grilled on the safety of their chatbots.
Vice President Kamala Harris, who has the lowest approval rating of any vice president, was appointed the 'AI czar' in May, tasked with leading the crackdown on the AI race amid growing concerns the tech could upend life as we know it.
However, Harris has been quiet in the role since being appointed.
She is set to speak at the UK Summit on AI Safety on November 2.
The new order reflects the government's effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils.
AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.
Deepfakes are spreading misinformation, AI robots are scamming people out of money and chatbots are showing signs of bias.
Congress grilled OpenAI co-founder Sam Altman in May for five hours about how ChatGPT and other models could reshape 'human history' for better or worse, likening the technology to either the printing press or the atomic bomb.
White House chief of staff Jeff Zients recalled Biden giving his team a directive to move urgently on the issue, considering the technology a top priority.
'We can't move at a normal government pace,' Zients said the Democratic president told him. 'We have to move as fast, if not faster than the technology itself.'
AI companies are conducting their own testing to weed out disinformation, bias or racism.
The fears of AI come as experts predict it will achieve singularity by 2045, the point at which the technology surpasses human intelligence and can no longer be controlled.
Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government.
In accordance with the Defense Production Act, Monday's order will require companies developing AI to notify the federal government if models show signs of risk to national security, public health and safety.
The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.
The Commerce Department is to issue guidance on labeling and watermarking AI-generated content to help differentiate between authentic interactions and those generated by software.
One point made in the order is to 'protect against the risks of using AI to engineer dangerous biological materials,' which entails rolling out 'strong new standards for biological synthesis screening,' the document reads.
The order also touches on privacy, civil rights, consumer protections, scientific research and worker rights.
An administration official who previewed the order on a Sunday call with reporters said the to-do lists within the order would be implemented and fulfilled over timelines ranging from 90 to 365 days, with the safety and security items facing the earliest deadlines.
While much of the order concerns AI development risks, Biden is aware of the technology's potential to benefit the public by making products better, cheaper, and more widely available.
This has been seen with the development of affordable and life-saving drugs, and the executive order states that 'The Department of Health and Human Services will also establish a safety program to receive reports of - and act to remedy - harms or unsafe healthcare practices involving AI.'
'AI is in our lives everywhere. And it's going to be even more prevalent,' Zients said.
'I think that it's an important part of making our country an even better place and making our lives better... at the same time, we've got to avoid the downsides.'
With Congress still in the early stages of debating AI safeguards, Biden's order stakes out a US perspective as countries around the world race to establish their own guidelines.
After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.
U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit that Vice President Kamala Harris plans to attend this week.