President Joe Biden speaks as he meets with AI experts and researchers at the Fairmont Hotel in San Francisco, California, June 20, 2023.
Jane Tyska | MediaNews Group | Getty Images
President Joe Biden issued a new executive order on artificial intelligence – the first action of its kind by the US government – that requires new security assessments, guidance on equity and civil rights, and research on the impact of AI on the labor market.
While law enforcement agencies have warned that they are prepared to apply existing law to abuses of AI, and Congress has worked to learn more about the technology with an eye toward new legislation, the executive order could have a more immediate impact. Like all executive orders, it carries the “force of law,” according to a senior administration official who spoke with reporters on a call Sunday.
The White House divides the main components of the Executive Order into eight parts:
- Creating new safety standards for AI, including by requiring some AI companies to share safety test results with the federal government, directing the Commerce Department to create guidance for AI watermarking, and creating a cybersecurity program that can make AI tools to help find and fix vulnerabilities in critical software.
- Protecting consumer privacy, including by creating guidelines that agencies can use to evaluate privacy techniques used in AI.
- Advancing equity and civil rights by giving guidance to landlords and federal contractors to keep AI algorithms from furthering discrimination, and by establishing best practices on the appropriate role of AI in the justice system, including when it is used in sentencing, risk assessments and crime forecasting.
- Protecting consumers overall by directing the Department of Health and Human Services to create a program to evaluate potentially harmful AI-related health-care practices and by creating resources on how educators can responsibly use AI tools.
- Supporting workers by producing a report on the potential labor market implications of AI and studying the ways the federal government could support workers affected by labor market disruptions.
- Promoting innovation and competition by expanding grants for AI research in areas such as climate change and modernizing the criteria for highly skilled immigrant workers with key expertise to stay in the U.S.
- Working with international partners to implement AI standards around the world.
- Developing guidance for federal agencies’ use and procurement of AI and speeding up the government’s hiring of workers skilled in the field.
The order represents “the strongest set of actions ever taken by any government in the world on AI safety, security and trust,” White House deputy chief of staff Bruce Reed said in a statement.
It builds on voluntary commitments the White House previously secured from leading AI companies and represents the first major binding government action on the technology. It also comes ahead of an AI safety summit hosted by the U.K.
The senior administration official pointed to the fact that 15 major American tech companies have agreed to implement voluntary AI safety commitments, but said those commitments were “not enough” and that Monday’s executive order was a step toward concrete regulation of the technology’s development.
“Several months ago, the President directed his team to pull every lever, and that’s what this order does: bringing the power of the federal government to bear across a broad range of areas to manage AI’s risks and harness its benefits,” the official said.
Biden’s executive order requires large companies to share security test results with the U.S. government before officially releasing AI systems. It also prioritizes the National Institute of Standards and Technology’s development of standards for AI “red teaming,” meaning stress testing of defenses and potential problems within systems. The Department of Commerce will develop standards for watermarking AI-generated content.
The order also addresses training data for large AI systems, and it lays out the need to evaluate how agencies collect and use commercially available data, including data purchased from data brokers, especially when that data involves personal identifiers.
The Biden administration is also taking steps to strengthen the AI workforce. Starting Monday, the senior administration official said, workers with AI expertise will be able to find relevant job openings in the federal government on AI.gov.
The administration official said Sunday that the “most aggressive” timeline for some security aspects of the order involves a turnaround time of 90 days, and for some other aspects that time frame could be closer to a year.
Building on previous AI actions
Monday’s order follows a series of steps the White House has taken in recent months to create spaces for discussion about the pace of AI development as well as proposed policies.
Since the viral launch of ChatGPT in November 2022 – which within two months became the fastest-growing consumer application in history, according to a UBS study – the accelerating adoption of generative AI has already stoked public concerns, lawsuits and questions from lawmakers. For example, ChatGPT was criticized for toxic speech just days after Microsoft incorporated it into its Bing search engine, and popular AI image generators have come under fire for racial bias and perpetuating stereotypes.
Biden’s executive order directs the Justice Department and other federal agencies to develop standards for “investigating and prosecuting civil rights violations related to AI,” the administration official said on the call with reporters on Sunday.
“The President’s executive order requires that clear guidance be provided to landlords, federal benefit programs, and federal contractors to prevent AI algorithms from being used to exacerbate discrimination,” the official added.
In August, the White House enlisted thousands of hackers and security researchers to try to outsmart top generative AI models from leading companies in the field, including OpenAI, Google, Microsoft, Meta and Nvidia. The competition ran as part of Def Con, the world’s largest hacking conference.
“It is accurate to call this the first-ever public assessment of multiple LLMs,” a representative from the White House Office of Science and Technology Policy told CNBC at the time.
The competition followed a July meeting between the White House and seven leading AI companies, including Alphabet, Microsoft, OpenAI, Amazon, Anthropic, Inflection and Meta. Each of the companies left the meeting having agreed to a set of voluntary commitments in developing AI, including allowing independent experts to assess tools before their public debut, studying the societal risks related to AI, and allowing third parties to test their systems for vulnerabilities, as in the competition at Def Con.
WATCH: How AI could impact outsourced programmer jobs in India
Source: www.cnbc.com