
Potential cuts at AI Safety Institute stoke concerns in tech industry

Potential cuts to the U.S. AI Safety Institute (AISI) are causing alarm among some in the technology space who fear the development of responsible artificial intelligence (AI) could be at risk as President Trump works to downsize the federal government.

The looming layoffs at the National Institute of Standards and Technology (NIST) could reportedly impact up to 500 staffers in the AISI or Chips for America, amping up long-held suspicions the AISI could eventually see its doors shuttered under Trump’s leadership.  

Since taking office last month, Trump has sought to switch the White House tone on AI development, prioritizing innovation and maintaining U.S. leadership in the space.  

Some technology experts say the potential cuts undermine this goal and could impede America’s competitiveness in the space.  

“It feels almost like a Trojan horse. Like, the exterior of the horse is beautiful. It’s big and this message that we want the United States to be the leaders in AI, but the actual actions, the [goal] within, is the dismantling of federal responsibility and federal funding to support that mission,” said Jason Corso, a robotics, electrical engineering and computer science professor at the University of Michigan.  

The AISI was created under the Commerce Department in 2023 in response to then-President Biden’s executive order on AI. The order, which Trump rescinded on his first day in office, created new safety standards for AI among other things. 

The institute is responsible for developing the testing, evaluations and guidelines for what it calls “trustworthy” AI. 

AISI and Chips for America — both housed under NIST — could be “gutted” by layoffs aimed at probationary employees, Axios reported last week.

Some of these employees received verbal notices last week about upcoming terminations, though a final decision on the scope of the cuts had not yet been made, Bloomberg reported, citing anonymous sources.  

Neither the White House nor the Commerce Department responded to The Hill’s request for comment Monday. 

The push comes as Trump’s Department of Government Efficiency (DOGE) panel, led by tech billionaire Elon Musk, takes a sledgehammer to the federal government and calls for the layoffs of thousands of federal employees to cut down on spending.  

Jason Green-Lowe, the executive director for the Center for AI Policy, noted the broader NIST and the AISI are already “seriously understaffed,” and any cuts may jeopardize the country’s ability to create not only responsible, but effective and high-performing AI models.  

“Nobody wants to pay billions of dollars for deploying AI in a critical use case where the AI is not reliable,” he explained. “And how do you know that your AI is reliable? Do you know it’s reliable because somebody in your company’s marketing department told you?” 

“There needs to be some kind of quality control that goes beyond just the individual company that’s obviously under tremendous pressure to ship and get to market before their competitors,” Green-Lowe continued.  

Leading AI firms including OpenAI and Anthropic have agreements allowing their models to be used for research at the AISI, including studying the risks that come with the emerging tech. 

The AISI’s job revolves around standards development. Despite common misconceptions, it is not a regulatory agency and cannot impose regulations on the industry under the current structure.  

Rumors have floated the institute will eventually shut down under Trump, and Director Elizabeth Kelly stepped down earlier this month. The institute was also reportedly not included in the U.S. delegation to the AI Action Summit in Paris. 

By cutting back or completely closing the tech institute, some tech experts worry private companies’ safety and trust goals will fall by the wayside.  

“There is really no direct incentive for a company to worry about safe AI as long as users will pay money for their product,” said Corso, who is also the co-founder and CEO of computer vision startup Voxel51.  

Trump has made clear he wants the U.S. to ramp up AI development in the coming months. 

One of the president’s first actions back in office last month was the announcement of a $500 billion investment into building AI infrastructure with the help of OpenAI, SoftBank and Oracle.  

Meanwhile, the White House Office of Science and Technology put out a request for information on the development of AI to create an “AI Action Plan” later this year. 

Vice President Vance doubled down on the administration’s stance earlier this month, slamming “excessive regulation” in a speech at the AI Action Summit in Paris. That followed Trump’s executive order last month to remove “barriers” to U.S. leadership in the space.  

And Commerce Secretary Howard Lutnick recommended developing AI standards at NIST, comparing it to the department’s work on cyber technology and rules.  

The prospective layoffs or funding cuts would contradict the administration’s remarks and moves so far and could hinder America’s competitive edge, various industry observers told The Hill.  

“If we’re going to be doing all of this investment in AI, then we need a proportional increase in the investment in the people who are doing guidelines and standards and guardrails,” Green-Lowe said. “Instead, we’re throwing out some of the best technical talent.” 

“It weakens our competitive position,” he added. “If the government is serious about being in a tight race with China or others, if they’re serious that we need every advantage we can get … one of those advantages would be leading the way on development.” 

Many of these probationary employees are “where a lot of the AI talent is,” given the increasing interest in AI over the past year, Eric Gastfriend, the executive director of nonprofit Americans for Responsible Innovation, told The Hill.  

The global AI race heated up over the past few months, especially last month, after the high-performing and cheaply built Chinese AI model DeepSeek took the internet and stock markets by storm.  

“We want to have a clear picture of where China is on this technology, and the AI Safety Institute has the technical talent to be able to evaluate models like DeepSeek,” Gastfriend said, adding that the institute is “getting understandings and evaluations of … the capabilities of models and what dangers and threats do they pose.”  

David Sacks, Trump’s AI and crypto czar, called DeepSeek a “wake-up call” for AI innovation but brushed off concerns it will outperform American-made models.  

The U.S. has repeatedly tried to ensure the production of AI-powering technology, most notably chip manufacturing, remains in America. During his confirmation hearing last month, Lutnick pledged to take an aggressive approach toward chips production. 

Still, some in the AI space do not think innovation of models or equipment will come to a major halt if these layoffs take place, underscoring the continued debate around the path forward.  

“I have a lot of confidence in the private sector to innovate, and I don’t believe that we need the government to do the research for us or to fund research, except in special cases,” said Matt Calkins, the co-founder and CEO of cloud computing and software company Appian.  

Today, AI is “the subject of more frantic private sector investment than anything since railroads,” Calkins added. “We absolutely don’t need the government to do any innovation for us.”  

He further brushed aside concerns there will be an “immediate” risk to the safety of AI development.  

“All AI is converging to the same place, and it’s a tool that’s very valuable, and you can do some bad things with it, you can do some good things with it,” he said, adding, “We know the situation across the industry, and when danger is seen, then we will doubtless wish to address it.” 

Should the layoffs take place, or the institute lose funding, some experts suggested the fast-moving DOGE efforts may require corrections down the road to address what Green-Lowe described as “unintended consequences.”  
