The ChatGPT maker said it is “reviewing indications that DeepSeek may have inappropriately distilled” its models.
Distillation is a technique for transferring the knowledge of a large "teacher" model into a smaller "student" model, typically by training the smaller model to reproduce the larger model's outputs.
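In machine-learning terms, the idea looks roughly like the sketch below. This is a minimal illustration of the standard distillation recipe, not OpenAI's or DeepSeek's actual training code; the tiny stand-in models, batch size and temperature value are assumptions made for the example.

```python
# Minimal knowledge-distillation sketch (illustrative only). A small "student"
# model is trained to match the softened output distribution of a larger
# "teacher" model, so the teacher's knowledge transfers without exposing
# its inner workings.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(128, 10)   # stand-in for a large pretrained model
student = nn.Linear(128, 10)   # stand-in for the smaller model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0  # temperature: softens the teacher's distribution (assumed value)

for _ in range(100):
    x = torch.randn(32, 128)                 # a batch of example inputs
    with torch.no_grad():
        teacher_logits = teacher(x)          # only the teacher's outputs are used
    student_logits = student(x)

    # KL divergence between the softened distributions; the T**2 factor keeps
    # gradient magnitudes comparable across temperatures (Hinton et al., 2015).
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that the student never needs access to the teacher's weights, only its outputs, which is why the technique can be applied against a model served through an API.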
“We know that groups in the [People’s Republic of China] are actively working to use methods, including what’s known as distillation, to try to replicate advanced U.S. AI models,” an OpenAI spokesperson said in a statement.
“We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the U.S. government to protect the most capable models being built here,” they added.
Distillation does not expose a model’s inner workings and can be used by developers to improve their applications, the spokesperson noted.
However, OpenAI’s terms of service bar users from taking model outputs obtained this way and using them to build competing AI products.
DeepSeek sent shock waves through the American AI industry with the release of its R1 open-source reasoning model last week.
The Chinese startup claims the model performs on par with OpenAI’s latest model, o1, and cost just $5.6 million to train using a couple of thousand reduced-capability chips.
DeepSeek’s app now sits atop Apple’s App Store after overtaking OpenAI’s ChatGPT.
White House AI and crypto czar David Sacks claimed Tuesday that there is “substantial evidence” that DeepSeek used distillation to pull information from OpenAI’s models.
“I don’t think OpenAI is very happy about this,” he told Fox News. “I think one of the things you’re going to see over the next few months is our leading AI companies taking steps to try and prevent distillation.”