
OpenAI has reportedly claimed that DeepSeek might have distilled its artificial intelligence (AI) models to build the R1 model. As per the report, the San Francisco-based AI firm stated that it has evidence that some users were using its AI models’ outputs for a competitor, which is suspected to be DeepSeek. Notably, the Chinese company released the open-source DeepSeek-R1 AI model last week and hosted it on GitHub and Hugging Face. The reasoning-focused model surpassed the capabilities of the ChatGPT-maker’s o1 AI models in several benchmarks.

OpenAI Says It Has Evidence of Foul Play

According to a Financial Times report, OpenAI claimed that its proprietary AI models were used to train DeepSeek’s models. The company told the publication that it had seen evidence of distillation from several accounts using the OpenAI application programming interface (API). The AI firm and its cloud partner Microsoft investigated the issue and blocked those accounts’ access.

In a statement to the Financial Times, OpenAI said, “We know [China]-based companies — and others — are constantly trying to distil the models of leading US AI companies.” The ChatGPT-maker also highlighted that it is working closely with the US government to protect its frontier models from competitors and adversaries.

Notably, AI model distillation is a technique used to transfer knowledge from a large model to a smaller, more efficient one. The goal is to bring the smaller model on par with, or close to, the larger model while sharply reducing computational requirements. For a sense of scale, OpenAI’s GPT-4 is reported to have roughly 1.8 trillion parameters, while DeepSeek-R1 itself is a 671-billion-parameter mixture-of-experts model whose smallest distilled variant has just 1.5 billion parameters — the kind of size gap distillation is designed to bridge.
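In the classic formulation, distillation trains the smaller model to match the larger model’s softened output distribution rather than hard labels. The sketch below is a minimal, self-contained illustration of that idea in plain Python (the function names and example logits are ours, not from any model mentioned in the article):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's -- the quantity a distilled model is trained to minimise."""
    p = softmax(teacher_logits, temperature)   # teacher "soft labels"
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss(teacher, [2.0, 1.0, 0.1])   # student mimics teacher
diverged = distillation_loss(teacher, [0.1, 1.0, 2.0])  # student disagrees
```

A student whose logits match the teacher’s incurs zero loss; the further its distribution drifts, the larger the penalty — which is what lets a 1.5-billion-parameter model absorb behaviour from a far larger one.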


When a company creates more efficient versions of its own model in-house, the knowledge transfer typically happens by using outputs from the larger model as training data for the smaller one. For instance, Meta used the Llama 3 AI model to create several coding-focused Llama models.

However, this route is not available to a competitor, which does not have access to the datasets behind a proprietary model. If OpenAI’s allegations are true, the distillation could instead have been carried out by sending large volumes of prompts to its APIs and harvesting the outputs. The resulting prompt-response pairs can then be used as supervised fine-tuning data for a base model.
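The harvesting step described above amounts to building a fine-tuning dataset from API outputs. A minimal sketch, with a hypothetical `query_model` stub standing in for any real API call (the function, filename, and JSONL fields are illustrative assumptions, not OpenAI’s actual interface):

```python
import json

def query_model(prompt):
    """Hypothetical stand-in for a call to a proprietary model's API."""
    return f"(model answer to: {prompt})"

def build_distillation_set(prompts, path="distill.jsonl"):
    """Collect prompt/response pairs in the JSONL format commonly used
    as supervised fine-tuning data for a smaller base model."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": query_model(prompt)}
            f.write(json.dumps(record) + "\n")
    return path

dataset = build_distillation_set(["Explain model distillation in one line."])
```

Each line of the resulting file is one training example; at the scale alleged — many accounts issuing many queries — such a corpus could cover enough of the larger model’s behaviour to train a competitive student.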

Notably, OpenAI has not issued a formal public statement on the matter beyond its comments to the publication. Recently, company CEO Sam Altman praised DeepSeek for creating such an advanced AI model and increasing competition in the AI space.
