Since Chinese artificial intelligence (AI) start-up DeepSeek rattled Silicon Valley and Wall Street with its cost-effective models, the company has been accused of data theft through a practice that is common across the industry.
David Sacks says OpenAI has evidence that Chinese company DeepSeek used a technique called "distillation" to build a rival model.
OpenAI thinks DeepSeek may have used its AI outputs inappropriately, highlighting ongoing disputes over copyright, fair use, and training data.
White House AI czar David Sacks alleged Tuesday that DeepSeek had used OpenAI’s data outputs to train its latest models through a process called distillation.
DeepSeek’s success at learning from bigger AI models raises questions about the billions being spent on the most advanced technology.
OpenAI claims to have found evidence that Chinese AI startup DeepSeek secretly used data produced by OpenAI’s technology to improve its own AI models, according to the Financial Times. If true, DeepSeek would be in violation of OpenAI’s terms of service. In a statement, OpenAI said it is actively investigating.
What I can say is that it's a little rich for OpenAI to suddenly be so publicly concerned about the sanctity of proprietary data. Its own models were trained on vast amounts of copyrighted material scraped from the web; collectively, the contributions from copyrighted sources are significant enough that OpenAI has said it would be "impossible" to build its large language models without them.
OpenAI CEO Sam Altman downplayed the significance of a new artificial intelligence (AI) model released by Chinese startup DeepSeek on Thursday, saying it did a “couple of nice things” but little more.
After DeepSeek’s AI model shocked the world and sent markets tumbling, OpenAI says it has evidence that distillation of ChatGPT outputs was used to train the model.
OpenAI believes DeepSeek used a process called “distillation,” in which a smaller AI model improves its performance by learning from the outputs of a larger one.
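To make the term concrete, here is a minimal sketch of classic knowledge distillation in PyTorch: a small "student" network is trained to match the softened output distribution of a larger, frozen "teacher" network. The model sizes, temperature, and random data below are illustrative assumptions, not anything reported about DeepSeek's or OpenAI's actual systems.

```python
# Minimal knowledge-distillation sketch (illustrative only).
# A frozen "teacher" produces output distributions; a smaller "student"
# is trained to imitate them with a temperature-softened KL-divergence loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical teacher (large) and student (small) classifiers.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
teacher.eval()  # the teacher is frozen; only its outputs are used

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution before imitation

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

When the larger model is only reachable through an API that returns text rather than raw output probabilities, as in the scenario alleged here, the analogous move is to fine-tune the smaller model on the larger model's generated responses instead of its logits.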
The San Francisco start-up claims that its Chinese rival may have used data generated by OpenAI technologies to build new systems.