Large language models (LLMs) can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that ...
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models. Millions of images of passports, credit cards ...
Microsoft and Tsinghua University have developed a 7B-parameter AI coding model that outperforms 14B rivals using only ...
AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology ...
LinkedIn user data is being used to train artificial intelligence models, leading some social media users to call out the company for opting members in ...
Is it possible for an AI to be trained solely on data generated by another AI? It might sound like a harebrained idea, but it is one that has been around for quite some time, and as new, real data is ...
As chief data officer for the Cybersecurity and Infrastructure Security Agency, Preston Werntz has made it his business to understand bias in the datasets that fuel artificial intelligence systems.
If left unchecked, "model collapse" could make AI systems less useful, and fill the internet with incomprehensible babble.
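The mechanism behind model collapse can be illustrated with a toy simulation (a sketch for intuition only, not drawn from any of the studies above): repeatedly fit a simple model to samples drawn from the previous generation's fitted model, and watch the distribution's diversity shrink. The function name and parameters below are hypothetical.

```python
import random
import statistics

def train_on_own_output(generations=200, n_samples=20, seed=0):
    """Toy illustration of model collapse.

    Each "generation" fits a Gaussian to a finite sample drawn from the
    previous generation's fitted Gaussian. Finite-sample noise biases the
    fitted standard deviation downward on average, so diversity (sigma)
    decays as synthetic data feeds back into training.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        # Draw a finite synthetic dataset from the current model ...
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ... then refit the model on that synthetic data alone.
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        history.append(sigma)
    return history

history = train_on_own_output()
print(f"sigma: generation 0 = {history[0]:.3f}, "
      f"final generation = {history[-1]:.6f}")
```

With small per-generation samples, the fitted spread collapses toward zero after enough iterations; larger samples slow the decay but do not eliminate the drift.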
From boardroom bedlam to courtroom drama, Sam Altman has had a tumultuous three months. In December, the New York Times filed a federal lawsuit against OpenAI, alleging that the company infringed on ...