OpenAI Reportedly Hitting the Law of Diminishing Returns as It Pours More Computing Resources Into Its AI

This could be a big problem.

Age of Wonders

Reports are emerging that OpenAI is hitting a wall as it continues to pour more computing power into its much-hyped large language models (LLMs), like the ones powering ChatGPT, in an attempt to produce smarter outputs.

AI models need lots of training data and computing power to run at scale. But in a recent interview with Reuters, recently departed OpenAI co-founder Ilya Sutskever claimed that the firm’s latest tests aimed at scaling up its models showed that those efforts have stagnated.

“The 2010s were the age of scaling, now we’re once again back in the age of wonder and discovery,” Sutskever, a faithful believer in the imminent arrival of the technology called artificial general intelligence (AGI), or human-level AI, told Reuters. “Everyone is looking for the next thing.”

While it’s unclear exactly what that “next thing” might be, Sutskever’s admission, coming less than a year after he moved to oust OpenAI CEO Sam Altman and later stepped back ahead of his final departure, seems to dovetail with other recent claims and findings: AI companies, and OpenAI in particular, are running up against the law of diminishing returns.

Limited Jumps

Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the kind of “leaps” users have come to expect in the wake of the game-changing release of ChatGPT in November 2022.

This slowdown appears to be testing the core belief at the heart of the argument for AI scaling: that as long as there is ever more data and computing power to feed the models (a big “if,” given that firms are already running out of training data and consuming electricity at unprecedented rates), those models will keep improving, or “scaling,” at a consistent rate.
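To make the “diminishing returns” framing concrete, here is a minimal sketch of the kind of power-law scaling curve described in the scaling-law literature. The constants and compute figures are hypothetical placeholders, not OpenAI’s actual numbers; the point is only that each equal multiplication of compute buys a smaller absolute improvement than the last.

```python
# Minimal illustration of a power-law scaling curve and diminishing returns.
# All constants below are hypothetical placeholders chosen for demonstration;
# they are not fitted to any real model or to OpenAI's systems.

A = 10.0             # hypothetical scale coefficient
ALPHA = 0.05         # hypothetical scaling exponent
L_IRREDUCIBLE = 1.7  # hypothetical irreducible loss floor

def predicted_loss(compute: float) -> float:
    """Loss predicted by a generic power law for a given compute budget."""
    return A * compute ** -ALPHA + L_IRREDUCIBLE

# Each step multiplies compute by 10x, yet the absolute gain keeps shrinking.
previous = predicted_loss(1e21)
for compute in (1e22, 1e23, 1e24, 1e25):
    current = predicted_loss(compute)
    print(f"compute {compute:.0e}: loss {current:.3f} "
          f"(gain from this 10x step: {previous - current:.3f})")
    previous = current
```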

In response to this latest news from The Information, data scientist Yam Peleg quipped on X that another cutting-edge AI firm had “hit a HUGE unexpected wall of diminishing returns while trying to brute force better results by training for longer and using more and more data.”

Peleg’s comment may be just gossip, but researchers have been warning for years now that LLMs would eventually hit this wall. Given the insatiably high demand for powerful AI chips, and the fact that these companies are now training their models on data generated by artificial intelligence, you don’t have to be a machine learning expert to wonder whether the low-hanging fruit is running out.

“I think it’s safe to assume that all the major players have reached the limits of training longer and collecting more data,” Peleg continued. “It’s all about data quality now, and that takes time.”

More on AI crises: Sam Altman Says The Most Important Thing He’s Excited About Next Year Is Achieving AGI