Friday, September 20, 2024


OpenAI Lowers Hype with GPT-5-Less DevDay Event This Fall

Last year, OpenAI held a splashy press event in San Francisco, during which the company announced a bevy of new products and tools, including the ill-fated App Store-like GPT Store.

A Quieter Affair This Year

This year will be a quieter affair, however. On Monday, OpenAI said it’s changing the format of its DevDay conference from a tentpole event into a series of on-the-road developer engagement sessions. The company also confirmed that it won’t release its next major flagship model during DevDay, focusing on updates to its APIs and developer services.

OpenAI's Focus on Developer Education

“We’re not planning to announce our next model at DevDay,” an OpenAI spokesperson told TechCrunch. “We’ll be focused more on educating developers about what’s available and showcasing dev community stories.”

OpenAI DevDay Events Schedule

OpenAI’s DevDay events this year will take place in San Francisco on October 1, London on October 30, and Singapore on November 21. All will feature workshops, breakout sessions, demos with OpenAI product and engineering staff, and developer spotlights. Registration costs $450 (or $0 through scholarships for eligible attendees), and applications close on August 15.

Incremental Steps in Generative AI

In recent months, OpenAI has taken more incremental steps than monumental leaps in generative AI, opting to hone and fine-tune its tools as it trains the successor to its current leading models, GPT-4o and GPT-4o mini. The company has refined its approaches to improving its models’ overall performance and preventing them from going off the rails as often as they previously did. Still, OpenAI appears to have lost its technical lead in the generative AI race, at least according to some benchmarks.

OpenAI's Challenges in Finding High-Quality Training Data

One of the reasons could be the increasing challenge of finding high-quality training data. OpenAI’s models, like most generative AI models, are trained on massive collections of web data, much of which creators now choose to gate over fears that their work will be plagiarized or that they won’t receive credit or pay. More than 35% of the world’s top 1,000 websites now block OpenAI’s web crawler, according to data from Originality.AI. And according to a study by MIT’s Data Provenance Initiative, around 25% of data from high-quality sources has been restricted from the major datasets used to train AI models.

Potential Future Data Shortage

Should the access-blocking trend continue, the research group Epoch AI predicts that developers will run out of data to train generative AI models between 2026 and 2032. That — and fear of copyright lawsuits — has forced OpenAI to enter costly licensing agreements with publishers and various data brokers.

Promising New Techniques

(OpenAI revealed in a blog post in May that it had begun training its next “frontier” model.) That is a lot to pledge, and there is high pressure to deliver: OpenAI is reportedly hemorrhaging billions of dollars training its models and hiring top-paid research staff.

Controversies and a Slower Product Cycle

OpenAI still faces many controversies, such as its use of copyrighted data for training, its restrictive employee NDAs, and its effective pushing out of safety researchers. The slower product cycle might have the beneficial side effect of countering the narrative that OpenAI has deprioritized work on AI safety in pursuit of more capable, powerful generative AI technologies.
