OpenAI’s Project Strawberry set to bring reasoning to AI

Published 17 Jul 2024

Image: An artist’s illustration of artificial intelligence (AI), representing how machine learning is inspired by neuroscience and the human brain. Created by Novoto Studio.

A new era may be dawning in artificial intelligence (AI), as OpenAI is reportedly working on a project to equip its models with unprecedented reasoning capabilities.

Code-named “Strawberry,” the project seeks to enable OpenAI’s ChatGPT and other services to perform “deep research” that will let them not only generate answers to prompts but also plan ahead and run independent searches, according to Reuters.

“We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time,” said an OpenAI spokesperson when asked about Project Strawberry.

The Microsoft-backed company has not directly addressed questions about the project and is reportedly keeping the specific details under wraps even within the firm.

Peeking through the leak

Although developers at OpenAI remain tight-lipped, a document describing Strawberry’s goals was leaked to Reuters, though the news agency has not said how it obtained the document.

According to the exclusive piece the news agency published online, Project Strawberry involves “post-training” OpenAI’s generative AI tools: fine-tuning existing models in specific ways after they have already been “trained” on large sets of generalized data.
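To make the idea concrete, here is a minimal sketch of what post-training a pretrained language model can look like in practice, using the open-source Hugging Face transformers library with GPT-2 and a toy dataset purely as stand-ins; the report does not describe OpenAI’s actual setup.

```python
# A minimal sketch of "post-training": taking an already-pretrained language
# model and fine-tuning it on a small, task-specific dataset. GPT-2 and the
# toy examples below are stand-ins, not details from the Reuters report.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder for any pretrained base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny illustrative dataset of reasoning-style completions.
examples = [
    "Q: What is 2 + 2? Think step by step: 2 + 2 = 4. A: 4",
    "Q: Is 7 prime? 7 has no divisors other than 1 and 7. A: yes",
]
batch = tokenizer(examples, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few gradient steps, purely for illustration
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```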

Strawberry is also expected to share similarities with a Stanford-developed method called “Self-Taught Reasoner,” or “STaR,” which allows models to generate their own training data. According to one of its creators, Professor Noah Goodman, STaR could in principle lift AI to higher levels of intelligence, possibly beyond that of humans.
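The published STaR paper (Zelikman et al., 2022) describes a simple bootstrap loop, sketched below; `generate` and `finetune` are hypothetical placeholders for model calls, and this is neither OpenAI nor Stanford code.

```python
# A sketch of one STaR iteration, following the published paper
# (Zelikman et al., 2022). `generate` and `finetune` are hypothetical
# placeholders for real model calls.
def star_iteration(model, dataset, generate, finetune):
    new_training_data = []
    for question, correct_answer in dataset:
        # Ask the model to produce a step-by-step rationale, then an answer.
        rationale, answer = generate(model, question)
        if answer != correct_answer:
            # "Rationalization": give the correct answer as a hint and let
            # the model work out a rationale that justifies it.
            rationale, answer = generate(model, question, hint=correct_answer)
        if answer == correct_answer:
            # Keep only self-generated rationales that reached the right answer.
            new_training_data.append((question, rationale, answer))
    # Fine-tune the model on its own successful reasoning traces.
    return finetune(model, new_training_data)
```

The key design choice is the filter: the model trains only on reasoning traces that led to verified answers, so each iteration can bootstrap slightly better reasoning from the last.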

Further, OpenAI hopes to make its technologies capable of performing long-horizon tasks (LHT): complex tasks that require an AI to plan ahead and take independent actions over an extended period. That capability could then be used to conduct research by navigating the Internet autonomously with the help of a “computer-using agent.”
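OpenAI has not disclosed how such an agent would work, but the general pattern in the field is a plan-act-observe loop. The sketch below stubs out both the model and the search tool purely for illustration.

```python
# A minimal plan-act-observe loop for a long-horizon task. The model and the
# web-search tool are stubbed out for illustration; OpenAI's actual agent
# design has not been published.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "search" or "finish"
    argument: str  # a search query, or the final answer

def stub_llm(history: list[str]) -> Action:
    # Stand-in for a real model call: search once, then wrap up.
    if not any(line.startswith("Observation:") for line in history):
        return Action("search", "recent work on AI reasoning")
    return Action("finish", "summary written from the gathered observations")

def stub_search(query: str) -> str:
    return f"stub results for '{query}'"  # stand-in for a real web search

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = stub_llm(history)      # the model proposes the next step
        if action.kind == "finish":
            return action.argument      # task complete
        observation = stub_search(action.argument)
        history.append(f"Observation: {observation}")  # feed results back in
    return "step budget exhausted"

print(run_agent("research how AI models might plan ahead"))
```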

With the breakthroughs expected from Project Strawberry, the company has set its sights on using AI to do the work of machine learning and software engineers.

The search for reasoning

Project Strawberry, albeit a work in progress, has attracted considerable attention in the AI landscape because of its promise of bringing enhanced reasoning to artificial intelligence.

Amid rapid advancements in generative AI technology, these systems have long been limited by weak reasoning. When it comes to recognizing logical fallacies, playing tic-tac-toe, or answering common-sense questions, AI often produces hallucinations and unreliable information.

Now, OpenAI is working to improve reasoning in its AI models, which it sees as the key to unlocking abilities that would make AI tools more useful across broader disciplines, from the sciences to engineering.

Big companies like Google, Meta, and Microsoft, as well as academic AI research laboratories, are also exploring different approaches to this problem. However, not all researchers agree that long-term planning can be built into large language models (LLMs). Meta’s Yann LeCun, for example, one of the pioneers of modern AI, has gone on record multiple times arguing that LLMs are incapable of human-like reasoning.