OpenAI has recently launched its new artificial intelligence model, o1. While the name might seem arbitrary, it embodies OpenAI's ambitious vision for the future of AI. In essence, o1 is a sophisticated "reasoning" model built to tackle complex problems, working through them far faster than a human could. However, it costs considerably more to use than previous models, which may be a significant consideration for users.
For those following developments in the AI sector, o1 is the highly anticipated "Strawberry" model. Alongside o1, OpenAI has introduced o1-mini, a more affordable version with somewhat pared-down capabilities. On Poe, the multi-AI subscription platform, a single o1 interaction costs roughly 25,000 points, reflecting its advanced capabilities.
While the model is still in testing, users are limited to three interactions per day, and each message consumes about 50 times as many points as the standard GPT-4 model. Despite these constraints, many users are eager to explore what o1 has to offer. Note, too, that responses arrive more slowly, so expect longer waits. We put o1 through a series of logic and math questions and compared its performance with that of ChatGPT-4, focusing on its reasoning skills.
Test One: Which is greater, 9.11 or 9.9? This is a classic pitfall for GPT models, which often conclude incorrectly that 9.11 is larger. ChatGPT-4 failed the test, but o1 correctly identified 9.9 as the larger number and went on to explore readings in which the comparison could go the other way, such as version numbers, where 9.11 comes after 9.9. This nuanced handling showcases o1's logical reasoning.
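For readers who want to double-check the arithmetic themselves, here is a small Python sanity check (not part of either model's output) showing why the two readings diverge:

```python
from decimal import Decimal

# Compare 9.11 and 9.9 as exact decimal numbers rather than strings,
# which is where language models tend to slip ("11 > 9" digit by digit).
a, b = Decimal("9.11"), Decimal("9.9")
print(a > b)      # False: 9.11 < 9.9 numerically
print(max(a, b))  # 9.9 is the larger value

# Read as software version numbers, however, the ordering flips.
version_a, version_b = (9, 11), (9, 9)
print(version_a > version_b)  # True: version 9.11 comes after version 9.9
```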
Test Two: A cup containing a ring is moved from the living room table to the study, then to the bedroom, where the cup is turned upside down and then set upright again before finally being placed back on the living room table. Where is the ring now? Compared with ChatGPT-4, o1 laid out a much clearer and more precise chain of reasoning.
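The puzzle is really a state-tracking exercise, and the sketch below shows one way to model it. It assumes the ring falls out the moment the cup is turned upside down; the event and room names are illustrative and not taken from either model's transcript.

```python
# Minimal state-tracking sketch of the cup-and-ring puzzle.
cup_room = "living room"
ring_room = "living room"
ring_in_cup = True

events = [
    ("move", "study"),
    ("move", "bedroom"),
    ("overturn", None),   # cup tipped upside down in the bedroom
    ("restore", None),    # cup set upright again
    ("move", "living room"),
]

for action, room in events:
    if action == "move":
        cup_room = room
        if ring_in_cup:          # the ring travels only while it is in the cup
            ring_room = room
    elif action == "overturn" and ring_in_cup:
        ring_in_cup = False      # assumption: the ring drops out where the cup is
        ring_room = cup_room

print(f"Cup: {cup_room}, ring: {ring_room}, ring in cup: {ring_in_cup}")
# Under this assumption: Cup: living room, ring: bedroom, ring in cup: False
```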
Test Three: On a challenging math question from the 2022 college entrance examination, o1's derivation closely followed the official solution, while ChatGPT-4 made errors along the way, signaling a key difference in their mathematical reasoning.
The release of o1 marks a significant step for OpenAI toward human-like intelligence, albeit at a relatively high cost. With o1, AI can not only assist with coding but also tackle challenges that require deep analytical thinking. Accessing o1 through the API, however, comes with steep fees of $15 per million input tokens and $60 per million output tokens, well above the more economical pricing of its predecessors.
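As a rough illustration of what those rates mean in practice, the sketch below estimates the cost of a single request at the quoted prices. The token counts are made-up placeholders, and any hidden reasoning tokens the model produces would push the real figure higher.

```python
# Back-of-the-envelope cost estimate at the quoted o1 API rates:
# $15 per million input tokens, $60 per million output tokens.
INPUT_RATE = 15 / 1_000_000    # dollars per input token
OUTPUT_RATE = 60 / 1_000_000   # dollars per output token

def o1_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request (hypothetical helper)."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 1,000-token reply (placeholder numbers).
print(f"${o1_request_cost(2_000, 1_000):.4f}")  # $0.0900
```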
OpenAI's Jerry Tworek has said that o1's training differs fundamentally from that of earlier models, built on a new optimization algorithm and specially curated datasets. Whereas traditional GPT models imitate patterns in their training data, o1 is trained with reinforcement learning, improving through a reward-and-correction mechanism. It also employs a "chain of thought" methodology that mirrors the way humans work through problems step by step.
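To make the "chain of thought" idea concrete, here is a heavily simplified sketch of the prompting pattern it generalizes: the model is nudged to write out intermediate steps before committing to an answer. The `model` callable, the prompt wording, and the `FINAL:` convention are all assumptions for illustration; OpenAI has not published o1's actual training or inference loop.

```python
def solve_with_thought_chain(model, question: str, max_steps: int = 5):
    """Ask `model` (any callable mapping a prompt string to a reply string)
    to reason step by step, stopping once it emits a 'FINAL:' answer.
    Illustrative pattern only, not OpenAI's implementation."""
    steps = []
    for i in range(max_steps):
        prompt = (
            f"Question: {question}\n"
            + "".join(f"Step {j + 1}: {s}\n" for j, s in enumerate(steps))
            + f"Step {i + 1} (write 'FINAL: <answer>' when you are done):"
        )
        reply = model(prompt).strip()
        if reply.upper().startswith("FINAL:"):
            return reply[len("FINAL:"):].strip(), steps
        steps.append(reply)
    return None, steps  # ran out of steps without a final answer
```

Plugging in a stub such as `model = lambda prompt: "FINAL: 42"` returns immediately; the point is the shape of the loop, in which each new step sees all the steps written so far.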
Officially, o1 has shown substantial advantages over GPT-4 in various areas, excelling at coding and mathematical problem-solving while clearly articulating its reasoning. OpenAI's chief research officer, Bob McGrew, joked that o1 would likely outperform him in AP calculus, even though he minored in mathematics in college.
Testing o1 against an International Mathematical Olympiad qualification exam revealed an impressive 83% accuracy rate, compared to GPT-4's 13%. Despite these achievements, o1's limitations are apparent: while it excels at complex reasoning, it lags behind GPT-4 in broader world knowledge and lacks features such as web browsing and file handling.
In conclusion, OpenAI views o1 not merely as a model but as a new capability representing a fresh chapter in AI evolution. The name "o1" symbolizes this innovative phase, marking a departure from previous naming conventions, which McGrew candidly acknowledged were less successful.
Looking ahead, the advancements in reasoning capabilities signal a promising trajectory for AI, poised to assist in decision-making, innovation, and addressing significant challenges across various sectors, including education, healthcare, and scientific research. As technology progresses and costs decrease, AI is expected to play an increasingly vital role, becoming a powerful driver of societal advancement.