OpenAI o3-mini: Smarter AI for STEM at Lower Cost
- OpenAI o3-mini is a cost-effective AI model built for STEM reasoning.
- Available in ChatGPT and API, replacing OpenAI o1-mini with faster responses and higher accuracy.
- Supports function calling, structured outputs, and different reasoning levels.
- Outperforms previous small models in math, science, and coding benchmarks.
- Free plan users can try it by selecting ‘Reason’ in the message composer.
- Offers a "high" mode for even stronger reasoning, available to Pro users.
- Improved latency, with 24% faster response time compared to o1-mini.
- Users are already testing it and sharing their thoughts online.

OpenAI just dropped o3-mini—their latest, most cost-friendly AI built for math, science, and coding. It’s fast, sharp, and way more capable than past small models, making it a solid choice for developers and ChatGPT users alike.
Starting today, ChatGPT Plus, Team, and Pro users can use o3-mini, with Enterprise access following in a week. Free users can also try it for the first time by selecting ‘Reason’ in ChatGPT. For developers, it’s rolling out in the Chat Completions API, Assistants API, and Batch API.
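For developers, switching over is mostly a matter of pointing an existing Chat Completions request at the new model. Here's a minimal sketch of what that request body might look like, assuming the standard Chat Completions shape; the prompt and the `reasoning_effort` value chosen are illustrative, not from the announcement.

```python
import json

# Hypothetical Chat Completions request body targeting o3-mini.
# "reasoning_effort" selects how hard the model "thinks":
# "low" favors speed, "high" favors accuracy.
request_body = {
    "model": "o3-mini",
    "reasoning_effort": "medium",  # "low" | "medium" | "high"
    "messages": [
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Factor x^2 - 5x + 6."},
    ],
}

# Serialized as it would be POSTed to the Chat Completions endpoint.
payload = json.dumps(request_body)
print(payload)
```

Note the `developer` role in place of the older `system` role: developer messages are one of the features o3-mini supports, as covered below.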
What Makes o3-mini Special?
This thing is a powerhouse for STEM. It handles complex math, PhD-level science, and competition coding like a pro, with better accuracy and fewer errors than OpenAI o1-mini. It’s also flexible: users can choose between low, medium, and high reasoning effort, letting the model “think harder” when needed or prioritize speed when time matters.
Key upgrades:
- Stronger logic and problem-solving for STEM fields.
- Supports function calling, structured outputs, and developer messages.
- Three reasoning modes: low, medium, and high, balancing speed and accuracy.
- No vision support—use OpenAI o1 for anything involving images.
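Structured outputs and reasoning effort pair naturally: dial the effort up for a hard problem, and constrain the reply to a fixed JSON shape so downstream code can parse it. A minimal sketch of such a request follows; the schema name and fields are made up for illustration, while the `response_format` / `json_schema` shape follows the structured-outputs convention in the Chat Completions API.

```python
import json

# Hypothetical structured-outputs request for o3-mini: high reasoning
# effort for a tricky problem, plus a JSON schema the reply must match.
# The "quadratic_roots" schema is illustrative, not from the announcement.
request_body = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # think harder on this one
    "messages": [
        {"role": "user", "content": "Solve x^2 - 5x + 6 = 0."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "quadratic_roots",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "roots": {"type": "array", "items": {"type": "number"}},
                },
                "required": ["roots"],
                "additionalProperties": False,
            },
        },
    },
}

print(json.dumps(request_body, indent=2))
```

With `strict` set, the model's reply is constrained to valid JSON matching the schema, so the caller can `json.loads` it without defensive parsing.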
How Well Does It Perform?
In tests, o3-mini outperformed o1-mini across math, science, and coding challenges. With medium reasoning, it matches o1’s performance but responds faster. At high reasoning effort, it pushes past o1-mini and o1, making it OpenAI’s best small model yet.
Numbers don’t lie:
- AIME math accuracy: 83.6% at high reasoning.
- PhD science accuracy: 77.0% at high reasoning.
- Competitive coding Elo: 2073—stronger than previous small models.
- 39% fewer major errors in real-world testing.
Early testers are already putting o3-mini through its paces, and opinions are rolling in.
@NickADobos shared his first take:
"o3-mini definitely hallucinating some Swift APIs. That being said, it's pretty fast, and I think comparable or better than DeepSeek R1. For a low-cost mini model, pretty dang good!"
He also ranked it at the top of his budget-friendly AI list:
"My low-cost tier list is now: o3 mini > DeepSeek R1 > Claude Haiku 3.5."
@mckaywrigley was blown away by the cost savings:
"I replaced every AI agent & workflow I have running OpenAI’s o1 model with new o3-mini. They all still work, and some work better. But for 9x cheaper and 4x faster. They are significantly under-hyping this model—it’s absolutely unbelievable."
Faster and Smoother at Lower Cost
o3-mini isn’t just smarter—it’s quicker. Responses are 24% faster than o1-mini, with an average reply time of 7.7 seconds. The time to first token is also cut by 2500ms, making interactions feel way snappier.
This launch continues OpenAI’s trend of making powerful AI cheaper. Since GPT-4, they’ve cut per-token pricing by 95%, keeping strong reasoning affordable for developers and businesses.
Published: Feb 1, 2025 at 12:07 PM