Reasoning Model
A model trained to perform extended internal reasoning before producing a response.
Full Definition
Reasoning models are language models specifically trained to generate a long internal reasoning trace, a 'chain of thought' that may span thousands of tokens, before producing a final answer. OpenAI's o1 and o3 models exemplify this paradigm: they are trained with reinforcement learning to develop reasoning strategies, 'thinking' through a problem rather than responding immediately. Reasoning models dramatically outperform standard chat models on mathematical olympiad problems, competitive programming, and multi-step logic puzzles. The trade-off is latency and cost: a reasoning model takes longer to respond and consumes more tokens per query. This represents a shift from scaling model size (more parameters at training time) to scaling compute at inference time.
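The cost side of this trade-off can be sketched with a toy billing model. The sketch assumes reasoning tokens are billed as output tokens even though they are hidden from the user (as with OpenAI's o-series); the prices and token counts below are made-up illustrative numbers, not published figures:

```python
# Toy cost model for the inference-time compute trade-off.
# Prices and token counts are illustrative assumptions only.

def query_cost(prompt_tokens: int, reasoning_tokens: int,
               answer_tokens: int, price_per_1k_output: float) -> float:
    """Hidden reasoning tokens are assumed billed as output tokens."""
    billable_output = reasoning_tokens + answer_tokens
    return billable_output / 1000 * price_per_1k_output

# A standard chat model answers directly, with no hidden trace...
chat = query_cost(200, 0, 300, price_per_1k_output=0.01)
# ...while a reasoning model may emit thousands of hidden tokens first.
reasoning = query_cost(200, 4000, 300, price_per_1k_output=0.06)
print(f"chat: ${chat:.3f}, reasoning: ${reasoning:.3f}")
```

Even at the same nominal price per token, the long hidden trace multiplies the billable output many times over, which is why reasoning models are usually reserved for problems that actually need multi-step reasoning.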
Examples
OpenAI's o1 scoring among the top 500 students in the United States on the AIME (American Invitational Mathematics Examination), a level of performance that typically requires years of dedicated maths training.
Using a reasoning model to verify a formal proof in 15 minutes that would have taken a human expert several hours.
Related Terms
Chain-of-Thought
A prompting technique that asks the model to reason step-by-step before giving a…
Large Language Model
A neural network with billions of parameters trained on text to understand and g…
Benchmark
A standardised test suite used to measure and compare AI model capabilities acro…