
Reasoning

The capacity of an AI model to derive conclusions from premises through logical or analogical inference.

Full Definition

Reasoning in language models refers to the ability to perform multi-step inference — combining premises, applying rules or analogies, and reaching conclusions that are not directly stated in the input. LLM reasoning is commonly categorised as deductive (applying general rules to specific cases), inductive (inferring general rules from examples), abductive (finding the best explanation for observations), or analogical (transferring structure from a known domain to a new one). Modern LLMs demonstrate impressive surface-level reasoning on benchmarks but can fail on trivially modified versions of problems they solve correctly, suggesting that their 'reasoning' may partly be sophisticated pattern matching rather than genuine logical inference. Improving systematic, reliable reasoning is a central research goal.

Examples

1

A model correctly deducing that if all mammals breathe air and dolphins are mammals, then dolphins breathe air — classic syllogistic deductive reasoning.
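This kind of syllogism can be mimicked mechanically by forward chaining over explicit rules; a minimal illustrative sketch in Python (the rule and fact encodings are assumptions of this example, not a model of how an LLM works internally):

```python
# Toy deductive reasoning via forward chaining: apply "all X are Y" rules
# to membership facts until no new conclusions can be derived.

rules = [("mammal", "breathes_air")]   # rule: all mammals breathe air
facts = {("dolphin", "mammal")}        # fact: a dolphin is a mammal

def forward_chain(facts, rules):
    """Repeatedly apply rules to known facts until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, category in list(derived):
            for premise, conclusion in rules:
                fact = (subject, conclusion)
                if category == premise and fact not in derived:
                    derived.add(fact)
                    changed = True
    return derived

conclusions = forward_chain(facts, rules)
print(("dolphin", "breathes_air") in conclusions)  # → True
```

Symbolic systems like this are reliable but brittle; part of the research question is whether LLMs can match that reliability without explicit rules.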

2

An LLM failing to solve 'A bat and ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?' without chain-of-thought prompting; the intuitive but wrong answer is $0.10, while the correct answer is $0.05.
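The algebra behind this classic trap can be checked directly; a minimal sketch in plain Python (variable names are illustrative):

```python
# Bat-and-ball problem: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05.
total = 1.10
difference = 1.00
ball = (total - difference) / 2   # 0.05, not the intuitive 0.10
bat = ball + difference           # 1.05
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```

The intuitive answer of $0.10 fails the second constraint (the bat would then cost only $0.90 more than the ball), which is exactly the check that step-by-step reasoning makes explicit.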

Apply this in your prompts

PromptITIN automatically uses techniques like Reasoning to build better prompts for you.


Related Terms

Chain-of-Thought

A prompting technique that asks the model to reason step-by-step before giving a final answer.

Emergent Behaviour

Capabilities that appear suddenly in large models without being explicitly trained for.

Tree of Thoughts

A framework that explores multiple reasoning branches in parallel and selects the most promising one.