
Instruction-Tuned Model

A model fine-tuned on instruction-response pairs to follow natural-language directives reliably.

Full Definition

Instruction tuning adapts a base language model by training it on a curated dataset of (instruction, response) pairs that cover diverse tasks — summarisation, translation, question answering, coding, and more. After instruction tuning, the model follows natural-language commands rather than just continuing text. This was the key insight behind InstructGPT (2022) and is now standard practice. Instruction-tuned models are dramatically more useful for end-users because they generalise instruction-following to tasks not in the training set. Most commercially deployed models — ChatGPT, Claude, Gemini — are instruction-tuned versions of their underlying base models.
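A minimal sketch of how one (instruction, response) pair becomes a supervised training example. The `### Instruction:` template markers and the `build_example` helper are illustrative assumptions, not any specific model's actual chat format; real pipelines use model-specific templates and tokenised loss masks.

```python
# Illustrative sketch: turning an (instruction, response) pair into a
# fine-tuning example. Template markers here are hypothetical.

def build_example(instruction: str, response: str) -> dict:
    """Format one instruction-response pair and record where the
    supervised (loss-bearing) portion begins."""
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    full_text = prompt + response
    return {
        "text": full_text,
        # During instruction tuning, loss is typically computed only on
        # the response tokens, so the model learns to answer the
        # instruction rather than to regenerate it.
        "loss_start": len(prompt),
    }

example = build_example(
    "List five benefits of meditation",
    "1. Reduced stress\n2. Better focus",
)
# Only the slice from loss_start onward is supervised:
supervised_part = example["text"][example["loss_start"]:]
```

Training on many such pairs across diverse tasks is what shifts a base model from text continuation to instruction following.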

Examples

1. InstructGPT responding helpfully to 'List five benefits of meditation' instead of just continuing the sentence fragment.

2. Llama 3 Instruct following the instruction 'Rewrite this paragraph more formally' without needing few-shot examples.


Related Terms

Instruction Tuning

Supervised fine-tuning on diverse instruction-response pairs to improve a model's ability to follow instructions.


Base Model

A model trained only on next-token prediction over a large corpus, before any instruction tuning or alignment.


Fine-Tuned Model

A pretrained model whose weights have been updated on a specific dataset for a target task or domain.
