Models

Fine-Tuned Model

A pretrained model whose weights have been updated on a specific dataset for a targeted task.

Full Definition

A fine-tuned model starts from a pretrained foundation model and undergoes additional training on a smaller, task-specific dataset. This adapts the model's behaviour — its tone, domain knowledge, output format, and task focus — to a particular application. Fine-tuning is more powerful than prompting for consistent style or highly specialised knowledge, but requires labelled training data and compute. Techniques range from full fine-tuning (all weights updated) to parameter-efficient methods like LoRA that update only a small adapter. Fine-tuned models can outperform much larger base models on narrow tasks because they trade breadth for depth.
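The parameter-efficiency trade-off mentioned above can be made concrete with a little arithmetic. The sketch below uses hypothetical layer dimensions (not taken from any specific model): full fine-tuning of a d × k weight matrix updates d·k parameters, while LoRA learns two low-rank factors B (d × r) and A (r × k) and computes W + BA, so only r·(d + k) parameters are trainable.

```python
# Illustrative parameter counts for full fine-tuning vs. LoRA.
# The dimensions below are hypothetical, chosen only for the example.
d, k = 4096, 4096   # shape of one weight matrix in the model
r = 8               # LoRA rank (small by design)

full_params = d * k          # full fine-tuning: every weight is trainable
lora_params = r * (d + k)    # LoRA: only the low-rank factors B and A train

print(f"full fine-tune: {full_params:,} trainable params per matrix")
print(f"LoRA (rank {r}): {lora_params:,} trainable params per matrix")
print(f"reduction: {full_params // lora_params}x")
```

At rank 8 this single matrix goes from roughly 16.8 million trainable parameters to about 65 thousand, which is why LoRA adapters can be trained and stored cheaply while the base model's weights stay frozen.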

Examples

1. Fine-tuning GPT-3.5 on 5,000 customer support transcripts so it reliably matches the company's brand voice.

2. Fine-tuning a code model on a proprietary internal codebase so it can autocomplete company-specific API calls.
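For the customer-support example, training data would typically be prepared as JSONL in the chat fine-tuning format: one JSON object per line, each holding a "messages" list of role/content pairs. The company name and message content below are invented for illustration.

```python
import json

# Hypothetical training rows in chat fine-tuning JSONL format.
# Each line is one training example: a short conversation ending
# with the assistant reply the model should learn to imitate.
rows = [
    {"messages": [
        {"role": "system", "content": "You are Acme's support assistant. Be warm and concise."},
        {"role": "user", "content": "My order hasn't arrived yet."},
        {"role": "assistant", "content": "I'm sorry about the delay! Let me check on that for you right away."},
    ]},
]

with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

A real dataset would contain thousands of such rows (the example above cites 5,000 transcripts); the assistant turns are what teach the model the brand voice.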


Related Terms

Fine-Tuning

Continuing training of a pretrained model on a smaller, task-specific dataset to adapt it to a targeted task.

LoRA (Low-Rank Adaptation)

A parameter-efficient fine-tuning method that updates only small low-rank matrices rather than the full set of model weights.

Instruction-Tuned Model

A model fine-tuned on instruction-response pairs to follow natural-language directions.