The Task Is the Core of Every Prompt
The task is the verb — the action you want the AI to perform. Summarize, write, list, compare, explain, generate, rewrite, classify, translate, debug. Without a clear task, even excellent role and context information is wasted because the model doesn't know what to produce. This sounds obvious, but a surprising number of prompts lack a clear task entirely. They provide background, express a vague desire, and then expect the model to infer the action. The model will usually infer something — but it's rarely exactly what you wanted.
Vague Tasks vs. Specific Tasks
The difference between a vague task and a specific task isn't usually the length of the instruction — it's the specificity of the verb and the scope of the action. 'Write something about email marketing' is vague. 'Write a 3-email drip sequence for re-engaging SaaS customers who signed up but never completed setup, with one email per day, starting 3 days after inactivity, each under 150 words' is specific. The second prompt has the same core action (write emails) but defines the number, subject, timing, constraints, and word count. Every one of those additions reduces the model's degrees of freedom and increases the odds that the output matches what you actually need.
How to Write a Task That Leaves No Ambiguity
A well-formed task instruction answers four questions before the model has to ask them: What action? (write, list, summarize). On what subject or material? (this article, the following code, the meeting transcript below). For what purpose or outcome? (so that a customer can understand it, so that it can be published on a career site). And in what format or quantity? (as 5 bullet points, as a 300-word paragraph, as a JSON object). You don't need to answer all four in every prompt, but the more you do, the less guessing the model has to do and the more useful the output will be.
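As a rough illustration, the four questions can be assembled mechanically into one task instruction. The helper below is a sketch, not part of any library, and the sentence template it uses is just one reasonable choice:

```python
def build_task_prompt(action, subject, purpose=None, output_format=None):
    # Answer the four questions in order: action, subject,
    # purpose (optional), and format/quantity (optional).
    prompt = f"{action} {subject}"
    if purpose:
        prompt += f" so that {purpose}"
    prompt += "."
    if output_format:
        prompt += f" Present the result as {output_format}."
    return prompt

prompt = build_task_prompt(
    "Summarize", "the meeting transcript below",
    purpose="an executive who missed the meeting can act on it",
    output_format="5 bullet points",
)
```

The point is not the string concatenation itself but the checklist it enforces: a call site that leaves `purpose` and `output_format` empty makes the missing pieces of the instruction visible.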
Breaking Complex Tasks Into Multiple Steps
For tasks with multiple distinct components, explicitly listing each step consistently outperforms bundling everything into a single instruction. Instead of 'write a complete marketing plan,' write: '1. List 5 target audience segments for this product. 2. For each segment, write one sentence describing their primary pain point. 3. Write a positioning statement that addresses the most important segment.' This approach reduces errors, makes the output easier to verify, and allows you to catch and correct mistakes at each stage before they propagate to the next. Numbered steps also help the model track its own progress through complex multi-part tasks.
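The decomposition above can be generated from a plain list of sub-tasks. This tiny helper is illustrative only (the function name is invented, not from any framework):

```python
def numbered_steps(steps):
    # Render each sub-task as an explicit, numbered instruction
    # so the model can track its progress step by step.
    return "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))

plan_prompt = numbered_steps([
    "List 5 target audience segments for this product.",
    "For each segment, write one sentence describing their primary pain point.",
    "Write a positioning statement that addresses the most important segment.",
])
```

Keeping the steps in a Python list also makes it easy to reorder, remove, or verify individual steps without rewriting the whole prompt.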
Task Scope: When Smaller Is Better
One of the most common task definition errors is making a single prompt responsible for too much. A prompt that asks for research, synthesis, structuring, writing, editing, and formatting all at once will almost always produce a mediocre result at each stage. Narrowing the scope of each task — even if it means using multiple prompts — produces better output at every step. Think of it like work delegation: you wouldn't give one person a brief and say 'turn this into a finished product.' You'd have different people handle strategy, writing, and editing. The same principle applies to prompt chains.
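A prompt chain of narrowly scoped tasks can be sketched as a loop that feeds each step's output into the next. This is a minimal sketch under one assumption: `call_model` is a stand-in for whatever API call you actually use, taking a prompt string and returning a response string.

```python
def run_chain(call_model, prompts):
    """Run narrowly scoped prompts in sequence, feeding each step's
    output into the next and keeping intermediates for review."""
    outputs = []
    previous = None
    for prompt in prompts:
        full = prompt if previous is None else f"{prompt}\n\nInput:\n{previous}"
        previous = call_model(full)
        outputs.append(previous)  # inspectable between stages
    return outputs

# Example with a stub model, mirroring the delegation analogy:
# strategy, then writing, then editing.
outputs = run_chain(
    lambda p: p.splitlines()[0],  # stub: echoes the first line back
    ["Draft the strategy.", "Write the copy.", "Edit the copy."],
)
```

Returning every intermediate output, not just the final one, is what lets you catch a weak strategy step before it contaminates the writing and editing steps.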
Task Definitions That Work for Different Output Types
Different output types benefit from different task structures. For written content, specify type, length, audience, tone, and purpose. For code, specify language, function, inputs, outputs, and any libraries to avoid. For analysis, specify what to analyze, what to look for, how many items to identify, and how to present findings. For summaries, specify what to include, what to exclude, the target length, and who the summary is for. Building up a personal library of task templates for your most common AI use cases will save you significant time and produce consistently better results than starting from scratch each time.
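A personal template library along these lines can be as simple as a dictionary keyed by output type. Everything below, the key names, placeholder names, and sentence templates, is a hypothetical example you would adapt to your own use cases:

```python
# Hypothetical template library; each template covers the attributes
# listed above for its output type.
TASK_TEMPLATES = {
    "written": ("Write a {length} {content_type} for {audience} "
                "in a {tone} tone. Purpose: {purpose}."),
    "code": ("Write a {language} function that {behavior}. "
             "Inputs: {inputs}. Outputs: {outputs}. Do not use: {avoid}."),
    "analysis": ("Analyze {material} for {focus}. Identify the top {n} "
                 "items and present them as {presentation}."),
    "summary": ("Summarize {material} for {reader}. Include {include}; "
                "exclude {exclude}. Target length: {length}."),
}

summary_prompt = TASK_TEMPLATES["summary"].format(
    material="the attached quarterly report",
    reader="a new board member",
    include="revenue trends and key risks",
    exclude="operational detail",
    length="200 words",
)
```

Because `str.format` raises a `KeyError` for any placeholder you forget to fill, the template doubles as a checklist that stops you from sending an underspecified task.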