Why Vague Prompts Produce Vague Results
Remember the mental model from Lesson 1: you're not asking a question of a knowledgeable entity. You're providing context to a prediction engine. The quality of what you get back is a direct function of the quality and specificity of the context you provide.
When you type "write me a summary," the AI has to make dozens of decisions on your behalf: How long? For what audience? Of what? In what tone? From what angle? It will guess — often plausibly, often wrong for your specific needs.
When you give it those decisions upfront, you get something much closer to what you actually want on the first attempt.
The Three Fundamentals
1. Role: Tell the AI who it's being
Assigning a role to the AI is one of the highest-leverage prompt techniques available to beginners. It works because role specification implicitly imports a vast set of contextual assumptions — vocabulary, tone, framing, depth, and focus — that would take many sentences to specify individually.
Weak: "Explain machine learning to me."
Strong: "You are a patient teacher explaining machine learning to a curious professional with no technical background. Use everyday analogies and avoid jargon."
The second prompt produces an explanation calibrated to your actual need. The first produces an explanation calibrated to... average text about machine learning.
2. Context: Give it what it needs to help you
AI cannot read your mind, your file system, or your organizational context. Anything it needs to give you a useful response, you have to provide in the prompt. The most common reason AI gives unhelpful responses is missing context.
Before sending a prompt, ask: what does the AI need to know that it might not have? Common missing context includes: who this is for, what format is needed, what constraints exist, what you've already tried, and what specifically wasn't working.
Weak: "Rewrite this email to be more professional."
Strong: "Rewrite this email to be more professional. The recipient is a new client who we haven't met yet. I want to sound warm but authoritative. The current version feels too casual. Keep it under 150 words. [paste email]"
3. Format: Specify what you want back
By default, AI will respond in the format that seems most typical for your type of request. Sometimes that's right. Often it isn't. You can and should specify the output format you need.
Useful format specifications:
- "Give me three options" — prevents the AI from committing to one approach
- "Use bullet points, not paragraphs" — for scannable reference material
- "Keep the total response under 200 words" — for tight constraints
- "Include a brief explanation of your reasoning" — for understanding, not just output
- "Format this as a table with columns for X, Y, and Z" — for structured data
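If you build prompts programmatically, the three fundamentals map naturally onto a small template. Here is a minimal sketch in Python; the helper name `build_prompt` and the labeled-section layout are illustrative conventions, not a required syntax — plain sentences in a single paragraph work just as well:

```python
def build_prompt(role, context, task, format_spec):
    """Assemble a prompt from the three fundamentals plus the task itself."""
    parts = [
        f"You are {role}.",        # Role: who the AI is being
        f"Context: {context}",     # Context: what it needs to know
        task,                      # The actual request
        f"Format: {format_spec}",  # Format: what you want back
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a patient teacher explaining concepts to a non-technical professional",
    context="The reader is curious about machine learning but has no background.",
    task="Explain what machine learning is.",
    format_spec="Use everyday analogies, avoid jargon, keep it under 200 words.",
)
```

The point of the template is not the code — it's that each argument forces you to make a decision you would otherwise be delegating to the model.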
The Before/After Pattern
The single most useful habit to develop is revising your first-draft prompt against these three principles before you send it. Ask yourself:
- Have I given it a role or at least a framing for who it should be?
- Have I provided all the context it needs to actually help me?
- Have I specified what I want the output to look like?
You don't need all three every time. A simple creative request may not need a format spec. An exploratory question may not need a role. But running through this checklist before sending will improve your first-attempt results significantly.
Iteration Is Not Failure
One more thing: expecting to get exactly what you want on the first attempt is an unrealistic standard. Expert AI users iterate. They send a first prompt, evaluate the output, and follow up with refinements: "Make it shorter," "That's the right direction but less formal," "Give me the third option expanded into a full paragraph."
Multi-turn conversations with AI are often more productive than trying to write the perfect single prompt. Think of it as a dialogue, not a command.
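If you work with a chat API, that dialogue is just a growing list of messages. The sketch below uses the common role/content dictionary convention; the replies shown are placeholders, and no real API call is made:

```python
# Start with a first-draft prompt, not a perfect one.
messages = [
    {"role": "user", "content": "Summarize this article in three bullet points. [article]"},
]

# The model's reply gets appended to the conversation...
messages.append(
    {"role": "assistant", "content": "- Point one\n- Point two\n- Point three"}
)

# ...and a short follow-up refines it, instead of rewriting the prompt from scratch.
messages.append(
    {"role": "user", "content": "Good direction, but make each bullet under ten words."}
)
```

Each refinement costs one short message and keeps all the earlier context in play — which is exactly why iterating is usually cheaper than perfecting the first prompt.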