Why This Matters

Most people prompt AI the way they type a Google search — short, keyword-heavy, vague. This produces mediocre results and leads people to conclude that "AI isn't that useful." The problem isn't the AI. It's the input. Prompting is a skill, and like all skills, it has a learning curve. This lesson gives you the fundamentals: three changes to how you write prompts that will immediately produce better results.

The Concept

Why Vague Prompts Produce Vague Results

Remember the mental model from Lesson 1: you're not asking a question of a knowledgeable entity. You're providing context to a prediction engine. The quality of what you get back is a direct function of the quality and specificity of the context you provide.

When you type "write me a summary," the AI has to make dozens of decisions on your behalf: How long? For what audience? Of what? In what tone? From what angle? It will guess — often plausibly, often wrong for your specific needs.

When you give it those decisions upfront, you get something much closer to what you actually want on the first attempt.

The Three Fundamentals

1. Role: Tell AI who it's being

Assigning a role to the AI is one of the highest-leverage prompt techniques available to beginners. It works because role specification implicitly imports a vast set of contextual assumptions — vocabulary, tone, framing, depth, and focus — that would take many sentences to specify individually.

Weak: "Explain machine learning to me."

Strong: "You are a patient teacher explaining machine learning to a curious professional with no technical background. Use everyday analogies and avoid jargon."

The second prompt produces an explanation calibrated to your actual need. The first produces an explanation calibrated to... average text about machine learning.

2. Context: Give it what it needs to help you

AI cannot read your mind, your file system, or your organizational context. Anything it needs to give you a useful response, you have to provide in the prompt. The most common reason AI gives unhelpful responses is missing context.

Before sending a prompt, ask: what does the AI need to know that it might not have? Common missing context includes: who this is for, what format is needed, what constraints exist, what you've already tried, and what specifically wasn't working.

Weak: "Rewrite this email to be more professional."

Strong: "Rewrite this email to be more professional. The recipient is a new client who we haven't met yet. I want to sound warm but authoritative. The current version feels too casual. Keep it under 150 words. [paste email]"

3. Format: Specify what you want back

By default, AI will respond in the format that seems most typical for your type of request. Sometimes that's right. Often it isn't. You can and should specify the output format you need.

Useful format specifications:

  • "Give me three options" — prevents AI from committing to one approach
  • "Use bullet points, not paragraphs" — for scannable reference material
  • "Keep the total response under 200 words" — for tight constraints
  • "Include a brief explanation of your reasoning" — for understanding, not just output
  • "Format this as a table with columns for X, Y, and Z" — for structured data
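For readers who build prompts in code, the three fundamentals can be sketched as a simple template. This is a minimal illustration, not a library API; the `build_prompt` helper and its field labels are hypothetical.

```python
def build_prompt(role: str, context: str, task: str, format_spec: str) -> str:
    """Assemble the three fundamentals (plus the task itself) into one prompt."""
    return "\n\n".join([
        f"You are {role}.",        # Role: who the AI is being
        f"Context: {context}",     # Context: what it needs to know to help
        f"Task: {task}",           # The request itself
        f"Format: {format_spec}",  # Format: what the output should look like
    ])

prompt = build_prompt(
    role="a patient teacher explaining to a curious professional",
    context="the reader has no technical background in machine learning",
    task="explain machine learning",
    format_spec="use everyday analogies, avoid jargon, keep it under 200 words",
)
print(prompt)
```

The point of the sketch is the discipline, not the code: filling in all four fields forces you to make the decisions the AI would otherwise guess at.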

The Before/After Pattern

The single most useful habit to develop is comparing your first-draft prompt to a revised version that applies these three principles before you send it. Ask yourself:

  1. Have I given it a role or at least a framing for who it should be?
  2. Have I provided all the context it needs to actually help me?
  3. Have I specified what I want the output to look like?

You don't need all three every time. A simple creative request may not need a format spec. An exploratory question may not need a role. But running through this checklist before sending will improve your first-attempt results significantly.

Iteration Is Not Failure

One more thing: expecting to get exactly what you want on the first attempt is an unrealistic standard. Expert AI users iterate. They send a first prompt, evaluate the output, and follow up with refinements: "Make it shorter," "That's the right direction but less formal," "Give me the third option expanded into a full paragraph."

Multi-turn conversations with AI are often more productive than trying to write the perfect single prompt. Think of it as a dialogue, not a command.
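That dialogue pattern maps directly onto the role-tagged message list that most chat APIs accept. The sketch below shows the common shape only; the field names and placeholder content are illustrative, not any specific provider's API.

```python
# A multi-turn refinement expressed as a role-tagged message list.
# Each follow-up turn adjusts the output instead of restarting from scratch.
conversation = [
    {"role": "user",
     "content": "Rewrite this email to be more professional. [paste email]"},
    {"role": "assistant",
     "content": "<the model's first draft goes here>"},
    {"role": "user",
     "content": "Right direction, but less formal and under 100 words."},
]
```

Because the earlier turns stay in the list, the model refines its own draft rather than guessing your intent again from zero.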

Before and after: the same request, rewritten

Here is the same request written at three levels of specificity. Notice how the quality of the likely AI response shifts with each version.

Version 1 — Typical first attempt:
"Help me write a LinkedIn post."

The AI will write a generic LinkedIn post. It has no idea what topic, what tone, what audience, or what goal. It will guess all of these. The result will be usable at best, and forgettable either way.

Version 2 — With role and context:
"I'm a marketing manager who just completed a three-month AI implementation project at my company. Help me write a LinkedIn post about what we learned."

Better. But still no format spec, no audience specification, and "what we learned" is vague. The AI will produce something reasonable but probably too long and too generic.

Version 3 — With role, context, and format:
"I'm a marketing manager who just completed a three-month AI implementation project at my company. The biggest lesson was that the hardest part wasn't the technology — it was getting the team to change their workflows. Write a LinkedIn post that shares this insight in a way that will resonate with other managers going through the same thing. Use a first-person narrative tone, start with a specific moment rather than a general statement, and keep it under 250 words."

Now the AI has enough to produce something genuinely useful on the first attempt.

Hands-On Exercise

Rewrite your weakest prompt

Think of a time you used AI and the result was disappointing — too generic, too long, wrong tone, missed the point. If you can't remember a specific example, choose one of the "weak" prompts from this lesson.

Take that weak prompt and rewrite it three times, adding one element each time:

  1. First rewrite: add a role ("you are a...")
  2. Second rewrite: add context (audience, constraints, background)
  3. Third rewrite: add a format specification

Then try all three versions in an actual AI tool. Compare the outputs. Which changed the result most? Was it the role, the context, or the format spec? There's no required output format for this exercise — just the observation of what actually changed when you changed the prompt.
Don't try to write the perfect prompt on the first try. The goal is to notice what each element adds.
Active Recall

Before moving on — close this lesson and answer these from memory. Then come back and check. Testing yourself (not re-reading) is how this sticks.

  1. What are the three fundamental elements of a well-structured prompt? Give a brief description of what each one does.
  2. Write a before-and-after example of a prompt from your own work or life. What specific changes did you make, and why?
  3. Why is iteration a normal and expected part of good AI use, not a sign that something went wrong?
Reflection

Look back at your last three AI interactions. Without judging yourself, notice: did you provide a role? Did you give context? Did you specify the output format? What one habit, if you added it to every prompt, would most improve your results?

Key Takeaway

Effective prompts give AI a role, provide necessary context, and specify the desired output format. You are not asking a question — you are providing context to a prediction engine. Iteration is normal. The goal is better first attempts, not perfect first attempts.