Why This Matters

Getting one good response from AI is luck. Getting consistently good responses is skill. This lesson is about the difference: how to evaluate what you get, how to guide AI toward what you actually need, and how to build a personal standard for when AI output is ready to use versus when it needs more work.

The Concept

Evaluating Output: The Three Questions

Before you use any AI output, run it through three questions:

  1. Is this accurate? Are any factual claims here ones that I can verify — and have I? Specific dates, statistics, citations, quotes, and names are all hallucination risk areas. Treat them as claims to check, not facts to trust.
  2. Is this useful? Does this actually answer what I needed? Or did AI answer a slightly different question than the one I asked? Did it miss the context I provided?
  3. Is this mine? Can I own this output? Does the voice sound right? Would anyone who knows my work recognize this as coming from me? For client-facing, public, or professional work — if it doesn't sound like you, it isn't done yet.

The Follow-Up Prompt: Your Most Important Tool

The most underused capability in AI tools is the follow-up prompt. Most people treat AI interactions as transactional: one message, one response, done. But multi-turn conversations often produce dramatically better results than trying to nail the perfect single prompt.

Effective follow-up prompt patterns:

  • Redirect: "That's too formal — make it sound like how I'd actually talk to a colleague."
  • Expand: "The third point is the most important one. Expand that section and reduce the others."
  • Constrain: "Cut this by half, prioritizing the most actionable information."
  • Challenge: "Play devil's advocate — what are the strongest objections to this approach?"
  • Reframe: "Rewrite this for someone who is skeptical about AI rather than enthusiastic about it."

Building Your Verification Habit

The goal isn't to verify everything — that would defeat the purpose of using AI. The goal is to have a calibrated sense of what needs checking and what can be trusted.

Generally safe to use without deep verification: structural suggestions, draft text that you'll edit anyway, brainstormed options, explanations of concepts you can sanity-check against your existing knowledge.

Always verify before using: specific dates, statistics, citations, quotes attributed to real people, legal or medical claims, information about specific individuals, anything you'll stake your reputation on.

Iteration in Action

Watch how a response improves through directed iteration. The initial output is fine. Each follow-up makes it more useful.

Starting prompt: "Give me an introduction for a presentation on AI at work."

Follow-up 1: "The audience is skeptical about AI, not enthusiastic. Rewrite the opening with that in mind."

Follow-up 2: "Start with a specific story or scenario rather than a general statement."

Follow-up 3: "Now make it 30% shorter while keeping the core insight."

The fourth version is genuinely better than the first — not because the initial prompt was bad, but because iteration is how good outputs are built.

Hands-On Exercise

Build something through iteration

Choose a real task — a message you need to send, a document you need to write, a problem you're working on. Start with a basic prompt. Get the first response. Then do at least three follow-up prompts, each one steering the output closer to what you actually need. After your fourth or fifth message: is the output genuinely better than what the first response gave you? What type of follow-up prompt made the biggest difference?

The best follow-up prompts are specific. "Make it better" is weak. "Make it shorter and cut the third paragraph" is strong.

Active Recall

Before moving on — close this lesson and answer these from memory. Then come back and check. Testing yourself (not re-reading) is how this sticks.

  1. What are the three questions to ask when evaluating any AI output before using it?
  2. What types of AI output generally need independent verification before use? What types are generally safe to use as-is?

Reflection

Think about the last time you used an AI output without editing it. In hindsight, was that the right call? What would you apply from this lesson to that situation?

Key Takeaway

Evaluate AI output for accuracy, usefulness, and voice before using it. Multi-turn iteration consistently produces better results than single perfect prompts. Know what to verify — and actually verify it.