Why This Matters

You've used the basics: role, context, format. This module adds the techniques that separate competent prompting from expert prompting. Chain-of-thought gets AI to reason before concluding. Few-shot examples transfer your exact standards. System-level framing shapes an entire conversation. Constraint-setting produces precision outputs. These aren't tricks — they're levers that multiply the value of everything you've already learned.

The Concept

Chain-of-Thought: Make AI Show Its Work

Standard prompts ask for conclusions. Chain-of-thought prompts ask for reasoning. The difference is substantial: when AI is instructed to work through a problem step by step before answering, the quality of its reasoning — and therefore its conclusions — improves significantly.

The mechanism: By generating intermediate reasoning steps, the model creates additional context for each subsequent token. This reduces the probability of logical jumps that lead to wrong conclusions.

How to trigger it:

  • "Before answering, think through this step by step."
  • "Walk me through your reasoning before giving me your recommendation."
  • "Work through the key considerations first, then give me your conclusion."

When to use it: Decisions, analyses, complex problem-solving — any task where the reasoning matters as much as the answer. Not needed for simple generation tasks.
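The trigger phrases above can be applied programmatically. Below is a minimal sketch of wrapping any question with a reasoning instruction before sending it to a chat model; the `cot_prompt` helper name is hypothetical, and the wording simply mirrors the triggers listed above.

```python
def cot_prompt(question: str) -> str:
    """Prepend a chain-of-thought instruction to a question.

    Hypothetical helper: asks the model to reason first, conclude second.
    """
    return (
        "Before answering, think through this step by step. "
        "Walk me through your reasoning, then give me your recommendation.\n\n"
        f"Question: {question}"
    )

prompt = cot_prompt("Should I use AI to automate our customer support responses?")
```

The point is that the reasoning instruction comes before the question, so the model commits to showing its work before it starts answering.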

Few-Shot Prompting: Transfer Your Standards

Describing the output you want is harder than showing it. Few-shot prompting provides 2-5 examples of what "good" looks like before asking for a new output. The model pattern-matches to your examples rather than to generic training data.

Most powerful use case: Brand voice. If you have examples of writing in your organization's specific voice, paste them. The model will calibrate to your examples far more precisely than any description of "professional but approachable, direct but not terse."

Template:

Here are three examples of [what you want]:

Example 1: [example]
Example 2: [example]
Example 3: [example]

Now create [new thing] in the same style/format/tone.
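The template above can be sketched as a small builder that assembles the examples into one prompt. The `few_shot_prompt` helper name and its parameters are illustrative, not part of any library.

```python
def few_shot_prompt(description: str, examples: list[str], request: str) -> str:
    """Assemble a few-shot prompt from 2-5 examples, following the template above."""
    lines = [f"Here are {len(examples)} examples of {description}:", ""]
    for i, example in enumerate(examples, start=1):
        lines.append(f"Example {i}: {example}")
    lines += ["", f"Now create {request} in the same style/format/tone."]
    return "\n".join(lines)

# Usage: paste real brand-voice snippets as the examples.
prompt = few_shot_prompt(
    "our brand voice",
    ["We ship fast and say so plainly.", "No hedging. No filler. Just the point."],
    "a product launch announcement",
)
```

Because the model calibrates to the examples, the quality of the snippets you paste matters more than the wording of the request itself.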

System-Level Framing: Shape the Entire Conversation

Most AI tools allow you to set a system prompt — instructions that persist across the entire conversation rather than applying only to one message. This is one of the most underused capabilities available.

What to put in a system prompt:

  • The role you want AI to maintain throughout ("You are a senior editor who prioritizes clarity and cuts mercilessly")
  • Persistent context ("You are helping me with [project]. Background: [key facts]")
  • Behavioral constraints ("Always ask for clarification before assuming context. Never pad responses.")
  • Output defaults ("Respond in bullet points unless I explicitly ask for prose")

In tools without system prompts, open every conversation with this context as your first message.
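For tools accessed through an API, the persistence works mechanically: the system message is resent with every request, so its instructions govern every turn. The sketch below assumes an OpenAI-style chat messages format (a list of role/content dicts); the `build_messages` helper is hypothetical, and the exact client call varies by tool.

```python
# Instructions that should govern the whole conversation, drawn from the
# four categories above: role, context, behavioral constraints, output defaults.
SYSTEM = (
    "You are a senior editor who prioritizes clarity and cuts mercilessly. "
    "Always ask for clarification before assuming context. Never pad responses. "
    "Respond in bullet points unless I explicitly ask for prose."
)

def build_messages(history: list[dict], user_msg: str) -> list[dict]:
    """Prepend the system prompt to the running conversation on every turn."""
    return [
        {"role": "system", "content": SYSTEM},
        *history,
        {"role": "user", "content": user_msg},
    ]

msgs = build_messages([], "Tighten this paragraph for me.")
```

In a chat UI without a system prompt field, pasting the `SYSTEM` text as your first message approximates the same effect, though it is not guaranteed to persist as strongly.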

Constraint-Setting: Precision Through Limits

Constraints are one of the most underrated prompting tools. They focus AI output by eliminating the space for generic defaults. The more specific your constraints, the more specific the output.

Constraint categories:

  • Length: "Under 150 words," "exactly 3 bullet points," "one sentence"
  • Vocabulary: "No jargon," "no corporate buzzwords," "no words longer than three syllables"
  • Structure: "Problem → cause → solution," "start with the conclusion," "end with a single call to action"
  • Exclusions: "Do not mention competitors," "avoid the words 'leverage' and 'synergy'"
  • Audience: "For someone with no technical background," "for a skeptical CFO"

Combining Techniques

The real leverage comes from combining these techniques in a single prompt. A well-constructed advanced prompt might:

  1. Set a role (framing)
  2. Provide examples of the desired output (few-shot)
  3. Ask for reasoning before conclusions (chain-of-thought)
  4. Specify precise constraints on the output

This isn't complexity for its own sake — each element reduces the decisions the model has to make on your behalf, which means fewer generic defaults and more calibrated outputs.
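The four-part structure above can be sketched as a single prompt builder. Everything here is illustrative: the `combined_prompt` helper and its parameters are hypothetical, and each section is labeled with the technique it implements.

```python
def combined_prompt(role: str, examples: list[str], task: str, constraints: list[str]) -> str:
    """Compose role framing, few-shot examples, a chain-of-thought
    instruction, and explicit constraints into one prompt."""
    parts = [
        f"You are {role}.",  # 1. role (framing)
        "",
        f"Here are {len(examples)} examples of the output I want:",  # 2. few-shot
    ]
    parts += [f"Example {i}: {ex}" for i, ex in enumerate(examples, start=1)]
    parts += [
        "",
        "Before answering, work through the key considerations step by step, "
        "then give your conclusion.",  # 3. chain-of-thought
        "",
        f"Task: {task}",
        "",
        "Constraints:",  # 4. precise constraints
    ]
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

prompt = combined_prompt(
    role="a senior operations consultant",
    examples=["Short, decision-first memo.", "One risk, one mitigation, one metric."],
    task="Recommend whether we should automate customer support responses.",
    constraints=["Under 150 words", "No jargon", "Start with the conclusion"],
)
```

Each section narrows the model's options: the role sets the perspective, the examples set the standard, the reasoning instruction sets the process, and the constraints set the shape.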

Chain-of-thought vs. standard: the same question

Standard prompt: "Should I use AI to automate our customer support responses?"

Typical output: a balanced list of pros and cons. Generic. Usable as a starting point, not as a decision.

Chain-of-thought prompt: "I'm considering using AI to automate our customer support responses. Before giving me a recommendation, think through: (1) What information would I need to make this a good decision? (2) What are the specific failure modes I should worry about? (3) What would success look like and how would I measure it? Then give me your recommendation based on that reasoning."

Output: a structured analysis that identifies what's missing from the question (what type of support? what volume? what industry?), maps specific risks (accuracy errors reaching customers, loss of empathy signal, escalation failures), defines measurable success criteria, and gives a conditional recommendation. Actionable rather than academic.

Same question. Completely different usefulness. The difference is the reasoning instruction.

Hands-On Exercise

Build a combined technique prompt

Choose a real decision or complex task from your work. Build a prompt that uses at least three of the four techniques from this module:

  1. Chain-of-thought: ask for reasoning before the conclusion
  2. Few-shot: provide 2-3 examples of what a good output looks like
  3. System framing: set a persistent role/context at the start
  4. Constraints: specify at least 3 specific constraints on the output

Run the prompt. Then run the same task with a basic prompt (just describe what you want). Compare the outputs. Document: which technique had the most impact on output quality for this type of task?
You don't need all four every time. This exercise is about learning what each adds so you can choose deliberately.
Active Recall

Before moving on — close this lesson and answer these from memory. Then come back and check. Testing yourself (not re-reading) is how this sticks.

  1. Explain chain-of-thought prompting: what it is, why it works mechanically, and when to use it versus when it's unnecessary.
  2. You need AI to write in your organization's exact brand voice. Which technique is most effective, and how would you apply it?
Reflection

Which of these four techniques are you most likely to add to your regular prompting practice? What specific type of task in your work would benefit most from it?

Key Takeaway

Chain-of-thought improves reasoning quality. Few-shot transfers your exact standards. System framing shapes whole conversations. Constraints produce precision. Combining them multiplies impact. Each technique reduces generic defaults by narrowing the space of valid outputs.