Chain-of-Thought: Make AI Show Its Work
Standard prompts ask for conclusions. Chain-of-thought prompts ask for reasoning. The difference is substantial: when AI is instructed to work through a problem step by step before answering, the quality of its reasoning — and therefore its conclusions — improves significantly.
The mechanism: each reasoning step the model generates becomes context that conditions every subsequent token, so the final answer is produced from explicit intermediate steps rather than from the question alone. This reduces the probability of the logical jumps that lead to wrong conclusions.
How to trigger it:
- "Before answering, think through this step by step."
- "Walk me through your reasoning before giving me your recommendation."
- "Work through the key considerations first, then give me your conclusion."
When to use it: Decisions, analyses, complex problem-solving — any task where the reasoning matters as much as the answer. Not needed for simple generation tasks.
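The trigger phrases above can be wrapped in a small helper that prepends the instruction to any task. A minimal sketch in Python; the function name and wording are illustrative assumptions, not tied to any particular model or API:

```python
def chain_of_thought_prompt(task: str) -> str:
    """Wrap a task in a chain-of-thought instruction (illustrative helper).

    The wording is one of several trigger phrases that work; the
    'Conclusion:' marker makes the final answer easy to extract.
    """
    return (
        "Before answering, think through this step by step.\n"
        "Show your reasoning, then state your conclusion on a final line "
        "beginning with 'Conclusion:'.\n\n"
        f"Task: {task}"
    )

prompt = chain_of_thought_prompt(
    "Should we migrate the billing service to a new database?"
)
```

Asking for a marked conclusion line is a convenience, not part of the technique itself: it lets downstream code separate the reasoning from the answer.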
Few-Shot Prompting: Transfer Your Standards
Describing the output you want is harder than showing it. Few-shot prompting provides 2-5 examples of what "good" looks like before asking for a new output. The model pattern-matches to your examples rather than to generic training data.
Most powerful use case: Brand voice. If you have examples of writing in your organization's specific voice, paste them. The model will calibrate to your examples far more precisely than any description of "professional but approachable, direct but not terse."
Template: "Here are three examples of [output type] in our voice: [example 1] [example 2] [example 3]. Now write [new request], matching the style, structure, and tone of the examples above."
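The same template can be assembled from stored examples in code. A sketch; the function name, wording, and the 2-5 bound (taken from the guidance above) are illustrative assumptions:

```python
def few_shot_prompt(examples: list[str], request: str,
                    output_type: str = "text") -> str:
    """Build a few-shot prompt from 2-5 examples of 'good' output."""
    assert 2 <= len(examples) <= 5, "few-shot works best with 2-5 examples"
    parts = [f"Here are examples of {output_type} written in our voice:\n"]
    for i, example in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{example}\n")
    parts.append(f"Now, matching the style of the examples above:\n{request}")
    return "\n".join(parts)

prompt = few_shot_prompt(
    examples=["We ship on Fridays. No exceptions.",
              "Short answer: yes. Long answer: also yes."],
    request="Write a two-sentence product update.",
    output_type="announcements",
)
```

Keeping the examples in one place (a file or constant) means every prompt in a project calibrates to the same voice.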
System-Level Framing: Shape the Entire Conversation
Most AI tools allow you to set a system prompt — instructions that persist across the entire conversation rather than applying only to one message. This is one of the most underused capabilities available.
What to put in a system prompt:
- The role you want AI to maintain throughout ("You are a senior editor who prioritizes clarity and cuts mercilessly")
- Persistent context ("You are helping me with [project]. Background: [key facts]")
- Behavioral constraints ("Always ask for clarification before assuming context. Never pad responses.")
- Output defaults ("Respond in bullet points unless I explicitly ask for prose")
In tools without system prompts, open every conversation with this context as your first message.
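When calling a model through an API, the system prompt is typically a message with a dedicated role that is sent with every request. A minimal sketch using the common role/content message shape; exact field names vary by vendor and are an assumption here:

```python
SYSTEM_PROMPT = (
    "You are a senior editor who prioritizes clarity and cuts mercilessly. "
    "Always ask for clarification before assuming context. Never pad responses. "
    "Respond in bullet points unless I explicitly ask for prose."
)

def build_messages(user_message: str) -> list[dict]:
    """Prepend the persistent system prompt to every conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_messages("Edit this paragraph for concision.")
```

Because `build_messages` is the single entry point, the role, constraints, and output defaults persist across every turn without being repeated in each user message.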
Constraint-Setting: Precision Through Limits
Constraints are one of the most underrated prompting tools. They focus AI output by eliminating the space for generic defaults. The more specific your constraints, the more specific the output.
Constraint categories:
- Length: "Under 150 words," "exactly 3 bullet points," "one sentence"
- Vocabulary: "No jargon," "no corporate buzzwords," "no words longer than three syllables"
- Structure: "Problem → cause → solution," "start with the conclusion," "end with a single call to action"
- Exclusions: "Do not mention competitors," "avoid the words 'leverage' and 'synergy'"
- Audience: "For someone with no technical background," "for a skeptical CFO"
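A side benefit of concrete constraints is that they can be checked mechanically after generation. A sketch of a post-hoc checker for two of the categories above, length and exclusions; the function name, threshold, and banned-word list are illustrative:

```python
def violates_constraints(text: str, max_words: int = 150,
                         banned: tuple[str, ...] = ("leverage", "synergy")) -> list[str]:
    """Return a list of constraint violations; an empty list means compliant."""
    problems = []
    if len(text.split()) > max_words:
        problems.append(f"over {max_words} words")
    lowered = text.lower()
    for word in banned:
        if word in lowered:
            problems.append(f"contains banned word '{word}'")
    return problems
```

If the checker reports violations, the simplest fix is to re-prompt with the violation list appended ("Your draft broke these constraints: ... Revise.").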
Combining Techniques
The real leverage comes from combining these techniques in a single prompt. A well-constructed advanced prompt might:
- Set a role (framing)
- Provide examples of the desired output (few-shot)
- Ask for reasoning before conclusions (chain-of-thought)
- Specify precise constraints on the output
This isn't complexity for its own sake — each element reduces the decisions the model has to make on your behalf, which means fewer generic defaults and more calibrated outputs.
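The four elements above can be assembled in a single builder. A sketch; every name and phrasing here is illustrative, and the section order (role, examples, reasoning instruction, constraints, task) is one reasonable convention, not a rule:

```python
def advanced_prompt(role: str, examples: list[str], task: str,
                    constraints: list[str]) -> str:
    """Combine role framing, few-shot examples, chain-of-thought, and constraints."""
    sections = [f"You are {role}.", ""]
    # Few-shot: show what "good" looks like before stating the task.
    for i, ex in enumerate(examples, 1):
        sections.append(f"Example of the desired output ({i}):\n{ex}\n")
    # Chain-of-thought: ask for reasoning before the conclusion.
    sections.append("Before answering, think through this step by step, "
                    "then give your conclusion.")
    # Constraints: eliminate space for generic defaults.
    sections.append("Constraints:")
    sections.extend(f"- {c}" for c in constraints)
    sections.append(f"\nTask: {task}")
    return "\n".join(sections)

prompt = advanced_prompt(
    role="a senior editor who prioritizes clarity",
    examples=["Short, direct, no filler."],
    task="Rewrite the attached announcement.",
    constraints=["Under 150 words", "No corporate buzzwords"],
)
```

Each argument maps to one of the four bullets above, which is the point: the structure makes explicit which decisions you are taking back from the model.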