The Research Stack: Where AI Fits
Think of strategic research as a stack of three layers:
- Source gathering: Finding the right information (primary sources, databases, interviews, documents). AI is unreliable here for specific facts — use it to generate search queries, identify what sources you need, and map the information landscape.
- Synthesis: Extracting meaning from gathered information. AI is excellent here — especially when you provide the source material. Pattern identification, comparison, framework generation, gap analysis.
- Implication: What does this mean for your specific situation? This requires your judgment, organizational context, and domain expertise. AI can suggest hypotheses; only you can evaluate them against reality.
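The three layers above can be kept at hand as a simple lookup, so a team can annotate research tasks with where AI help is (and is not) trustworthy. This is a minimal sketch: the layer names and guidance mirror the list above, but the structure and function names are illustrative conventions, not an established tool.

```python
# The research stack as a data structure. Layer names and guidance are
# taken from the list above; the dict layout itself is hypothetical.
RESEARCH_STACK = {
    "source_gathering": {
        "ai_reliability": "low for specific facts",
        "good_uses": ["generate search queries", "identify needed sources",
                      "map the information landscape"],
    },
    "synthesis": {
        "ai_reliability": "high when you provide the source material",
        "good_uses": ["pattern identification", "comparison",
                      "framework generation", "gap analysis"],
    },
    "implication": {
        "ai_reliability": "hypothesis generation only",
        "good_uses": ["suggest hypotheses for human evaluation"],
    },
}

def ai_guidance(layer: str) -> str:
    """Return a one-line reminder of how far to trust AI at a given layer."""
    return f"{layer}: reliability is {RESEARCH_STACK[layer]['ai_reliability']}"
```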
The Synthesis Toolkit
Comparative analysis: "Given these three [reports/positions/options], compare them across these dimensions: [list]. Then identify the most significant point of disagreement and the most important question they collectively leave unanswered."
- Pattern extraction: "I'm going to paste five [customer interviews/analyst reports/case studies]. After reading all of them, identify: the three themes that appear most consistently, and two patterns that appear in some sources but are notably absent from others."
Gap mapping: "Based on this research, what important questions remain unanswered? What would I need to know to be more confident in a decision based on this information?"
Framework generation: "Given this information, what are the most useful ways to organize or categorize it? Generate three different frameworks for thinking about this problem and explain what each one reveals."
Steelmanning: "Here is [my position/proposed decision]. Play devil's advocate: what is the strongest possible case against this? What evidence would most concern someone who disagreed?"
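Because these prompts are fill-in-the-blank templates, they can be made reusable with a few lines of code. A sketch for the comparative-analysis prompt, assuming you paste your own source documents in; the template text follows the wording above, but the function name is a hypothetical convenience, not a real library:

```python
# Reusable version of the comparative-analysis prompt from the toolkit.
COMPARATIVE = (
    "Given these {n} {kind}, compare them across these dimensions: "
    "{dimensions}. Then identify the most significant point of disagreement "
    "and the most important question they collectively leave unanswered.\n\n"
    "{documents}"
)

def comparative_prompt(kind: str, documents: list[str], dimensions: list[str]) -> str:
    """Fill the template with real source material, so the model
    synthesizes what you provided rather than recalling from training data."""
    return COMPARATIVE.format(
        n=len(documents),
        kind=kind,
        dimensions=", ".join(dimensions),
        documents="\n---\n".join(documents),  # separator between pasted sources
    )
```

Keeping the prompt as a template also makes it easy to run the same comparison across several document sets and get structurally comparable answers back.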
Managing Hallucination Risk in Research
In research contexts, hallucination risk is highest for: statistics, citations, specific claims about real organizations or people, and anything AI generates without a document you provided.
The grounding rule: For any specific claim that will inform a decision — especially numerical claims — ask: "Is this from a document I provided, or from training data?" If the latter, verify before using.
The citation test: Ask AI to cite its sources for any factual claim. If it can't produce a specific, verifiable citation, treat the claim as a hypothesis to verify, not a fact to use.
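The grounding rule can get a mechanical first pass: flag any number in an AI answer that never appears in the documents you supplied. This sketch uses exact string matching, which is deliberately crude (it misses paraphrased figures and unit conversions), so treat flagged claims as "verify before using," not as proven hallucinations. All names here are illustrative.

```python
import re

# Numbers, optionally with thousands separators, decimals, or a percent sign.
NUMBER = re.compile(r"\d[\d,.]*%?")

def unverified_numbers(ai_answer: str, sources: list[str]) -> list[str]:
    """Return numeric strings in the answer that match no provided source."""
    corpus = " ".join(sources)
    return [m for m in NUMBER.findall(ai_answer) if m not in corpus]
```

A quick check of an answer against its sources then looks like `unverified_numbers(answer, [doc1, doc2])`; anything returned goes on the verification list before it informs a decision.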