What AI Can and Cannot Do for Research
High-value AI research uses:
- Synthesizing documents you provide — paste a 50-page report and ask for the key findings
- Explaining complex topics at the right level — "explain GDPR implications as if to a non-legal business manager"
- Generating frameworks — "what are the main dimensions I should evaluate when assessing a new vendor?"
- Identifying gaps in your research — "given what I've shared, what questions am I not asking that I should be?"
- Comparing perspectives — "summarize the main arguments for and against [position]"
- Finding connections — "what do these three trends have in common that isn't immediately obvious?"
High-risk AI research uses (verify everything):
- Asking for specific statistics, dates, or data points — these are prime hallucination territory
- Asking for citations — AI will generate plausible-looking citations that often don't exist or are slightly wrong
- Asking about recent events — anything after the model's training cutoff is unknown to it, and it may confabulate an answer rather than say so
- Asking for specific claims about real people or organizations — the model can blend accurate details with invented ones
The Document Analysis Workflow
One of the most practical and reliable AI research uses is document analysis. The pattern is simple — you provide the source material, AI analyzes it. This sharply reduces hallucination risk for any claim derived directly from the document, though it's still worth spot-checking that quoted figures and passages actually appear in the source.
High-value document analysis prompts:
- "What are the three most important findings in this document, stated as actionable insights?"
- "What assumptions does this document make that are never explicitly stated?"
- "Summarize this for an executive who has five minutes and needs to know whether to approve this project."
- "What questions does this document raise that it doesn't answer?"
- "Compare these two documents: where do they agree, and where do they contradict each other?"
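The grounding pattern behind these prompts can be sketched as a small prompt builder that keeps the source document and the question together, and instructs the model to answer only from the provided text. This is an illustrative sketch — the function and marker strings are assumptions, not any particular library's API:

```python
def build_analysis_prompt(document: str, question: str) -> str:
    """Compose a grounded document-analysis prompt.

    Embedding the full source text anchors the model's answer to
    material you can verify, rather than to its training data.
    """
    return (
        "Answer using ONLY the document below. "
        "If the document does not contain the answer, say so.\n\n"
        "--- DOCUMENT START ---\n"
        f"{document}\n"
        "--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )

report_text = "Q3 revenue grew 12% while churn held at 4%."
prompt = build_analysis_prompt(
    report_text,
    "What are the most important findings, stated as actionable insights?",
)
```

The explicit "answer only from the document" instruction and the "say so" escape hatch matter: without them, models tend to fill gaps from training data, which reintroduces exactly the hallucination risk this workflow avoids.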
The Critical Evaluation Habit
Good researchers apply critical evaluation to everything they read. AI-generated research summaries require the same treatment — arguably more, because AI presents everything in the same confident prose style regardless of how well-supported the underlying claim is.
When reviewing AI research output, ask:
- What is the source of each specific claim? Is it from a document I provided, or from training data?
- Are any statistics or citations here worth verifying against primary sources?
- What perspective is this analysis missing? What has AI not included?
- Does this synthesis represent the full picture, or did AI over-index on certain sources or framings?
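The second question in that checklist — which statistics and citations are worth verifying — can be partially mechanized. The sketch below flags sentences in an AI-generated summary that contain numbers, years, or citation-like fragments; the patterns are a purely heuristic assumption, meant as a first-pass filter rather than a substitute for reading critically:

```python
import re

# Patterns that tend to mark claims worth checking against primary
# sources: percentages, four-digit years, citation fragments, and
# dollar amounts. Heuristic and deliberately over-inclusive.
VERIFY_PATTERNS = [
    r"\b\d+(?:\.\d+)?\s*%",      # percentages
    r"\b(?:19|20)\d{2}\b",       # years
    r"\bet al\.",                # citation fragments
    r"\$\s?\d[\d,]*(?:\.\d+)?",  # dollar amounts
]

def flag_for_verification(text: str) -> list[str]:
    """Return sentences containing statistics or citation-like claims."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        s for s in sentences
        if any(re.search(p, s) for p in VERIFY_PATTERNS)
    ]

summary = (
    "Revenue rose 12% in 2023. The method is simple. "
    "See Smith et al. for details."
)
flagged = flag_for_verification(summary)
```

A tool like this only surfaces candidates; it cannot tell a hallucinated statistic from a real one. Each flagged sentence still has to be traced back to a document you provided or to a primary source.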