What Goes in a Prompt Library
A prompt library is not a collection of clever tricks you found on the internet. It's a documented record of prompts that work specifically for your recurring tasks. The criteria for inclusion: you've used it at least twice, it produces reliably good outputs, and the task recurs often enough that having the prompt saves real time.
High-value library categories:
- Email templates (difficult conversations, status updates, new client intros, follow-ups)
- Document analysis prompts (extract key points, identify gaps, compare documents)
- Writing assistance (tone adjustment, length reduction, audience adaptation)
- Research prompts (framework generation, pros/cons, scenario planning)
- Meeting prep (agenda refinement, pre-read synthesis, objection mapping)
- Task-specific prompts for your professional domain
The Library Entry Format
Each library entry should have four elements:
- Name: A descriptive name you'll recognize when you need it ("Executive summary — skeptical audience")
- When to use: The trigger condition that calls for this prompt
- The prompt: The actual text, with [PLACEHOLDERS] for the parts that change each time
- Notes: What to watch for, common failures, tips for this specific prompt
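A complete entry, following the four elements above, might look like this (the prompt text, placeholders, and notes are illustrative, not a required wording):

```text
Name: Executive summary — skeptical audience
When to use: Summarizing a long document for a reader who will push back
The prompt: Summarize [DOCUMENT] in under 200 words for [AUDIENCE].
  Lead with the conclusion, then the two strongest supporting points,
  then the main counterargument and how it is addressed.
Notes: Watch for summaries that bury the conclusion. If the output
  hedges too much, add "State the conclusion in the first sentence."
```

Note that the placeholders mark exactly the parts that change each time; everything else is the reusable asset.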
Building the Library Without Extra Work
The failure mode for prompt libraries: treating the library as a separate project. The sustainable approach: capture as you go.
The habit is simple: when you finish an AI session that produced something really useful, spend 60 seconds adding the core prompt to your library before you close the tab. That's it. No elaborate system required.
Tools: a plain text file, Notion, Apple Notes, Obsidian — whatever you already use for notes. The format matters less than the habit.
Team Prompt Libraries
A team prompt library multiplies individual discovery across the whole team. Instead of everyone separately figuring out how to get good AI outputs for the same types of tasks, the best prompts get shared and everyone benefits.
What makes team libraries work:
- A shared, accessible location (Notion, Confluence, Google Doc — whatever the team already uses)
- Clear categories that match the team's actual work
- Each entry has a context note ("who uses this and when")
- Regular review: prompts get outdated as AI models improve
- Attribution: knowing who built a prompt and who can answer questions about it
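Putting those points together, a team entry extends the individual format with a context note, an owner, and a review date. One possible layout (names, dates, and field labels are placeholders, not a required schema):

```text
Category: Client communication
Name: Status update — project at risk
Context: Used by account leads when a milestone will slip
Owner: J. Rivera (ask before editing the prompt itself)
Last reviewed: 2025-06 (re-test after major model updates)
The prompt: Draft a status update for [CLIENT] explaining that
  [MILESTONE] will slip by [DURATION]. State the delay in the first
  paragraph, the cause in one sentence, and the recovery plan as
  bullet points. Tone: candid, not apologetic.
Notes: Outputs tend to over-apologize; cut any sentence that starts
  with an apology.
```

The "Last reviewed" field is what makes the regular-review habit stick: anyone scanning the library can see at a glance which prompts haven't been re-tested since the last model change.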