What the Foundations Assessment Measures
This assessment covers three types of competency that Foundations has built:
Conceptual Understanding
Can you accurately explain what AI is, how it works, and why it behaves the way it does? This includes: the prediction engine model, context windows and memory, hallucination and its causes, training data bias, and the distinction between AI's strengths and limitations.
Practical Prompting Skill
Can you construct prompts that reliably produce useful output? This includes: applying the RTF pattern, using role specification, providing appropriate context, specifying output format, and iterating effectively when the first response isn't right.
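The RTF pattern above (Role, Task, Format) can be made concrete with a small template. This is a minimal sketch, not the course's official template; the function name and example wording are illustrative assumptions.

```python
# Minimal sketch of an RTF-style prompt (Role, Task, Format).
# The helper name and example values are illustrative, not from the course.

def build_rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble a prompt with an explicit role, task, and output format."""
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

prompt = build_rtf_prompt(
    role="a technical editor reviewing onboarding docs",
    task="Summarize the attached policy in plain language for new hires.",
    fmt="Three bullet points, each under 20 words.",
)
print(prompt)
```

Spelling out all three parts up front reduces the back-and-forth iteration the assessment asks you to document.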
Critical Evaluation
Can you evaluate AI outputs accurately and responsibly? This includes: identifying hallucination risks, applying the verification habit, making disclosure decisions, and maintaining accountability for AI-assisted work.
Assessment Format
The assessment has four parts:
- Concept check (5 questions) — Short-answer questions testing your understanding of core concepts. Write your answers before looking anything up.
- Prompt critique (3 examples) — Review three prompts and identify what's missing and how you'd improve each one.
- Real task — Complete a task from your actual work using AI, with the approach you'd normally take. Document your prompt, the output, any follow-ups, and your evaluation of whether the output was ready to use.
- Reflection — A 200-word reflection on the biggest shift in how you think about or use AI since starting Foundations.
After the Assessment
Once you've completed Foundations, you're ready for AI Workflows — the Tier 4-6 track focused on consistent excellence, workflow integration, and critical evaluation. You'll move from competent use to systematic productivity.