Practical · 19 March 2026 · 9 min read

How to Raise Your AIQ: A Practitioner's Field Guide

Scores don't raise themselves. This is the structured approach that moves people from Practitioner to Fluent — and from Fluent to Strategist.

Taking the AIQ assessment gives you a map. But a map is only useful if you actually move. This is the practical guide to moving — specifically, to raising your score across all three dimensions in a way that actually changes how you work, not just how you test.

The core principle is simple: deliberate practice beats volume every time. An hour of focused, reflective work with AI will do more for your AIQ than ten hours of casual use. What follows is a structured approach to making your practice deliberate.

Raising Your Think Score

AI~Thinking develops through a combination of building accurate mental models and deliberately stress-testing them. Here's how to do both.

Run prompting experiments

Pick one task you do regularly — summarising documents, drafting messages, analysing data, whatever it is. Now run five different approaches to that task across a week. Change the framing. Change the level of detail in your instructions. Try giving the model a role to play. Try chain-of-thought prompting. Try breaking the task into steps instead of asking in one go.

Keep track of which approaches produce the best outputs and why. You're not just optimising for this week's task. You're building a mental library of what works, which is transferable across tasks and tools.

Keep a prompt journal

This doesn't need to be elaborate. A simple note for each significant AI interaction: what you were trying to do, how you approached it, what worked, what didn't. After a month, read back through it. Patterns will emerge that aren't visible in individual interactions — particular failure modes you keep hitting, approaches that reliably produce good results, task types where AI consistently adds value versus task types where it consistently disappoints.

The journal converts experience into knowledge. Without it, each interaction is isolated. With it, they accumulate into a coherent understanding of how these systems actually behave.
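If you'd rather keep the journal in a structured form than free notes, here is a minimal sketch in Python. The file name and field names are illustrative assumptions, not a prescribed format — the point is only that each entry captures goal, approach, and outcome so patterns are searchable later.

```python
import json
from datetime import date
from pathlib import Path

JOURNAL = Path("prompt_journal.jsonl")  # one JSON object per line

def log_entry(goal: str, approach: str, worked: str, didnt: str) -> None:
    """Append one journal entry as a JSON line."""
    entry = {
        "date": date.today().isoformat(),
        "goal": goal,
        "approach": approach,
        "worked": worked,
        "didnt": didnt,
    }
    with JOURNAL.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def review() -> list[dict]:
    """Read every entry back for the monthly read-through."""
    if not JOURNAL.exists():
        return []
    with JOURNAL.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]

log_entry(
    goal="Summarise a 20-page report",
    approach="Step-by-step instructions plus a role prompt",
    worked="Asking for a structured outline first",
    didnt="One-shot 'summarise this' with no constraints",
)
```

A monthly review is then just reading `review()` back and looking for repeated failure modes.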

Read the documentation and research

Model providers publish system cards, technical reports, and usage guidance. Most practitioners never read them. This is a mistake. The technical reports in particular contain precise descriptions of what the model was designed to do well, where it tends to fail, and how it was evaluated. Thirty minutes with a model's technical documentation will update your mental model more efficiently than hours of informal experimentation.

Test edge cases deliberately

Most casual use never probes the edges. But that's where understanding deepens. Try giving the model contradictory instructions and see how it handles them. Ask it something that's unlikely to be well covered in its training data — something obscure or very recent — and observe the response. Push it toward confident claims on topics where it should be uncertain. Each edge case is information about the underlying system, which refines your model of when to trust it.

Raising Your Apply Score

AI~Application develops through building — real things, in real workflows, with real consequences. There is no shortcut for this.

Automate one real workflow per month

Pick something you actually do, not a toy project. An email categorisation system. A weekly report that currently takes you two hours to compile. A first-pass review process for documents your team produces. Then build it — imperfectly, if necessary — and use it.

The "imperfect" part is important. The goal isn't a finished product; it's the building experience. You will learn more from a slightly broken workflow you actually run than from a perfectly designed one you only plan. The real-world friction is the curriculum.
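To make the "imperfect first build" concrete, here is a sketch of the email-categorisation example. `call_model` is a placeholder for whichever AI API you actually use — it's stubbed here with keyword rules purely so the structure runs without any external service; swap in a real model call when you wire it into your inbox.

```python
# Categories and the routing structure are the workflow; the "model" is a stub.
CATEGORIES = ["invoice", "meeting", "newsletter", "other"]

def call_model(subject: str, body: str) -> str:
    """Stand-in for a real model call; replace with your provider's API."""
    text = (subject + " " + body).lower()
    if "invoice" in text or "payment" in text:
        return "invoice"
    if "meeting" in text or "calendar" in text:
        return "meeting"
    if "unsubscribe" in text:
        return "newsletter"
    return "other"

def categorise(emails: list[dict]) -> dict[str, list[dict]]:
    """Route each email into a category bucket, defaulting to 'other'."""
    buckets: dict[str, list[dict]] = {c: [] for c in CATEGORIES}
    for email in emails:
        label = call_model(email["subject"], email["body"])
        buckets[label if label in CATEGORIES else "other"].append(email)
    return buckets

inbox = [
    {"subject": "Invoice #1042", "body": "Payment due Friday."},
    {"subject": "Team sync", "body": "Calendar invite attached."},
]
result = categorise(inbox)
```

Note the defensive fallback to "other": models sometimes return labels outside the list you gave them, and catching that is exactly the kind of friction a real run teaches you.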

Build in public

Sharing your builds — even rough ones — does two things. It forces you to articulate what you did and why, which deepens your own understanding. And it invites feedback from people who can see problems you've missed. A brief writeup on what you built, what worked, and what you'd do differently next time is worth ten hours of private experimentation.

Building in public doesn't require a large audience. A Slack channel, a small team, a forum like the Sosial AI Society — any context where thoughtful people can see your work and respond is sufficient.

Review AI-generated code, even if you're not a developer

If you're using AI to help write or modify code, read what it produces. Don't just run it. Understand it well enough to explain what it does and why. This isn't about becoming a developer — it's about not treating AI output as a black box. The habit of reviewing rather than accepting is one of the highest-leverage practices in the Apply dimension. It builds the evaluative judgment that separates practitioners who rely on AI from ones who collaborate with it.

Raising Your Lead Score

AI~Leadership develops through navigating the human and organisational dynamics around AI — which means you have to actually engage with those dynamics, not just think about them.

Facilitate an AI adoption conversation in your team

You don't need a formal mandate to do this. Propose a working session where your team looks at one part of your workflow and asks: where could AI add genuine value here, and what would that require? Facilitate it — which means managing the range of enthusiasm and scepticism in the room, surfacing the real concerns (not the stated ones), and helping the group arrive at something concrete rather than a vague aspiration.

Running this conversation even once will teach you more about AI leadership than any course, because you'll encounter the real dynamics: the person who's worried about their role, the enthusiast who wants to automate everything immediately, the pragmatist who just wants to know what they're supposed to do differently. Learning to hold all of that is the skill.

Write a one-page AI policy

Not because your organisation needs a formal policy today — although it probably does — but because the exercise of writing one reveals the decisions you haven't yet made. What tools are appropriate for what tasks? What data can and can't go into third-party AI systems? How should AI-generated work be disclosed? What's the process when AI output causes a problem?

A one-page policy that answers these questions for your immediate context will sharpen your thinking more than reading twenty opinion pieces about AI governance. It forces concreteness.

Interview someone at a higher tier

Find someone whose AI practice is clearly more developed than yours — someone who has built more, led more, navigated more complexity. Ask them specific questions: what decisions have been hardest? What do they wish they'd understood earlier? Where have they got it wrong? How do they think about responsible deployment in contexts where the stakes are real?

Learning from people one or two tiers ahead is the fastest development mechanism that exists. They can see your blind spots in a way that reading and solo practice can't surface. The Sosial AI Society exists specifically to create this kind of proximity — a community where you're consistently in conversation with people whose practice is slightly ahead of yours.

The Compounding Effect

There's a temptation, when you identify a weak dimension, to pour all your effort into that one area. Resist it. One tier of improvement across all three dimensions is more valuable than three tiers in one dimension.

The reason is interaction effects. Better Thinking makes your Application more precise. Better Application gives your Leadership concrete examples to draw on, which makes your advocacy more credible. Better Leadership creates contexts — team conversations, projects, mandates — that accelerate your Application and sharpen your Thinking. The dimensions reinforce each other, but only if you're developing all three.

A rough heuristic: spend roughly half your development time on your lowest-scoring dimension and split the rest between the other two. Reassess every 90 days.
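As a concrete illustration of that split (the half-and-quarter weights are the heuristic above applied to the three dimensions, nothing more):

```python
def plan_practice(hours_per_week: float, scores: dict[str, float]) -> dict[str, float]:
    """Give half the time to the weakest of the three dimensions, a quarter to each of the others."""
    weakest = min(scores, key=scores.get)
    return {
        dim: hours_per_week * (0.5 if dim == weakest else 0.25)
        for dim in scores
    }

# Example: four hours a week, with Apply as the weakest dimension.
plan = plan_practice(4.0, {"Think": 62, "Apply": 48, "Lead": 55})
```

Here `plan` allocates two hours to Apply and one hour each to Think and Lead; rerun it with your new scores every 90 days.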

The Role of Community

Solo study is slower than peer learning. This is not controversial — it's well established across every field where skill development has been seriously studied. The reason is that peers surface blind spots that solo practice misses, create accountability that sustains effort, and provide the friction of different perspectives that sharpens judgment.

In AI development specifically, community matters for another reason: the field moves fast enough that no individual can track all relevant developments alone. A well-calibrated community acts as a distributed early warning system — surfacing what's actually significant from the noise, and helping you contextualise new developments against your existing understanding.

The Sosial AI Society is designed around this principle. Members are at different tiers, which means you're consistently in conversations with people who can show you what's possible from a position slightly ahead of where you are. That's the fastest accelerant in the system.

Your AIQ Is a Living Number

A score is not a destination. It's a current reading. The practitioners who make the most sustained progress are the ones who treat their AIQ as a variable to be actively managed — taking the assessment, identifying the gaps, doing targeted work, and then coming back to measure again.

Ninety days is the right interval. It's long enough for genuine practice to produce measurable movement, and short enough to maintain urgency. Take the assessment now. Get your baseline. Work the three dimensions deliberately for 90 days using the approaches in this guide. Then come back and take it again.

The number will have moved. More importantly, the way you work will have changed. That's the point.

Take the AIQ assessment now. Your baseline is waiting.
