AI Fluency: A Map for Outsiders and Native Speakers
A framework for locating yourself in your relationship with AI — Outsider, Tourist, Conversationalist, Fluent, Native Speaker — five recognizable places, not a ladder to climb.
I learned Spanish by getting dropped into a country where I didn't speak it. Nineteen years old. Two years there. There was no shortcut. No app, no class, no stack of grammar drills could substitute for what actually had to happen — I had to open my mouth, get it wrong, get corrected, try again, and keep trying every day for two years.
The first month I leaned hard on a small pocket book of the most useful vocabulary and phrases I was learning. I had a few semesters of high-school Spanish behind me — the usual amount of useless preparation. Each morning I'd spend an hour in study, memorizing phrases to use during the day, and I'd feel briefly competent until someone responded faster than I could parse. I could begin asking questions, but at first even comprehending the answers was an incredible challenge.
The shift in fluency happened in stages, and I only noticed them in retrospect. Around month two I started saying things I hadn't memorized, or realized I understood something that would have confused me earlier. Around month six, I caught myself dreaming in Spanish for the first time, woke up and noticed it had happened, and felt something shift permanently. By one year in, jokes landed in both directions. I could make a Spanish speaker laugh (not in a Three Stooges way), and I could understand why their joke was funny without translating it back through English first.
By the time I came home, I was fluent the way a person who'd lived full-time inside the language for two years is fluent. I'd had two and a half years of high-school Spanish, formal trip prep, even a kind of language boot camp before I left. That instruction was useful scaffolding. It also turned out to be nowhere near enough. Once I was actually in the country, I was just as confused and flabbergasted by what came back at me as if none of it had happened. The lesson I learned: fluency only came from using the language functionally, every day, in real situations — and I learned much faster when my next meal depended on it. Fluency was a function of embodied practice.
I want to argue something deeper and more specific. That same thing is what's happening to people right now with AI. Not as metaphor. As literal recurrence. The fluency you build with AI is built the same way you build fluency with a language — through stumbling, fumbling, attempting, failing, adjusting, and gradually noticing one day that you've started doing things you couldn't do previously.
Before you panic at the comparison: becoming more fluent with AI is dramatically easier than becoming fluent in a second language. The interface IS your native tongue. You already speak it. There's no vocabulary to grind through, no grammar tables, no two-year geographic relocation. The barrier to entry is essentially nil — anyone reading this sentence already has the only prerequisite that matters. What changes as you become fluent isn't your vocabulary; it's the kind of outcome you can produce. A new user can get a useful answer to a single question. A fluent user can coordinate AI through a complex multi-step task and end up with something that would have taken them a week of work to do alone. The fluency curve is a curve of outcome size, not language acquisition.
This matters because most of the AI advice in circulation is the equivalent of grammar drills. Useful, but insufficient. You can read every prompt-engineering guide ever written and still not be fluent — the same way you can memorize every Spanish verb conjugation and still freeze when a stranger talks to you on the bus. The technical knowledge is real. It is not the same thing as fluency. Fluency only shows up after enough hours of embodied practice — actually using the medium in real situations, focused on outcomes.
There's a shape to that progression — five levels, each one a recognizable place to be. Most people who've spent any time with AI can locate themselves on the shape pretty quickly, often before they have words for what they recognize. The transitions are felt before they're understood. I keep getting stuck here arrives before oh, this is the threshold to the next level.
Once you see the shape, you can't unsee where you are.
A quick word about how this essay is written, then we'll get on with it.
I'll define new terms as they come up. Nothing here assumes you already know the jargon, and I'd rather over-explain than lose you. If a piece of vocabulary trips you up anyway, that's a signal about the field, not about you — the field is moving fast and the words haven't fully settled. Anyone who acts otherwise is bluffing.
If you're somewhere on the spectrum between tried it once and use it daily but don't fully trust it, you're in good company. That's where most people are right now, and this essay is written squarely for you.
You've probably noticed something specific in your own AI use. You've gotten useful answers. You've also gotten answers that almost-but-not-quite worked, and you rephrased your question, and got something better, and moved on. You may have seen people on LinkedIn or in your team's Slack talk about their AI workflows and felt a small specific kind of envy — they seem to get more out of this than I do, and I can't quite tell why.
That feeling — something's off about how I use AI, but I can't name it — is the entry point for everything that follows. The feeling is correct. You're observing something real. You're not bad at AI. You're somewhere on a map you don't have yet.
Most discussion of "getting good at AI" treats it as a technical skill problem — learn the right prompts, the right tools, the right techniques. (A prompt, since I'll use the word a lot from here on, is just the message you type into a chat box. The art of writing a good one is what people mean by "prompt engineering.") Skill is real. A skilled prompter can write a sharp one-shot prompt — meaning a single message, no follow-ups — and get a sharp answer. But the variance you're noticing between yourself and the people who seem to have figured something out is mostly not skill. It's fluency.
Skill is what you can do. Fluency is the quality of your relationship with the medium. A skilled speaker of a language can produce correct sentences when prompted to. A fluent speaker is living in the language — using it without thinking about how, listening without translating, recovering gracefully when something doesn't quite work. The skilled prompter and the fluent user can produce the same output on a small task. On a long, ambiguous, multi-day task they look like completely different people.
Most prompt-engineering guides quietly miss that they are teaching the equivalent of grammar — useful, real, never wasted, never sufficient. To learn a language as an adult you do need some of the academic vocabulary. You need to know what a verb tense is, what an article does, why the sentence broke. But that knowledge alone never produced a fluent speaker, and never will. The only path to fluency is through embodied practice — actually attempting to communicate in real situations, and discovering by trying what works and what doesn't, for you, in your style.
The same is true with AI. Fluency cannot be drilled. It can only be lived into. The framework that follows is shaped by that fact: it's a map of the embodied stages of building a relationship with this new medium, not a curriculum of techniques to memorize.
If you and I have talked in the last 2-3 years and AI came up, I almost certainly asked you how you actually use it. Not as research, but out of genuine curiosity. As a habit I couldn't break. Family, friends, clients, neighbors, complete strangers, most of my co-workers. After enough conversations the pattern jumped out at me — and it had the same shape as the language progression I'd already walked once, in another language, at nineteen.
I had a privileged seat during the emergence of useful AI. I haven't handwritten code since ChatGPT shipped in late 2022 — almost four years of building real production work AI-first, watching the patterns evolve in real time. Chatbots gave way to AI-assisted coding gave way to agent loops gave way to artifact-based workflows gave way to long-term memory ownership and orchestrating multiple agents at once. The levels in the framework aren't a theory I sketched out one weekend. They're stations I passed through, in production, at full speed, while the field built itself around me. This is my attempt to name the pattern and share language for understanding this massive shift in knowledge work and how to navigate it.
Outsider. Tourist. Conversationalist. Fluent. Native Speaker.
That's the framework. Five places to stand in your relationship with AI, named after the language-fluency arc they parallel. The names should sound familiar. You've already been a tourist somewhere.
The Outsider hasn't shown up yet. They've heard about AI — it's hard to avoid hearing about it in 2026 — but they haven't actually opened a tool and tried anything. AI is something other people do. Maybe their kids. Maybe their team. The cultural conversation about AI is loud, and the Outsider has likely formed opinions from outside without having had any direct experience. Outside observation is always more confident than inside experience, in any new domain.
If this is you, you're squarely on the map — Outsider is where the framework starts. It just doesn't have an interaction to characterize yet, because there isn't one to name. The good news: the entry move is genuinely tiny. Open ChatGPT or Claude, type one question — ideally about something you already understand well, like your job or a hobby — and read what comes back. You'll learn more about what AI is from one minute of that than from a week of reading articles. The first useful answer is the threshold. Once you've crossed it you're a Tourist, and the rest of the framework starts applying.
A Tourist has shown up. They're in the country, they've got a phrasebook in their pocket — meaning they've used ChatGPT or Claude (or another tool) for something specific and gotten something useful back. They use AI in single-prompt exchanges: type a question, read the answer, decide if it's good enough, move on. Each interaction is its own complete unit. There's no thread that survives between prompts. There's no document being shaped over time.
This is the entire world of "prompt engineering" — the body of advice that's existed since the first version of ChatGPT shipped. Be specific. Give the model context. (Context, here and throughout: the background information you give the AI to work with — the relevant documents, the prior decisions, the example outputs, whatever it needs to do its job well.) Tell it what format you want. This advice isn't wrong. It's the equivalent of memorizing useful phrases — real, useful, and never obsolete. Every level above Tourist still relies on it constantly. Writing a clear ask is the foundation. The levels above just absorb it into a richer toolkit.
I have a friend who runs his own consultancy. He's smart, technical, busy. He opens Claude when he needs something — a draft of a client email, a marketing tagline, a quick research synthesis. The interactions are short and discrete. He gets useful output. He moves on. Last time we talked I asked him the longest single AI session he'd had in the past month. He paused for a long time before answering. Four exchanges.
That's not a failure. That's Tourist. He's getting genuine value from where he is. The outcomes he produces with AI are, by their nature, small enough to get in a single prompt. The frustration that signals readiness for the next level is small but persistent: it forgot what we just talked about. I have to keep re-explaining what I want. That sentence is the threshold — and what's underneath it is a desire to produce something bigger than fits in a single ask.
Just so this entire essay isn't without practical takeaways:
Tourist 80/20 — Prompt engineering in a nutshell, according to Erik
Brief, don't ask. The prompt does the heavy lifting. Before hitting send, describe the outcome you want, paste in the context AI can't see (your prior decisions, the relevant doc, the constraints), and name the hard requirements. A good answer is overwhelmingly downstream of a good brief, not a clever model.
Diagnose misses, don't rephrase them. When AI gets it wrong, the instinct to restate the prompt almost never works. Ask AI what it inferred from your prompt and what it would have needed to do better — that surfaces the gap faster than restarting from scratch.
Most "AI doesn't work for me" complaints come from the first one. Almost all of them come from one of these two.
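To make "brief, don't ask" concrete, here's a hypothetical before/after. The business details are invented for illustration — the shape of the brief is the point:

```text
Before (an ask):
  Can you write a product announcement for our new feature?

After (a brief):
  Draft a product announcement for our new scheduling feature.
  Context: we sell to dental offices; our voice is plain and warm;
  the feature lets front-desk staff auto-fill cancellations from
  a waitlist. Requirements: under 150 words, no exclamation points,
  end with a link placeholder. Here's a past announcement whose
  tone we liked: [...]
```

The second version isn't cleverer. It just hands over the outcome, the context AI can't see, and the hard requirements — the three things the bullet above names.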
This is where most of the people in my orbit live, or are trying to reach. A Conversationalist works with AI rather than just asking it. There's a doc, a draft, a plan, a piece of code, a research note — something tangible — and the AI and the user are shaping it together over multiple turns. Each prompt builds on the last. AI's first response is a draft, not an answer. Drafts are starting points.
The cognitive shift is real and worth naming. The Tourist treats AI as a tool that responds. The Conversationalist treats AI as a collaborator that contributes. The practical difference shows up in how a session goes: instead of ask, read, accept, move on, the rhythm becomes draft, react, refine, push, refine again. The user is no longer issuing queries. They're holding a working session and steering in real time.
This is also where outcome size starts climbing. The Tourist's outputs fit in a single prompt's worth of work. The Conversationalist's outputs don't — they're shaped over many turns, accreting toward something the user couldn't have asked for in one shot because they didn't know exactly what they wanted yet. You often discover the artifact by working on it.
What's also true at this level — and most Conversationalists notice it before they have words for it — is that AI's quality depends on what it knows. The Conversationalist starts providing context up front. They paste their own previous notes into a fresh session to get the AI caught up. They reference earlier conversations or documents the AI hasn't seen. They keep a Google Doc with their project context open in another tab and feed it in when starting fresh. This is the user's first encounter with what the field eventually formalized as context engineering — the discipline of deliberately managing what goes into the AI's working memory each session, rather than letting it be whatever happens to be in the chat history.
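The manual context-loading habit is mechanical enough to sketch in code. This is a hypothetical illustration, not any tool's API — in practice the Conversationalist does this by copy-pasting into a chat box, but the logic is the same: gather the standing notes, prepend them, then ask.

```python
from pathlib import Path

def build_briefing(project_dir: str, question: str) -> str:
    """Assemble a session-opening prompt from standing project notes.

    Hypothetical sketch of the Conversationalist's front-loading habit:
    the file names and structure here are invented for illustration.
    """
    sections = []
    for name in ["decisions.md", "glossary.md", "current-draft.md"]:
        path = Path(project_dir) / name
        if path.exists():  # include whatever standing context exists
            sections.append(f"## {name}\n{path.read_text()}")
    context = "\n\n".join(sections)
    return (
        "You are helping me with an ongoing project. "
        "Here is the standing context:\n\n"
        f"{context}\n\n"
        f"With that context in mind: {question}"
    )
```

The point of writing it down this way is to show what "context engineering" formalizes: the briefing is a deliberate artifact you construct, not whatever happens to be in the chat history.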
There's a particular Conversationalist move worth naming because most people don't realize they've started doing it: thinking in outputs. Tourists think in questions — what should I ask? Conversationalists think in artifacts — what should this end up looking like? — and work backward into the prompts that produce it. The reframe is small, mechanical, and changes more about your AI use than any single prompt-engineering technique you could learn.
The Conversationalist's frustration, when it comes, is recognizable: I've explained this exact context three times this week. You can feel the briefing pattern repeating. The manual context-loading is what's wearing on you. Underneath that frustration is the same engine: you're trying to produce outcomes that span more than a single working session, and re-creating the briefing every time is the bottleneck.
Conversationalist 80/20 — Working with AI on artifacts
Sessions are working sessions on a specific artifact. Open the doc, the draft, the plan, the code. The session's purpose is shaping that thing; AI's first response is a draft to push back on, not an answer to take. Steer when responses drift — don't restart.
Carry state forward deliberately. Front-load the context AI can't see at the start of each session. When you hand off — switching sessions, hitting a length limit, jumping to a different tool — have AI write the briefing prompt for its next instance. The bottleneck at this level isn't AI's ability; it's continuity.
A Fluent user owns patterns. The thing they encode isn't a single artifact, the way the Conversationalist owns a doc. It's the rules by which AI gets briefed, applied, and reapplied across many tasks. The names for the things they build vary — skills, plugins, MCP servers, workflows, system prompts, agents, automations — and the vocabulary isn't fully settled either. Quick definitions: a skill (or workflow) is a written instruction set (in plain text) the AI loads and follows for a specific kind of task. A system prompt is the persistent set-up message that shapes how the AI behaves across an entire conversation, before any specific request. An agent is an AI configured to operate over multiple steps with some autonomy — taking actions, calling tools, making decisions inside a defined scope. Underneath all those names is the same move: the briefing pattern itself is now infrastructure I maintain.
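To make "a written instruction set in plain text" concrete, here's a hypothetical skill file. The exact format varies by tool and the details here are invented — treat this as the shape of the thing, not any product's syntax:

```markdown
# Skill: weekly-status-draft

## When to use
The user asks for a weekly status update on a client project.

## Instructions
1. Ask for (or load) this week's commit log and meeting notes.
2. Summarize progress in three sections: Shipped, In Progress, Blocked.
3. Keep the tone direct, no filler, under 200 words.

## Known failure modes
- Do not invent metrics; if a number isn't in the notes, omit it.
- "Blocked" items must name who or what is blocking.
```

Notice the last section: each past failure becomes a standing rule. That's what it means for the briefing pattern to be infrastructure you maintain rather than a prompt you retype.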
The cognitive shift from Conversationalist is about repeatability. The Conversationalist has gotten good at iterating on a single artifact. The Fluent user has noticed they keep re-creating the same briefing shape across many artifacts and has started encoding it. The work moves from have a session with AI to design the session AI will have. The user is no longer just inside the conversation. They're shaping the rules of how conversations like it should go — because the outcomes they want now are too large to live inside any single session.
There's a deeper shift at this level too, and it's the one that genuinely changes what becomes possible. As models get better at general knowledge, the leverage shifts to encoding what's specific to your context. Models are absorbing the world's transferable knowledge faster than any individual can. Knowing how to write a generic React component is becoming less valuable; what models can't absorb is the knowledge specific to your team, your codebase, your customer base, your past decisions, your half-built systems, the conventions only your shop uses. That's institutional knowledge, and it stays valuable even as models improve. Fluent users have noticed this. They aren't trying to use AI better in general. They're encoding their specific context into infrastructure the agent carries forward across sessions. That's the leverage move.
The personal-arc claim from earlier deserves its weight here, where it's most relevant: I lived through every level transition in real time as the tools matured, in production, alongside engineer peers with decades of hand-written code behind them. Tourist patterns ran through 2022 into early 2023. Conversationalist patterns through 2023. Fluent patterns crystallized in 2024–2025. Whatever comes next is happening this year. A caveat that matters: my work is heavy on quick demos and proofs of concept — work where output is verifiable on the spot. That fast feedback loop is what made the experiment tractable from day one. Most engineers couldn't have run it in 2022 because their work has long verification cycles, distributed coupling, ambiguous correctness criteria. So when I describe moving fast through the levels, remember: the progression applies to anyone, but the pace depends on what you're working on.
The Fluent user's frustration that signals the next threshold: the system I'm running on is what's limiting me, not the prompts or the workflows. They want to control what data leaves the machine. They want to compose multiple agents in ways the existing tools don't support. They've outgrown someone else's runtime — and the outcomes they're now reaching for require building, not just configuring.
Fluent 80/20 — Encoding patterns into infrastructure
Encode the briefing pattern, not the artifact. What you've found yourself re-explaining is the thing to write down. Skills, system prompts, workflows are infrastructure you maintain — not one-off documents. Fix the instructions, not the output: each AI failure becomes an entry in the instruction set, and the set tightens with use.
Encode what models can't. General knowledge is increasingly free — models are absorbing the world's transferable knowledge faster than any individual can. Your leverage is institutional: your team's conventions, your codebase's patterns, your past decisions. Maintain few skills deeply rather than many shallowly; depth compounds, breadth doesn't.
The Native Speaker builds the system that runs the agent — not just the instructions the agent runs. They own what people in the field call the harness (the program that orchestrates the AI and keeps the conversation running), the loop (the back-and-forth cycle of prompt, response, next-prompt), the memory store (where the agent's long-term knowledge lives), and the wiring to whatever tools and data the agent uses. The runtime that Fluent users rent, the Native Speaker maintains. From the outside this can look like Fluent with more skills. It isn't. The qualitative gap is in what the user owns. At Fluent, the rules are yours but the runtime is rented. At Native Speaker, the runtime is yours too. A Fluent user whose Claude Code starts misbehaving files a feedback ticket. A Native Speaker whose agent loop starts misbehaving reads their own logs and fixes the loop. Different kind of relationship.
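The harness/loop/memory vocabulary is easier to see in code than in prose. Here's a minimal sketch — every name is invented, and `fake_model` stands in for a real model API, because the harness doesn't care which model sits behind it, only the message contract:

```python
import json

def fake_model(messages):
    """Stand-in for a real model call. A real harness would hit an API
    here; this stub asks for one tool call, then answers."""
    last = messages[-1]["content"]
    if "TOOL_RESULT" in last:
        return {"type": "answer", "content": "Done: summarized the file."}
    return {"type": "tool_call", "tool": "read_file", "arg": "notes.txt"}

def run_tool(name, arg):
    """Toy tool registry; a real harness wires these to actual effects."""
    tools = {"read_file": lambda a: f"(contents of {a})"}
    return tools[name](arg)

def agent_loop(task, model=fake_model, max_steps=5):
    """A minimal harness: keep the conversation running, execute tool
    calls, append results to memory, stop when the model answers."""
    memory = [{"role": "user", "content": task}]  # the memory store
    for _ in range(max_steps):
        reply = model(memory)
        if reply["type"] == "answer":
            return reply["content"], memory
        result = run_tool(reply["tool"], reply["arg"])
        memory.append({"role": "assistant", "content": json.dumps(reply)})
        memory.append({"role": "user", "content": f"TOOL_RESULT: {result}"})
    return "(step limit reached)", memory
```

When your agent misbehaves and you own this loop, "read the logs and fix it" means editing a function like `agent_loop` — that's the concrete difference between renting the runtime and maintaining it.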
There's another way to describe what changes at this level, and it might be the more honest one: it's not just what you own, it's what you default to. A Fluent user reaches for AI when a task calls for it — deliberately, often skillfully. A Native Speaker is already there. Every problem is approached as an AI-shaped problem first, and the ones that genuinely aren't get recognized quickly. Like an actual native speaker who doesn't translate in their head, the Native Speaker isn't deciding to use AI. They're already inside it. The substrate isn't a project; it's the medium they live in. The outcome they're producing is no longer a single artifact or even a single workflow — it's the system that produces other outcomes, often run by other people.
Most users will never need to cross this threshold, and that's not a defect of their fluency. It's a fact about how few people need to build the system rather than use it. The framework's first complete version names Native Speaker in broad strokes — custom agent loops, self-hosted inference, harness design, defining new vocabulary — but the specific shape of work at this level is still actively forming as the field matures. I'm a Native Speaker by my own framework's definition, and my own definition of the level is fuzzy on purpose. We're still figuring out what this looks like.
The framework isn't a hierarchy of value. It's a hierarchy of interaction surface area and of outcome size. Native Speaker isn't better than Tourist. It's simply more fluent than Tourist. The Outsider who's curious is potentially only weeks from becoming a robust Conversationalist.
Fluency is cumulative — you don't graduate out of phrasebook skill when you become a Conversationalist. You absorb it. Every "level" above Tourist still relies on writing a clean single-prompt ask all the time. That skill never becomes obsolete; it becomes one piece of a larger toolkit. Nobody is above the lower levels.
This is worth saying explicitly, because the framework is easy to misread as climb to the top. It isn't.
Tourists are a critical part of the ecosystem, and stopping at Tourist is genuinely fine for many people. Plenty of roles don't need anything past single-prompt fluency to be well-served by AI. A real-estate agent who uses Claude to draft listing descriptions. A grandparent who uses ChatGPT to translate something for a grandchild. A small-business owner who uses AI to answer one specific kind of question well. None of these people are necessarily leaving value on the table by not becoming Conversationalists. Their role's relationship with AI is fully expressed at Tourist.
The framework's role isn't to push everyone to Native Speaker. It's to help each person locate the level that matches their actual work, and then move there if they aren't already.
That said: if you're a knowledge worker — and especially if you write software that will be used by others — the bar is higher. The kinds of outcomes knowledge work increasingly demands aren't single-prompt-shaped anymore. If your job involves shaping documents, plans, code, research, designs, decisions over time, you need to be at least a Conversationalist. And if you're a software engineer in 2026, the honest read of the field is that Conversationalist is the floor, not the ceiling — Fluent is becoming the working baseline for engineers who want to ship at the pace the rest of the industry is now shipping at. That's not a value judgment. It's a description of how the field has shifted.
One more thing worth naming: the levels themselves are sliding. Today's Tourist tools will, within a year or two, be capable of doing things that today's Fluent users build infrastructure for. The agentic tooling is improving fast, and the practical effect is that the barrier to entry at every level keeps dropping. The framework's level names will stay; the level details will keep shifting. Today's Conversationalist will be tomorrow's Tourist, in capability terms. Don't read the framework as locked geography. Read it as the shape of how people relate to AI — a shape that's stable even when the tools underneath it aren't.
Most readers of this post are somewhere between Tourist and Conversationalist. That's not a position to be ashamed of — it's where the largest interesting threshold sits, and crossing it changes more about how you work than any tool upgrade ever will. And here's the honest claim, given the relief I offered earlier about how much easier this is than learning a language: with relatively little deliberate effort, an average reader can reach Conversationalist, with the path to Fluent visible from there. There's no two-year immersion required. The work is about putting in enough hours of actually using the medium that the right habits start forming.
Every felt frustration is a memory limit. The threshold is what you need to start owning.
If something in the descriptions above made you nod — yeah, that's me re-explaining context every week — that's the data. The frustration isn't a personal failing. It's a signal that you're at a threshold, and the threshold has a name, and naming it changes what you can do about it.
There's a useful complementary frame I've written about elsewhere — treat AI as a brilliant coworker with short-term memory loss. Like Dory, the fish. This coworker is essentially limitless in the digital domain — code, writing, analysis, research, planning — but she resets between sessions and her grasp on the working context drifts even within a long one. Every felt frustration in this framework is, underneath, a different version of the same memory limit, expressed at a different scale. The Tourist's it forgot what we just talked about and the Native Speaker's I want to control what data persists across sessions are the same problem. Each level is a more sophisticated answer to how do I keep my coworker oriented? The other quiet measure of which level you're on is the size of the outcomes you can comfortably produce: a single answer, an iterated artifact, a repeatable pattern, a system. As that size grows, your fluency is what's growing.
I'm building a self-assessment to help you locate yourself precisely on the map and a curated reading list for each level — the cleanest existing material, in the order that compounds. None of it is live yet. If you want me to send the link the moment it is, drop your email below. No newsletter. No general-purpose marketing. One specific notification when the assessment ships, plus the reading list for whichever level you turn out to be on.
The pattern is already running. You're somewhere on it. The only question is whether you want a name for where.