We’re asking the wrong question about AI at work.

The typical framing goes: “Can AI do this task?”, followed by demonstrations of language models writing code, summarizing documents, or generating reports. But this misses the fundamental challenge.

The bottleneck isn’t capability. It’s context.

Every Job Lives in a Specific Instance

Here’s what I mean: Your job doesn’t exist in a vacuum. It exists in a particular organization, with specific stakeholders, embedded in a unique culture, constrained by particular resources, and governed by unwritten rules you’ve internalized over months or years.

When you draft a budget proposal, you know:

  • Which line items the CFO scrutinizes
  • How your director prefers data presented
  • What “aggressive but realistic” means in your organization
  • Which battles are worth fighting this quarter

When you review a policy document, you understand:

  • The political sensitivities around certain terms
  • How similar policies played out before
  • Which departments need to be consulted (officially and unofficially)
  • The difference between “technically correct” and “will actually work here”

This isn’t just domain expertise. It’s instance knowledge—the accumulated wisdom of operating in this specific environment, not just any environment of this type.

And right now, we have no systematic way to transfer this to AI.

The New Work: Demonstrating, Not Just Delegating

The future of effective AI collaboration isn’t about getting better at writing prompts. It’s about fundamentally changing how we relate to AI systems—from task delegation to expertise transfer.

This means four new practices:

1. Externalizing Your Heuristics

You make dozens of micro-decisions every day guided by rules of thumb you can barely articulate:

  • “If it’s from this department, check the numbers twice”
  • “Always loop in this person before sending to that group”
  • “When the timeline feels tight, it usually is—push back early”
  • “This kind of request means they actually need something else”

These heuristics are invisible to you. They’re how you think. But to an AI agent operating in your environment, they’re the difference between mechanical competence and genuine effectiveness.

The practice: When AI attempts a task, explain not just what to change, but why. Don’t say “revise this paragraph.” Say “revise this paragraph—it’s technically accurate but will read as defensive to our audience, who already doubts our commitment here.”

2. Building Instance Models

Organizational context isn’t static. It’s a living map of relationships, constraints, norms, and history:

  • Who really makes decisions (versus who officially does)
  • What failed initiatives taught you
  • Why certain processes exist (even if they seem bureaucratic)
  • Where informal influence lives
  • What’s changing and what’s entrenched

The practice: Feed your AI agent not just documents, but context. When you brief it on a project, include:

  • “Here’s the org chart, but Sarah’s team actually drives this”
  • “We tried this approach two years ago—here’s what happened”
  • “The written policy says X, but in practice everyone does Y because…”

Think of it as building a model of your organization as it actually functions, not as it appears on paper.
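One way to make an instance model concrete is as structured data you feed to an agent alongside the documents themselves. A minimal sketch in Python, with hypothetical field names and illustrative values (nothing here is a real API):

```python
from dataclasses import dataclass, field

@dataclass
class InstanceModel:
    """A structured snapshot of how the organization actually works, not how it looks on paper."""
    formal_owner: str                                   # who officially decides
    actual_driver: str                                  # who really drives the decision
    history: list[str] = field(default_factory=list)    # what past attempts taught us
    norms: list[str] = field(default_factory=list)      # written policy vs. actual practice

    def to_briefing(self) -> str:
        """Render the model as plain-text context for an agent's prompt."""
        lines = [
            f"Official owner: {self.formal_owner}",
            f"In practice, decisions are driven by: {self.actual_driver}",
        ]
        lines += [f"History: {h}" for h in self.history]
        lines += [f"Norm: {n}" for n in self.norms]
        return "\n".join(lines)

# Example: briefing an agent on a project (names are illustrative)
model = InstanceModel(
    formal_owner="Finance Committee",
    actual_driver="Sarah's team",
    history=["Tried a zero-based budget two years ago; it stalled in review"],
    norms=["Policy says quarterly reviews; in practice they happen twice a year"],
)
print(model.to_briefing())
```

The point of the structure isn’t the code; it’s that “Sarah’s team actually drives this” becomes something you write down once and reuse in every briefing, instead of re-explaining it per task.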

3. Demonstrating Your Work Process

An AI can write a report. But can it write your report—the one that lands because you know exactly:

  • What to emphasize for this audience
  • Where to be direct versus diplomatic
  • Which precedents to cite
  • How much detail to include (not too little, not too much)
  • When to escalate versus resolve quietly

The practice: Don’t just show AI the final output. Show it your process:

  • “Here’s my first rough draft—see how I’m just getting ideas down?”
  • “Now I’m restructuring—notice how I moved the risk section up because leadership reads top-down when rushed”
  • “This revision is about tone—I’m softening the critique but keeping the substance”

Let AI observe how you navigate uncertainty, make judgment calls, and iterate toward “good enough for this context.”

4. Iterative Refinement with Feedback Loops

The goal isn’t perfect delegation. It’s progressive transfer of judgment.

  • First attempt: the AI executes mechanically. Your correction: not just “fix this,” but “fix this because of these contextual factors.”
  • Second attempt: the AI incorporates that context. Your refinement: “Better, but you missed this nuance—here’s why it matters.”
  • Third attempt: the AI applies the pattern to a new situation.

Over time, the corrections become less frequent. The AI develops what looks like intuition—really, it’s accumulated instance knowledge.
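That loop can be sketched in a few lines: each correction is stored together with its contextual rationale, and the accumulated corrections are injected into every later attempt. This is a toy sketch, not a real agent framework; `generate` is a stand-in for whatever model call you actually use.

```python
from typing import Callable

class RefinementLoop:
    """Accumulates corrections-with-reasons and feeds them into each new attempt."""

    def __init__(self, generate: Callable[[str], str]):
        self.generate = generate          # stand-in for the model call
        self.corrections: list[str] = []  # each entry: what to change and *why*

    def correct(self, what: str, why: str) -> None:
        # "Fix this" alone teaches nothing; the why is the transferable part.
        self.corrections.append(f"{what} (because {why})")

    def attempt(self, task: str) -> str:
        # Prepend every accumulated lesson as context for the next try.
        context = "\n".join(f"Lesson: {c}" for c in self.corrections)
        return self.generate(f"{context}\n{task}" if context else task)

# Toy "model" that just echoes its prompt, to show what the agent would see.
loop = RefinementLoop(generate=lambda prompt: prompt)
loop.correct("Move the risk section up", "leadership reads top-down when rushed")
print(loop.attempt("Draft the quarterly report"))
```

Notice that the stored lesson is a pattern (“because leadership reads top-down when rushed”), not a one-off fix, which is what lets the third attempt generalize to a new situation.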

The Evolution: From Execution to Judgment

Eventually, if you do this well, something shifts.

The AI doesn’t just execute tasks. It starts to think through problems the way you do.

It knows:

  • When to be creative versus conservative
  • When to ask for clarification versus make a judgment call
  • When to stick to the script versus adapt on the fly
  • When “technically correct” isn’t actually correct in context

This isn’t artificial general intelligence. It’s artificial situated intelligence—AI that operates effectively in the specific, messy, idiosyncratic context where you work.

This Requires New Tooling

Right now, we don’t have good systems for this kind of expertise transfer. We need tools that support:

Demonstration Capture
Recording not just outputs but the process of arriving at them. Think: narrated screen recordings of you working, with explicit decision-making commentary.

Heuristic Extraction
AI that asks “why?” at decision points. “I notice you chose approach A over B—what factors drove that choice?”

Context Building
Structured ways to feed organizational knowledge: relationship maps, historical context, cultural norms, informal power structures.

Confidence Calibration
AI that knows when it lacks context and says, “I could guess, but I should ask you about this.”

Human-in-the-Loop Refinement
Not just review-and-approve, but supervised attempts with structured feedback that AI can learn from.
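The confidence-calibration piece, in particular, can be prototyped without any model at all: before attempting a task, check which pieces of context the task declares it needs, and ask rather than guess when something is missing. A minimal sketch, where the required-context keys are entirely hypothetical:

```python
def attempt_or_ask(task: str, required: list[str], context: dict[str, str]) -> str:
    """Proceed only when all required context is present; otherwise ask, don't guess."""
    missing = [key for key in required if key not in context]
    if missing:
        return "I could guess, but I should ask you about: " + ", ".join(missing)
    briefing = "; ".join(f"{k}={context[k]}" for k in required)
    return f"Proceeding with '{task}' given {briefing}"

# The agent lacks the audience context, so it asks instead of guessing.
print(attempt_or_ask(
    task="draft budget memo",
    required=["audience", "decision_maker"],
    context={"decision_maker": "Sarah's team"},
))
```

The interesting design question is upstream of this check: deciding, per task type, what counts as required context — which is exactly the instance knowledge the rest of this piece is about externalizing.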

The Real Question

So we return to the reframe:

The question isn’t “Can AI do this task?”

It’s: “How do I transfer my accumulated expertise, context, and judgment to an AI operating in my specific environment?”

This is a different challenge from the one we thought we were solving. It’s less about AI capability (which is advancing rapidly) and more about:

  • Visibility: Making implicit knowledge explicit
  • Structure: Organizing tacit expertise so it can be transferred
  • Collaboration: Treating AI as something you train, not just something you use

The organizations that figure this out won’t just be more productive. They’ll have a compounding advantage: every expert doesn’t just do their job—they build artifacts that encode their expertise, making the organization as a whole smarter over time.

That’s the real promise of AI at work. Not replacing humans, but systematically capturing and scaling human judgment in the specific contexts where it matters most.


What are your heuristics? What context would an AI need to be effective in your role? These aren’t abstract questions—they’re the foundation of effective AI collaboration.