Becoming an AI-native operator (extended cut)
Additional thoughts from my guest post on Kyle Poyar's Growth Unhinged
I’ve been somewhat fixated lately on what the future of knowledge work looks like in the AI era.
This has been influenced in large part by a) the never-ending stream of software-developer Claude Code maximalism in my X feed and b) contrasting it with my own exciting-but-less-extreme experimentation. The contrast leaves me with a mix of exhilaration and deflation.
I wrote a piece that dives deep into this topic, which went live today on Kyle Poyar’s Growth Unhinged newsletter. You can read the whole thing here:
Becoming an AI-native operator (Growth Unhinged)
I thought I’d share here some more informal musings that didn’t make it into that piece.
Will Claude Code make us all super-saiyans?
After that piece went live, a friend sent me a note and contrasted it with a very different take on the same subject by Austin Hay:
Austin’s piece is excellent (and very fun to read). Highly recommend it, even though we land on some different conclusions.
His essential argument is that knowledge workers who master AI coding agents will become “super-saiyans” - folks who can traverse the entire knowledge stack from strategy to execution, who can work fluidly across domains, and who can build almost anything.
And I don’t disagree with this potential. I’ve seen firsthand how the combination of highly intelligent models + execution-oriented harness + tool access can do amazing things, from cranking out dozens of pages of documentation in a sitting, to creating a home-grown eval framework for our AI assistants, to building a co-writing system for my personal work.
Having this capability at your disposal truly does feel like being a sort of augmented super-human.
So it’s not that I think this is wrong. It’s just that I find it doesn’t actually cover everything I need to do.
Altitude and collaboration surface area
The issue isn’t that execution tools lack power. It’s that they become less complete in direct proportion to two factors:
how much time you need to spend at strategic altitude
how broad of a collaboration surface area you have
Let me unpack.
Execution tools don’t fly at strategic altitude
CLIs are optimized for DOING things. So you can use them effectively for generating strategic inputs (research, textual synthesis, etc.). And you can use them to produce stand-alone strategic artifacts (like a strategy doc). This could work well for a consultant.
But as a corporate functional leader or executive, much of your time isn’t spent on epic one-and-done strategic manifestos. Operating at strategic altitude happens in a thousand daily interactions: comments in docs or wikis; terse, free-flowing exchanges in emails or Slack; attending meetings and consuming information or giving direction.
A CLI can’t easily do this work for you today, and it tends to be less effective at navigating the murky waters of interpersonal relationships, corporate politics, and org dynamics than chat-based tools like ChatGPT or Claude Desktop.
Coding agents aren’t designed for collaborative surfaces
The workflow of a CLI follows traditional software development ergonomics: an individual works on local files, verifies their work in isolation, then commits to shared repositories through a carefully governed process of continuous integration.
But most knowledge work doesn’t actually work this way.
Marketers create drafts in Google Docs, ping teammates for feedback, and revise in place.
RevOps creates analyses in spreadsheets, copies tabs to create new scenarios, and comments back and forth with sales leaders on financial models.
Recruiters share candidate profiles with hiring managers and exchange comments and feedback.
An executive team collaborates on a board deck in Google Slides.
Of course, there are components of all these tasks that can be automated with coding agents. But the actual places where interaction and collaboration occur are awkward or inaccessible for CLIs.
These spaces can also be downright DANGEROUS for autonomous agents to play in.
A coding agent gone rogue might bork your local files, but those changes aren’t getting merged to your codebase. The Git-based workflow is designed to protect against exactly that.
There is no such CI/CD failsafe protecting your Google Docs or Confluence KBs. These environments are simply not yet designed for agentic collaboration.
All this leads to a simple axiom that holds true for today (though perhaps not tomorrow):
Coding agents are most effective for roles that produce deliverables independently. They are less effective for roles where much of the work is collaboration and alignment.
Coding tools are useful in inverse proportion to your collaboration surface area.
The last mile problem (or: why can’t Claude Code just build my slide deck)
The one surface area you tend to spend MOST time on as you become more senior in your career is slides.
Your role becomes less about delivering actual work and more about planning, vision, communication, and alignment. And as much as we sometimes loathe them, slides are an excellent vehicle for that.
I collaborate extensively with LLMs for building slides.
They are very, very good at absorbing context (docs, transcripts, etc.), synthesizing a narrative, planning a slide sequence, and giving feedback on existing decks.
They have become moderately capable at actually generating a deck, although there are still gaps (examples: I’ve not yet been able to get perfect adherence to an existing brand template; you have to generate a PPTX file then import to Google Slides, etc.).
And they are still nearly incapable of something as simple as editing an existing Google Slide document.
I am forever taking screenshots of slides and pasting them into a chat box, then making edits myself.
This seeming paradox (the tools are very good at hard things but struggle with simple things) reflects the challenges of interacting with environments that weren’t built with agents as first-class users.
The visual affordances that make editing a slide intuitive for a human are perversely complex for an agent.
I’m sure this problem will soon be cracked (perhaps it’s solved already and I’ve missed it). But until this connective tissue is present, much of your work feels inaccessible for coding agents.
The security conundrum
The most impressive agentic setups I’ve seen are all from independent operators, solopreneurs, or small agencies.
These folks are leading the way (and I’m quite envious of their freedom). But they have luxuries that don’t exist in corporate environments with dedicated Security teams and critical certifications on the line.
You COULD have local MCP connections to all your systems of record, but most teams aren’t ready for this yet. Corporate AI-native work faces a governance problem, not just a technical one.
Zooming out: these limitations are temporary
So to summarize a long series of gripes: it’s not that I’m a CLI skeptic. It’s just that we’re still missing a lot of what’s needed to operate as AI natives in corporate contexts.
I feel the pain most keenly because I’m working at this intersection every day. The possibilities are so obvious, but the gaps are still real. So in part, this is me throwing a product spec into the void to see what comes back.
On the bright side, it’s obvious these problems are going to be solved soon.
The need is too vast. Someone will take a variant of this spec, feed it to a swarm, and take a bunch of money from Andreessen Horowitz. The future is on the way.