Levels of AI Intensity

YMMV
Published on 2025/09/06

I went to an AI Leaders Lunch the other day. It was a good "live" reminder of how different people's philosophies can be. Maybe it wasn't as mind-blowing as I thought it would be, but I left the event pretty intrigued.

What hit me first was how differently everyone at the table approached AI. I was surrounded by smart people who've been in tech for years, and they were all over the map. Some were using AI for everything from code reviews to meeting notes. Others were still figuring out if it was worth the hassle. The fascinating part wasn't the variety of approaches, though. It was seeing how the people who'd really made AI work for them operated at a very different level of intensity. To a point, it almost sounded like they were determined to make AI work and, possibly, had achieved that. They were the right combination of stubborn and curious, and obsessive about refinement. I appreciated their very methodical approach to figuring out what worked. They developed their own systems for prompting, their own ways of breaking down problems, their own methods for getting consistent results. That's no easy task!

Meanwhile, I kept thinking about my own experience and how much it mirrored what I was hearing from the more skeptical people at the table. I've definitely fallen into what I now realize is the "one-shot" trap. You have a task, you throw it at ChatGPT or Claude with a pretty basic prompt, get back something that's close but not quite right, and then feel like you're spending more time fixing it than you would have spent just doing it yourself. The cycle of frustration is quite exhausting, and you soon give up, writing off AI almost completely.

The thing that really got to me was realizing how much my own experience and assumptions were working against me. When I ask AI to add a feature, refactor some code, or add a button, I'm unconsciously assuming (or at least I used to) that it knows all the context I know: the testing patterns we use, how we like our commits structured, the particular quirks of our codebase. In that moment of disappointment when the output isn't what I expected, I blamed the AI instead of recognizing that I gave it incomplete information.

Listening to these conversations made me think about how many times I've written off AI tools after those first interactions. There's a gap between what you expect based on all the hype and what you actually get when you try to use it for real work. That gap is discouraging, especially when you're hearing stories about other people becoming dramatically more productive. Given all the misinformation nowadays, I don't even know who to believe anymore. Some productivity claims are a little too wild for my taste.

But sitting around that table, I started to see that the people who'd pushed through those initial bumps had discovered some useful tidbits. They had almost deconstructed AI to make the "magic" disappear. Instead of expecting it to read your mind, you start getting better at communicating what you actually need. And so all the structured prompts start to make more sense.

The diversity of approaches was probably the most encouraging part of the whole conversation. Some people were using AI for highly technical automation. Others had found their sweet spot in document work or handling repetitive tasks that used to eat up hours of their week. A few spent a significant amount of time refining their prompts, taking inspiration from general recommendations you find out in the wild (like how to get rid of the em dash).

Thoughts

What struck me during this event was that none of them had found their happy compromise immediately. They all went through that same frustrating period, but they stuck with it long enough to develop their own methods and evaluations. They'd treated it like learning any other complex skill, with the expectation that it would take time and deliberate practice to get good at it.

I left that lunch with a different perspective on my own AI experiments. Instead of expecting immediate productivity gains, I'm trying to think of it as building a new capability that will pay off over time. The key seems to be pushing through those early failures and treating them as data points rather than reasons to quit. On one of my most recent attempts, I approached it like I would when learning a new library or framework: I try something, it doesn't quite work the way I want, I refine, check the result, and repeat the cycle until happiness has been reached. Now the focus is on the prompt (whether you like it or not) instead of your test suite or iterative coding approach. I still find joy in coding by hand, but I also find joy in delegating incredibly rote work to an agent.
