ADX Uncertainty and AI

Not sure where AI leads us
Published on 2025/07/23

While I often reflect on AI and its impact on my industry, I find it refreshing to read posts like "AI ambivalence", and I'm not too surprised to have so much in common with Nolan. I come from a background in ML and specialized in its applications to Computer Vision (roughly 10 years ago). Being able to apply ML research to something I can easily "look at" has often fascinated me. I'll admit the challenges were intriguing, although somewhat repetitive and even underwhelming at times. Just like Nolan, I ended up switching to general coding.

Jumping to today, AI is everywhere (for better or worse). It feels like it's still very early in this new era, with a lot of tools trying to get ahead or assert dominance in the market early on. It's honestly confusing and tiring. I do see a lot of value in how the industry (any industry, really) is changing; at any given time you could find me arguing both for and against it. Looking at the glass half-full, AI brings a level of "democracy" we've never seen before. Everyone (or at least everyone with access to a computer) has access to so much knowledge, and the barrier to entry has never been this low. If you understand how, and this is not a given, you can learn anything you want and be at least mediocre at it in a much shorter amount of time than ever before. You can write better, draw better, plan better, code better (let's just assume this is universally true for a moment).

The glass is also definitely half-empty. We are reinforcing models that gobble up everything we produce, massage it, repackage it, and spit it back out. My biggest worry is that we end up creating something that feeds itself in an endless loop. Grabbing this quote from the article:

Although of course, as Cory Doctorow points out, the temptation is to not even try to spot the bugs, and instead just let your eyes glaze over and let the machine do the thinking for you – the full dream of vibe coding.

This is what terrifies me. We let the machine do the thinking and we just get better (possibly) at giving it instructions. We don't write that post anymore; we just provide a generic prompt and fix it up a bit. We don't draw as much; we provide a prompt and iterate. We don't code as much; we provide a prompt and iterate. And what we produce ends up in the machine again, and again, and again. I have no prediction of where this is going to lead us (someone much smarter than me probably already has a valid one).

The reality is that AI can give you an edge if you understand how to use it and avoid letting it do all the thinking for you. As I try to adapt to this new world, I worry about the Agentic Developer Experience (I'm not sure the ADX term has been used anywhere, so take it with a grain of salt). How can developers become more productive without being too detached from what the Agents supporting their work are doing? How can we make it feel like magic while still providing the tools for curious exploration? Occasionally I manage to orchestrate Agents in a way that empowers me without crippling me. I haven't found a good balance yet, but I feel like I'm getting closer every day.

Thoughts

Just some scattered thoughts as I think about how I envision users interacting with Atlas Stream Processing (that's what I work on nowadays!). I have to consider where we are headed with AI, and it's both exciting and daunting. We can be the ones that pave the way, or we can completely miss the mark and be doomed to play catch-up forever. I am hopeful though, really!
