Over the weekend I watched a talk with Arthur C. Brooks. He was making a distinction I hadn't heard framed this way before: complicated vs. complex.
Complicated problems have answers. Engineering, logistics, systems, process. You can map them, model them, solve them. They're the domain of the left brain: logical, sequential, analytical.
Complex problems are different in kind. Love. Meaning. Relationships. Grief. These don't have solutions. They have navigation. And we navigate with the right brain. Associative, emotional, meaning-making.
A few months back, I went through a period where I was using AI almost like a therapist. I turned to it with the heavy stuff. Questions about direction, meaning, what I was building toward. The conversations were long. Thoughtful, even. But my thinking kept going darker. I'd come out of a session more tangled than when I went in. At the time I couldn't explain why. I just stopped.
Watching Brooks, I understood why: I'd been feeding a left-brain tool right-brain problems. The AI would engage, build a logical structure around the question, offer frameworks and possibilities. But logic applied to meaning doesn't illuminate it. It just makes the confusion more elaborate.
The code work is different. Lately I've been shipping faster than I have in years. Bugs, refactors, missing endpoints: things with actual answers. The agent finds the answer. I move on. That part works because the problems fit the tool.
I'm not sure what this means for the complex stuff. The questions that don't have answers. I don't think it means AI is useless there. But I think something different is required. Maybe just the recognition that you're not looking for a solution. You're looking for someone to sit with you in it.
That's not what I was doing. I was asking for answers to questions that don't have them. And when a very good answer-finding machine finds an answer anyway, it probably takes you somewhere you didn't want to go.