I needed to run two Claude accounts from Zed's agent panel. A personal one and a new work account at the agency. I asked Google: "how to use multiple claude accounts in zed agent panel"
The answer came back confident. Numbered steps. A specific command.
1. In the Agent Panel, create a new thread for each account.
2. When running the Claude Code external agent, run /login within that specific thread to log in with a separate account.
3. Each thread maintains its own conversation history and configuration.
I knew it was wrong before I finished reading.
I'd tried exactly that a few days earlier: running /login in a thread to switch accounts. It doesn't isolate anything. It overwrites the config the whole Zed process reads from, so every thread sees the change. The answer described per-thread isolation as if it were a feature. That isn't how the architecture works.
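To make the failure concrete, here's a sketch of why a per-thread /login can't isolate anything, assuming Claude Code resolves its state from a single default directory unless overridden. The ~/.claude path and the CLAUDE_CONFIG_DIR variable are assumptions from my own setup, not something I've verified against current docs:

```shell
# Hypothetical illustration: every Zed thread launches the same binary,
# which resolves one config location unless an override is set.
CONFIG_DIR="${CLAUDE_CONFIG_DIR:-$HOME/.claude}"
echo "$CONFIG_DIR"
# Same path from every thread, so a /login in one thread
# rewrites the credentials that all the others read.
```

The point isn't the exact path; it's that the resolution happens per process, not per thread.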
But most people wouldn't know that. They'd try the /login approach, watch it fail, assume they'd done something wrong. Google's AI Overview was structured like documentation. Specific, plausible, timestamped. It would take working knowledge of how the tool actually behaves to recognise it as invented.
There's real Zed documentation on this. The AI Overview didn't reference it; instead, it generated something that just sounded right.
I understand that AI tools get things wrong; they do mention that in the small print. What I don't understand is why they present wrong answers with the same confidence as correct ones.
Old Google gave me links. I clicked through. I could see the source, check the date, decide whether to trust it. I was doing the work of evaluation without thinking of it that way. The AI Overview removes that step. I get a conclusion, not a trail.
I cross-checked with Claude, which clearly separated what it knew from what it couldn't verify before replying: "My money's on the latter, but a five-minute test beats both our reasoning." That's a different relationship with uncertainty than I got from Google.
The real solution came from reading Zed's config schema directly. The schema documents custom agents; how to point two instances at different config directories isn't written up anywhere.
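For reference, the shape of the working setup looked roughly like this: two custom agent entries in Zed's settings.json, each launching claude with its own config directory. Treat this as a sketch, not verified configuration — the agent_servers key and the CLAUDE_CONFIG_DIR variable matched my setup at the time, and the paths are placeholders; check all of it against the current schema:

```json
{
  // Sketch, not verified config: two agent entries, each with an
  // isolated config directory so /login state doesn't collide.
  // Paths below are hypothetical placeholders.
  "agent_servers": {
    "Claude (personal)": {
      "command": "claude",
      "env": { "CLAUDE_CONFIG_DIR": "/Users/me/.claude-personal" }
    },
    "Claude (work)": {
      "command": "claude",
      "env": { "CLAUDE_CONFIG_DIR": "/Users/me/.claude-work" }
    }
  }
}
```

Each entry shows up as its own agent in the panel, and because each process gets its own config directory, logging in to one doesn't clobber the other.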
Which means even a good search result wouldn't have helped here. The working answer was in territory that hadn't been written up yet. The AI Overview confidently described something impossible, and the real path was somewhere it couldn't see.
I'm not sure what to do with that except stay suspicious.