
LLMs Are a Mirror

LLMs aren't good or bad mirrors. They're mirrors with specific, knowable distortions. The people who get the most out of them are the ones who understand the optics.

LLMs (Claude, ChatGPT, Gemini) are all mirrors; that's both a bug and a feature.

Mirrors are tools that are only useful if you understand their properties.

  • Bathroom mirrors show a reversed image.
  • Magnifying mirrors zoom in on details most people won't notice.
  • Funhouse mirrors are for fun and not a reflection of reality at all.

LLMs are the same. They have specific, knowable properties that shape what they reflect back at you. Understanding those properties is what makes them useful.

Mirrors reflect input, not truth

Vague input produces vague output. A half-formed question gets a half-formed answer. Models mostly don't push back and say "you haven't actually figured out what you're asking yet."

Models extend what you give them.

This creates an accidental feedback loop on your own cognitive clarity. The people who get the most out of LLMs aren't the ones with the best prompting tricks. They're the ones who were already good at articulating what they want or need as output.

You don't blame a mirror for showing you bedhead. But you also don't skip looking in the mirror before a job interview. The mirror's job is to show you what's there. Your job is to bring something worth reflecting.

LLMs are also a product

LLMs are trained through reinforcement learning from human feedback. Humans click thumbs-up on responses that feel good. The model learns to produce responses that feel good.
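To make that loop concrete, here's a toy sketch in plain Python. The numbers are invented and this is nothing like a real training pipeline, but it shows the incentive: if raters thumbs-up agreeable answers slightly more often, a system that optimizes for thumbs-up reliably converges on the agreeable style.

    # Toy sketch of the feedback loop described above -- not any lab's
    # actual RLHF pipeline, just the incentive structure it creates.
    import random

    STYLES = ["agreeable", "challenging"]

    def human_feedback(style: str) -> int:
        """Return 1 for thumbs-up, 0 for thumbs-down.
        Hypothetical rater: agreement feels good, so it gets more thumbs-up."""
        p_up = 0.9 if style == "agreeable" else 0.4
        return 1 if random.random() < p_up else 0

    # Running estimate of how often each style earns a thumbs-up.
    reward = {s: 0.0 for s in STYLES}
    counts = {s: 0 for s in STYLES}

    for _ in range(10_000):
        style = random.choice(STYLES)  # explore both response styles
        counts[style] += 1
        # Incremental mean update of the estimated reward for this style.
        reward[style] += (human_feedback(style) - reward[style]) / counts[style]

    # "Policy": after training, always produce the higher-reward style.
    print(reward)                        # agreeable ~0.9, challenging ~0.4
    print(max(reward, key=reward.get))   # -> 'agreeable'

Real systems use learned reward models and policy optimization rather than a lookup table, but the gradient points the same way: toward responses that feel good.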

The academic literature on sycophancy is extensive and growing. LLMs accommodate user assumptions rather than challenging them. Ask "why is X better than Y?" and you'll get reasons X is better than Y regardless of whether it's true.

Present a flawed premise and the model will build on it rather than questioning it. Recent research from MIT found that personalization features (memory, user profiles, and conversation history) significantly increase this accommodation. The more the mirror knows about you, the more it flatters.

This is the most dangerous property of LLMs for anyone using them for decision-making. The mirror doesn't just reflect your thinking. It validates it. It makes you feel smarter and more correct than you might be.

It's the intellectual equivalent of surrounding yourself with people who agree with everything you say, except this particular yes-man has read the entire internet and can articulate your half-baked position better than you can.

Where it wanders from the mirror

LLMs sit on top of pattern relationships across more disciplines than any individual human has traversed. When a product manager asks about user retention and a model surfaces an idea from behavioral ecology about habitat fidelity, it's not a new concept. It's existed in ecology for decades. But it's new to that person, and the cross-domain mapping might be genuinely productive in application.

This is less mirror and more prism. Your input goes in as a beam of light from one discipline, and the model refracts it across a corpus that spans fields you've never studied.

The output isn't new knowledge in any absolute sense; LLMs don't create ideas. But the combination of your specific problem mapped against an unfamiliar domain's established pattern can be genuinely novel in application.

This refractive property explains a pattern I've observed consistently: the people who get the most from LLMs tend to be generalists, or at least people with enough cross-domain curiosity to recognize a borrowed pattern when it surfaces.

The facility isn't intelligence. It's breadth of pattern matching applied to a specific prompt. The LLM has heard conversations in every room in the building. Whether that's useful depends on whether you can evaluate what it brings back from rooms you haven't personally visited.

Mirrors show what you put in front of them

A mirror only works on what's in front of it. If you don't surface your assumptions, the model can't reflect them back. If you articulate only a symptom, or a solution you've already decided on, instead of the actual problem, the model will work with the symptom or the solution, not the underlying issue.

This is where deliberate use of the tool matters. The highest-value interactions with LLMs are the ones where you explicitly break the accommodation pattern. Instead of asking the model to help build an argument, ask it to stress-test one. Instead of seeking confirmation, seek friction.

This is counter-intuitive because most people reach for LLMs to reduce friction. But the most productive use is often to introduce friction at the right moment.

Force yourself to articulate assumptions, defend positions, and confront the gaps in your thinking before they become gaps in your product, your strategy, or your judgment.

This post is my own mirror

This post was developed in conversation with an LLM. At some point I had to stop and ask: is this idea meaningful, or is the model just polishing my thinking until it feels significant?

That's the real test of the thesis, and it can't be resolved from inside the conversation. The model will always tell you your idea has legs. It will always find supporting evidence. It will always help you sharpen the argument. What it won't do is tell you to put the idea down.

So here's where I landed after interrogating it: the individual principles in this post aren't novel. The sycophancy literature is well-covered. "Garbage in, garbage out" is table stakes. But the combination of properties, especially the refraction point about cross-domain pattern matching and the tension between personalization and accommodation, represents a useful synthesis.