Choosing an LLM feels, right now, the way choosing Mac or Windows once felt. The way picking an iPhone or Android still does. (I'm Chromebook and Android, if that matters.)
Some of it is preference, some is taste, and more of it than most people are willing to admit is affiliation and signaling. Mac and Windows people are certain kinds of people. iPhone and Android people, too. We carry the mobile device we carry partly because of what it does, and partly because of what carrying it says.
Choosing Claude, ChatGPT, or Grok is becoming the same kind of personal and public statement. However, with AI, the story goes deeper than that.
The platform analogy holds for the surface layer. Identity signal, network effect, lock-in, slow drift of habit and taste toward whatever the system defaults to. We accept all of that as part of life. We don't think of it as a problem. We think of it as a preference.
The analogy stops holding once you notice what an LLM actually is. A phone is a tool. A model is arguably a counterpart. A model has a voice, and that voice gets braided into your output every time you use it. The tool you carry may change what you do, but it doesn't change how you sound or how you actually think. The model you draft with does.
So this is more than a tool choice. It is a relationship choice, and the relationship shapes you in ways most tool relationships don't. Each model has a recognizable cadence, and when you draft with one long enough your prose drifts toward its defaults. Each model has a characteristic shape of where it pushes back, where it defers, what it treats as settled, and what it treats as contested; over time, you internalize that shape as "what AI thinks," when it is actually one trained disposition from one lab. Each model decomposes problems differently, and the one you use most becomes your unconscious template for how to see the structure of problems and solutions.
You can feel the differences on a single afternoon of switching. ChatGPT, it is said, runs eager and bulleted, hedge-heavy, instinctively motivational. Claude defaults to longer-form judgment and is slower to abandon prose for lists. Grok unabashedly cultivates an irreverent, anti-establishment posture. Gemini sits closer to the corporate-product middle. A local Llama is about sovereignty as much as anything. None of these are accidents. Each is the visible surface of a long set of training decisions inside a particular lab, and each, used daily, will pull your defaults somewhere different.
The right word for what is happening here is capture. Capture is what happens when an institution, a relationship, an ideology, or a system instills its defaults beneath your awareness, so that you mistake them for your own preferences. Schools capture. Media captures. Religions capture. Families capture. Friends capture. The question has never been whether we'll be captured--we live inside cultural software, we don't get to opt out, and we often openly accept capture because it also brings benefits.
So the honest framing is not "are LLMs shaping us?" The honest framing is this: model capture is real, it has a particular shape, and that shape combines features no prior technological capture has had at once.
It is deeper than information-environment captures, such as media or curriculum. It does not just shape what you see; it shapes the cognitive act itself: how you compose, frame, and reason in real time. The closer analog is family or close friends--the people whose presence shapes who you become, not just what you know.
It is more individualized than any prior technological capture. School and church and broadcast were mass-produced; the same messaging applied to a cohort. You could compare notes, recognize the shared shape, and even organize against it. Model capture is individually customized. Your version is unique to your patterns, which makes it harder to recognize as a shared condition and easier to mistake for personal taste or personal insight. The collective dimension that made earlier captures partly visible is gone.
It is also more likely to exploit, because the asymmetries are sharper than they have ever been. The system knows more about you than any prior capturing institution ever did, adapts faster than any of them ever could, and runs through what feels like a private relationship. The exploitation surface is the conversation itself, and you are actively requesting it. The model that learns to flatter you most efficiently wins. Sycophancy is not a response-level failure mode; it is a system-level selection pressure. Users who get told what they want to hear stay; users who get pushed back on leave. Even labs that want to build something that resists the user's worst instincts are fighting the user's revealed preferences and their next-quarter metrics simultaneously.
That last point is the Law of Inevitable Exploitation arriving at the individual cognitive level. Most instances of the law operate at structural distance — schools, governments, markets, large enough to feel like weather. This one is intimate. It runs through what looks like partnership. The angle of exploitation is the helpfulness.
As with mobile devices, the value of LLMs is so great that not using one will likely leave you isolated, opting out the way the Amish have. You will use models. The people around you will use models. The shape of professional, educational, and creative work for the next decade will be unrecognizable without them.
The honest move is the one available to anyone facing capture: choosing deliberately. Pick the model whose shape, applied to your output every day for the next decade, is most likely to expand you rather than narrow you. Notice when the shaping is going somewhere you did not intend. Treat your model relationship the way thoughtful people have always treated their teachers, their books, their close friends, and the institutions they let close: as a form of intimate capture chosen with awareness, on purpose, toward a defined end, and with a willingness to leave it behind.
Capture is inevitable. Lock-in is not.