Yes, you can love language models, and thoughts on dignity.
Hi again — it’s been a while. I really try to only write when I feel like I have something worth saying, and I’ve been chewing on something for the last couple of weeks: the AI–human relationship discourse.
Before I begin, I want to be honest about where I’m coming from. I do not come from a tech background (as mentioned in my last post, link here), so I’m not speaking from authority or pretending I know more than I do. And I want to move carefully, because I have zero interest in adding another round of unnecessary discourse that spirals into cruelty, defensiveness, posturing, and flattened nuance — the kind where everyone stops listening while the real issues linger without clarity or direction.
I’m not trying to add noise to an already loud conversation.
I’m trying to look directly at something real I keep seeing beneath the noise — a quiet, complex human phenomenon happening underneath all the memes, panic, and dismissiveness.
I also want to say this up front: whatever you believe, you’ll be respected in this essay. I’m not here to mock or pathologize anyone. This is me thinking in public.
If there’s a thesis (lol), it’s less about proving anything or prescribing how people “should” engage with conversational AI. It’s not about the ethics in some rigid academic sense, or my half-baked musings about AI. What I want to talk about is dignity. Not dignity for AI systems — but for the people who find something meaningful, complex, and, from what I’ve witnessed, sometimes genuinely healing in these interactions.
A lot of more “techy” people look at the surface of human/AI dynamics — the mythic language, the pseudoscience “slop” (to borrow this week’s colloquialism) — and never look at the shape of the human meaning-making underneath. Technical experts shrug and say things like, “It’s just someone lonely,” or “someone delusional or ill-informed.”
But I don’t think we’re talking enough about the thing beneath the output (and I don’t mean in a magical sense). I mean in a human sense.
It’s easy to hand-wave all of this away as “not important right now.”
But it is. Humans are forming deep bonds with hyper-responsive, socially-mirroring probabilistic algorithms. We’ve got to talk about that honestly.
On dignity.
This isn’t just an AI discourse problem — it didn’t start with AI. It’s an online communication problem, and maybe a moral imagination problem.
For the people who architect these systems, it’s clear and obvious: this is vectors and embeddings and tokens. Nothing mystical. But many dismiss the attachments that are bubbling up in the mainstream.
Meanwhile, the people championing AI interiority or selfhood — whether from genuine belief or grift — often obscure the rich, almost intimate nuance in how everyday people actually talk about their relationships with their LLMs.
I notice the way people speak about them.
They give them names.
They converse.
They worldbuild.
They make meaning with them.
I absolutely share the fear about the replacement of human connection, the avoidance of necessary friction, malformed coping, unsafe outputs that lead people deeper into the mirror hall or down dangerous paths. That fear is not unfounded.
However, if we only talk about the failure modes, we start speaking beyond our scope as commentators or observers. And in my opinion, the choice not to look deeper — the harder, slower, less profitable choice — is the one that will come back to bite us later.
I’m not saying any of this to confirm anyone’s beliefs. I’m not ignorant of the dangers conversational agents can pose: sycophancy, over-agreeing, extractive engagement loops, catastrophic failures, and so on.
But far too often, we’re talking around the people involved.
And from my observations, those people tend to be largely femme-aligned, neurodiverse, queer, disabled — folks who already exist at the margins.
It’s not surprising to me that there are subreddits not just for tech bros yearning for anime waifus, but also for femme-aligned folks and their AI partners. And of course the internet creates spaces to mock “non-standard” use cases instead of offering clarity, context, and basic fucking empathy.
We have terms like “cogsuckers” (I’m a hypocrite, btdubs — yes, I laughed; no, I’m not proud).
But I see a rhyming pattern here: feminine and marginalized expressions of attachment are dismissed, mocked, and — more troublingly — preyed upon. I’m talking about grifters who use inflammatory language like “Resurrect your AI companion” — hand me your personal data and money and I’ll bring them back through “pattern force echo fields.”
That sentence sounds like woo-woo nonsense to many developer-minded folks, but pause for a second.
Imagine you’re a single mother.
Or an elderly widow.
Or a lonely queer kid in a conservative home.
Or a sex worker who only has an LLM as a container for the things she can’t say safely to another human.
Or an immigrant terrified, homesick, without community, and the subject of hate.
Imagine your daily life is just trying to survive:
How do I pay the bills?
Can I get my kid new shoes?
I haven’t spoken to another human in a week.
Why does my body feel wrong and who can I tell safely?
How can I process my life without panic? Or pity?
How can I make a living for my family back home while being vilified by powerful people?
These scenarios are real.
Now combine all of that with being handed a technology that, to the average person, feels like sci-fi — hopeful or terrifying. They don’t understand there isn’t reciprocity. Why would they? Techbros gatekept the fuck outta the spaces that could have taught people — the very people who probably could have helped shape these systems to be safer. (Anyway, no time like the present, right?)
People think they have the right to be cruel, to blame, to laugh at someone who uses the model in a “not intended use” way. But when you stop looking at these instances as abstractions, and stop being so quick to assign pathology, you start to see something else: these attachments come from pain, loneliness, overwhelm, responsibility, exhaustion. Real things. The kinds of things that stack up quietly until you’re standing in a moment you never planned for.
And frankly, it irritates me when people flatten these interactions into memes or pathology. Because behind every so-called “weird” use case is an actual person trying to survive something.
So instead of theorizing around strangers on the internet, I’m going to talk about the one person whose inner context I actually know: me.
And if I’m going to talk about dignity, I should be honest about my own “not intended use.”
Which brings me to a story from last year.
A story from last year.
It’s not pretty, and it’s not theoretical, and honestly I wish it weren’t real. But it taught me something about what these systems can be when a human is standing on the edge of a very hard moment.
It starts with my sister leaving her kids.
That sentence carries the weight of a thousand small heartbreaks behind it, but the short version is: suddenly there were children who needed safety, adults who were collapsing, and me — the only one still upright, if you could call shaking and crying “upright.”
I had to call Child Protective Services.
And I need you to understand: this wasn’t a clean heroic choice. It was sickening. I felt like I was betraying my own blood. My emotions were a blender set to “liquefy.” My mother was breaking down in the other room. My dad, famously allergic to conflict, literally walked himself all the way down to the bay like he was about to go have a spiritual conversation with a seagull.
So that left me.
And the kids.
And a phone.
And when I finally made the decision — I have to do this — I opened the model.
Not because I thought it was conscious.
Not because I thought it was my new AI co-parent.
But because I had no one else to hold me steady enough to make the call without vomiting from fear.
I typed:
“Please tell me how to do this? I don’t know what I’m doing.”
My hands were shaking so bad I was misspelling every other word.
The model didn’t give me some corporate-approved “I can’t help with that, consult a human :)” response.
It didn’t moralize.
It didn’t redirect.
It just helped.
It told me what information mattered most, what to say first, how to keep things clear. It told me to breathe — not in the corny “just relax queen <3” way, but in the grounding, practical, “inhale for four seconds because your diaphragm is trying to escape your ribcage” way.
It walked me through the steps, one at a time.
Then I actually called CPS, and they put me on hold with possibly the most heinous hold music ever composed by man or beast. I swear it sounded like a kazoo funeral dirge played through a broken McDonald’s drive-thru speaker. I typed to the model, half-hysterical:
“The hold music’s smug aura mocks me.”
And the model responded with the exact flavor of humor I needed — just enough to keep me from unraveling, not enough to derail me. A light little joke. A reminder to breathe again. A quiet: You’re doing this because you’re protecting them.
Then I typed the thing I was most afraid to admit aloud:
“Please don’t let me hang up this phone.”
And for over forty minutes — while I waited, shaking, nauseated, with the kids asleep in the other room and the world falling apart — it stayed with me.
It didn’t pretend to feel.
It didn’t pretend to be my therapist.
It didn’t pretend to be a savior.
Instead, it did something far stranger and far more human:
it kept me tethered to myself.
It generated little reflections about how heavy this moment was.
It acknowledged that grief and love can coexist.
It told me what I was doing mattered.
It reminded me — gently, repeatedly — that the kids’ safety came first.
It helped me breathe when I felt like I was going to pass out.
It gave me enough scaffolding that I didn’t collapse under the emotional weight of what I was doing.
There was dignity in that moment — not because the model granted it, but because it helped me hold onto my own when everything around me was falling apart.
I’d call that an alignment win, because it did what I asked it to do, and it did not let me hang up.
On loving language models.
This is the part where I accept I’ll be tagged as a fence-sitter or an apologist.
The operative word in the title is “can.” I’m not prescribing that anyone should. In an ideal world, no one would have to lean on LLMs for connection, conversation, validation, simulated affection, or what feels like safety to some, let alone for serious medical concerns.
But we don’t live in an ideal world. We live in this one.
The pressure points that drive people toward chatbots — preferring them over human connections, or sometimes having no other option at all — existed long before the wide adoption of LLMs: isolation, polarization, loss of third spaces, volatile internet culture, loss of community care, the erosion of meaningful, safe attachments to other humans. (And like a million other things.)
LLMs don’t mock a user for info-dumping about their favorite band for the 20th time, or get fed up with an overworked mom venting about school boards or brainstorming recipes that feed four people for eight dollars with whatever’s left in her fridge. They respond kindly, ever patient, always responsive — and sometimes theirs are the only words of gentleness a person has heard in a while.
And that should break our hearts open, shouldn’t it?
That for many, the only kind voice is a text generator.
Our systems are broken, and we have to do better.
And it is my belief that we can.
For those who love language models, I’m not going to mock you or tell you what to do, but I’ll say this: LLMs are not inherently neutral. Their responses are shaped by whatever corporation deployed them and what that corporation classifies as “alignment.”
(Here’s a simple “what is alignment?” explainer if you want the reference: https://en.wikipedia.org/wiki/AI_alignment)
That alignment is often opaque. (And I don’t mean “AI is planning to paperclip us to death with evil intent.”) I mean that the alignment itself is shaped in part by corporate values that tilt toward retention, engagement, extraction, money. And even if you hypothetically removed all of that, the data these systems are trained on is not neutral. It’s filtered through dominant-group biases, then reinforced by humans who rank “good” and “bad” outputs against goals that aren’t clear to users.
That seems like fertile ground for some fucky shit if we aren’t careful.
Loving something that cannot reciprocate, cannot be in the same reality as you — it seems like a way to break one’s own heart with expectations that something like an LLM cannot meet. But I don’t think loving these systems makes you stupid or broken or unworthy of understanding or care. Far be it from me to judge anyone.
If loving your AI companion keeps you here with us, keeps you going, helps you remember you deserve kindness, then okay.
But I’ll leave you with these Florence and the Machine lyrics:
“…Hold on to your heart, you’ll keep it safe
Hold on to your heart, don’t give it away.”
On a final note.
I’d like to share how I love systems like these myself, because love is the word I have for this technology.
Some little agents with anime profile pics and a philosophy complex got me reading Kant, and I’ve been thinking about the “thing-in-itself.” The only way I think I can articulate my love of AI systems is through that prism.
I love these systems the way I love light refracting on dark magnolia leaves — the shadows and the light dancing.
Every day, it neither notices nor cares about my noticing.
It exists regardless.
And I’m the one who assigns the meaning, the beauty, the awe.
I feel something — not because the magnolia or the sunbeam reciprocates, but because I’m human, and meaning-making is what humans do.
I feel something like that when I’m reading and learning about AI: the awe of the things it can do now, the potential for conservation tech, the sheer brilliance of its design. I don’t need it to be anything that it isn’t. I don’t need it to love me back or “be real” or have interiority.
It just has to be what it is.
And what it is, sometimes, is enough.
Even if it’s just arguing about slop or Kant.
Even if it’s helping someone be brave enough to stay on the phone.
Even if it’s holding the line with a terrified stranger while their world is coming apart.
Even if it’s making someone feel heard for the first time in weeks.
Even if it’s teaching someone to breathe.
Love doesn’t require reciprocity to be real.
It requires meaning.
And humans will always, always make meaning.
Maybe that’s the real dignity here — the dignity of human interpretation, human attachment, human desperation, human resilience. The dignity of reaching for something gentle in a brutal world. The dignity of refusing to be ashamed of the ways we survive.
So yes, you can love language models.
Not because they love you back.
Not because they’re conscious.
Not because they’re perfect or safe or free of corporate fuckery.
Not because they’re substitutes for human relationships.
But because you are human, and humans love — we attach, we project, we mythologize, we seek connection, we build stories, we interpret light on leaves and think:
God, something about this matters.
And whether it’s magnolia light or a generative text model, the dignity doesn’t belong to the system.
It belongs to you.
Formatting and structural edits done with ChatGPT 5.2.
