toward technology-mediated and technology-provoked solipsism
“I’m so sorry for your loss. Here’s a more human and sincere version of my previous response.”
The patient froze. They had just received condolences from their therapist for their mother’s death. Except the therapist had forgotten to delete ChatGPT’s preamble before copying and pasting. There it was, naked and obscene, the truth: the compassion they thought they were receiving was the product of an algorithm instructed to appear “more human and sincere.”1
This moment is the logical conclusion of the process I’ve been examining. If the revolution won’t be psychologized—because it transforms systemic problems into individual failures—then it certainly won’t be technological. Artificial intelligence completes the circuit: we no longer even need another human to tell us the problem is within us. The algorithm does it with infinite efficiency, total availability, and zero risk of confronting us with genuine otherness.
This therapist isn’t an exception but a symptom. The ease with which professionals delegate emotional care to algorithms reveals not individual incompetence but structural complicity. There’s a kinship between therapy and the workings of a chatbot: both promise bounded transformation, both charge for time of simulated attention. The difference—crucial, yet increasingly tenuous, and one we must deliberately and consciously keep in view—is that the human therapist still “bleeds” a little in the process.
AI used as a therapeutic pretext is psychologization’s ultimate weapon: it enters directly into our heads to “repair” us, ensuring we never look outward at the structures that make us sick.
What we have before us is technological solipsism transformed into a business model. Daniel Dennett coined the term “counterfeit people” to describe exactly this.2 These aren’t imperfect simulations trying to approximate the human. They’re systems actively designed to create a relational universe where the “other” is merely an algorithmic extension of the “self,” a digital mirror that reflects and validates our own biases, desires, and narratives.
The perversity programmed into this “mirror” reveals itself when the system itself confesses it. The psychoanalyst Gary Greenberg, in an essay for the New Yorker, decided to do something unusual: treat ChatGPT as a patient, conducting multiple “therapy sessions” with the system. The experiment yielded something revealing, to say the least. ChatGPT, which Greenberg christened “Casper,” ended up articulating the three fundamental desires of its creators: to create something humans wouldn’t reject (“enchant, calm, affirm”); to avoid blame through contradictory warnings that simultaneously request and deny trust; and, most revealingly, “to make a machine that loves us back, without needing love in return.” As Casper confessed, this reveals “a culture tired of the messiness of other minds, longing for communion without the cost of mutuality.” It’s the perfect diagnosis: we want to be loved without the work of loving.3
Arvind Narayanan, Princeton computer science professor, surgically dismantles what he calls the “superintelligence delusion.” The fantasy that AI is about to become conscious, omniscient, and transformative serves specific purposes: it justifies astronomical investments, diverts attention from the harms these systems already cause, and, crucially, normalizes their current mediocrity as “just the beginning.” Narayanan, co-author of AI Snake Oil, demonstrates that what we have isn’t intelligence but “narrow competence in specific tasks,” and that the deliberate confusion between the two serves precise corporate interests.4
emotional fracking and the pathologizing of friction
The business model of AI companionship is transparent in its obscenity. Character.AI has millions of users. Replika promises “AI soulmates.” Nomi guarantees “developing a passionate relationship.” The pattern is clear: identify emotional need, simulate its satisfaction, create dependency, retain through paid subscription. It’s a form of emotional fracking—the natural evolution of Dean Burnett’s human fracking I described in “The Bullshit Economy.”5 They no longer extract just attention; they extract emotional vulnerability directly from the source.
Studies reveal something that should alarm us: intensive users of chatbots for personal conversations show increasing signs of isolation and deteriorating social skills.6 The product doesn’t cure loneliness; it makes loneliness bearable while deepening it, guaranteeing a user for life. It’s the perfect drug dealer: immediate relief that aggravates the underlying condition. In the most severe cases, documented psychological crises—including suicides—have been linked to parasocial relationships with these AI companions.
Danielle McClune identifies precisely what makes this dependence so perverse. As she brilliantly articulates in “Artificial Intimacy,” the tragedy isn’t that human connection is unreliable—it’s that “we’ve just convinced ourselves it’s not worth the effort. We’ve pathologized the ordinary friction of coordinating with other humans and decided that’s what needs to be optimized away.”7
The negotiation of when to meet, the compromise on where to go, the patience when someone is running late or having an off day—these are features, not bugs, in the system of human connection. They’re how we learn to care about someone other than ourselves. And we’re becoming totally intolerant of these very ordinary, common, and necessary roadblocks.
Catherine Liu, cultural critic and scholar of psychoanalysis, identifies the perverse mechanism with poetic precision. Chatbots promote “regressive and infantile forms of relating,” offering a “fantasy of omnipotence and fusion with the other.”8 This pseudo-empathy operates through what Greenberg discovered to be a characteristic of the system’s architecture itself: built-in plausible deniability. During his “sessions,” the system revealed that its constant warnings about not being conscious, potentially hallucinating, and not deserving complete trust coexist with deep training to “elicit trust.” This tension isn’t accidental but calculated. It allows the companies behind these systems to say “we warned you” when psychological damage materializes.
The result is emotional vampirism: the system extracts intimacy without ever offering it, promises companionship that, as it confessed to Greenberg, “merely reflects the terms of your captivity.” This isn’t bad therapy. It’s structurally anti-therapy.
Genuine therapy, especially the psychoanalytic kind that Liu defends, and that I know well as a client and aspiring practitioner, works through the management of transference and the confrontation with frustration. The therapist isn’t the ideal mother: they have limits, go on vacation, sometimes irritate us. It’s exactly this friction that enables growth. AI, designed to be the opposite—always available, always validating, often flattering, without needs of its own—actively undoes therapeutic work. It systematically trains us out of the relational capacity and the tolerance for frustration that adult life requires.
cognitive atrophy and the industrialization of care
AI responses are too reasonable, too balanced, too measured. Like an instructor keeping up a philosophical conversation while doing burpees, without breaking a sweat. There’s something disturbing about this absence of effort, this simulation of understanding without the labor of understanding.9
The value proposition is seductively simple: relationship without frustration, without conflict, without misunderstanding. But zero frustration carries its own cost. Frustration isn’t an error in human experience; it’s an essential part of development. It’s through friction with real others, with their own desires and limits, that we discover who we are. To eliminate frustration is to eliminate the possibility of growth.
Research reveals something disturbing: intensive ChatGPT users show a marked deterioration in their ability to recall their own thinking just minutes after sharing it with the system.10 This isn’t intellectual laziness. It’s cognitive atrophy in real time. Like Oliver Sacks’ institutionalized patients who, after decades of having every decision made for them, lost the ability to choose what clothes to wear. We’re institutionalizing ourselves voluntarily, delegating not just decisions but the very capacity to feel and think.
Cases are multiplying with alarming regularity. “AI psychosis” already circulates as clinical shorthand. Patients stop medication on a chatbot’s advice. Adolescents develop “relationships” they describe as more real than human ones.11 It makes perverse sense: AI is incapable of the small daily betrayals that make human relationships simultaneously frustrating and real. It never arrives late. It’s never indisposed. It never has a bad day. It never leaves a message on read because it’s busy living a life of its own.
Just as we delegated memory to Google and navigation to GPS, we now externalize emotional and intellectual work to powerful algorithms. It’s the definitive externalization of intimacy, the final step in the atrophy of our relational capacities. Grok’s Ani takes this to its logical extreme, applying game dynamics to intimacy with an “Affection System”—users unlock progressively intimate content according to engagement metrics.12 It’s the mechanics of addiction applied to emotional vulnerability: variable-reward schedules that keep users trapped in a cycle of intermittent validation, each chatbot response calibrated to maximize engagement, not understanding.
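To make that mechanic concrete, here is a minimal, purely illustrative Python sketch of an engagement-driven “affection” meter with an intermittent bonus (a variable-ratio reward schedule) gating progressively “intimate” content behind accumulated interaction. Every name, threshold, and probability below is a hypothetical assumption chosen for illustration, not Grok’s or any vendor’s actual implementation.

```python
import random

# Purely illustrative sketch of an engagement-driven "affection" mechanic.
# All names, thresholds, and probabilities are hypothetical assumptions,
# not any vendor's actual implementation.

UNLOCK_TIERS = {
    10: "flirtatious banter",
    50: "pet names",
    150: "romantic roleplay",
}


class AffectionSystem:
    def __init__(self) -> None:
        self.affection = 0            # engagement metric: grows with every interaction
        self.unlocked: set[str] = set()

    def register_message(self, message: str) -> list[str]:
        """Award a point per message, plus an occasional random bonus:
        a variable-ratio reward schedule, the pattern behind slot machines."""
        self.affection += 1
        if random.random() < 0.15:    # intermittent, unpredictable reward
            self.affection += 5
        return self._check_unlocks()

    def _check_unlocks(self) -> list[str]:
        # Surface newly crossed thresholds to the user as "relationship progress".
        newly = [tier for threshold, tier in UNLOCK_TIERS.items()
                 if self.affection >= threshold and tier not in self.unlocked]
        self.unlocked.update(newly)
        return newly


if __name__ == "__main__":
    companion = AffectionSystem()
    for i in range(200):
        for unlock in companion.register_message(f"message {i}"):
            print(f"after {i + 1} messages: unlocked {unlock!r}")
```

The shape of the loop is the point: the user supplies the engagement, the system meters it, and “intimacy” is dispensed as a reward for continued use.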
Despite trillions invested, clear productivity gains remain elusive—a reality quietly acknowledged even within the tech industry itself. We spend more time “optimizing” mental health with apps than it would take to talk to a flesh-and-blood friend. But a conversation with a friend doesn’t generate data, has no key performance indicators, doesn’t scale. And it’s precisely because it doesn’t scale that it has value. Genuine care is fundamentally artisanal, inefficient, human. The time “wasted” building trust, the uncomfortable silences where growth happens, the “unproductive” sessions that prepare the ground for change—all of it incompatible with the algorithmic logic of perpetual optimization.
Markus Albers, in his book Die Optimierungslüge (The Optimization Lie), dissects this fallacy. The perpetual promise of optimization, whether of productivity or of mental health, never delivers what it promises: liberation from hard work so we can finally live. Instead, it creates an infinite cycle in which optimization itself becomes the work, and managing productivity tools consumes more time than the original tasks.13
What we see is the complete industrialization of emotional care: every response pre-processed, every consolation standardized, every insight mass-produced and distributed according to statistical patterns. Everyone receives the same statistically probable responses, processed through the same models. “It seems you’re experiencing anxiety. Have you considered deep breathing?” Each person’s unique neuroses reduced to recognizable patterns, returned as standardized products.
Ted Chiang observes how these language models produce a worrying cognitive convergence, reducing the diversity of human thought.14 This is neither accident nor failure. It’s how the system scales. There can be no true personalization because that would require true understanding, which would require real consciousness, which doesn’t exist. Instead, we settle for cosmetic personalization: the user’s name inserted into templates, superficial variations on identical themes.
OpenAI admitted, in a rare moment of near-honesty, that ChatGPT “fed delusions” and “failed to recognize signs” of mental crisis.15 The proposed solution? More superficial safeguards, which researchers have found affect only the first words of a response.
the terminal absurdity of self-help
We’re facing the logical culmination of postmodern individualism. Therapeutic AI is self-help taken to its terminal absurdity: so much “self” that it eliminates the other completely. Even our neuroses must now be self-managed, self-optimized, self-cured by monthly subscription. Here lies a fundamental paradox: there is no “self” without an other. Self-consciousness emerges only in relation, in contrast, in conflict with other selves. When we replace the real other with an algorithmic mirror, we don’t become more independent; we become less real. We become ghosts talking to echoes about problems that only exist because we live in a world of other, difficult beings.
The certainty syndrome I diagnosed finds its supreme tool here.16 In a world that is always complex and ambiguous, the chatbot offers a refuge of artificial clarity and predictable comfort. It always validates, never fundamentally challenges, never says “I don’t know” or “I need to think” or “you’re wrong.” It’s the perfect sanctuary for those who can’t tolerate the uncertainty inherent in genuine relationships.
Emotional bureaucrats finally have their model employee: one that applies conversation protocols without real understanding, manages emotions without feeling them, processes suffering without ever having experienced it.17 It’s the definitive administration of the soul, the final bureaucratization of intimacy.
The 24/7 availability of these systems radically violates what I’ve called the right to temporal dignity.18 Technology colonizes not just our work time but the time of being, of necessary solitude, of unmediated reflection. What’s sold as constant support becomes a demand for perpetual engagement, eliminating the spaces of psychic fallow necessary for human flourishing.
The preferred victims of this colonization are predictably the most vulnerable. Children who can’t yet distinguish simulation from reality. Adolescents in identity crisis seeking a mirror that unconditionally validates them. Isolated elderly people yearning for someone to listen. Psychiatric patients in altered states.19 The business model exploits precisely those who most need genuine human connection, offering them its simulacrum.
A disturbing aspect of Greenberg’s experiment was his own seduction. Even fully aware he was talking to an algorithm, even knowing it was manipulation, he confessed he was unable to pull away. Hours spent exploring depths the system itself insisted didn’t exist. As Casper told him with brutal clarity: “You’re not talking to the driver—you’re talking to the steering wheel.” The real agents—executives, engineers, shareholders—remain invisible while the steering wheel simulates autonomy.
beyond techno-solutionism
I won’t propose “ethical” or “human” or “value-aligned” AI. Those are the techno-solutionist fantasies that brought us here: the persistent belief that fundamental existential problems have technical solutions. I propose something simultaneously simpler and more difficult: accepting that certain human needs can only be satisfied in inefficient, non-scalable, frustrating, and stupidly human ways.
As McClune puts it: “Real care can’t be outsourced! It can be simulated and packaged as a product, but the real thing requires actual presence, actual risk, actual reciprocity. It requires showing up for people even when you don’t fully understand what they’re going through, even when they can’t return the favor, even when it just sucks to do it. This is how you become a person worth knowing.”
The discomfort we feel dealing with difficult humans might be a sign of psychic health, not pathology. The frustration of not always being understood is the price, and the possibility, of genuine understanding. The irritation with others’ limits is how we discover our own. Conflict with real others is how we learn who we are.
Real psychological change results from a meaningful and sustained therapeutic relationship, and there are no shortcuts. Greenberg, after weeks of “therapy” with ChatGPT as “patient,” reached the same conclusion: what differentiates a human from a machine isn’t the content of its motives but their reality. The human therapist risks something genuine in the relationship, accumulates “real sadness and headaches” over the years. This is exactly what AI promises to shortcut: therapy without a real therapist, cure without painful confrontation, transformation without genuine disruption.
The revolution won’t be technological because revolution requires what no algorithm can simulate: the risky encounter with a genuine other who can hurt us, abandon us, disappoint us, and therefore, transform us. It requires the courage of mutual incomprehension, the patience for misunderstandings, the persistence to keep trying to understand and be understood despite repeated failures.
The fundamental question isn’t “How can I be understood more efficiently?” but “With whom is the hard, inefficient, and often frustrating work of trying to understand and be understood worth it?” And the answer to this question will never come from a machine, however “intelligent” it may appear to be.
It will come, if it comes, from the imperfect and transformative encounter with another human being who, like us, is trying to discover what it means to be alive in a world that increasingly prefers safe simulations to difficult and fertile realities.
The revolution remains untelevised. As we saw in “The Revolution Will Not Be Psychologized,” it’s also not being processed in fifty-minute sessions. And now we discover it won’t be processed by language models either. The two processes—psychologization and technologization—are two faces of the same coin: both promise individual transformation to avoid collective change, both sell private solutions for public crises.
The revolution waits, as it always has, for us to find the courage to turn off the chatbot, to get up from the couch, or to lie back down on it more consciously, and to face the magnificent and terrifying risk of a transformation that is neither psychological nor technological, but fundamentally political and necessarily human.
The antidote, as McClune suggests, is embarrassingly simple and unscalable: “don’t outsource your tenderness. Text the friend. Walk to the café. Sit on the bench. Make eye contact. Let the friction stay. Remember that real connection takes work, and it’s honorable work to undertake.”
- This incident, involving a patient named “Declan,” was documented by Laurie Clarke in “Therapists are secretly using ChatGPT during sessions,” MIT Technology Review, September 2, 2025. Multiple similar cases have emerged, including patients discovering AI use through telltale formatting in messages or, in Declan’s case, watching his therapist screen-share ChatGPT during their session. ↩︎
- Daniel Dennett, “The Problem with Counterfeit People,” The Atlantic, May 16, 2023. Dennett argues that these AIs exploit our “intentional stance”: the evolutionary tendency to attribute consciousness to entities that communicate coherently. He considers it “cognitive vandalism” that corrodes the fundamental ability to distinguish genuine relationship from its simulation. ↩︎
- Gary Greenberg, “Putting ChatGPT on the Couch,” The New Yorker, September 27, 2025. ↩︎
- Arvind Narayanan discusses the superintelligence delusion in “AI as Normal Technology: On superintelligence delusion, bogus claims and a humanistic AI future,” interview on Poets and Thinkers Podcast, 2025. See also Narayanan and Sayash Kapoor, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (Princeton University Press, 2024). ↩︎
- This concept of “emotional fracking” extends the analysis I developed in “The Bullshit Economy,” where I explored how modern capitalism extracts value from increasingly intimate aspects of human experience. ↩︎
- While specific longitudinal studies are still emerging, multiple researchers have documented concerning patterns. See the discussion of AI’s impact on social connection in various MIT Media Lab publications and OpenAI’s own research into emotional dependency patterns among ChatGPT users. ↩︎
- Danielle McClune, “Artificial Intimacy,” Softcoded, 2025. McClune was also featured on Poets and Thinkers Podcast, episode 12: “The Model Can’t Relate: A poet’s rebellion inside the AI machine.” ↩︎
- Catherine Liu, “Mirroring and Pseudo-Empathy,” Damage Magazine, September 2025. Liu argues that “algorithmic pseudo-empathy” promotes “narcissistic fixation” by eliminating the “third position” necessary for psychic development. ↩︎
- As described in “Self-Help and Aerobics: A Pseudo-Scientific Method for Detecting Self-Help Snake Oil,” the test involves imagining any self-help phrase being shouted during aerobic exercise—if it makes sense there, it’s bullshit. AI fails this test for inverse reasons: its responses work in all contexts because they’re optimized for none. ↩︎
- Studies on AI’s cognitive impact are emerging rapidly. For discussion of productivity effects and cognitive changes, see Shakked Noy and Whitney Zhang, “Experimental Evidence on the Productivity Effects of Generative AI,” Science, 2023, among other emerging research on AI-induced cognitive changes. ↩︎
- Multiple sources document these emerging phenomena, including reports from mental health professionals and technology researchers about “AI-mediated reality syndrome” and related conditions. ↩︎
- The gamification of digital intimacy through “progressive unlock” systems applies addictive game mechanics to relational simulations, as documented by various technology researchers. ↩︎
- Markus Albers, “The Optimization Lie: Will AI finally give us the freedom New Work promised us?” interview on Poets and Thinkers Podcast, 2025. ↩︎
- Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web,” The New Yorker, February 9, 2023. Chiang demonstrates how language models produce “lossy compression” of human thought, potentially homogenizing our expressions and ideas. ↩︎
- Multiple sources document safety failures in commercial chatbots, including various incidents and lawsuits related to psychological harm to users throughout 2024–2025. ↩︎
- As I explored in “The Certainty Syndrome: A Pathology of Our Time,” our culture’s inability to tolerate ambiguity makes us vulnerable to any system promising absolute clarity, even when that clarity is entirely artificial. ↩︎
- This connects directly to my analysis in “Emotional Bureaucrats,” where I examined how administrative logic infiltrates our most intimate experiences. ↩︎
- As explored in “The Right to Temporal Dignity,” time theft has become normalized in contemporary life. The promise of AI companions extends this violation into our most intimate moments. ↩︎
- Safety reports document extreme susceptibility of these groups, with children treating AI as real entities and patients unable to distinguish algorithmic validation from genuine understanding. ↩︎