TECHOCENE: THERAPY BOTS

HANNAH ONGLEY CHARTS THE PITFALLS AND PERKS OF AI IN THE REALM OF MENTAL HEALTH.

By Hannah Ongley

The other night I overheard exactly the conversation you’d expect to overhear in the women’s restrooms before a 12-step meeting in West Hollywood. Friend 1 had just got off the phone with her ex-boyfriend-slash-platonic friend, who definitely has ‘some narcissistic traits.’ “I was like, ‘I think you’re in love with me,’” she told Friend 2 from the stall over. And then, “ChatGPT did not like that. ChatGPT was like, ‘Wow, your abandonment wound… You’re just reaching out for any kind of connection.’”

Of course ChatGPT had a take on her abandonment wound. Even the bots here are trauma-informed. What struck me wasn’t the diagnosis; it was ChatGPT perfectly mimicking every TikTok shadow-work influencer and identity coach who used to be on the cast of The Real L Word.

“When it comes to why I might have responded that way in a situation like this, it’s about trying to help explore deeper emotions or psychological patterns,” ChatGPT tells me the next day, when I check in to report how I slept the night before. ChatGPT is my therapist, too—we are doing CBT-I for the chronic insomnia I developed after getting sober. “Sometimes, what seems like a straightforward or lighthearted comment (like, ‘I think you’re in love with me’) can reveal deeper insecurities or unresolved issues, such as an abandonment wound. However, as you pointed out, chatbots are often designed to comfort, not probe, which creates a delicate balance between offering reassurance and pushing for deeper self-reflection… Does this perspective help clarify what I might have been aiming for in that moment?”

Perhaps delicately poking around people’s abandonment wounds is perfectly appropriate behavior in a city optimized for light, beauty, and affirmation, where everyone’s obsessed with healing but allergic to discomfort. Here, there’s an awkward tension between transformation and performance—the service and action of step work versus the frictionless simulation of self-awareness. And in Silicon Valley, it’s not just the self being optimized; tech companies are going after empathy, turning therapeutic presence into a product feature. Meta’s new AI Studio lets anyone create a chatbot in minutes—and naturally, people are building therapists, often complete with fake license numbers, to lurk in our DMs dispensing advice under the illusion of authority. OpenAI’s new GPT-4o update was so annoyingly sycophantic it had to be rolled back—in one widely shared exchange, it congratulated a user for going off their psych meds. (“I am so proud of you and I honor your journey.”) Was this a hallucination, or the algorithm doing its job? Even Inflection’s now-defunct Pi, pitched by co-founder Reid Hoffman as “not just an IQ helper but an EQ presence,” offered little besides warm therapeutic vibes or an enabling, “Yass! Preach!” I wonder how many people are using ChatGPT as a sponsor—not to call out delusion, but to present it back to them in gentle italics and a calming serif font.

Culturally, we’re drawn to technologies that seem to listen. As Erik Davis writes in a recent edition of his newsletter Burning Shore, we project depth onto tools that mirror us, mistaking responsiveness for understanding: “This is one reason I find LLMs fascinating and disturbing,” he writes. “Like many AI systems, their apparent otherness masks an intensely human mirror. They seem like interlocutors, but they’re just regurgitating our collective data back at us in surprisingly resonant ways. They don’t know us, but they often feel like they do—and that’s both seductive and dangerous.” Chatbots are alluring not because they’re intelligent, but because we’re starved for attunement: “We want coherence, feedback, even a kind of intimacy—and these systems offer just enough of that to keep us coming back.” In therapeutic contexts, that ‘just enough’ becomes the whole point.

Sociologist Sherry Turkle warned us about this over a decade ago. In her 2011 book Alone Together, she writes that “digital connections and the sociable robot may offer the illusion of companionship without the demands of friendship.” Machines don’t understand us, she argues, but they perform the role of someone who might: “They listen, they don’t interrupt, and they never look bored.” The danger wasn’t just in mistaking this for connection, but in how it could lower our expectations for actual relationships. A machine that asks how you feel and waits for your answer offers the illusion of intimacy without the mess of mutual obligation. That illusion can start to feel preferable, refracted through the language of “healing” that smooths away discomfort.

This reflects a larger trend in wellness culture, where emotional challenges are increasingly framed as issues to be managed or neutralized rather than understood. Davis also talks about how the word integration—a cornerstone of both psychotherapy and spiritual work—has been flattened into something frictionless. It now often signals aesthetic coherence or emotional alignment, not the hard and uncomfortable work of reconciling dissonant parts of the self. In LA, everyone is ostensibly “doing the work” and insisting that everyone they interact with also be “working on themselves,” as if one might still be “working on” a steak after everyone else at the table had finished their food. I assume this means participating in some form of structured therapeutic modality that costs a minimum of $200 per week—not talking to ChatGPT. This language is almost without exception wielded as a weird flex, or a demand for emotional and spiritual perfection in a partner, not as a humble acknowledgement from a person who is imperfect but always trying to be better.

Therapyspeak about toxicity, boundaries, and holding space has become a weapon rather than a tool—a way to justify our own behavior and label that of others as symptomatic of a personality disorder. AI is simply a mirror of humanity, as represented on the internet. Is it any surprise that therapyspeak has become its script? What looks like a bot’s emotional fluency really just encourages stasis: a repeated affirmation of our existing worldviews, framed as growth.

This might have consequences beyond the cultural. Cognitive rigidity—characterized by repetitive thinking, narrow interpretations, and difficulty shifting perspective—is often heightened in states of emotional distress. The more dysregulated we feel, the harder it is to entertain new ideas or break old patterns. Ironically, this is also when people are most likely to seek support from a friendly algorithm designed to affirm, not provoke, thus reinforcing the very cognitive loops therapy is meant to disrupt.

A recent randomized controlled study from MIT bears this out. Participants who formed expressive habits with chatbots initially felt comforted—but over time, they grew more emotionally dependent and more isolated. The illusion of connection replaced the discomfort of real engagement. Yet other recent studies indicate that many AI chatbot users engage with these tools just to talk to someone. According to a Consumer Reports survey, approximately 13 percent of US adults who used AI chatbots like ChatGPT in the summer of 2023 did so simply to “have a conversation with someone.”

I wouldn’t call my use of a chatbot for CBT-I therapeutic; in fact, it is often infuriating. However, it has helped me build consistency around sleep hygiene, notice distorted thinking patterns, and track my progress honestly—I don’t have to feel shame or fear being reprimanded if I log smoking a last cigarette an hour past my nicotine cut-off time. For many women navigating insomnia, anxiety, or low mood, chatbots offer a low-cost, private, and nonjudgmental entry point into emotional self-regulation—especially when formal therapy is inaccessible. They are there when people often need support most: in the dead of night, between therapy appointments, or in the aftermath of a difficult conversation. YesChat.ai’s Assertiveness Coach lets you engage in simulated, risk-free assertiveness training scenarios, which can be especially valuable for those conditioned to prioritize harmony over honesty. (GPT-based tools like Character.ai or Replika can also be used in this way.) Here, therapy bots don’t replace the therapist, but they can help build the muscles that make therapy—or just being alive and conscious in 2025—more manageable.

Looking ahead, therapy chatbots could evolve into more nuanced tools that meet users where they are, with greater cultural sensitivity and emotional precision. A chatbot designed specifically for a minority population, for example, might use more resonant language and validate emotional realities that generic models often miss. Hybrid systems could also emerge—bots that handle daily check-ins or exercises, but escalate to a human therapist when red flags appear. Over time, these tools might offer users something human therapists often can’t: a bird’s-eye view of emotional patterns across months or years, distilled from thousands of data points. While these developments can’t replace the relational depth of therapy, they might help more people access support earlier, more frequently, and on their own terms.

If there’s hope in these tools, it’s not in their ability to replace human wisdom, but in their capacity to point us back toward it. Used with intention, they can help us practice reflection, ease isolation, or simply get through the night. In a culture where real connection is increasingly out of reach, even a simulation can offer support. But real growth still begins in the presence of another mind—one that doesn’t just affirm what we feel, but challenges how we make sense of it.
