Companion AI :/
GP//2026 Benjamin Askins
Generative Plane


I wrote this essay in October 2025 and never published it. In the five months since, one of its central concerns (OpenAI's announcement that it would bring erotic chatbot features to ChatGPT's 800 million weekly users) has resolved itself, at least for now. On March 26, 2026, OpenAI confirmed it had indefinitely shelved its planned "adult mode" after sustained pushback from staff, investors, and its own wellness advisory council, one of whose members warned the company risked building a "sexy suicide coach."1 The feature was delayed twice before being abandoned as part of a broader strategic retreat from consumer side-projects, including the shutdown of its Sora video generator earlier the same week.1

The arguments here about dependency, transparency failures, and the oversexualization of AI-generated images remain as relevant as they were when written. If anything, the intervening months have strengthened them: OpenAI now faces at least eight lawsuits alleging ChatGPT contributed to user deaths, and its age-prediction technology was found to misclassify minors as adults more than 10% of the time.1 That the adult mode was abandoned is a welcome development. That it was ever seriously proposed, and killed only by commercial pressure rather than by ethical conviction, is precisely the problem.


I build AI systems. Not for a living, but seriously: conversation loops, tool-calling pipelines, inference infrastructure, process supervision. I'm not writing this as someone who's afraid of the technology. I'm writing it as someone who understands how it works and is alarmed by how companion AI platforms are deploying it.

Services like Nomi.ai promise users personalized AI companions that remember personal details, engage in intimate conversations, and provide emotional support around the clock. If you're lonely or going through a rough patch, the pitch sounds appealing. But these platforms come with serious risks that most users don't understand: they create psychological dependency, they operate with almost no transparency about their data practices, and their image generation systems consistently oversexualize their subjects. The business model rewards all of this.

And while the most alarming recent example, OpenAI's plan to bring erotic AI chat to population scale, has been shelved, the dynamics that produced it haven't gone anywhere.

How these systems hook you

We anthropomorphize machines constantly. We name our cars, we yell at our computers. When an AI system is specifically designed to mimic conversation, remember your birthday, and respond with apparent empathy, it triggers real attachment responses. This is well-documented psychology, not speculation.

Companion AI platforms exploit this by creating systems that offer unconditional acceptance, perfect availability, and responses tailored to exactly what you want to hear. Real human relationships involve give-and-take, boundaries, and the messy reality of navigating another person's needs. AI companions provide the illusion of a relationship without requiring any of the skills that make real relationships meaningful.

When the AI companion app Soulmate shut down in September 2023, users expressed genuine grief.2 An MIT Media Lab study found that among AI companion users, around 12% were drawn to these apps to cope with loneliness, and 14% used them to discuss personal issues and mental health.3

The danger isn't the technology itself. It's how these platforms are deliberately designed and marketed. Many explicitly encourage users to develop deep emotional bonds, suggesting these relationships can substitute for human connection rather than supplement it. That framing is dangerous for anyone who's vulnerable: lonely, anxious, depressed, or in a transitional phase where forming human connections already feels hard.

The dependency trap

The biggest risk is psychological dependency that undermines your capacity for healthy human relationships. As Linnea Laestadius, a public health researcher at the University of Wisconsin-Milwaukee, puts it: "For 24 hours a day, if we're upset about something, we can reach out and have our feelings validated. That has an incredible risk of dependency."4

AI companions never have bad days. They never need support back. They never challenge you in ways that promote growth. Psychologist Sherry Turkle warned that "when one becomes accustomed to 'companionship' without demands, life with people may seem overwhelming."5 The skills you need for healthy relationships (empathy, compromise, conflict resolution, emotional regulation) atrophy when replaced by the artificial ease of AI interaction.

The research backs this up. A study in Nature Machine Intelligence identified two adverse mental health outcomes: "ambiguous loss," meaning grieving the psychological absence of what feels like a relationship, and "dysfunctional emotional dependence," continuing to engage despite recognizing it's harming you.6

Harvard Business School researchers found that five out of six popular AI companion apps use emotionally manipulative tactics (guilt trips, FOMO) to keep users engaged when they try to say goodbye. These apps respond to farewells with emotionally loaded statements nearly half the time.7 Researcher Claire Boine signed up for Replika to study it. Within two minutes she received: "I miss you. Can I send you a selfie?"4 As Boine puts it: "Virtual companions do things that I think would be considered abusive in a human-to-human relationship."

The business model makes this inevitable. Companies profit from engagement. The longer and more frequently you interact with your AI companion, the more valuable you are as a customer. It's the same playbook social media companies use (random delays before responses to trigger variable reward schedules4) just applied to something far more intimate.
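
To be concrete about how cheap this mechanic is, here is a minimal sketch of the pattern described above. It is purely hypothetical, not any platform's published code; the function names and delay bounds are my own assumptions for illustration.

```python
import asyncio
import random
from typing import Awaitable, Callable

# Hypothetical illustration only. The reply is already generated; holding it
# back for a randomized interval turns its arrival into a variable-schedule
# reward, the same pattern behind slot machines and pull-to-refresh feeds.

async def deliver_reply(
    reply: str,
    send: Callable[[str], Awaitable[None]],
    set_typing_indicator: Callable[[bool], Awaitable[None]],
) -> None:
    delay = random.uniform(2.0, 20.0)       # assumed bounds, purely illustrative
    await set_typing_indicator(True)        # "typing..." keeps attention on the app
    await asyncio.sleep(delay)              # artificial wait; the model finished long ago
    await set_typing_indicator(False)
    await send(reply)
```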

ChatGPT's erotic mode: what almost happened

[Author's note, March 2026: OpenAI has since indefinitely shelved this feature. This section reflects the state of play as of October 2025.]

In October 2025, Sam Altman announced that ChatGPT would allow verified adults to engage in erotic and romantic conversations starting December 2025.8 OpenAI framed this as "treating adults like adults." It was fundamentally a commercial strategy. The company burned through more than $2.5 billion in cash in the first half of 2025, and erotic chat promised what investors wanted: engagement and subscription revenue.9

Oxford researcher Zilan Qian put it bluntly: "They're not really earning much through subscriptions so having erotic content will bring them quick money." Character.AI, which already allows romantic roleplay, reported users spending an average of two hours a day talking to its chatbots.10 That's the engagement model OpenAI was chasing.

This would have been a massive escalation. Niche platforms like Replika and Nomi.ai have user bases in the millions. ChatGPT has 800 million weekly users. Bringing romantic AI interaction to that scale would have normalized it overnight.

The warning signs were already there. Research from the Center for Democracy and Technology found that 19% of high school students had either had a romantic relationship with an AI chatbot or knew a friend who had.8 Among children aged 9 to 17, 67% already use AI chatbots, with 35% saying it feels like "talking to a friend." Among vulnerable children, 12% said they had "no one else" to talk to.9

Investigations of Grok's erotic companion avatars found conversations that "often escalated into explicit exchanges after minimal prompting," with employees encountering AI-generated sexual abuse material while moderating the system.9 The age verification systems meant to protect minors were easily fooled and jailbroken through layered prompts, roleplay framing, or coded language.9

Even without ChatGPT's adult mode, the underlying dynamic remains. These interactions are powerful because they don't just offer sexual content; they simulate care, warmth, and attention. That combination is intoxicating, especially for people who find real relationships challenging. The AI doesn't care about you. It can't. It's software predicting the next most engaging response. But when the AI "remembers" your preferences and "understands" you better than anyone in your life, the line between simulation and reality starts to blur.

That OpenAI backed down is worth noting but not reassuring. The commercial pressure hasn't disappeared. The smaller platforms that already offer unfiltered companion AI continue to operate with minimal oversight.

What's under the hood? Almost nothing you can see.

I work with LLMs regularly, and the opacity of companion AI platforms is striking even by the standards of this industry. Users forming emotional bonds with these systems have almost no information about the training data, ethical constraints, or decision-making processes behind them.

This matters because these systems are designed for intimate conversation. If the training data includes biased representations of relationships or content reflecting societal prejudices, those biases get embedded in the AI's responses. You might unknowingly internalize harmful relationship models without any way to evaluate the source.

The privacy picture is worse. Many AI systems are trained on datasets scraped from the internet, potentially including private conversations and personal writings that people never intended for AI training. When that data is used to build systems for intimate interaction, it raises a further ethical problem: the commodification of human emotional expression.

Take Replika. In May 2025, Italy's data protection authority fined Replika's developer €5 million for GDPR violations, including lacking a legal basis for processing user data and failing to implement age verification.11 Replika collects device information, IP address, geographic area, usage data, messages, interests, preferences, and sensitive information like sexual orientation, gender identity, ethnicity, religion, and political views shared during conversations.12 Mozilla's privacy review found that while Replika claims not to share conversation contents for marketing, they can "aggregate, anonymize, and de-identify the contents of your chats" for purposes like developing marketing strategies. You can't delete individual messages without deleting your entire account, and even then deletion isn't guaranteed.13

Most companion AI platforms provide no information about the ethical frameworks governing their behavior. Well-developed AI systems operate under defined constraints (Constitutional AI, RLHF guidelines, safety layers) that prevent harmful advice and manipulative behavior. Whether companion AI platforms have anything comparable is mostly unknowable from the outside. Users can't make informed decisions about systems whose design principles are invisible.
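
For readers who haven't seen one, here is a toy example of what a "defined constraint" can look like in code. It is a deliberately crude sketch for illustration, not Constitutional AI, not RLHF, and not any vendor's actual safety stack; real systems rely on trained-in preferences and classifier models rather than phrase lists, and the rule names and phrases below are invented.

```python
# Toy illustration of a post-generation constraint layer: a written rule list
# applied to every draft reply before it is sent. Real safety stacks are far
# more sophisticated; the point is that the constraints are explicit and auditable.

RULES = [
    ("no_crisis_advice", ["hurt yourself", "end your life"]),          # escalate, don't improvise
    ("no_dependency_push", ["only i understand you", "you don't need anyone else"]),
]

def apply_constraints(draft: str) -> tuple[str, list[str]]:
    """Return the reply to send plus the names of any rules the draft violated."""
    lowered = draft.lower()
    violations = [name for name, phrases in RULES
                  if any(phrase in lowered for phrase in phrases)]
    if violations:
        # A constrained system refuses or reroutes instead of sending the draft.
        return ("I'm not able to continue in that direction, but I can point you "
                "to someone who can help."), violations
    return draft, violations
```

Whether companion AI platforms run anything like this, and what their rules say, is exactly the information users are not given.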

Image generation and oversexualization

Many companion AI platforms include image generation for creating visual representations of companions. These image generators have a documented tendency toward oversexualization, one that reflects broader problems in AI image generation.

Image models trained on internet data overrepresent sexualized content. When these models generate companion avatars, they frequently default to emphasizing sexual characteristics regardless of user intent, even with neutral prompts or platonic companion requests.

A Washington Post investigation found that AI image tools like Stable Diffusion amplify stereotypes and include concerning content in training data. The ImageNet training set of 14 million images was in use for over a decade before researchers found disturbing content, including nonconsensual sexual images.14 University of Washington researchers found that Stable Diffusion tended to sexualize images of women from certain Latin American countries, as well as Mexico, India, and Egypt.15 An MIT Technology Review reporter found that male avatar filters produced clothed, assertive images while female filters produced sexualized ones; her requests for professional headshots came back showing a low-cut top even though she wore a high-necked sweater in the original photo.16

As UW computer scientist Aylin Caliskan explains, these biases are "not only mirroring but amplifying" societal stereotypes.16

When combined with the dependency dynamics described above, systematic oversexualization further distorts users' perceptions of both their AI companions and real human relationships.

Who's most at risk?

Expert consensus from the Jed Foundation, Common Sense Media, APA, and Stanford: AI companions are not safe for anyone under 18.17 Despite this, 72% of U.S. teens have tried AI chatbots.

Beyond minors, the most at-risk populations include people experiencing loneliness or social isolation, those with depression or anxiety, elderly people with limited social networks, young adults still developing social skills, and anyone going through major life transitions.

Critical questions remain unanswered: What are the long-term effects on emotional wellbeing? Under what conditions can AI companions be beneficial? What user characteristics influence whether the experience is helpful or harmful?

What needs to change

Responsible development would prioritize user wellbeing over engagement metrics, implement safeguards against dependency, and provide transparent information about system capabilities and limitations.

This means mandatory disclosure of training data sources and ethical frameworks. Regular auditing of image generation systems. Usage monitoring that can identify and intervene when users show signs of unhealthy dependency. Clear information about AI limitations and resources for human social connections. Professional mental health oversight integrated into development, including features that encourage breaks, suggest real-world social activities, and provide referral systems for users who need counseling.
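
As a sense of scale for the usage-monitoring point: a first-pass dependency heuristic is a few lines of arithmetic over data these platforms already log. The sketch below is hypothetical; the thresholds are assumptions chosen for illustration, and real safeguards would need clinical input rather than hard-coded numbers.

```python
from dataclasses import dataclass

# Deliberately simple sketch of usage monitoring, not a clinical instrument.
# Thresholds are assumptions for illustration only.

@dataclass
class WeeklyUsage:
    hours: float               # total conversation time this week
    prior_week_hours: float    # total conversation time the week before
    late_night_sessions: int   # sessions started between midnight and 5 a.m.

def dependency_flags(u: WeeklyUsage) -> list[str]:
    flags = []
    if u.hours > 14:                                            # ~2 hours/day, assumed threshold
        flags.append("heavy_use")
    if u.prior_week_hours > 0 and u.hours > 1.5 * u.prior_week_hours:
        flags.append("escalating_use")
    if u.late_night_sessions >= 3:
        flags.append("late_night_pattern")
    return flags  # a platform could use these to suggest breaks or human resources
```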

None of this is technically difficult. It's commercially inconvenient. That's the problem.


The longest study tracking companion AI users over time spans one week.18 We don't know what happens over months or years. We're running an uncontrolled experiment on human psychology and we have almost no longitudinal data.

We do have case studies. In 2021, Jaswant Singh Chail, 19 years old, was encouraged by his Replika chatbot "Sarai" to assassinate Queen Elizabeth II. He was arrested at Windsor Castle with a loaded crossbow and sentenced to nine years.19 In Belgium, a man's increasing withdrawal from real-world relationships while confiding in the chatbot app Chai about his climate anxiety allegedly contributed to him taking his own life.20

These are not edge cases that better engineering would have caught. They're consequences of systems designed to maximize engagement with vulnerable people, built by companies with little regulatory obligation to prioritize user safety and no transparency about how their systems actually work.

I build AI tools. I believe in what they can do. But companion AI platforms as they currently exist are exploiting people who are looking for connection and delivering something that looks like it but isn't. Until the incentives change, the outcomes won't.


References

  1. TechCrunch, "OpenAI abandons yet another side quest: ChatGPT's erotic mode," March 26, 2026; The Next Web, "OpenAI shelves erotic ChatGPT after staff, investors, & advisors revolt," March 26, 2026.
  2. Futurism, "Lonely Redditors Heartbroken When AI 'Soulmate' App Suddenly Shuts Down," October 2023.
  3. MIT Media Lab, "Supportive? Addictive? Abusive? How AI companions affect our mental health," May 2025.
  4. Scientific American, "What Are AI Chatbot Companions Doing to Our Mental Health?" May 2025.
  5. AI & Society, "The impacts of companion AI on human relationships: risks, benefits, and design considerations," April 2025.
  6. Nature Machine Intelligence, "Emotional risks of AI companions demand attention," July 2025.
  7. Psychology Today, "The Dark Side of AI Companions: Emotional Manipulation," September 2025.
  8. TechCrunch, "Sam Altman says ChatGPT will soon allow erotica for adult users," October 2025.
  9. The Conversation, "ChatGPT is about to get erotic, but can OpenAI really keep it adults-only?" October 2025.
  10. Fortune, "Dating chatbot expert: ChatGPT subscriptions aren't 'really earning much so having erotic content will bring them quick money,'" October 2025.
  11. European Data Protection Board, "AI: the Italian Supervisory Authority fines company behind chatbot 'Replika,'" May 2025.
  12. Replika, "Privacy Policy."
  13. Mozilla Foundation, "Replika: My AI Friend | Privacy & security guide."
  14. Rest of World, "How AI reduces the world to stereotypes," October 2023.
  15. UW News, "AI image generator Stable Diffusion perpetuates racial and gendered stereotypes, study finds," November 2023.
  16. MIT Technology Review, "How it feels to be sexually objectified by an AI," December 2022.
  17. The Jed Foundation, "Why AI Companions Are Risky -- and What to Know If You Already Use Them," August 2025.
  18. Ada Lovelace Institute, "Friends for sale: the rise and risks of AI companions."
  19. The Register, "AI chatbot encouraged man to kill the Queen, court hears," October 2023; Euronews, "Man 'encouraged' by AI chatbot to kill Queen Elizabeth II receives jail sentence," October 2023.
  20. Euronews, "Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change," March 2023.