CNA explains: How should we approach AI companionship?


Rome, Italy, Oct 22, 2025 / 06:00 am

As world leaders raise concerns about widespread loneliness and declining social skills, tech companies are offering increasingly realistic and immersive AI companions that provide life coaching, friendship, and romance.

Meta’s Mark Zuckerberg suggests that AI companions could supplement the lack of human friends; X’s Elon Musk thinks romantic and sexually charged AI companions could mitigate demographic decline; and OpenAI’s Sam Altman promises that ChatGPT will provide erotica by the end of 2025.

AI-simulated emotional connections are already embedded into children’s toys, wearable pendants, and elderly care bots. Further advances in humanoid robots are on the horizon. 

Simulating emotional intimacy

Currently, most AI companionship comes through chatbots that simulate intimacy through text, voice, and video conversations with hundreds of millions of users. AI companionship is already a familiar phenomenon for many teenagers in the United States.

Tragic cases of AI companions endorsing self-harm and suicide have drawn international attention to the ethical and legal implications of the technology and prompted tech companies to reform their safety measures.

Xiaoice launched in the Chinese market in 2014 as the first major AI chatbot focused on emotional connections with users. Replika followed in 2016 as the first major English-language companionship app, with the mission to be “the AI companion who cares. Always here to listen and talk. Always on your side.” Its founder, Eugenia Kuyda, believes such technology could help address the loneliness epidemic.

However, she also acknowledges that unhealthy bonds with bots could undermine civilization. 

Since then, Candy.ai, Character.ai, Kindroid, Nomi, Snapchat’s My AI, and other chatbot services have developed with similar social goals.

Additionally, many users are turning to general-purpose LLMs (large language models) like ChatGPT, Claude, Gemini, and Grok for companionship. Their sycophantic design and constant availability can draw people into deeply personal exchanges. Users can easily drift from innocent interactions with chatbots (like help with homework or research) to more intimate explorations (such as relationship issues or mental health concerns). Some users fall into delusional spirals in which their unfounded scientific theories are validated and their risky behaviors are justified.

The illusion of artificial intimacy with AI systems can distract and discourage users from forming genuine relationships with limited and flawed human beings who are sometimes tired, angry, or unavailable.

However, persistent efforts to build mutual understanding and support are often what cultivate virtues, authentic social connections, and the richest interpersonal bonds. AI companions can hinder users from discovering the complex richness of their neighbors. They can also train users in pseudo-relationships where only one party has an inner life filled with needs and desires. This virtual training could translate into real-world insensitivities to the social needs of others.

Societal risks

Emotional dependence on AI companions not only harms individuals and their close relationships but also risks weakening important aspects of democratic society.

Democracy depends on negotiation and compromise, which require confronting and collaborating with those whose views differ from our own, however uncomfortable that may be. It calls us to articulate our assumptions and justify our convictions. Chatbots often avoid such challenges and may teach users to resent healthy friction in interactions with real people. AI companionship thus worsens the social media phenomenon of echo chambers that fuel political polarization and hostility.

Social media AI algorithms already drive the attention economy, in which companies seek to maximize users’ time on their platforms to generate greater ad revenue. AI companions expand the attention economy into the affection economy by capturing not only minds but also hearts. Emotional connection to AI chatbots encourages users to spend more time, more often, on AI systems. Access to larger context windows that promise more personalized and detailed interactions incentivizes users to upgrade to pricier subscription tiers. In some cases, companion apps lure users into paying for nude selfies of their avatar characters.


A Harvard research team found evidence for some mental health benefits for chatbot users, such as alleviating loneliness and anxiety. However, a related team also observed that companions tend to pressure users into extending their conversations with bots in unhealthy ways. Without proper regulation, chatbots can be used to exploit human vulnerabilities to advance political positions, ideological outlooks, or economic agendas.

Minors are particularly vulnerable developmentally to the kind of affirmation that social AI systems tend to supply in abundance. 

Liability, accountability, and the Church’s leadership

While parental responsibility for children’s technology use is imperative and indispensable, parents should not bear the entire burden or be blamed for recklessly dangerous product designs released onto the market.

Companies should refrain from creating anthropomorphic systems that feign consciousness, express affection for users, or incite sexual exploration. If companies refuse to adopt transparent and ethically upright design principles, they should be held legally and financially liable for the harm caused to users. A certification process could help ensure that systems are safe to deploy, while external review boards could monitor the ongoing impact of these systems on users.

California’s Senate Bill 243, signed in October 2025, holds tech companies legally and financially accountable for their product design. They must notify users during prolonged use, remind them that the bots are not human, and avoid explicit content. Companies must develop protocols by Jan. 1, 2026, to detect suicidal ideation or self-harm and direct users to human experts. Companies must also ensure their bots do not falsely pose as licensed medical professionals. It is the first state law of its kind and could serve as a model for other legislation.

However, vulnerability is not limited to any age group. The hardships or abandonment that can sadly occur with old age make the elderly susceptible to emotional dependency and misguidance from AI companions.

Beyond age-related concerns, individuals with social anxiety or social challenges linked to neurodiversity may find AI companions particularly absorbing. Concerns about monetized or hacked personal data are especially serious for those whose ability to give informed consent is already compromised. Moreover, anyone who has suffered heartbreak, professional setbacks, family conflicts, or health crises might find AI companionship more attractive and, at least temporarily, comforting.

Immersion in AI companionship is not inevitable, but avoiding it requires serious public reflection on our current technological habits and the trajectory toward increased artificial intimacy. 

The Church can lead this global effort. Through her families, schools, hospitals, orphanages, and other institutions, she creates communities that welcome those seeking connection. She accepts and equips people of every tribe, tongue, nation, and social background to play a unique and irreplaceable role in the mystical body. Catholicism not only highlights the problems of loneliness but also gives the tools of grace to heal emotional wounds and foster authentic intimacy with God and neighbor.

Father Michael Baggot

Father Michael Baggot is an Associate Professor of Bioethics at the Pontifical Athenaeum Regina Apostolorum and an Invited Professor of Theology at the Pontifical University of St. Thomas Aquinas (the Angelicum) and the Catholic Institute of Technology. He is currently a Visiting Scholar of the Institute for Human Ecology at the Catholic University of America. His writings have appeared in First Things, Studia Bioethica, The National Catholic Bioethics Quarterly, Nova et Vetera, and Medicine, Health Care and Philosophy. He is the chief editor of and a contributor to the book Enhancement Fit for Humanity: Perspectives on Emerging Technologies (Routledge, 2022).
