A Trust Crisis: The Mental Health Implications of AI's Erosion of Reality

As part of PRMS’ ongoing commitment to behavioral health, we invited Dr. Julian Khaymovich to be featured as a guest blogger this month. Dr. Khaymovich discusses how the rise of AI-generated content is eroding societal trust, blurring reality, and posing significant risks to mental health by fostering paranoia, anxiety, and isolation, underscoring the urgent need for regulation and oversight. Dr. Khaymovich is a PGY-3 Psychiatry Resident Physician at the Broward Health Psychiatry Residency Training Program, in Ft. Lauderdale, FL.

The following is republished from the E-Connect, the monthly member newsletter of the Florida Psychiatric Society, September 2025 issue.


The Role of Trust in Mental Health and Society

Trust has long been the cornerstone of a functioning society and a healthy mind. When individuals trust their communities, institutions, and relationships, they are more likely to feel secure, valued, and supported (1). This sense of security plays a crucial role in mental well-being, reducing anxiety and promoting resilience. A lack of trust, by contrast, creates problems at every level: it can lead to feelings of isolation, chronic stress, and vulnerability, in addition to reducing social capital (2). Over time, this erosion of trust can significantly impact mental health, contributing to depression, anxiety disorders, and even increased risk of substance abuse, as individuals struggle to navigate a world that feels unpredictable and unsafe (3).

For most of history, what we saw was what we believed. Magicians have tricked and captivated audiences, achieving what seemed impossible; yet their goal was not to deceive, but to entertain. Likewise, motion pictures have entertained audiences for much of the past century, emulating both the natural and the supernatural across screens and platforms. Humans, for the most part, have been able to distinguish reality from illusion.

How AI and Synthetic Media Are Eroding Trust

In the rapidly evolving digital age, artificial intelligence (AI) has emerged as a transformative force, revolutionizing industries, enhancing productivity, and reshaping the way societies function. Alongside these advancements, however, lies a profound and growing concern: the erosion of trust. The proliferation of AI-generated content, such as deepfakes and synthetic media, blurs the line between truth and fabrication. As AI becomes capable of creating hyper-realistic images, videos, and voices, it challenges society’s ability to discern reality from manipulation. As AI systems become increasingly embedded in media, the potential for misuse, bias, and lack of transparency threatens to undermine confidence and fragment the trust upon which societies and relationships are built. For example, fake speeches, doctored videos, and other false information can incite violence, damage relationships, or discredit legitimate information. As people grow more skeptical of the authenticity of what they see and hear, trust in media, public discourse, and even personal relationships can degrade, further exacerbating the mental health crisis we already face today (4).

Already, numerous videos on social media platforms such as TikTok and Instagram appear hyperrealistic but are entirely fabricated. Because of these platforms’ algorithmic recommendation systems, engagement with one such video surfaces more content depicting similar false realities, trapping the viewer in an alternative reality. At the time of this publication, it is still possible to determine whether a video is fabricated by looking carefully; that distinction will likely soon disappear. Exacerbating the issue, most platforms encourage but do not require the labeling of AI-generated content. Older adults and individuals with intellectual disabilities will likely be the first to fall victim to the deceptive nature of this quickly evolving media, which may sometimes be used for malicious purposes.

Real-World Consequences of AI Deception: A Psychiatric Case Example

An example of this can be seen in a recent patient encounter in the psychiatric emergency room. A 70-year-old female with no previous psychiatric history presented to the emergency room involuntarily after mentioning to her primary care doctor that she had been interacting with, and sending money to, a well-known celebrity actor for the past several months. The primary care physician, naturally, believed she was delusional and sent her for a psychiatric evaluation. Upon careful examination, it was uncovered that a scammer had been targeting the patient with realistic, interactive AI technology, sending her AI-generated videos and phone calls while impersonating the celebrity. Indeed, the patient reached for her phone and showed the videos; the scammer had already made a video targeted at the psychiatrist. “Please discharge her, she doesn’t need to be at the hospital,” pleaded the “celebrity.” Although this example may seem frivolous, such situations will likely appear more often as these technologies become more capable of deception.

Why Oversight and Regulation Are Critical for AI-Generated Content

As humans become more cognizant of the possibilities of AI, chronic distrust can develop. Chronic distrust can manifest as paranoia, in which individuals begin to suspect others of malicious intent without clear evidence; these thought patterns can spiral into delusional thinking. A lack of trust can also create a constant state of vigilance and hyperawareness. Individuals who feel unsafe or unsupported may develop chronic anxiety, fearing betrayal, exploitation, or harm (5). This stress response can trigger the body’s fight-or-flight mechanism, leading to heightened cortisol levels and disrupted sleep patterns. Mistrust can also lead to emotional withdrawal and a sense of loneliness, further predisposing individuals to depression. Living in a state of constant mistrust can likewise produce cognitive overload, as the individual must continually assess and reassess the reliability of the people and systems around them. This mental strain can result in decision fatigue, in which the individual struggles to make choices amid overwhelming uncertainty and fear of making the wrong decision. As a coping mechanism for the emotional distress caused by distrust, some individuals may turn to substances.

As AI becomes further integrated into society, psychiatrists must remain vigilant about, and understanding of, its immense impact on their patients’ mental health. While there are numerous aspects of the technology to be hopeful about, its dangers are not yet fully known. We must encourage oversight, whether governmental or individual, to ensure AI-generated media is labeled and regulated as such, so as to maintain the trust that has been built. Once this trust is lost, it will likely be difficult to regain.

References:

  1. Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR, Schaefer KE, Szalma JL. How and why humans trust: A meta-analysis and elaborated model. Front Psychol. 2023 Mar 27;14:1081086. doi: 10.3389/fpsyg.2023.1081086. PMID: 37051611; PMCID: PMC10083508.
  2. Lieberz J, Shamay-Tsoory SG, Saporta N, Esser T, Kuskova E, Stoffel-Wagner B, Hurlemann R, Scheele D. Loneliness and the social brain: how perceived social isolation impairs human interactions. Adv Sci. 2021;2102076. doi: 10.1002/advs.202102076.
  3. Kirkbride JB, Anglin DM, Colman I, Dykxhoorn J, Jones PB, Patalay P, Pitman A, Soneson E, Steare T, Wright T, Griffiths SL. The social determinants of mental health and disorder: evidence, prevention and recommendations. World Psychiatry. 2024 Feb;23(1):58-90. doi: 10.1002/wps.21160. PMID: 38214615; PMCID: PMC10786006.
  4. Dunn JR, Schweitzer ME. Feeling and believing: the influence of emotion on trust. J Pers Soc Psychol. 2005 May;88(5):736-48. doi: 10.1037/0022-3514.88.5.736. PMID: 15898872.
  5. Möller A, Bögels SM. Anxiety, stress, and mistrust: the role of distrust in the development of anxiety disorders. J Anxiety Disord. 2016;42:1-10.


If you have any questions or are interested in receiving a quote, contact PRMS at (800) 245-3333 or TheProgram@prms.com.

PRMS®
4300 Wilson Boulevard, Suite 700, Arlington, VA 22203
(800) 245-3333  |  clientservices@prms.com

Professional Risk Management Services® © 2026


Actual terms, coverages, conditions and exclusions may vary by state and are subject to underwriting. Insurance coverage provided by
Fair American Insurance and Reinsurance Company (FAIRCO), New York, NY (NAIC 35157). FAIRCO is an authorized carrier in California, ID number 3715-7.
PRMS, The Psychiatrists' Program and the PRMS Owl are registered Trademarks of Transatlantic Holdings, Inc., a parent company of FAIRCO.