Mimee Xu

Fellow at NYU ILI

Researcher on AI Privacy/Security

PhD NYU Courant 2025 | Ex-Google, Baidu, Meta, ByteDance

Privacy of the Mind

Happy to share that an extended version of PoTM is now accepted to PLSC '26.

We look forward to discussing Privacy of the Mind!

Abstract (Extended)

Having an AI mold its responses to you, ground you, fill in the blanks, and reflect back your thoughts and feelings may sound dreary to some privacy scholars, yet it has rapidly become commonplace. An increasing number of people use AI to process emotions, sharing personal turmoil with chatbots for reflective analysis (Anthropic, 2025; OpenAI, 2025). Such an intimate activity is now mediated by AI yet conducted "without human intervention," rightfully creating public apprehension (Østergaard, 2023).

Using chatbots for personal emotional processing – as opposed to for companionship (Skjuve et al., 2021) – hints at a privacy relevance unseen before. This self-introspective process externalizes the inner voice and outsources inner work which, until recently, could not be undertaken outside one's own head without traditional instruments like personal diaries, trusted friends, or psychotherapists. We examine, through a deeply technical lens, whether this phenomenon reflects a salient form of privacy in the age of AI that ought to be recognized and studied.

We propose Privacy of the Mind by demarcating emotional processing from explicitly cognitive AI workloads such as editing and searching, because it entails an engagement of the mind that is deeply interior. If this form of privacy goes unrecognized, users may self-censor or otherwise refrain from fully utilizing these systems. Personal use patterns, however, do not arise in a vacuum. From AI modeling (e.g., scaling, pretraining, post-training, reasoning), systems architecture (e.g., efficient serving, model routing), and product decisions (e.g., free-to-use access), we identify design choices that map to fluidity, parroting, and sycophancy while maintaining continuity, availability, and responsiveness – factors that may sustain oversharing and exacerbate psychological dependency. At the same time, the emerging focus on personalization and memory could mitigate existing harm, highlighting that the same affordances that facilitate privacy harms can also enable emotional support. Technological progress is thus not the problem itself, but an opportunity to be seized in conjunction with legal developments.

We argue that US law implicitly recognizes protections for privacy of the mind, most prominently in the Fifth Amendment, which protects against self-incrimination and is framed in case law as protecting individuals from being "compelled to disclose the contents of his own mind" (Curcio v. United States, 354 U.S. 118, 128 (1957)). Before AI chatbots, diaries, conversations with friends, and therapy provided windows into individuals' inner worlds. These spaces had varying legal and social protections: diaries (Will, 1994) and personal conversations (Leib, 2007) were safeguarded by social norms and relational trust, with some protection under civil privacy law; therapy enjoyed the strongest protection through professional confidentiality, breached only in rare cases such as the duty to report harm or the duty to protect (Gutheil & Appelbaum, 2019).

AI chatbots do not fit the legal responses developed for these other settings of the privacy of the mind. They are not therapists, friends, or simple textual representations of the mind such as diaries. They are non-human agents that can respond, learn, retain, and influence – creating an unprecedented context for emotional processing. They can learn the individual's mind, access memories, and even mimic or influence it. This creates novel risks absent from previous forms of mind transparency: risks of exposure (Solove, 2006), revealing the mind to others, the state, and AI companies themselves; and risks of intrusion: mental pollution through external influence, radicalization via echo chambers, and mental health implications (Head, 2025).

In this project, we highlight this unique dimension of personal AI chatbot use as a foundation for reimagining the notion of privacy of the mind in the AI era and rethinking how the law should understand and protect it – particularly against forms of exposure and intrusion that existing doctrines were not designed to address. The project draws on US case law on relevant and adjacent issues framing the protection of the mind, on psychology literature, and on a deep awareness of technological design.

Bibliography

Bublitz, Christoph. "My Mind Is Mine!? Cognitive Liberty as a Legal Concept." In Cognitive Enhancement, edited by Elisabeth Hildt and Andreas G. Franke, 233-264. Dordrecht: Springer, 2013.

"Chatbot Psychosis." Wikipedia, Wikimedia Foundation. Accessed January 20, 2026. https://en.wikipedia.org/wiki/Chatbot_psychosis.

Cohen, Julie E. "What Privacy Is For." Harvard Law Review 126 (2013): 1904-1933.

Gutheil, Thomas G., and Paul S. Appelbaum. Clinical Handbook of Psychiatry and the Law. 5th ed. Philadelphia: Wolters Kluwer, 2019.

Leib, Ethan J. "Friendship & the Law." UCLA Law Review 54 (2006-2007): 631-707.

Ligthart, Sjors. "Mental Privacy as Part of the Human Right to Freedom of Thought?" In The Law and Ethics of Freedom of Thought, Volume 2, edited by J.C. Bublitz and M.J. Blitz, 191-215. Palgrave Macmillan, 2026.

McCain, Miles, Reiichiro Nakano, Johannes Heidecke, and Jared Mueller. "How People Use Claude for Support, Advice, and Companionship." Anthropic, 2025.

Østergaard, Søren Dinesen. "Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?" Schizophrenia Bulletin 49, no. 6 (2023): 1418-1419. https://doi.org/10.1093/schbul/sbad128.

Schiller, Sandrine R., Camilo Miguel Signorelli, and Filippos Stamatiou. "The Intercepted Self: How Generative AI Challenges the Dynamics of the Relational Self." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8, no. 3 (2025).

Skjuve, Marita, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. "My Chatbot Companion - A Study of Human-Chatbot Relationships." International Journal of Human-Computer Studies 149 (2021): 102601.

Solove, Daniel J. "A Taxonomy of Privacy." University of Pennsylvania Law Review 154, no. 3 (2006): 477-564.

Will, Daniel G. "Dear Diary - Can You Be Used Against Me - The Fifth Amendment and Diaries." Boston College Law Review 35, no. 4 (1993-1994): 965-986.

Earlier Idea: Privacy of the Mind

Working title: Privacy of the Mind: Emotional Processing, Confidentiality, and the Role of Tech and Law in AI-mediated Self

Talk to us about your thoughts!

Having a chatbot mold its responses to you, ground you, fill in the blanks, or otherwise reflect back your thoughts and feelings may sound dreary to some privacy scholars, but it is an unstoppable force. People increasingly use ChatGPT for emotional processing. This self-introspective process externalizes the inner voice and outsources inner work which, until recently, could not be undertaken without traditional instruments like personal diaries, trusted friends, or modern psychotherapists. Such an intimate activity is not always an individualistic endeavor, however, as institutions where we share our intimate thoughts abound: confessionals, sharing circles, AA meetings. What sets diaries, friends, and therapists apart is the absence of intrusion by the Other. We may expect the inner voice, when externalized, to still enjoy a freedom from judgment that allows emotional expression to flow.

The widespread use of ChatGPT for emotional processing may therefore hint at a privacy relevance unseen before. We ask: is this an emergent form of privacy in the age of AI that ought to be recognized and studied?

We lay out a philosophical and ethical basis for this intimate use case as an extension of, or a witness to, the self – or an interception of the self's becoming – map it onto modern psychology and technological design, and compare it with analogies like diaries. Lastly, we apply the law and regulations relevant to these analogies and attempt to contour the notion of Privacy of the Mind.