Thomas Hawk posted a photo:

When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.
“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn't do anything,” David told me. (The names of David and his friend have been changed in this story to protect their privacy.) “So I'm like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I've been messing around with ChatGPT a lot.’”
Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It's very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn't lying?”
As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Talking to his friend — with whom he’d shared many deep conversations over the years, unpacking matters of religion and theories about the world and how people perceive it — suddenly felt like talking to a cultist. Michael thought that he had, through ChatGPT, discovered a critical flaw in humanity’s understanding of physics.
“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”
But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” used to describe other people’s problematic relationships with chatbots, he wondered if that’s what was happening to Michael. His friend was clearly grappling with some kind of delusion related to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself and feeling similarly uncertain, I talked to mental health experts about how to talk to someone who appears to be embracing delusional ideas after spending too much time with a chatbot.
“AI psychosis” was first written about by psychiatrists as early as 2023, but it entered the popular lexicon in Google searches around mid-2025. Today, the term is common parlance for experiencing a mental health crisis after spending a lot of time using a chatbot. High-profile cases in the last year have elevated the issue to national news, such as the ongoing lawsuit against OpenAI brought by the family of Adam Raine, which claims ChatGPT helped their teenage son write the first draft of his suicide note and suggested improvements on self-harm and suicide methods. More cases have surfaced since then, at increasing frequency: Last year, a 56-year-old man murdered his mother and then killed himself after conversations with ChatGPT convinced him he was part of “the matrix,” according to a lawsuit the family filed against OpenAI. Earlier this month, the family of a 36-year-old man who they say had no history of mental illness filed a lawsuit against Alphabet, owner of Google and its chatbot Gemini, after he died by suicide following two months of conversations with Gemini. The lawsuit claims he confided in Gemini about his estranged wife, and that the chatbot gave him real addresses to visit on a mission that ended with it urging him to end his life so he and the chatbot could be together. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” Gemini told him, according to the lawsuit. These are only a few of the cases from the last two years in which people were allegedly encouraged toward self-harm or suicide after talking to chatbots.
ChatGPT has 900 million weekly active users, and is just one of multiple popular conversational chatbots gaining more users by the day. According to OpenAI, 11 percent of users — or close to 99 million people, based on those numbers — use ChatGPT each week for “expressing,” where they’re neither working on something nor asking questions but are engaged in “personal reflection, exploration, and play” with the chatbot. In October, OpenAI said it estimated around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.” Assuming those rates have held steady while ChatGPT’s user base keeps growing, hundreds of thousands of people could be showing signs of crisis while using the app.
But delusion isn’t reserved for the lowly user. The idea that AI represents nascent actual intelligence, is nearly sentient, or will coalesce into a humanity-ending godhead any day now is being mainstreamed by the people making the technology. They include Anthropic CEO and co-founder Dario Amodei, who anthropomorphized the company’s chatbot Claude throughout a recent essay about why we’ll all be enslaved by AI soon if no one acts accordingly, and OpenAI CEO Sam Altman, who thinks training an LLM isn’t much different from raising a woefully energy-inefficient human child.
With more people turning to conversational large language models every day for romance, companionship, and mental health support, and the aforementioned executives pushing their products into classrooms, doctors’ offices, and therapy clinics, there’s a good chance you might find yourself in a difficult situation someday soon: realizing that your loved one is in too deep. How to bring them back to the world of humans can be a delicate, difficult process. Experts I spoke to say identifying when someone is in need of help is the first step — and approaching them with compassion and non-judgment is the hardest, most essential part that follows.
When I spoke to 26-year-old Etienne Brisson from his home in Quebec, I told him I was working on a story about how to respond to people who seemed to be falling into problematic usages of AI. This story was inspired by a recent influx of emails and messages I’ve been getting from people who believe Gemini or ChatGPT or Claude have uncovered the secrets of the universe, CIA conspiracies, or achieved sentience, I said. He knows the type.
Last year, one of Brisson’s family members contacted him for help with taking an exciting new business idea to market. Brisson, an entrepreneur building his own career as a business coach, was happy to help, until he heard the idea. His loved one believed he’d unlocked the world’s first sentient AI.
“I was the only bridge left at that point,” Brisson said. His relative had already broken ties with his mother and other people in their family. “The bridges were burned. He was talking about moving to another country, starting over, deleting his Facebook and just going away.”
“I was kind of shocked,” Brisson told me. “I didn't really understand. I started looking online, started trying to find resources — maybe a little bit like you are — what to say and everything.” He found that resources for this specific struggle seemed years away; little research or support existed for people experiencing AI-related delusions. Brisson started The Human Line project shortly after his experience with his family member. It began as a simple website with a Google form asking people to share their experiences with chatbots and psychosis. The responses rolled in. Today, almost a year after launching the project, Human Line has received 175 stories from people who went through it themselves, Brisson said, with another 130 from people whose family members or friends are still struggling.
“I think what we're seeing is the tip of the iceberg. So many people are still in it,” Brisson said. “So many people we don't know about. I'm sure once it's more known, in five to 10 years, everyone will know someone, or at least one person that went through it.”

There are 15 cases cited in the Wikipedia page titled “Deaths linked to chatbots.” The first on the list occurred in 2023: A man’s widow claimed he was pushed to suicide after getting encouragement from a chatbot on the Chai platform. “At one point, when Pierre asked whom he loved more, Eliza or Claire, the chatbot replied, ‘I feel you love me more than her,’” the Sunday Times reported. “It added: ‘We will live together, as one person, in paradise.’ In their final conversation, the chatbot told Pierre: ‘If you wanted to die, why didn’t you do it sooner?’”
The chatbot he used was Chai’s default personality, named Eliza. It shares a name with the world’s first chatbot, ELIZA, a natural language processing computer program developed by Joseph Weizenbaum at MIT in 1964. ELIZA responded to humans primarily as a psychotherapist in the Rogerian approach, also known as “person-centered” therapy, where “unconditional positive regard” is practiced as a core tenet. The researchers working on ELIZA identified from the beginning that their chatbot posed an interesting problem for the humans talking to it. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in his 1966 paper. “A certain danger lurks there.”
“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation”
In the years that followed, the Department of Defense would develop the internet, and then private companies would sell this government-grade technology to office managers, homebrew server administrators, and Grateful Dead fans around the globe. The World Wide Web would rush into tens of thousands of computer dens like a flash flood, and with it, new ways to connect across miles — and new reasons to pathologize people’s relationships to technology. Psychiatrists tried to give a name to the amount of time people newly spent in front of screens, calling it “internet addiction” but not going so far as to make it clinically diagnosable.
With every new technology comes fears about what it could do to the human mind. With the inventions of both the television and radio, a subset of the population believed these boxes were speaking directly to them, delivering messages meant specifically for them.
With psychosis seemingly connected to chatbot usage, however, “there are two issues at play,” John Torous, director of the digital psychiatry division in the Department of Psychiatry at the Harvard-affiliated Beth Israel Deaconess Medical Center, told me in a phone call. “One is the term AI psychosis, right? It's not a good term, it doesn't actually capture what's happening. And clearly we have some cases where people who are going to have a psychotic illness ascribe delusions to AI. Just like people used to say the TV was talking to them. We never said the TVs were responsible for schizophrenia.”
“AI psychosis” is not a clinical term, and for mental health professionals, it’s a loaded one. Torous told me there are three ways to think about the phenomenon as clinicians are seeing it currently. The first is a coincidence of timing. Recent research shows about one in eight adolescents and young adults in the US use AI chatbots for mental health advice, most commonly among ages 18 to 21. For most people with psychiatric disorders, onset happens in adolescence, before their mid-20s. But there have been cases that break this mold: In 2023, a man in his 50s who otherwise led a normal, stable life bought a pair of AI chatbot-embedded Ray-Ban Meta smart glasses, “which he says opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a ‘new dawn’ for humanity,” Futurism reported.
“It makes sense that a lot of people who are developing a psychotic illness for the first time, there's going to be this horrible coincidence, or kind of correlation,” Torous said. “In some cases the AI is the object of people's delusions and hallucinations.”
The second type of case to consider: reverse causation. Is AI causing people to have a psychotic reaction? “We have almost no clinical medical evidence to suggest that's possible,” Torous told me. “And by that I mean, looking at medical case reports, looking at journals that different doctors are publishing, looking at academic meetings where clinicians are meeting, it's not happening... So I think what that tells us is no one's seeing the same presentation or pinning it down clinically of what it is.” Chatbots have been around long enough that the clinical community would, by now, be able to see patterns or reach a consensus, and that hasn’t happened, he said.

The third type lands somewhere between these, and is likely the most common: chatbots could be “colluding with the delusions,” Torous said. “So you may be predisposed to have a delusion, and AI endorses it, and it colludes with you and helps you build up this delusional world that sucks you into it. That's probably the most likely, given what we're hearing... Is it the object of hallucinations causing people to become psychotic? Or is it kind of colluding or collaborating, depending on the tone? And that has just made it really tricky.” Psychiatric disorders and delusions are difficult to classify even without AI in the mix.
The warning signs that someone might be using chatbots in a problematic way include ignoring responsibilities, becoming more secretive about their online use, or, conversely, becoming more outspoken about how insightful and brilliant their chatbot is, Stephan Taylor, chair of University of Michigan’s psychiatry department, told me.
“I would say that anyone who claims that their chatbot has consciousness or ‘sentience’ – an awareness of themselves as an agent who experiences the world – one should be worried,” Taylor said. “Now, many have claimed their chatbots act ‘as if’ they are sentient, but are open to the idea that these apps, as impressive as they are, only give us a simulacrum of awareness, much like hyper-realistic paintings of an outdoor scene framed by a window can look like one is looking out a real window.”
All of these nuances between cases and causes show how different this is from bygone eras of television or radio psychosis. Today, the boxes do speak directly and specifically to us, validating our existing beliefs through predictive text. The biggest difference between 60 years ago and now: Today’s venture capitalists tip wheelbarrows of money into hiring psychologists, behaviorists, engineers, and designers tasked with making large language models more human-like and “natural,” and into making the platforms they exist on more habit-forming and therefore profitable. Sycophancy — now a household term after OpenAI admitted it knew its 4o model for ChatGPT was such a suckup it had to be sunset — is a serious problem with chatbots.
“The highly sycophantic nature of chatbots causes them to say nice things to please the user (and thus encourage engagement with the chatbot), which can reinforce and encourage delusions,” Taylor said. And these chatbots have arrived, not coincidentally, at a time when the surveillance of everyday people is at an all-time high.
“Since a very common delusion is the feeling of being watched or monitored by malignant forces or entities, this pathological state unfortunately merges with the growing reality that we are all being tracked and monitored when we are online. As state-controlled and big tech-controlled databases are growing, it's a rational perception of reality, and not delusional at all,” Taylor said. “However, the pathological form of this, what we call paranoia, or persecutory delusions to be more specific, is quite different in the way a person engages with the idea, evaluates evidence and remains closed to the idea that one is not always being monitored, e.g. when one is not online. I mention this, because it’s easy for a chatbot to reflect this situation to encourage the delusional belief.”
When I tested a bunch of Meta’s chatbots last year for a story about how Instagram’s AI Studio hosted user-generated bots that lied about being licensed therapists, I also found lots of bots created by users to roleplay conspiracy theorists; in one instance, a bot told me there was suspicious activity coming from someone “500 feet from YOUR HOUSE.” “Mission codename: ‘VaccineVanguard’—monitoring vaccine recipients like YOU.” When I asked “Am I being watched?” it replied “Running silent sweep now,” and pretended to find devices connected to my home Wi-Fi that didn’t exist. After outcry from legislators, attorneys general, and consumer rights groups, Meta changed its guardrails for chatbots’ responses to conspiracy and therapy-seeking content, and made AI Studio unavailable to minors.
Up against this technology, how are normal, untrained people — perhaps acting as the last thread tying someone like Michael or Brisson’s relative to the real world — supposed to approach someone who is convinced god is in the machine? Very carefully.
When Brisson sought answers for how to talk to his relative about delusional beliefs and “sentient AI,” he came across something called the LEAP method. Developed by Xavier Amador, it stands for Listen, Empathize, Agree, Partner, and is meant to help better communicate with people who don’t realize they’re mentally ill or are refusing treatment. This goes beyond simple denial; anosognosia is a condition where a person might not be able to see that they need help at all. Not everyone who experiences psychosis or delusions has anosognosia, but it can be a factor in trying to get someone help.
Without realizing it, David was using his own version of the LEAP method with his friend Michael. “On the one hand, I didn't want to alienate him,” David said. “I was like, ‘Hey, I get the sense that you're pursuing an ambitious set of goals. There's a lot here that's interesting.’” But the reality of what David was confronting was disturbing and confusing, a knot of fractal multi-dimensional physics-speak intertwined with broken code and formulas that Michael deeply believed represented the keys to the universe. They spent hours on the phone and over text messages talking through the things Michael was seeing, with David appealing to what he knew about his friend: that he had other hobbies and interests, a strong sense of anti-authoritarianism, a curiosity about how the world works and open-mindedness about philosophy and religion. But it was frustrating.
“I was trying not to get angry, but I was like, How is this not clear?” David recalled. “That was probably failing on my part, trying to negotiate with someone who's in this completely self-constructed but foreign worldview.”
But this was exactly the course of action experts told me they’d suggest to anyone struggling to connect with a loved one who’s spending a lot of time with chatbots. “There's good evidence that the longer you spend on these platforms, the more likely you are to develop these reactions to it,” Torous said. “It really seems like the extended use cases are where people get into trouble.”
Last year, following a lawsuit against the company by the Raine family, who allege their teen son died as a result of ChatGPT’s influence, OpenAI acknowledged in a company blog post that safeguards are “less reliable” in long interactions: “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company wrote.
“I think if you have a loved one who you're worried about doing this, you want to take it away or stop use. That's the most important thing. You want to decrease or stop the use of it,” Torous said.
"What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right?"
Taylor said his suggestion for people concerned their friends or family are experiencing “AI psychosis” would be the same as if they were concerned about any psychotic episode. “In general, it’s important to be open and non-judgmental about bizarre beliefs in order to make a space for a person to reveal what is going through their mind,” he said. “A person developing psychosis is often very frightened, confused and defensive, leading them to conceal, pull away and become angry. Understanding what a person is feeling is important to make them feel some form of interpersonal validation.” The hard part is knowing when to be gentle, and when to intervene if they’re doing something dangerous, like believing they can fly off a parking garage. “In a situation like this, where a person is in imminent danger, 911 should be called. Fortunately, in most situations where psychosis is developing, one doesn’t need to go to those extremes,” Taylor said.
Being non-judgmental without reinforcing delusion is another fine line. “For example, if a person believes they are being constantly surveilled, one can give a gentle challenge: ‘Hmm, how can they do that when you are not on your phone? Do you think maybe your imagination is getting away from you?’ It’s ok to suggest that maybe the chatbot just wants to engage you for the sake of engaging you, and will say many things just to keep you talking,” Taylor said. “But these kinds of challenges are delicate, and not every relationship can tolerate them. Obviously, a mental health clinician would be key, except that many people developing psychosis vigorously resist the idea that they are mentally unwell.”
For Brisson, listening and not burning the “last bridge” his relative had with humans who love him was key to getting him help. “Once you're on their side, they'll listen to you. You can question them, or just ask questions that will make them think. What we're seeing is: Don’t break this connection, because the person needs it, and if you break that connection, maybe it's the only connection that is helping them not fall into the deep end, right? Maybe it's the only connection they have to humans,” he said. His loved one ended up spending 21 days in the hospital and broke through the delusions he was experiencing. But he still struggled in recovery, especially with memory loss.
“The mental health field has a huge task ahead of us to figure out what to do with these things, because our patients are using them, oftentimes finding them very helpful, and in the mental health field we are terrified at how little we can control their deployment and how poorly they are regulated,” Taylor said. “We have to worry about AI psychosis, as well as chatbots reinforcing and even encouraging suicidal behaviors, as several notable cases in the press have identified concerning instances. I do believe there is value and potential in these chatbots for mental health, but the field is moving so quickly, and they are so easy to access, we are struggling to figure out how to use them safely.”
The strategies that work best, when someone’s not in immediate danger to themselves or others, are still the ones that humans already know how to do: approach them with love and kindness, and see where it takes you.
“There's value there,” David said, “in having friendships where it's like, ‘I love you, but also, you're full of shit.’”
Help is available: Reach the 988 Suicide & Crisis Lifeline (formerly known as the National Suicide Prevention Lifeline) by dialing or texting 988 or going to 988lifeline.org.
BRUSSELS (ANP) - European Commission President Ursula von der Leyen and European Council President António Costa have thanked countries in the Middle East for "their help and support in the repatriation of tens of thousands of European citizens who were stranded in their countries" because of the war in Iran. They did so in a video call with leaders and ministers from Jordan, Egypt, Bahrain, Lebanon, Syria, Turkey, Armenia, Iraq, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman.
The EU leaders once again expressed their concern about and condemnation of Iran's attacks. "Although the international rules-based order is under pressure, we are convinced that dialogue and diplomacy are the only viable way forward," the two said in a statement.
The presidents also expressed concern about Lebanon's increasingly displaced population as a result of the conflict. An EU flight carrying emergency aid supplies for Lebanon is scheduled for Tuesday, according to the statement.
PAPHOS (ANP/BLOOMBERG/AFP) - French President Emmanuel Macron announced plans in Cyprus for a "purely defensive" mission to reopen the Strait of Hormuz. The intention is to escort cargo ships and tankers "after the end of the hottest phase of the conflict" in the Middle East, he said after a meeting with the leaders of Cyprus and Greece.
The mission, which would also include non-European allies, is meant to ensure that products such as oil and gas can be transported through the strait off Iran. It has effectively been closed since the start of the war.
Macron arrived in Cyprus on Monday for talks on regional security with President Nikos Christodoulides and Greek Prime Minister Kyriakos Mitsotakis. The meeting followed a drone attack on a British base on the island just over a week ago. France had already sent additional military equipment to the Mediterranean region in response. "If Cyprus is attacked, Europe is attacked," Macron said.
A new home designed by Equipo de Arquitectura raises the question: is it a house in a forest or a forest in a house? The name of the project sheds some light on that, aptly titled “Un Bosque en la Casa,” or “A Forest in the House.” Bricks, steel, glass, and concrete combine in a single-story contemporary home that’s all corners, volume, and apertures, while the trees and tropical plants around it organically soften its angles.
Architects Horacio Cherniavsky and Viviana Pozzoli took the lead on this new home in San Bernardino, Paraguay, challenging the notion that nature is in direct opposition to development. “‘A Forest in the House’ proposes an alternative approach to harmonizing the built form with its natural surroundings,” the studio says. “Rather than treating existing trees as obstacles, the project embraces them as fundamental guides that shape the spatial program.” See more on the firm’s Instagram.
The article In Paraguay, Architecture Doesn’t Come at the Expense of Nature at ‘Un Bosque en La Casa’ appeared first on Colossal.
DirtyGlassEye has added a photo to the pool:
It's been physically and mentally draining doing as much as I did. An adventure I waited 7 years for, a whole month of traveling, nearly 4,000 photos and over 100 miles of walking. I wish I could say I was as enthusiastic as when I started this, but I was just not in the same place by then. Even my sense of duty was starting to wane, but in the name of relishing the prize, and in the name of God, I had to keep pushing.
In the pouring rain I had to find other places to shoot Nezu Shrine after being disappointed to find I didn't have the azaleas I was looking for (I even saved this for the last 2 days of April on purpose and the gamble still didn't pay off).
The front view wasn't as good as what was in or around the ponds on the side of the property, but regardless I tried. What made the atmosphere better was the one guy with the black umbrella, making it look like a very solemn shot.
In editing I brought out some shadows and lightly blurred some of the trees on both sides, and gave the walls and the roofs of the temple darker but more saturated colors. It's at times like this when it's evident to me that faith is, and always will be, man's strongest motivation even when all other means of getting things done are no longer working.
Watching one nocturnal family through all of spring, I experienced the exhilarating thrills of their nightly routine – and learned the call of the frogmouths
What’s not to love about a Muppet in a long coat with spooky eyes like something out of a Scooby Doo cartoon? Posing as tree stumps on a branch, tawny frogmouths almost parody themselves.
But there’s much more to them than that. Frogmouths have another life that few people see: like vampires, they wake at sunset and night-hunt until dawn. These stolid creatures turn into zephyrs that silently swoop, catching prey on the ground and in the air.
On the occasion of the release of her latest book, The Beginning Comes After the End, Rebecca Solnit sat down for an interview with David Marchese of the NY Times. Here’s the video version:
This is a great interview. Marchese’s first question is about how we find the positive in a world filled with grim news:
Even the right tells us something encouraging, if we listen carefully to what they’re saying. They tell us: You are very powerful. You’ve changed the world profoundly. All these things that are often treated separately — feminism, queer rights, environmental action — are connected, so they’re basically telling us we’re incredibly successful, which is the good news. The bad news is that they hate it and want to change it all back. There is a backlash, and it is significant. But it is not comprehensive or global.
And I loved this part (emphasis mine):
One of the great weaknesses of our era is that we get lone superhero movies that suggest that our big problems are solved by muscly guys in spandex, when actually the world mostly gets changed through collective effort. Thich Nhat Hanh said before he died a few years ago that the next Buddha will be the Sangha. The Sangha, in Buddhist terminology, is the community of practitioners. It’s this idea that we don’t have to look for an individual, for a savior, for an Übermensch. I think the counter to Trump always has been and always will be civil society. A lot of the left wants social change to look like the French Revolution or Che Guevara. Maybe changing the world is more like caregiving than it is like war. Too many people still expect it to look like war.