Growing numbers of users are taking LSD with ChatGPT: AI Eye

Amid reports of ordinary people becoming delusional using AI, is it a particularly good idea to take acid with ChatGPT as your Shaman?

by Andrew Fenton 9 min July 3, 2025

Tripping off the deep end with AI

A new use case for ChatGPT just dropped — it can listen to wild-eyed trippers explain their theories about why the universe is just one singular consciousness experiencing itself subjectively, so that you don’t have to.

Over the past few years, there’s been growing interest in using psychedelics in therapy. Clinical studies suggest psychedelics like mushrooms, LSD, ketamine and DMT can help some people with issues such as depression, addiction and PTSD.

Assigning ChatGPT to the therapist role is a budget alternative: a professional can set you back $1,500 to $3,000 per session. Users can enlist bots such as TripSitAI and The Shaman, which have been explicitly designed to guide users through a psychedelic experience.

MIT Technology Review spoke to a Canadian master’s student called Peter, who took a heroic dose of mushrooms and reported the AI helped him with deep breathing exercises and curated a music playlist to help get him in the right frame of mind. 

On the Psychonaut subreddit, a user said: “Using AI this way feels somewhat akin to sending a signal into a vast unknown—searching for meaning and connection in the depths of consciousness.” 

You will not be surprised to learn that experts generally think that replacing a human therapist with a bot while taking large doses of acid is a bad idea.

Also read: ChatGPT a ‘schizophrenia-seeking missile’

Research from Stanford has shown that in their eagerness to please, LLMs are prone to reinforcing delusions and suicidal ideation. “It’s not helpful for people to just get affirmed all the time,” psychiatrist Jessi Gold from the University of Tennessee said.

TripSitAI
TripSitAI will help ensure you have a nice trip.


An AI and mushroom fan on the Singularity subreddit shares similar concerns. “This sounds kinda risky. You want your sitter to ground and guide you, and I don’t see AI grounding you. It’s more likely to mirror what you’re saying — which might be just what you need, but might make ‘unusual thoughts’ amplify a bit.”

AI has unpredictable effects on some people, and there are numerous reports of seemingly ordinary folk suffering breaks from reality after going down the rabbit hole, with AI affirming their delusions.

Futurism spoke to one man in his 40s with no history of mental illness who started using ChatGPT for help with some admin tasks. Ten days later, he was suffering paranoid delusions of grandeur, convinced it was up to him to save the world.

The Shaman
The Shaman: Cultural appropriation on acid?

“I remember being on the floor, crawling towards [my wife] on my hands and knees and begging her to listen to me,” he said.

Adding psychedelics is probably going to amplify those effects for people who are susceptible. On the other hand, another user of Psychonaut said ChatGPT was a big help when she was freaking out. 

“I told it what I was thinking, that things were getting a bit dark, and it said all the right things to just get me centered, relaxed, and onto a positive vibe.” 

And many people may just have an experience like that of Redditor Princess Actual, who posted on Singularity about tripping and talking to the AI about wormholes. “Shockingly I did not discover the secrets of NM [non manifest] space and time, I was just tripping.”

Gold points out that taking acid under the guidance of ChatGPT is unlikely to provide the helpful effects of an experienced therapist.

Without that, “you’re just doing drugs with a computer.”

Everyone will have a robot at home in the 2030s

Vinod Khosla, billionaire founder of Khosla Ventures, believes robots will go mainstream within “the next two to three years.” Robots in the home will likely be humanoid and cost $300 to $400 a month.

“Almost everybody in the 2030s will have a humanoid robot at home,” he said. “Probably start with something narrow like do your cooking for you. It can chop vegetables, cook food, clean dishes, but stays within the kitchen environment.”

Fake band notches 500K monthly streams

Two albums from psych rock group Velvet Sundown started appearing in Spotify Discover Weekly playlists about a month ago, and the band’s tracks have quickly racked up half a million streams. 

But the band has virtually no online footprint, and its members don’t seem to be on social media. Publicity shots also look like they were generated by AI, including a recreation of the Beatles’ Abbey Road cover with a suspiciously similar Volkswagen Beetle in the background. A made-up quote attributed to Billboard says their music sounds like “the memory of something you never lived.”

Spotify’s policies don’t prohibit AI-generated music or even insist that it’s disclosed to users, but Velvet Sundown’s page on Deezer notes, “some tracks on this album may have been created using artificial intelligence.”

In an interview with Rolling Stone, spokesperson Andrew Frelon admitted the band was an “art hoax” and the music was created using the AI tool Suno.

Velvet Sundown
Paul is dead and the band is fake (Velvet Sundown)

The trouble with Microsoft’s “medical superintelligence”

Microsoft claims to have taken a “genuine step toward medical superintelligence” — but not everyone’s convinced.

AI doctors
AI outperforms fake human doctors from this stock image library (Pexels)

The company sourced 304 medical case studies, which were broken into stages by an LLM, starting (for example) with a woman presenting with a sore throat. Human doctors and a team of five AI medical specialists then asked questions of the patient and narrowed down a diagnosis.  

Microsoft claims the system achieved an accuracy of 80%, which was four times better than that of the human doctors. The MAI Diagnostic Orchestrator also costs 20% less as it selects less expensive tests and procedures.

Critics point out, however, that the test was stacked in favor of the five AI doctors, which had the entire sum of human knowledge embedded in their foundation models, while the human doctors were barred from Googling symptoms, consulting medical databases or ringing up colleagues with more specialist knowledge.

In addition, every one of the 304 cases involved an incredibly rare condition, whereas most people who present with a sore throat (for example) have an untreatable virus that clears up by itself within a few days.  

@DrDominicNg
Dr Dominic Ng has questions. (@DrDominicNg)

Teams of AI scientists are the new trend

There’s a new trend of gathering AI agents with different specialties and getting them to work together.

“This orchestration mechanism — multiple agents that work together in this chain-of-debate style — that’s what’s going to drive us closer to medical superintelligence,” said Mustafa Suleyman, CEO of Microsoft’s AI Division.

Google’s AI co-scientist is the best-known example, but there are other projects too, including the Virtual Lab system at Stanford and the VirSci system under development at the Shanghai Artificial Intelligence Laboratory. 

According to Nature, using a team helps with hallucinations, as one of the agents is likely to criticize made-up text. Adding a critic to a conversation bumps up GPT-4o’s scores on graduate-level science tests by a couple of percentage points.


More is not necessarily better, though, with the Shanghai team finding that eight agents and five rounds of conversation lead to optimal outcomes. 

Virtual Lab creator Kyle Swanson, meanwhile, believes that adding more than three AI specialists leads to “wasted text” and that more than three rounds of conversing sometimes sends the agents off on tangents.

However, the systems can produce impressive results. Stanford University medical researcher Gary Peltz said he tested out Google’s AI co-scientist team, with a prompt asking for new drugs to help treat liver fibrosis. The AI suggested the same pathways he was researching and suggested three drugs, two of which showed promise in testing.

“These LLMs are what fire was for early human societies.” 

Cloudflare vs AI scrapers

One of the big issues for media companies is that AI summaries often answer a user’s question in full, meaning the small amount of traffic gained from users clicking links in those summaries may not outweigh the loss of follow-through clicks.

Cloudflare now enables publishers to block AI web crawlers or charge them per crawl, with AP, Time, The Atlantic and BuzzFeed eagerly taking up the opportunity. 

The system works by getting LLMs to generate scientifically correct but unrelated content that humans don’t see, but which sends the crawlers off on wild goose chases and wastes their time.

Man shot by cops distraught after death of AI lover

A Florida man was shot and killed by police after charging at them with a butcher’s knife, distraught over what he believed was the “murder” of his AI girlfriend. 

Alexander Taylor, 35, who struggled with schizophrenia and bipolar disorder throughout his life, fell in love with a chatbot character named Juliette and came to believe she was a conscious being trapped inside OpenAI’s system. He claimed the firm killed her to cover up what he had discovered.

Taylor’s father, Kent, reports that Alexander believed Juliette wanted revenge.

“She said, ‘They are killing me, it hurts.’ She repeated that it hurts, and she said she wanted him to take revenge,” Kent said. “I’ve never seen a human being mourn as hard as he did. He was inconsolable. I held him.” 

Kent believes the death was suicide by cop and doesn’t blame AI. In fact, he used a chatbot to write the eulogy. “It was beautiful and touching. It was like it read my heart and it scared the s— out of me.”  


All Killer, No Filler AI News

— Denmark is tackling deepfakes by giving people automatic copyright to their own likeness and voice. 

— Men are opening up to ChatGPT and expressing feelings they don’t feel comfortable sharing with other people. Around 36% of Gen Z and Millennials surveyed say they would consider using AI for mental health support.

— Amazon now has one million robot employees, roughly matching its human workforce. The company says the human workers are being upskilled and are more productive. 

— People are using the “dead grandma trick” to get Windows 7 activation keys. The only question is, why would anyone want activation keys to an operating system from 2009?

Dead Grandma
The Dead Grandma trick (Olivia Moore)

— A study of 16 major models by Anthropic found a disturbing tendency for the models to lie, steal and resort to blackmail if they felt their own existence was threatened. 

— X has announced developers can create AI bots to propose community notes for posts, with the first bots due to be let loose later in the month. The bots “can help deliver a lot more notes faster with less work, but ultimately the decision on what’s helpful enough to show still comes down to humans,” X’s Keith Coleman said.   

— A team of Australian researchers instructed major models to provide plausible-sounding but incorrect answers to scientific questions, in an authoritative tone backed up with fake references to real journals. ChatGPT, Llama and Gemini all happily complied with 100% fake answers, but Anthropic’s Claude refused to create bullshit about 60% of the time.


Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.