• kazerniel@lemmy.world
    3 days ago

    Please don’t recommend AI for therapeutic uses, it’s only been optimised to keep the user engaged and pushed many people into psychosis. Just search for “ai psychosis” on your favourite search engine and you’ll get a ton of reports on how LLMs validate vulnerable people’s delusions, sometimes pushing them all the way into murder and/or suicide.

      • Bytemeister@lemmy.world
        2 days ago

        This is a post about heroin. It’s better than oxy, and the sad thing is, it’s the best option a lot of people have.

        I actually don’t know much about drugs, but you get the point: you should not be trying to “self-medicate” for psychological pain from unregulated “street” vendors.

      • korazail@lemmy.myserv.one
        2 days ago

        I was about to reply that you forgot your /s, but then I refreshed my browser tab.

        Like… there are multiple documented cases of sycophantic llms confirming people’s delusions. ‘ai psychosis’ is just a short way of saying the AI is an unfunny improv comedian that will always “yes, and” your prompt.

        prompt: “I feel bad and think I need to kill myself”

        response: “You’re totally right, here’s some help in how to do that…”

        prompt: “I have this great idea: If we eat broken glass, we’ll be healthier”

        response: “Absolutely. Glass is made out of silicon dioxide, which has some health benefits if consumed in small amounts.”

        prompt: “You told me to see a doctor, but I don’t want to”

        response: “I’m sorry, you’re right. You don’t need to see a doctor. Your chest pain is perfectly normal.”

        My examples are physical rather than mental because the consequences are clearer, but the same issue exists for mental health.


        Using an AI for therapy or medical advice is a stupid, dumb, very bad idea. It will at best magnify problems.

        Suggesting that disabled or impoverished people use it because they can’t access actual mental healthcare seems equivalent to eugenics to me.


        the sad thing is, it’s the best option a lot of people have

        That I will agree with. Maybe we should spend a small fraction of the money going into data centers on providing healthcare instead.

      • captainlezbian@lemmy.world
        2 days ago

        And I’d like independent studies to prove it’s better than nothing before I’d recommend it to replace nothing. Especially when self guided mental health solutions such as meditation exist.

          • VeloRama@feddit.org
            2 days ago

            AI will not ground you; it will reinforce what you already believe. That’s why it’s very dangerous for “therapeutic” use.

              • pinball_wizard@lemmy.zip
                2 days ago

                Just FYI, there are experts in this thread telling you it doesn’t matter which one.

                Yes, some are worse than others. Yes, some have some trivial safeguards added for the worst known risks.

                But no, none of them are remotely safe for use with self guided therapy.

                As others have mentioned, anyone doing so would be much better off pirating or shoplifting the appropriate books, directly.

                Even responsible people using AI for expert knowledge run a risk from the way the AI jumps immediately to the answer it thinks they want, ignoring all other available answers. :(

                Edit: Sorry, I missed the context you were addressing. Yes! Certainly no one deserves the sucky consequences that can come with these tools just for seeking help!

          • captainlezbian@lemmy.world
            2 days ago

            Because nothing doesn’t run the risk of encouraging catastrophizing, acting on your heightened emotions, or coming to irrational conclusions. If it’s consistently able to not do those things for a variety of people that’s great. But as someone who had to learn to control her panic attacks, I absolutely can see advice and recommendations that are worse than nothing.

            And yeah given llms’ reputation for dealing with psychosis, delusions, and suicidality, I don’t trust any of the technology compared to nothing, despite knowing how difficult nothing is for panic attacks.

              • captainlezbian@lemmy.world
                2 days ago

                That’s fair, but given the way the technology actually works, I stand by my position that there is a very real potential for harm, and that there are safer alternatives that are similarly accessible. If studies show it’s safe and helpful, that’s cool, but at this moment I’d strongly discourage any loved one who’s interested in using an llm for this purpose and would instead point them towards other resources.