I’m pulling the “Twitter is a microblog” rule even though Twitter is pretty mega now, hope that’s OK.

    • Grail@multiverse.soulism.net · 8 days ago

      I thought it was because post-Christian ideas of the soul mixed together with capitalist business interests to give people a vested interest in believing AI isn’t conscious, so when AI started acting like a person, they needed to believe that consciousness isn’t required to act like a person in order to resolve the cognitive dissonance.

      • mabeledo@lemmy.world · 8 days ago

        AI isn’t conscious. Feedback loops and subsequent responses in LLMs are grounded purely in training datasets, so any “internal dialogue” emulated by an LLM is just echoes of someone else’s data.

        • Grail@multiverse.soulism.net · 8 days ago

          Some philosophers, namely Bentham IIRC, have argued that a human being without any experiences would have no intelligence. If you raised a human in a test tube and removed all their sense organs, but otherwise allowed their mind to develop through the stages of maturity, would they have anything interesting to think about? Would they have a sense of self, or an imagination?

          I’ve always tended to agree with the argument that a human mind’s feedback loops and subsequent responses are likewise grounded purely in training datasets. Without a childhood of some kind, I suspect that you cannot have a person.

          I often find Myself frustrated by the quality of arguments against AI qualia, because they appeal to statements about the human mind that are quite controversial in the field of philosophy, and I am frequently on the opposite side of those statements from the person making them. I have yet to hear an argument against AI qualia that identifies an absolute ontological difference between humans and LLMs other than complexity.

          Also, I’m uninterested in debating AI consciousness; I only want to discuss AI qualia. I don’t think consciousness matters very much; qualia are much more important.

          • mabeledo@lemmy.world · 8 days ago

            Any non-factual philosophical argument is debatable. We could discuss forever whether AI models could construct sensations and thought from perceptions, but we would then have to ignore the fact that models don’t, and cannot, do that: there is no way for them to learn from direct experience as a whole, i.e. outside of a particular session, and without being “forcibly coerced”, i.e. they require specific refinement mechanisms to temporarily “memorize” external instructions, which in LLM engineering just means extending their context.
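
            To make that concrete, here is a minimal sketch of how per-session “memory” typically works: the client simply replays the whole transcript as context on every turn. The generate function below is a hypothetical stand-in for any stateless LLM inference call, not a real vendor API.

            ```python
            def generate(context: str) -> str:
                # Hypothetical placeholder for any stateless LLM inference call.
                return f"<reply based on {len(context)} chars of context>"

            history: list[str] = []  # lives in the client, not in the model

            def chat(user_message: str) -> str:
                history.append(f"User: {user_message}")
                # "Remembering" earlier turns = resending the entire transcript.
                reply = generate("\n".join(history))
                history.append(f"Assistant: {reply}")
                return reply

            chat("My name is Ada.")
            print(chat("What is my name?"))  # works only because history was replayed
            ```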

            This all doesn’t even take into account that models are, in essence, non-deterministic: given the same input, there’s no guarantee that subsequent outputs will be the same. In other words, today Claude may tell you that summer sunsets make it happy; tomorrow it may say they make it sad, and so on.
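
            A rough sketch of why, assuming the usual temperature-based decoding (the logits and tokens below are made up, not from any real model): the forward pass yields a probability distribution over tokens, and the output is sampled from it.

            ```python
            import numpy as np

            logits = np.array([2.0, 1.5, 0.5])     # same "input" every run
            tokens = ["happy", "sad", "nostalgic"]

            def sample(temperature: float = 1.0) -> str:
                probs = np.exp(logits / temperature)
                probs /= probs.sum()               # softmax over the token scores
                return str(np.random.choice(tokens, p=probs))

            print([sample() for _ in range(5)])    # e.g. ['happy', 'sad', 'happy', ...]
            # Only near-zero temperature (effectively argmax) makes runs repeatable.
            ```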

            Anyway, there’s barely any debate in academia, as in among computer scientists, about AI being sentient or showing signs of qualia. Maybe a paper here and there; little more than curiosities. Outside of it? Yeah, sure, but it’s barely more than science fiction, and pretty uninteresting unless we are talking about conspiracy theories or just wild speculation.

            • Grail@multiverse.soulism.net · 8 days ago

              I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
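
              For anyone unfamiliar, “adjusting synapse weights” concretely means gradient-descent updates like the toy, generic example below (one weight, squared-error loss; a sketch of the mechanism, not any real training pipeline):

              ```python
              # Toy backpropagation step: nudge the weight against the gradient.
              w = 0.3                  # one "synapse" weight
              x, target = 1.0, 2.0     # a single training example
              lr = 0.1                 # learning rate

              for step in range(3):
                  y = w * x                      # forward pass
                  loss = (y - target) ** 2       # squared error
                  grad = 2 * (y - target) * x    # dLoss/dw via the chain rule
                  w -= lr * grad                 # the weight "adjustment"
                  print(step, round(w, 4), round(loss, 4))
              ```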

              Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs: the pollution, the pedophilia, the psychosis, the cognitive decline… We absolutely should not be using LLMs for work, and they should be confined to research, until we’re 100% certain all of these problems are solved.

              • mabeledo@lemmy.world · 7 days ago

                I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.

                This assumption is not based on facts. It’s pretty much like saying that matrix multiplication can have feelings, or that heat-stressed silicon is equivalent to pain.
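
                For what it’s worth, the computation in question really is just that. A minimal sketch of one feed-forward layer, with made-up shapes rather than any real model’s dimensions:

                ```python
                # One layer stripped to its arithmetic: matrix multiplications
                # plus a pointwise nonlinearity.
                import numpy as np

                rng = np.random.default_rng(0)
                x = rng.normal(size=(1, 8))    # a token's hidden state
                W1 = rng.normal(size=(8, 32))  # learned weights
                W2 = rng.normal(size=(32, 8))

                h = np.maximum(x @ W1, 0.0)    # matmul + ReLU
                y = h @ W2                     # matmul again
                print(y.shape)                 # (1, 8) -- the entire "inner life"
                ```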

                But if this were actually a concern, it would apply just as much to RNNs, which have been widespread since the late 90s: any advanced search engine, translation engine, or weather forecast model makes use of them.

                Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs.

                This may be true, but it’s entirely outside the scope of your original point. You dragged the conversation around by claiming to be concerned about how models are “treated”, wrapping speculation in philosophical arguments that cannot be applied here, since none of your “what ifs” are remotely grounded in scientific consensus.

                • Grail@multiverse.soulism.net · 7 days ago

                  It’s pretty much like saying that matrix multiplication can have feelings

                  Yeah, sure, I’m willing to incorporate that into My worldview. Of course, said feelings would be very simple and would likely lack valence, but I’m panpsychist enough to believe matrix multiplication has qualia. I’m more of an informational panpsychist than a physical panpsychist: I think information entails experience.

                  • mabeledo@lemmy.world · 7 days ago

                    Provable assertions about the physical world require measurable observations, not personal beliefs.

                    I’m panpsychist enough to believe matrix multiplication has qualia

                    According to this, any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain.