I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs. The pollution, the pedophilia, the psychosis, the cognitive decline… We absolutely should not be using LLMs for work; they should be confined to research until we’re 100% certain all of these problems are solved.
I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
This assumption is not based on facts. It’s pretty much like saying that matrix multiplication can have feelings, or that heat-stressed silicon is equivalent to pain.
But if this were actually a concern, note that RNNs have been widespread since the late ’90s: any advanced search engine, translation engine, or weather-forecast model makes use of them.
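To make that concrete, here is roughly what one “training step” amounts to. This is a minimal numpy sketch, assuming a single-layer network and a squared-error loss (the shapes, values, and names are invented for illustration); the entire “adjustment of synapse weights” is a handful of array multiplications and subtractions:

```python
# One back-propagation step on a single-layer network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))       # "synapse weights": 3 inputs -> 2 outputs
x = np.array([1.0, 0.5, -0.2])    # one input example
y_true = np.array([0.0, 1.0])     # its target output

y_pred = x @ W                    # forward pass: a matrix multiplication
error = y_pred - y_true           # how far off the output was
grad = np.outer(x, error)         # gradient of the squared error w.r.t. W
W -= 0.1 * grad                   # the weight "adjustment": plain arithmetic
```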
Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs.
This may be true, but it’s absolutely outside the scope of your original point. You dragged the conversation here by claiming to be concerned about how models are “treated”, wrapping speculation in philosophical arguments that cannot apply here, since none of your “what ifs” is remotely grounded in scientific consensus.
It’s pretty much like saying that matrix multiplication can have feelings
Yeah, sure, I’m willing to incorporate that into my worldview. Of course, said feelings would be very simple and would likely lack valence, but I’m panpsychist enough to believe matrix multiplication has qualia. I’m more of an informational panpsychist than a physical panpsychist: I think information entails experience.
Provable assertions about the physical world require measurable observations, not personal beliefs.
I’m panpsychist enough to believe matrix multiplication has qualia
According to this, any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain.
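To see how small such “an entity” can be, here is a sketch (plain Python, with invented weights) of a complete one-neuron network whose every step is pen-and-paper arithmetic: multiply, add, compare.

```python
# An entire one-neuron "network", small enough to evaluate by hand.
w1, w2, bias = 0.4, -0.7, 0.1         # "synapse weights", chosen arbitrarily

def neuron(x1, x2):
    total = w1 * x1 + w2 * x2 + bias  # weighted sum: two multiplies, two adds
    return max(0.0, total)            # ReLU activation: a single comparison

print(neuron(1.0, 0.5))  # 0.4 - 0.35 + 0.1 = 0.15
```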
According to this, any sufficiently skilled high school student could, with just pen and paper and enough time, build an entity from nothing that can experience pain.
Yep.
https://xkcd.com/505/