Some good points, but poor comparison. Excel is deterministic, AI is not.
Yes, you can ALWAYS trust Excel, after configuring it correctly ONCE. You can NEVER trust AI to produce the same output given the same inputs. Excel never hallucinates, AI hallucinates all the time.
You can actually set it up to give the same outputs given the same inputs (temperature = 0). The variability is on purpose
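To make the temperature point concrete, here is a toy sketch of next-token selection (hypothetical logits, not any provider's actual sampler): as temperature drops to zero, softmax sampling collapses into plain argmax, which is why temperature = 0 removes the intentional randomness.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a token id from raw logits; temperature -> 0 collapses to greedy argmax."""
    if temperature == 0:
        # Greedy decoding: always the highest-logit token, no randomness.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax, then sample proportionally.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_next_token(logits, 0))    # greedy: always token 0
print(sample_next_token(logits, 1.0))  # sampled: usually 0, sometimes 1 or 2
```

With temperature = 0 every call returns the same token for the same logits; the catch, as the replies below note, is whether the logits themselves come out bit-identical every run.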
You can, and that will produce the same output on the same input if there is no variation in floating-point rounding. (That holds when the exact same code runs, but under optimization it is easy to land on a different round-up/round-down, and if two tokens' scores are very close the output will diverge.)
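The rounding point can be shown in a few lines (toy numbers, not real logits): floating-point addition is not associative, so a different reduction order, e.g. from a differently optimized kernel, can shift a logit by one ulp, and if two tokens are nearly tied that shift flips which one greedy decoding picks.

```python
# Floating-point addition is not associative: a different summation order
# changes the low-order bits of the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# If two tokens' logits are nearly tied, that one-ulp difference can flip
# which token greedy decoding picks between two otherwise identical runs.
argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
run1 = [0.6, (a + b) + c]  # second logit = 0.6000000000000001 -> token 1 wins
run2 = [0.6, a + (b + c)]  # second logit = 0.6 exactly -> tie, token 0 wins
print(argmax(run1), argmax(run2))  # 1 0
```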
The point the people (or LLMs arguing against LLMs) miss is that the world is not deterministic, and humans are not deterministic (at least in any practical sense at the human scale). If a system is deterministic, you should indeed not use an LLM… Its power is how it provides answers from messy data… If you need repeatability, write a script / code etc.
(Note: I do think that if the output is for human use, it’s important a human validates that it’s useful… LLMs can help brainstorm and, with some tests, manage a surprising amount of code, but if you don’t validate and test the code it will be slop that maybe works for one test but not for a generic user.)
There are more aspects to the randomness such as race conditions and intentionally nondeterministic tiebreaking when tokens have the same probability, apparently.
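As an illustration of that last point, when two tokens end up with exactly equal probability, a sampler with no fixed seed can return either one on different runs (a toy sketch, not any provider's actual tiebreaking code):

```python
import random

# Two tokens with exactly equal probability: with no fixed seed, repeated
# sampling can return either index, so identical inputs need not give
# identical outputs.
probs = [0.5, 0.5]
picks = {random.choices([0, 1], weights=probs)[0] for _ in range(1000)}
print(sorted(picks))  # [0, 1] (with overwhelming probability)
```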
I actually think LLMs are ill suited for the vast majority of things people are currently using them for, and there are obviously the ethical problems with data centers bringing new fossil fuel power sources online, but the technology is interesting in and of itself
Yeah, in addition to what the commenter above said about floating points and GPU calculations, LLMs are never fully deterministic.
So now you finally admit that LLMs are not truly deterministic and only near-deterministic.
I’ve told you that from the beginning, but you were too smug to first admit that major LLM provider systems are not deterministic, then too smug to look up what near-deterministic systems are and do some research, and kept barking up the wrong tree.
This is not hard stuff to understand, if you understand computing.
And yet, LLMs are not deterministic.
LOL, you clearly have no clue how floating points work in computing. What an imposter you are. Go back to your AI for more “computing” advice, Mr. “Software Engineer”.
You could at least go and verify if your AI is lying to you.
Even when proven wrong, you still don’t give up LMAO 🤣
I’m not gonna bother anymore with you, just talking to a dumb AI here.
Enjoy your “deterministic” AI and good luck in life.
Not true. While setting temperature to zero will drastically reduce variation, it is still only a near-deterministic and not fully deterministic system.
You also have to run the model with the input to determine what the output will be; there is no way to determine it BEFORE running. With a deterministic system, if you know the code you can predict the output with 100% accuracy without ever running it.
This is not the definition of determinism. You are adding qualifications.
I did look it up, and I see now there are other factors that aren’t under your control if you’re using a remote system, so I’ll amend my statement to say that you can have deterministic inference systems, but the big ones most people use cannot be configured to be deterministic by the user.
Deterministic systems are always predictable, even if you never ran the system. Can you determine the output of an LLM with zero temperature without ever having run it?
And even disregarding the above, no, they are still NOT deterministic systems, and can still give different results, even if unlikely. The variation is NOT absolute zero when the temperature is set to zero.
You don’t have to understand a deterministic system for it to be deterministic. You are making that up.
I conceded that setting temperature to 0 for an arbitrary system (including all the remote ones most people are using) does not mean it is deterministic after reading about other factors that influence inference in these systems. That does not mean there are not deterministic implementations of LLM inference, and repeating yourself with NO additional information and using CAPS does NOT make you more CORRECT lol.
So you admit that you were wrong to begin with. And now you’re just grasping at straws to not be completely wrong.
Right back at you buddy.
In my initial response to you I said I was wrong in that my statement was overly broad and not applicable to the systems most people are using, then clarified that nondeterminism is not an intrinsic characteristic of the technology at large but something the most-used implementations happen to have.
You apparently think that conversations are a battle with winners and losers, so the fact that you were right that the biggest systems are nondeterministic for reasons beyond temperature configuration means it doesn’t matter why, doesn’t matter that those factors don’t have to apply to every inference system, and doesn’t matter that you have no idea what determinism means.
In any case talking to you seems like a waste of time, so enjoy your sad victory lap while I block you so I don’t make the mistake of engaging you assuming you’re an earnest interlocutor in the future.