

You’re using AI to turn research papers you don’t understand into science education videos you also don’t understand? What fresh hell is this?
I actually watched the video, and it was surprisingly good, but you might want to tell the AI to give more consideration to the broader context of the field next time, because studies don’t exist in a vacuum. For all we know, this paper could be the logical next step in the field (which would lend it credibility), or it could go firmly against well-established theory (which would make it an extraordinary claim requiring extraordinary proof).










“Liable” means they might post a correction later that nobody will see, because corrections aren’t sexy to algorithms. Big deal. LLM vendors are liable in practically the same way.