AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything, treating it like it’s a cure for cancer, and that really rubs people the wrong way. Then, on more of a societal level, you’ve got everyone from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist when that’s really not advised.
However, I’ve found some benefits to AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may lean towards getting into. It’s helping me understand better than most people who have tried explaining it to me, simply because it gives me a more streamlined response instead of beating around the bush.
It’s good as an assistant.
Most of my qualms with AI aren’t in the usage of AI, but in its creation (water usage, mass layoffs, etc.—you’ve heard it all before).
To me it’s like asking “What are some good uses for slaves?” (An extreme example to show the point, I’m not trying to say AI is the same as slavery).
Like yeah I could find good uses for it, but should it exist in the first place?
I actually find it pretty helpful for tech support stuff. It doesn’t always get it right, but it’s usually at least in the right general area and TBH it beats going through endless forums where the answer is buried among 8 pages of people bickering about nothing, or those ones where someone has your exact problem and then replies “nm I fixed it” and doesn’t say what they did.
LLMs tend to be a “jack of all trades, master of none”. You are likely to find them useful for helping you with something you are inexperienced at, but not with something you are an expert in. However, because they lie a lot, it’s best to double-check your information; still, the LLM can be helpful with the “you don’t know what you don’t know” problem.
Converting PDFs into HTML or RTF/TXT docs without OCR typos. Until recently, it was almost impossible to turn a scanned book from PDF into a doc or TXT file, because the output of copying and pasting, or of converting with PDF tools, was illegible. AI can now do a “deep AI seek” (look it up) into the texts.
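One way to approximate that workflow is to chunk the messy copy-pasted text and hand each chunk to a chat model with a “fix only the OCR errors” instruction. This is a hedged sketch: the prompt wording and the 4,000-character chunk size are my assumptions, and the actual model call is left to whatever API you use.

```python
# Sketch: split raw copy-paste output from a scanned PDF into chunks on
# paragraph boundaries, so each piece fits in a model's context window.
def chunk(text: str, size: int = 4000) -> list[str]:
    parts, buf = [], ""
    for para in text.split("\n\n"):
        # start a new chunk when adding this paragraph would overflow
        if len(buf) + len(para) > size and buf:
            parts.append(buf.strip())
            buf = ""
        buf += para + "\n\n"
    if buf.strip():
        parts.append(buf.strip())
    return parts

# Prompt template (an assumption, not from the comment): each chunk gets
# sent to the model with this instruction, replies are concatenated back.
PROMPT = ("Below is text copied from a scanned PDF. Reproduce it exactly, "
          "fixing only OCR typos and broken line wraps:\n\n{chunk}")

# Fake OCR-damaged input ("Tbe", "brovvn") standing in for a real book.
pages = chunk("Tbe quick brovvn fox.\n\n" * 300)
print(len(pages), "chunks;", PROMPT.format(chunk=pages[0])[:40], "...")
```

Chunking on paragraph boundaries (rather than fixed offsets) matters here, because a chunk that ends mid-sentence invites the model to “repair” the cut instead of just fixing typos.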
I am converting a textbook into an audiobook in HTML (paragraph highlighting with manual sync) with an integrated popup glossary for every word (with grammar and meaning) and a dictionary lookup on click.
Besides, as an appendix to each chapter, I add all the explanations from the book.
I took the ~4,500 words of the book and asked for a grammar analysis and meaning lookup to create a glossary. The AI joyfully skipped many terms, but that is something I will fix when each chapter is finished. Now I am being punished with waiting despite having paid $20.
Learning, exploring concepts and ideas.
I’ve used it to summarize long documents.
Curating massive music libraries. I’ve been using a small embedding model to organise my music for DJing, and being able to generate a t-SNE plot clustered on perceptual similarity has been wonderfully useful.
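A minimal sketch of that clustering step, assuming you already have one embedding vector per track. Random vectors stand in for real audio embeddings here; the `TSNE` call is scikit-learn’s standard API.

```python
# Project per-track embeddings down to 2D with t-SNE for visual browsing.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 200 tracks, 128-dim embeddings (placeholders for real model output)
embeddings = rng.normal(size=(200, 128))

# perplexity must stay below the number of samples; 30 is the usual default
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
print(coords.shape)  # one (x, y) point per track, ready to scatter-plot
```

With real embeddings, nearby points on the resulting scatter plot are perceptually similar tracks, which is what makes it usable as a crate-digging map.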
I’ve also found CLIP models useful for searching videos: just embed a screenshot every couple of minutes of footage and query with a description of the scene.
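The retrieval half of that trick is just cosine similarity between the text embedding and the frame embeddings. Sketch below with random placeholders standing in for real CLIP outputs; to make the answer verifiable, the “query” is frame 17 plus a little noise.

```python
# Find the sampled frame whose embedding best matches a query embedding.
import numpy as np

rng = np.random.default_rng(1)
frame_embeddings = rng.normal(size=(50, 512))   # one per sampled screenshot
timestamps = np.arange(50) * 120                # a screenshot every 2 minutes
# stand-in for a CLIP text embedding: frame 17's vector, slightly perturbed
query_embedding = frame_embeddings[17] + rng.normal(scale=0.1, size=512)

# cosine similarity = dot product of L2-normalised vectors
frames = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
query = query_embedding / np.linalg.norm(query_embedding)
best = int(np.argmax(frames @ query))
print(f"best match at {timestamps[best]} s")    # frame 17 -> 2040 s
```

In the real version, `frame_embeddings` comes from CLIP’s image encoder over the screenshots and `query_embedding` from its text encoder over your scene description; the argmax gives you a timestamp to seek to.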
And as bad as generated subtitles can be, when the only other option is nothing at all they are pretty nice to have.
Vibe coding slop you don’t need to work in production
Translation is pretty good.
They want to make AI NPCs in games, which could be awesome if we can ever reduce the system requirements for running them.
I tried out a game/demo thing that was a tester for AI NPC dialogue. I asked an NPC to tell me about himself and he replied that he could not connect to the server, lol.
There’s that one silly vampire game which uses AI NPCs; it seems kind of fun judging from the people I’ve seen play it.
I have a script that uses yt-dlp to grab the subtitles off a YouTube video and summarises the main points with a language model, so that I don’t have to watch a 20-minute top-10 list video that could’ve been a BuzzFeed article.
The whole thing is fully vibe engineered too.
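A sketch of such a pipeline, under assumptions: the yt-dlp flags shown are real CLI options, the WebVTT-stripping step is the testable middle of the pipeline, and the final call to your language model of choice is omitted.

```python
# yt-dlp fetches auto-generated subtitles; we strip the WebVTT timing
# cruft down to plain text before handing it to a summariser.
import re
import subprocess

def fetch_vtt(url: str) -> str:
    # --write-auto-subs --skip-download grabs only the subtitle file
    subprocess.run(["yt-dlp", "--write-auto-subs", "--sub-langs", "en",
                    "--skip-download", "-o", "subs", url], check=True)
    return open("subs.en.vtt").read()

def vtt_to_text(vtt: str) -> str:
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # drop the header, cue timing lines, and blanks
        if not line or line == "WEBVTT" or "-->" in line:
            continue
        lines.append(re.sub(r"<[^>]+>", "", line))  # strip inline tags
    return " ".join(lines)

# tiny sample cue instead of hitting the network
sample = "WEBVTT\n\n00:00.000 --> 00:02.000\nnumber ten is a <b>toaster</b>\n"
print(vtt_to_text(sample))  # number ten is a toaster
```

The output of `vtt_to_text(fetch_vtt(url))` is what you’d paste into the model with a “summarise the main points” prompt.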
Chatbots? Basically nothing. Any interaction I have with one leads to spending more time verifying its output, inevitably finding many mistakes, and eventually finding a primary source for what I’m actually looking for. The best actual impact it has is forcing me to narrow down my nebulous question into what I actually specifically want, but the bot itself is contributing very little to that.
Neural nets in general have some limited real usefulness for analyzing large batches of data when purpose-built analysis software doesn’t exist.
“AI” is a misnomer and there is absolutely zero evidence to suggest that we’re even on a path toward actual AI, sometimes called AGI, though they’re also changing that to just mean a profitable LLM which is fucking hilarious.
Any task you use a bot to do, you will become worse at that task. For mass data analysis, that’s fine, poring over reams of data is already a skill that other technology has largely obsoleted. But using it to do research, to read or write for you, or god forbid to make actual decisions and think for you, are very slippery slopes that are already causing a lot of the general public to seriously erode their basic mental capabilities.
In computational biology / biotechnology, LLMs are being trained on biological sequences and can then be used to generate new genes or genetic variants. These genes can be placed into bacteria, which are then fed e.g. sugar to make them produce various valuable molecules from renewable resources instead of from crude oil via conventional chemistry. There is also work on enabling plastic biodegradation this way.
Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.
Watch out, personal finance is not one of those things.
- Searching a large dataset with vague search criteria.
- Real-time feedback when studying a foreign language (since accuracy is less important than quantity).
- Apparently in medicine they’re using generative AI for something meaningful, but I’m not entirely convinced it is actually generative AI and I’d need to do more research.
- Sometimes it can help in learning to program and in sanity-checking code security.
If you’re thinking of protein design, it is, just with a sequence instead of natural-language text. Although it’s not just a straight LLM; there’s some kind of physics awareness engineered in as well.




