ChatGPT made a big splash last fall and large language models (LLMs) in general continue to be a hot topic. My current favorite is using Bing AI (https://bing.com/chat) because it combines current search results with the information it already has in its model.
But there’s still a lot of confusion about what powers these sites. How do they know things? Are they sentient? Stuff You Should Know did a great episode called Large Language Models and You, and I think it’s worth a listen. I’ve heard LLMs described before as “word salad”, and this episode offers another good framing: an LLM is like an iteration on the autocomplete in a text box. The algorithm just knows which words are most likely to come next and which words are related to each other. It has no concept of what it is saying; it only knows that those words are likely to go together given your prompt. So there’s no sentience or actual knowledge happening here, which is probably good, but it’s also bad because it means ridiculous answers can come out and be presented as fact.
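If you want to see the autocomplete idea in miniature, here’s a rough sketch (not how a real LLM works internally — actual models use neural networks trained on vast amounts of text — just the simplest possible “which word comes next?” table, with a made-up toy corpus):

```python
import random
from collections import defaultdict

# Toy corpus. A real LLM trains on trillions of words, but the core idea
# is the same: learn which words tend to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a table of observed next words for each word (a bigram table).
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def autocomplete(word, length=5, seed=0):
    """Extend a prompt by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop
        out.append(rng.choice(candidates))  # sample from what came next in training
    return " ".join(out)

print(autocomplete("the"))
```

The output can be grammatical-looking nonsense, which is exactly the point: the program has no idea what a cat or a mat is, only which words followed which. Scale that up enormously and you get text that sounds confident whether or not it’s true.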
The episode covers all of this and also does a good job of conveying how incredibly fast things are improving. Give it a listen if this topic interests (or scares) you.