Hallucinating would imply thinking the supermarket is on Mars and driving to Boca Chica. I learned why LLMs hallucinate (assuming I actually get it). I'm only posting this because I had the same concerns for FSD. Plus, I was curious why and how LLMs make stuff up, especially after I saw it with my own eyes. I even shared some of that BS on this forum; lol, it was so convincing too.
Since LLMs are told to pick ONLY the next word, sometimes they make a poor choice (say, from conflicting info), and that sets up the problem. From then on they are forced, word by word, to pick continuations that aren't so likely. They literally corner themselves into a lie built on false information that's out there. No surprise: the training data is literally the Internet, and the Internet is full of it.
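To make the "word by word, no going back" point concrete, here's a toy sketch. It is NOT a real LLM, just a made-up next-word table (all words and probabilities are invented for illustration): once an unlikely word like "moon" gets sampled, every word after it has to stay consistent with "moon", which is the cornering effect described above.

```python
import random

# Toy next-word model: for each previous word, a list of (next_word, probability).
# Everything here is fabricated purely to illustrate the idea.
NEXT_WORD = {
    "<start>":     [("the", 0.9), ("a", 0.1)],
    "the":         [("supermarket", 0.7), ("moon", 0.3)],  # "moon" = the poor choice
    "a":           [("store", 1.0)],
    "supermarket": [("nearby", 1.0)],
    "store":       [("nearby", 1.0)],
    # Once "moon" is picked, only moon-flavored continuations remain on the menu.
    "moon":        [("base", 0.6), ("landing", 0.4)],
    "base":        [("<end>", 1.0)],
    "landing":     [("<end>", 1.0)],
    "nearby":      [("<end>", 1.0)],
}

def generate(seed=None):
    """Sample one word at a time; earlier choices are never revised."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while word != "<end>":
        choices, weights = zip(*NEXT_WORD[word])
        word = rng.choices(choices, weights=weights)[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)

print(generate(seed=0))
```

Run it with different seeds and most outputs are about the supermarket, but some commit to "moon" early and then have no choice but to keep the story going.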
With FSD, as long as the training data is clean, this should not happen. As much as some people want us to believe FSD goes rogue, there is no reason for it to hit that one tree if 99.999% of the drives do not hit the tree. The "liars" in our little FSD world do not have successful outcomes; they do not "get away with it" the way text on the Internet does. The tree hits back, and that drive is obviously not useful for the training set.
Distant objects can look like a stop sign; if you want to call that a hallucination, maybe. Like this Burger King sign. The vehicle's response was to proceed cautiously, slowing to 24 mph, then resume speed once it got a little closer. It did not stop. Better yet, this same problem is now gone just up the street from me.
One could say FSD is getting "less iffy" over time.
Misinterpreting complex situations is a possibility, but with billions of training miles it should be rare. Moreover, even if it hallucinates, it will err on the side of safety, as that is the default choice.