I wonder how they will prevent hallucinations with this method?
Overall it’s interesting how the car can function at a basic level (though obviously nowhere near good enough in this demo, and probably not for a few years yet - if ever - remember prior rewrites!).
I assume there is typical Elon nonsense here and it is not actually end to end. Can someone listen to the video again and say whether Ashok actually confirmed it is true end to end, or whether he has a plausible case that he did not? I assume he cares about his credibility, though perhaps not.
It’s a shame that we cannot take Elon’s statements at face value. Tragedy really - the value of credibility becomes clear in these situations.
But anyway, I am sure it is partially true, but I am doubtful that there are no elements that still require substantial hand-coding of rules.
Maybe they will be able to get to the first 9 on Chuck’s turn? Then the March of 9’s can begin (finally)!
Anyway, looking forward to it. Should be a multi-year journey to something that is a good L2 aid (possibly in a different form than shown here), slightly better than our current one. Very exciting.
Um. I've been reading casually about these hallucinations that everybody (properly) is focused upon.
First off: A great many of these seem to be the LLM in question attempting to answer a question to which somebody wants a predetermined answer. Like those lawyers in NY/NJ who wanted a Good Reason for an airline's contract-of-carriage terms to be invalidated. The LLM, trying to meet those demands, just Made Up Stuff and, when asked about it later, Made Up More.
Or being asked some $RANDOM question and getting back a $RANDOM answer that might look ok, but it's nonsense.
The difference here is effective feedback. That's what the Dojo is all about: Take the NN/AI with all its weights, feed into it in the Dojo a zillion scenarios with desired outcomes that are Known True; if the AI/NN misbehaves, one sees it right off, adjusts weights, and tries again. If the Bad Answer keeps on coming up like a bad penny, then Something Is Done About It.
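The feedback loop described above can be sketched in a few lines. This is purely illustrative (not Tesla's actual Dojo code, and a real driving network has billions of weights, not one): run scenarios with Known True outcomes through the model, measure how far off the answer is, and nudge the weights until the Bad Answer stops coming up.

```python
# Illustrative sketch of train-against-known-outcomes feedback, NOT real Dojo code.
# Toy "network": prediction = w * x. The scenarios assume a true relationship y = 3x.

scenarios = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # (input, Known True outcome)

w = 0.0              # the single weight we adjust
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in scenarios:
        y_pred = w * x
        error = y_pred - y_true          # the "misbehavior": distance from Known True
        w -= learning_rate * error * x   # adjust the weight to shrink the error

# After enough rounds of feedback, the weight settles near the true value of 3,
# i.e. the bad answer no longer keeps coming up like a bad penny.
assert abs(w - 3.0) < 0.01
```

The point of the sketch is the shape of the loop, not the arithmetic: misbehavior is only visible because every scenario carries a desired outcome to compare against, which is exactly what an LLM answering an open-ended question doesn't have.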
All right. That's my argument for Pro. I'm still having qualms.
Look: When one is walking somewhere, one doesn't particularly worry about one's balance. Or taking steps. That's being done behind the scenes by NNs built into not just our brains in our skulls, but in the grey matter that lives in the spinal cord. Think: Reflexes, like the doc hitting one's tendons with that little medical hammer.
A heck of a lot that makes us go is handled by NNs without us thinking about it. And, even having said that, NNs can learn: Even worms can be trained to navigate a simple maze.
It's our forebrains where reasoning, at least for us mammals, takes place. As an example, I strongly suspect that the reasoning process that results in people solving Calculus problems is probably not directly NN stuff (although the fact that one learns Calculus and gets better at it over time argues that it may be), but is rather this froth on top of the base NN that is in operation.
And we're good enough at it that, if one is asked, "Why did you do such-and-such?" most of us can come up with a logical progression. "A car was coming up quick on the right so I didn't move that way." "I noticed a bird diving towards my head, so I ducked."
But, what with weights and all, one can't ask a NN what brought it to a particular plan of action - it just does it. Like balancing without falling over is for us.
All those 300,000+ lines of C++ code that Elon was referring to were attempts (possibly fairly successful attempts, or not, depending upon who one listens to) to mimic our ability to come up with reasons that we do things while driving.
But.. I dunno about you guys, but when I'm driving down ye road, often, idly, I'll be thinking of something else. Or listening to music. Or chatting with another person in the car. Implication: I'm not, exactly, thinking about what I'm doing.. I'm letting my built-in NN do the work. Or mostly letting a trained NN do most of the work, with the actual brain doing some little supervision on top. Think about walking on a forest trail: Yes, one is multitasking: Balancing, looking for one's next step, smelling, looking around for who-knows-what, and I have no idea how many other tasks. How much of that is conscious thought and how much isn't?
It may be that coders and Musk have hit upon a different way of looking at all this that results in a more efficient allocation of resources, faster reaction times, and multitasking, all at once. Wowsers.