How can they continue to let us use v11… while this superior version is available?

Tesla has a simulator where they can test different versions against a host of scenarios both real and fictional. They probably have data showing that v12 isn't yet as generally performant as v11.

v12 might have been trained only on Palo Alto footage, for example.
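In code terms, that kind of version-vs-version gate might look like the minimal sketch below. Everything in it is made up for illustration (the policies, pass probabilities, and harness), not Tesla's actual tooling:

```python
# Hypothetical sketch of version-vs-version scenario testing. A "policy"
# here is any callable that takes a scenario and returns True (pass) or
# False (failure/intervention); the pass probabilities are invented.

import random

def pass_rate(policy, scenarios):
    """Fraction of scenarios the policy completes without an intervention."""
    return sum(policy(s) for s in scenarios) / len(scenarios)

# Stand-in policies, just to make the harness runnable.
policy_v11 = lambda s: random.random() < 0.90
policy_v12 = lambda s: random.random() < 0.80   # newer, less proven

scenarios = range(10_000)   # real ones would be logged or simulated drives
v11, v12 = pass_rate(policy_v11, scenarios), pass_rate(policy_v12, scenarios)
print(f"v11: {v11:.1%}  v12: {v12:.1%}  ship v12? {v12 >= v11}")
```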
 
Glad he was able to stream v12. E2E is probably similar to what the big boys are running, albeit they run more complete sensor suites.

Overall it looks interesting, but there are still lots of questions. One of the biggies: do the HW3 and/or HW4 NNs have the capacity to absorb the seemingly endless scenarios of training data needed to make this work? Performance all comes down to NN weights and a mythical complete set of training data.
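To put the capacity question in concrete terms, here's a back-of-envelope sketch. The parameter count and quantization are assumptions for illustration, not Tesla's figures; the point is that the car holds a fixed number of weights, and more training clips change those weights' values without growing their count:

```python
# Assumed figures, purely illustrative: not Tesla's model sizes or HW specs.
params = 400_000_000        # hypothetical parameter count of a driving net
bytes_per_param = 1         # assume int8 quantization for inference
print(f"Model footprint: {params * bytes_per_param / 1e9:.1f} GB")  # 0.4 GB

# Feeding in more clips does not grow this footprint; it only changes the
# weight values. The open question is whether this many weights can encode
# enough driving behavior, not whether the clips "fit" in the car.
clips = 10_000_000
print(f"{clips:,} training clips -> still {params:,} weights")
```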

Maybe the most refreshing info from this stream was Elon finally admitting V11.x is a kludgy mess.
 
The inference computers in HW3/4 don't work on the scenarios themselves. Dojo will chew through all the scenarios during training and hand the inference computer only what it needs to know: the resulting weights.
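As a hedged sketch of that split (placeholder model, shapes, and file names; nothing Tesla-specific): the training side iterates over scenario data and updates weights, and the car only ever loads the frozen result.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))

# Training side (Dojo's job): iterate over scenario data, update weights.
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(100):                        # stand-in for the real data loop
    features = torch.randn(32, 512)         # stand-in scenario encodings
    target_controls = torch.randn(32, 2)    # stand-in demonstrated controls
    loss = nn.functional.mse_loss(policy(features), target_controls)
    opt.zero_grad(); loss.backward(); opt.step()
torch.save(policy.state_dict(), "policy.pt")

# Inference side (the car): load frozen weights, never see the scenarios.
car_policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
car_policy.load_state_dict(torch.load("policy.pt"))
car_policy.eval()
with torch.no_grad():
    controls = car_policy(torch.randn(1, 512))   # one frame's encoding
```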
 
No, but it misinterpreted the LEFT TURN lane's light, which does go green, as applying to the GO STRAIGHT lane the car was in, which had TWO SOLID REDS, and it started to proceed into the intersection (up to 7 mph) while turning cars from the OPPOSING direction were clearly crossing left to right in front of the path Elon's Tesla was about to take. So it was a full intervention. Elon even says "ok, intervention" but then says something like "let's not call it an intervention," since I think he (possibly correctly) interprets it as a total fail, the equivalent of running a red light or a stop sign. It's at about 19:50-21:00.
That was a pretty basic traffic-light configuration to not have training data for. It was also slow to respond to and/or acknowledge the yellow-to-red light in the neighborhood.
 
Tesla has a simulator where they can test different versions against a host of scenarios both real and fictional. They probably have data showing that v12 isn't yet as generally performant as v11.

v12 might have been trained only on Palo Alto footage, for example.
Maybe… but they did note that they are currently testing V12 at dozens of locations around the world, so it's likely not a Palo-Alto-only build.
 
I wonder how they will prevent hallucinations with this method?

Overall it’s interesting how the car can function at a basic level (though it is obviously nowhere near good enough in this demo, and probably won't be for a few years yet, if ever; remember prior rewrites!).

I assume there is typical Elon nonsense here and it is not actually end to end - can someone listen to the video again and say whether Ashok actually confirmed it is true end to end, or whether he has a plausible case to say he did not? I assume he cares about his credibility, though perhaps not.

It’s a shame that we cannot take Elon’s statements at face value. Tragedy really - the value of credibility becomes clear in these situations.

But anyway, I am sure it is partially true, but I am skeptical that there are no elements that still require substantial hand-coded rules.

Maybe they will be able to get to the first 9 on Chuck’s turn? Then the March of 9’s can begin (finally)!

Anyway, looking forward to it. Should be a multi-year journey to something that is a good L2 aid (possibly in a different form than shown here), slightly better than our current one. Very exciting.
 
I don't think they will have LLM-style hallucinations, because the network is not predicting the most likely next word while having no grasp of what the end result will be.
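To make that distinction concrete, here's a toy contrast; both "models" below are random stand-ins, resembling neither a production LLM nor Tesla's net. The LLM head samples from an open-ended token distribution, while a driving head regresses a bounded control vector:

```python
import torch
import torch.nn as nn

# LLM-style head: sample the next token from a softmax over a big vocabulary.
vocab_logits = torch.randn(50_000)
next_token = torch.multinomial(torch.softmax(vocab_logits, dim=-1), 1)
# Any of 50k tokens can come out; a fluent-but-false sequence is possible.

# Driving-style head: deterministic, bounded, continuous outputs.
control_head = nn.Linear(512, 2)
steer, accel = torch.tanh(control_head(torch.randn(1, 512))).squeeze()
# Confined to [-1, 1]: the net can still be plain wrong (see the red-light
# miss in the demo), but it can't invent free-form content out of thin air.
```

So "hallucination" in the LLM sense arguably doesn't map onto this, though being confidently wrong clearly still does.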

The March of 9s on UPLs (unprotected left turns) is tough. If needed, they can avoid UPLs and just turn right when cross traffic is faster than 50 mph, etc.

Multi-month journey to L4?
 
I assume there is typical Elon nonsense here and it is not actually end to end - can someone listen to the video again and say whether Ashok actually confirmed it is true end to end, or whether he has a plausible case to say he did not? I assume he cares about his credibility, though perhaps not.

It’s a shame that we cannot take Elon’s statements at face value. Tragedy really - the value of credibility becomes clear in these situations.

But anyway, I am sure it is partially true, but I am skeptical that there are no elements that still require substantial hand-coded rules.
Elon kept saying there are no lines of code for almost all the basic driving rules. Ashok never really agreed or disagreed (other than vague responses); I wonder how much of what Elon said was correct.

It's got to the point that I don't believe anything from anyone at Tesla unless they are under deposition or testifying in court - and even then they don't tell the whole truth about things or "cannot recall". I'm sure we will find out in a few years that this video was not E2E but was showing what they aspire E2E to be. Possibly some elements were E2E built on a foundation of V11 coding.

Is it really a good idea to train a new "driver" by showing it masses of videos and not giving it the rules? That's not how we teach human drivers; we very much hard-code the rules into them through teaching and testing. How could it have missed the red-light rule, then? I wonder if it would have continued accelerating into the line of left-turning cars, or at what point it would have attempted to avoid the collision.
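That hybrid (imitation from video plus a hard-coded guardrail or two) would look roughly like the sketch below. The red-light override and all of the names are hypothetical, not Tesla's architecture:

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Behavior cloning: no rules, just "match what the human driver did."
for _ in range(100):
    frames = torch.randn(32, 512)         # stand-in video-frame encodings
    human_controls = torch.randn(32, 2)   # stand-in (steer, accel) labels
    loss = nn.functional.mse_loss(policy(frames), human_controls)
    opt.zero_grad(); loss.backward(); opt.step()

def drive(frame_features, red_light_detected):
    """One explicit rule layered on top of the purely imitated policy."""
    steer, accel = torch.tanh(policy(frame_features)).squeeze()
    if red_light_detected:   # the kind of hand-coded rule v12 claims to drop
        accel = torch.tensor(-1.0)   # force braking, whatever the net says
    return steer, accel
```

Without that last if statement, stopping at reds has to be fully implicit in the training data, which is exactly why the missed red in the demo is worrying.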
 
I wonder how they will prevent hallucinations with this method?

Um. I've been reading casually about these hallucinations that everybody is (properly) focused upon.

First off: A great many of these seem to be the LLM in question attempting to answer a question that somebody wants a predetermined answer to. Like those lawyers in NY/NJ who wanted a Good Reason for an airline's contract-of-carriage terms to be invalidated. The LLM, trying to meet those demands, just Made Up Stuff and, when asked about it later, Made Up More.

Or being asked some $RANDOM question and giving back a $RANDOM answer that might look OK but is nonsense.

The difference here is effective feedback. That's what Dojo is all about: take the NN/AI with all its weights; feed it a zillion scenarios with desired outcomes that are Known True; if the AI/NN misbehaves, one sees it right off, adjusts weights, and tries again. If the Bad Answer keeps coming up like a bad penny, then Something Is Done About It.
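A minimal sketch of that loop, assuming a bank of scenarios with known-correct control outputs; everything here is invented, and the real Dojo is of course vastly more than a plain eval loop:

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))

def find_failures(policy, scenario_bank, tolerance=0.1):
    """Return scenarios where the net misses the Known True answer."""
    failures = []
    with torch.no_grad():
        for features, known_true in scenario_bank:
            pred = torch.tanh(policy(features))
            if (pred - known_true).abs().max() > tolerance:
                failures.append((features, known_true))
    return failures

# Stand-in scenario bank with ground-truth controls.
bank = [(torch.randn(1, 512), torch.zeros(1, 2)) for _ in range(1000)]
misses = find_failures(policy, bank)
print(f"{len(misses)}/{len(bank)} scenarios failed; retrain on these.")
```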

All right. That's my argument for Pro. I'm still having qualms.

Look: When one is walking somewhere, one doesn't particularly worry about one's balance, or about taking steps. That's being done behind the scenes by NNs built not just into the brains in our skulls, but into the grey matter that lives in the spinal cord. Think: reflexes, like the doc hitting one's tendons with that little medical hammer.

A heck of a lot that makes us go is handled by NNs without us thinking about it. And, even having said that, NNs can learn: Even worms can be trained to navigate a simple maze.

It's our forebrains where reasoning, at least for us mammals, takes place. As an example, I strongly suspect that the reasoning process that results in people solving Calculus problems is probably not directly NN stuff (although, the fact that one learns Calculus and gets better at it over time argues that it may be), but is rather this froth on top of the base NN that is in operation.

And we're good enough at it so, if one is asked, "Why did you do such-and-such?" most of us can come up with a logical progression. "A car was coming up quick on the right so I didn't move that way." "I noticed a bird diving towards my head, so I ducked."

But, what with weights and all, one can't ask a NN what brought it to a particular plan of action - it just does it. Like balancing without falling over is for us.

All those 300k+ lines of C++ code that Elon was referring to were attempts (possibly fairly successful attempts, or not, depending upon who one listens to) to mimic our ability to come up with reasons for what we do while driving.

But... I dunno about you guys, but when I'm driving down ye road, often, idly, I'll be thinking of something else. Or listening to music. Or chatting with another person in the car. Implication: I'm not, exactly, thinking about what I'm doing; I'm letting my built-in NN do the work. Or mostly letting a trained NN do the work, with the actual brain doing a little supervision on top. Think about walking on a forest trail: yes, one is multitasking: balancing, looking for one's next step, smelling, looking around for who-knows-what, and I have no idea how many other tasks. How much of that is conscious thought and how much isn't?

It may be that coders and Musk have hit upon a different way of looking at all this that results in a more efficient allocation of resources, faster reaction times, and multitasking, all at once. Wowsers.
 
A heck of a lot that makes us go is handled by NNs without us thinking about it. And, even having said that, NNs can learn: Even worms can be trained to navigate a simple maze.
It's not clear that what we call machine learning/NNs bears any resemblance to the human brain.

It’s just a term of art - it’s not meant to suggest we are duplicating the function of the brain!

No one knows exactly how our brain actually works yet. We know even less about it than we know about NNs.