Presumably 12.x has enough diverse training to realize that solid white lines can indicate turn lanes in some areas, but also to look for other signals like painted road markings, signage, and even map data when those are available. Of course, these signals can be complicated by other vehicles occluding the view or by different weather and lighting conditions, so I wonder whether a reasonable end-to-end training approach is to start with "cleaner" data and then add in trickier scenarios, versus training from scratch with all the easy and hard scenarios mixed together.
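
For what it's worth, that approach is essentially curriculum learning. Here's a minimal sketch of how it could look, assuming clips are pre-scored for difficulty - all names are hypothetical, since Tesla hasn't published anything about its pipeline:

```python
# A minimal sketch of curriculum-style training for an end-to-end driving net.
# Everything here is hypothetical; Tesla's actual pipeline is not public.
import random

class ToyDrivingModel:
    """Stand-in for a video-in/controls-out network."""
    def train_step(self, video, controls):
        pass  # a gradient update would go here

def train_with_curriculum(model, clips, epochs=10):
    # Sort easiest-first: 0.0 = empty sunny road, 1.0 = occlusions, rain, chaos.
    clips = sorted(clips, key=lambda c: c["difficulty"])
    for epoch in range(epochs):
        # Widen the training pool each epoch: epoch 0 sees only the easiest
        # slice; the final epoch sees everything, easy and hard mixed together.
        cutoff = max(1, len(clips) * (epoch + 1) // epochs)
        pool = clips[:cutoff]
        random.shuffle(pool)
        for clip in pool:
            model.train_step(clip["video"], clip["controls"])

train_with_curriculum(
    ToyDrivingModel(),
    [{"video": None, "controls": None, "difficulty": d / 10} for d in range(10)],
)
```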

I'm wondering what you think about my intuition here:

Since V12 is video in and controls out, and the video is a constant bitrate regardless of environment or complexity, V12's likelihood of making a "mistake" is the same in a simple driving scenario as in a complex one.

That is, because V12 has been released to customers, Tesla must believe it is "production" ready without limitations (Omar didn't mention any restrictions when Tesla called him), and therefore the training set must already have included all driving locales, complexities, and weather conditions across NA.

Since the training set is already diverse in this way, and given the video-in, controls-out architecture, V12 makes no differentiation between simple and complex driving scenarios (see the toy sketch below).

So that means it is just as "difficult" for V12 to keep its lane as to make a turn in traffic.

This is different from V11, where the more complex the driving scenario, the more tangled the heuristics become.
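
One way to picture that claim: a feed-forward network does identical work on every frame, whether the scene is an empty highway or a busy intersection. A toy PyTorch illustration (nothing here resembles Tesla's actual network):

```python
# Toy illustration: a fixed feed-forward net spends the same compute on every
# frame, regardless of scene complexity. Not Tesla's architecture.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(2),  # controls out: [steering, acceleration]
)

simple_frame = torch.rand(1, 3, 128, 128)   # empty highway
complex_frame = torch.rand(1, 3, 128, 128)  # busy intersection

# Same layers, same FLOPs, same latency for both frames: the network has no
# notion of one input being "harder" than the other.
for frame in (simple_frame, complex_frame):
    controls = net(frame)
    print(controls.shape)  # torch.Size([1, 2]) either way
```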
 
Why not Chuck Cook? He's arguably the most grounded FSD influencer.
Since V12 appears to have gone to about 100 customers, but videos are coming only from Omar, it doesn't seem the goal was to send it to influencers. Omar has a special place in Elon's world after the two were sued by some idiot.

I imagine that the other big influencers are hopping mad that Whole Mars is getting all the views right now.
 
I believe Green confirmed this a few weeks ago.
Which "this"? You have a post from January 11 asserting that V11 and V12 can't be running on HW3 due to compute limits. So you're agreeing that it's not running both stacks?

Given that the frame rate is lower, and given that I continue to believe that they're using V11 feeds, I can imagine two scenarios:

1. That the V12 control system is efficient, but they're running the V11 control system as well for comparison.
2. That the V12 control system is inefficient, and they're only running that.

The fact that the visualization is the same as V11 except for the noodle (it's shorter) certainly suggests to me that the V11 system is providing the base visualization and that the V12 control system is providing the noodle.
 
That is, because V12 has been released to customers, Tesla must believe it is "production" ready without limitations (Omar didn't mention any restrictions when Tesla called him), and therefore the training set must already have included all driving locales, complexities, and weather conditions across NA.

This was Musk on it in December:

It is already on a lot of cars, but, given that it is a completely new architecture, we are doing extra testing. It works very well in California, but needs more training for heavy precipitation areas.

Edit: This also confirms the overfitting to California, at least initially.
 
Which "this"? You have a post from January 11 asserting that V11 and V12 can't be running on HW3 due to compute limits. So you're agreeing that it's not running both stacks?

Given that the frame rate is lower, and given that I continue to believe that they're using V11 feeds, I can imagine two scenarios:

1. That the V12 control system is efficient, but they're running the V11 control system as well for comparison.
2. That the V12 control system is inefficient, and they're only running that.

The fact that the visualization is the same as V11 except for the noodle (it's shorter) certainly suggests to me that the V11 system is providing the base visualization and that the V12 control system is providing the noodle.
I quoted Green saying that shadow mode didn't run at the same time, but someone else posted a more recent tweet where he said V12 is currently running in shadow mode, with an upside-down smiley.
 
I haven't seen anything official, but it seems extremely likely that the difference is that other makes like Ford have an IR light pointed at the driver's eyes. If I put on sunglasses, I get no nags for looking away; the Tesla cabin cam can't see where my eyes are directed.

Damn, IR can detect my eye movements behind sunglasses now? And here I thought IR was like the night-fighting scenes in the movies, where the only thing clear is the contour of the human body.
 
The fact that the visualization is the same as V11 except for the noodle (it's shorter) certainly suggests to me that the V11 system is providing the base visualization and that the V12 control system is providing the noodle.
It appears there is currently no display in v12 for stopping for a red light, stopping for a stop sign, or even a stop line. The visualizations appear altered beyond just the noodle. Not sure I'll miss the first two, but I was just warming up to the stop-line feature, especially on restricted-visibility UPLs (something I'd love to see tested by Omar or anyone else). I have one that v11 consistently fails at - not so much the decision making as the execution. If it just hustled a bit more, it would be great.

Also glad to see they figured out how to make a full stop at a stop sign using E2E - several here said that wasn't possible. Clearly it is, and it's needed.
 
Damn, IR can detect my eye movements behind sunglasses now? And here I thought IR was like the night-fighting scenes in the movies, where the only thing clear is the contour of the human body.
You are thinking about far infrared sensors, like a FLIR, which detects the heat radiated by objects. The cabin cameras with IR use near infrared by shining IR light toward you that is just outside of your eye's sensitivity and picking up on the reflections. It's basically the same as using a flashlight except that your eyes aren't blinded by it.

It's like the black and white security camera images you get from a Nest or Ring doorbell type of product.
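
For a rough back-of-envelope on why those are different regimes, Wien's displacement law puts the thermal emission peak of a ~310 K human body at:

```latex
% Wien's displacement law: peak emission wavelength of a blackbody at temperature T
\lambda_{\max} = \frac{b}{T}
              = \frac{2.898 \times 10^{-3}\ \mathrm{m\,K}}{310\ \mathrm{K}}
              \approx 9.3\ \mu\mathrm{m}
```

That's roughly ten times the wavelength of the ~0.85-0.94 µm near-IR LEDs typically used for driver monitoring, so a FLIR and an IR-illuminated cabin camera are looking at completely different light.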
 
You are thinking about far infrared sensors, like a FLIR, which detects the heat radiated by objects. The cabin cameras with IR use near infrared by shining IR light toward you that is just outside of your eye's sensitivity and picking up on the reflections. It's basically the same as using a flashlight except that your eyes aren't blinded by it.

It's like the black and white security camera images you get from a Nest or Ring doorbell type of product.
Yes, and sunglasses 🕶 and even clear prescription glasses block this light. It's not heat sensing.
 
It appears there is currently no display in v12 for stopping for a red light, stopping for a stop sign, or even a stop line. The visualizations appear altered beyond just the noodle.
Yep. Given that the end-to-end network doesn't know what a stop sign, a traffic light, etc. are, it can't display a message on the screen saying what it is responding to.
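
Schematically, the distinction being drawn looks something like this - purely hypothetical signatures, none of these names come from Tesla:

```python
# Hypothetical signatures contrasting the two stacks as described in this
# thread; a sketch of the poster's claim, not Tesla's actual code.
from typing import NamedTuple, Optional

class Controls(NamedTuple):
    steering: float
    acceleration: float

class V11Outputs(NamedTuple):
    controls: Controls
    stopping_for: Optional[str]  # e.g. "STOP_SIGN" or "RED_LIGHT"; feeds the UI text

def v11_stack(video) -> V11Outputs:
    """Explicit perception emits labels the UI can turn into messages."""
    ...

def v12_stack(video) -> Controls:
    """End-to-end: controls only; no intermediate label to show on screen."""
    ...
```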
 
Damn, IR can detect my eye movements behind sunglasses now? And here I thought IR was like the night-fighting scenes in the movies, where the only thing clear is the contour of the human body.
Yes, and sunglasses 🕶 and even clear prescription glasses block this light. It's not heat sensing.
There was an extensive discussion on this in another thread - it depends on the wavelength and the sunglasses, but yes, it is possible if the system is designed properly. (This is how FaceID on iPhones works.)

I haven't seen anything official, but it seems extremely likely that the difference is that other makes like Ford have an IR light pointed at the driver's eyes. If I put on sunglasses, I get no nags for looking away; the Tesla cabin cam can't see where my eyes are directed.
Right - but the majority of the time, Tesla can see your eyes just fine.
 
Downtown SF:
Numerous issues in this video. Excessive hesitation at stop-sign intersections requiring accelerator presses. It seems to be a big issue at stop signs on a hill and where there are occlusions, but sometimes it happens for no apparent reason at all. Sometimes it seems to wait too long for pedestrians to clear the intersection: when someone is crossing from right to left, the car waits until they get all the way to the left curb instead of creeping forward once they have cleared the lane ahead. That doesn't seem very human-like.

Also, a dangerous pass of a car that had pulled halfway into the lane while attempting to pull out of a parking space. The safe thing would have been to stop and let the other car out.

And a case where a left-turning lead car stops suddenly to pull over and triggers AEB. Surprisingly, no disconnect, but a close call.