Not my experience. I can go more than 2 minutes without touching the wheel if I'm looking straight ahead. So can others, like Dirty Tesla.
Lucky! I don't think I've ever been able to go more than 30-45 seconds. On a regular basis I get nagged every 10 seconds, and that's with no sunglasses, staring straight ahead at the road.
Blue Cruise won't make lane changes without a prompt; it also doesn't avoid obstacl…
…which is that much more reason for Ford to require more input.
 
That's why I no longer drive with my hands at 10 and 2. I'm sure I've already spared myself at least half a dozen wrist fractures.
It really is not that dangerous, but just put your hands at 9 and 3, or even slightly lower if you want. The idea is to have good leverage and precise control due to your body mechanics. (Hard to do from the bottom of the wheel.)
Often I have them in the same position, just holding the wheel. I tried that on the way home and it didn't work. Sometimes I'll have my hands at the life-threatening 10 & 2 position; other times I'll have my elbow on the window sill. The problem is that any 'normal' position one uses will specifically avoid putting torque on the wheel, because torque makes the car turn, yet for FSD you actually need to apply torque.

don’t think I’ve ever been able to go more than 30-45 seconds.
Just try 9 and 3 for a while like in the video and report back. You have been having problems for a while now and I really don't know why. Not saying it will never nag; it will.

It's fine to put zero net torque on the wheel, and yes, when it asks for torque you will have to jiggle the wheel slightly (the worst part of this attention monitoring, as it is distracting), but with practice it is easy to dismiss the nag. It does not help to have static torque on the wheel anyway (it will still ask for torque), so you do not need to worry about your arms canceling out the torque (a toy sketch after this post illustrates one guess at why).

Also you have a ton of time to respond to the torque nag so it asking for it mid-turn is usually a non-issue (and easily satisfied in any case, with practice).

But the frequency you are seeing is outside the norm in most uncomplicated circumstances. I suspect the car is able to see your arm position and is suspicious about whether you are holding the wheel.

Just a hypothesis though. Try the 9 and 3 and compare results.

Maybe v12 will fix it. 😂
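
To make the "it's the change in torque, not the level" idea concrete, here is a minimal Python sketch of one guess at the mechanics. Every name, unit, and threshold is invented; nothing here reflects Tesla's actual code.

```python
# Toy sketch: one guess at why a steady grip never satisfies the nag.
# All names, units, and thresholds are invented for illustration.

def nag_satisfied(torque_samples, delta_threshold=0.4):
    """Return True if the driver produced a *change* in steering torque.

    A steady grip yields near-zero sample-to-sample deltas, so it would
    never trip this check; only a deliberate jiggle would.
    """
    deltas = [abs(b - a) for a, b in zip(torque_samples, torque_samples[1:])]
    return any(d > delta_threshold for d in deltas)

print(nag_satisfied([1.0, 1.0, 1.0, 1.0]))  # steady torque -> False
print(nag_satisfied([1.0, 1.0, 1.6, 1.0]))  # brief jiggle  -> True
```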
 
Does not look like signaling or maintaining the correct lane are its strong points yet.

Allegedly millions of video clips from Tesla drivers though, so perhaps that makes sense.

Going to be a while, as you say.
Another. Lots of hesitation making turns at stop signs. Omar had to put his foot to it at the top of the hill. Challenging environment, but it exposes some rough edges.

 
I'm not a fan of FSD's jackrabbit starts, but to each their own. Also, FSD frequently engages the brake pedal in stop-and-go traffic.

System latency, including engaging the brake/regen, is still about 1.5 seconds, which is about the same as v11. An easy way to check is to measure the time it takes for FSD to initially respond to a sudden, firm brake from a lead vehicle.
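
To put a number on that from your own footage or logs, one rough approach is to find when the lead car's deceleration starts and when the ego car's brake/regen request starts, then subtract. A minimal sketch, assuming you have already extracted two timestamped signals; the log format, thresholds, and values are all hypothetical:

```python
# Hypothetical sketch: estimate FSD response latency from two logged
# signals. Format, thresholds, and numbers are invented for illustration.

def first_crossing(samples, threshold):
    """Return the timestamp of the first (t, value) sample whose value
    exceeds the threshold, or None if none does."""
    for t, value in samples:
        if value > threshold:
            return t
    return None

# (t_seconds, lead-car deceleration in m/s^2), e.g. from dashcam analysis
lead_decel = [(0.0, 0.1), (0.5, 0.2), (1.0, 3.5), (1.5, 4.0)]
# (t_seconds, ego brake/regen request), e.g. from a CAN log
ego_brake = [(0.0, 0.0), (1.0, 0.0), (2.4, 0.8), (3.0, 1.0)]

t_lead = first_crossing(lead_decel, threshold=2.0)  # lead brakes firmly
t_ego = first_crossing(ego_brake, threshold=0.5)    # ego starts responding

if t_lead is not None and t_ego is not None:
    print(f"estimated response latency: {t_ego - t_lead:.1f} s")  # 1.4 s
```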

 
It's possible, but I'm just saying there's no evidence that Tesla is hiring drivers to go out and provide training videos for V12. This is because:

1) there's already "free" data from millions of cars
2) if Tesla were hiring drivers to, for example, stop to 0 mph at stop signs, I think Elon would have mentioned it, as he often shares ironic/silly regulatory things that Tesla and SpaceX have to do
3) the vehicle operator position mongo posted seems to be related to testing vehicle systems and hardware (like suspension tuning, brake/regen, vehicle software updates relating to driving features, etc.)
4) there are different Tesla positions relating to ADAS and FSD testing, but those are likely for testing V12/FSD/AP, not for creating training-dataset videos
You repeatedly talk about there being "no evidence," but then fall back on speculation or assumption to support your position.

For example, "I think Elon would have mentioned this" is speculation.

The fact of the matter is that only a privileged few fully "know" everything Tesla is doing with testing, training, etc.

If your best evidence is prefaced with "I think" or "I assume," then maybe you shouldn't question other people's "evidence."
 
So V12 is a full neural network, where the system learns from videos etc. and there is no coding involved. I get that. But what I don't understand is how the programmers keep control of the system.

I have been watching Omar's videos, and at one point he says that this version is still overly cautious and that in future iterations this will get better (or something to that effect; I can no longer find where I heard it).
How do they do that? Do they train the computer with clips of drivers who are overly cautious, or is there still some coding in the system?

Do programmers keep control of the system by means of what they train it with, or do they still do some coding in V12?
 
It seems like for V12 to work it has to get decisions from the Tesla mothership. It seems like the system is not fully contained in the car. Is that right? If so, then there has to be significant bandwidth to transmit and receive data. Probably I just do not understand neural networks.
People have said the same thing about the other NNs Tesla uses, or about "learning". The reality is that no NN weights are being updated in real time, and there is no real-time learning. All the learning is done during training at the mothership and is then loaded onto the car during updates. Everything is self-contained, and there is no per-car learning either (the user only gets to customize using preset variables). Instead, what happens is that Tesla can push triggers, and cars upload short clips and data matching those triggers to the mothership (a speculative sketch after this post illustrates the idea).
Tesla Applies Autopilot-Style Feedback to Other Systems With Dynamic Triggers

The only real-time data Tesla may use is map data, which may load while your navigation is calculating the route (the most obvious example, which applies to all cars, is Supercharger station status; for cars with premium connectivity, traffic data as well).
Tesla Hacker Reveals How Maps Are Augmented with Fleet Data Between Updates
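
For concreteness, here is a purely speculative Python sketch of what a car-side trigger check could look like. Every class, field, and condition below is invented for illustration and does not reflect Tesla's actual implementation:

```python
# Speculative sketch of fleet "triggers": the mothership ships condition
# descriptions; the car queues short clips for upload when one matches.
# All names and fields are invented; nothing here reflects Tesla code.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]  # predicate over one telemetry frame
    clip_seconds: int                  # how much video to upload on a match

def check_triggers(triggers, telemetry_frame, upload_queue):
    """Queue a clip request for every trigger the current frame matches."""
    for trig in triggers:
        if trig.condition(telemetry_frame):
            upload_queue.append((trig.name, trig.clip_seconds))

# Example campaign: collect clips of hard braking near stop signs
triggers = [
    Trigger(
        name="hard_brake_at_stop_sign",
        condition=lambda f: f["decel"] > 3.0 and f["near_stop_sign"],
        clip_seconds=10,
    )
]

queue = []
check_triggers(triggers, {"decel": 4.2, "near_stop_sign": True}, queue)
print(queue)  # [('hard_brake_at_stop_sign', 10)]
```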
 
It seems like for V12 to work it has to get decisions from the Tesla mothership. It seems like the system is not fully contained in the car. Is that right? If so, then there has to be significant bandwidth to transmit and receive data. Probably I just do not understand neural networks.
The car makes all its own decisions while driving.
 
So V12 is a full neural network, where the system learns from videos etc. and there is no coding involved. I get that. But what I don't understand is how the programmers keep control of the system.

I have been watching Omar's videos, and at one point he says that this version is still overly cautious and that in future iterations this will get better (or something to that effect; I can no longer find where I heard it).
How do they do that? Do they train the computer with clips of drivers who are overly cautious, or is there still some coding in the system?

Do programmers keep control of the system by means of what they train it with, or do they still do some coding in V12?
While a "true" end-to-end system would be based on weights/bias on the end vehicle (instead of code), the way they arrive at those weights still requires a lot of coding, which allows for customization of certain aspects. It's also not based 100% on human drivers, it is also based on a lot of simulation (which obviously can be used to push the network in a certain direction).
Learning to drive like a human

It is also not yet known what Tesla is really doing. Is it truly an end-to-end network (meaning it takes the car's video/navigation input and spits out control outputs directly), or does it just reuse the existing V11 modules, but with the code inside replaced by NNs (such that the whole chain is largely done by NNs, but it's not one huge network from input to output)?
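
To illustrate the two possibilities, here is a schematic Python sketch. Every function is a stand-in for a network or module; this shows the general shape of the two designs, not Tesla's actual stack:

```python
# Schematic contrast only; every function is a stand-in, not Tesla code.

def big_network(frames, goal):
    """Stand-in for one huge NN trained input-to-output."""
    return ("steer_cmd", "accel_cmd")

def perception_net(frames):              # stand-in NN module
    return ["car_ahead"]

def lane_net(frames):                    # stand-in NN module
    return ["lane_left", "lane_right"]

def planner_net(objects, lanes, goal):   # stand-in NN module
    return "slow_for_car_ahead"

def controller(plan):                    # stand-in control stage
    return ("steer_cmd", "accel_cmd")

def end_to_end(frames, goal):
    """True end-to-end: raw video + navigation in, controls directly out."""
    return big_network(frames, goal)

def modular(frames, goal):
    """V11-style chain: separate modules (each possibly its own NN) joined
    by explicit, hand-defined interfaces rather than one network."""
    objects = perception_net(frames)
    lanes = lane_net(frames)
    plan = planner_net(objects, lanes, goal)
    return controller(plan)

print(end_to_end(["frame"], "nav_goal"))
print(modular(["frame"], "nav_goal"))
```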
 
People have said the same thing about the other NNs Tesla uses, or about "learning". The reality is that no NN weights are being updated in real time, and there is no real-time learning. All the learning is done during training at the mothership and is then loaded onto the car during updates. Everything is self-contained, and there is no per-car learning either (the user only gets to customize using preset variables). Instead, what happens is that Tesla can push triggers, and cars upload short clips and data matching those triggers to the mothership.
Tesla Applies Autopilot-Style Feedback to Other Systems With Dynamic Triggers

The only real-time data Tesla may use is map data, which may load while your navigation is calculating the route (the most obvious example, which applies to all cars, is Supercharger station status; for cars with premium connectivity, traffic data as well).
Tesla Hacker Reveals How Maps Are Augmented with Fleet Data Between Updates
Thank you for explaining👍
 
While a "true" end-to-end system would be based on weights/bias on the end vehicle (instead of code), the way they arrive at those weights still requires a lot of coding, which allows for customization of certain aspects. It's also not based 100% on human drivers, it is also based on a lot of simulation (which obviously can be used to push the network in a certain direction).
Learning to drive like a human

It is also not yet known what Tesla is really doing. Is it truly an end-to-end network (meaning it takes the car's video/navigation input and spits out control outputs directly), or does it just reuse the existing V11 modules, but with the code inside replaced by NNs (such that the whole chain is largely done by NNs, but it's not one huge network from input to output)?
Thanks, learning some new stuff here.