Elon: "Feature complete for full self driving this year"

How long will it take them to train the model to understand driver attentiveness? How many 9s in how many months?

I'm not sure people will be ok with Tesla using the camera to monitor their attentiveness ...

There are already algorithms for doing that: basic eye tracking. Newer iPhones have it built into Face ID already, to verify you are 'looking' at the device.
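
If it helps to picture what "basic eye tracking" means, here's a minimal sketch of the eye-aspect-ratio check that a lot of off-the-shelf drowsiness/attention monitors use. This is just an illustration, not Tesla's or Apple's actual implementation; the six landmark points per eye would come from whatever face-landmark detector the system already runs, and the threshold is a made-up number.

```
# Minimal eye-aspect-ratio (EAR) attention check (illustrative only).
# The six (x, y) points per eye are assumed to come from a face-landmark
# detector; EAR sits around 0.25-0.35 for an open eye and drops toward 0
# as the eye closes.
import math

def ear(eye):
    """eye: six (x, y) points ordered around the eye contour."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical eye openings divided by the horizontal eye width.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def eyes_open(left_eye, right_eye, threshold=0.2):
    """Crude 'driver has eyes open' check; real systems add head pose and gaze on top."""
    return (ear(left_eye) + ear(right_eye)) / 2.0 > threshold

# Made-up landmark coordinates for an open eye:
left = [(0, 5), (3, 6.5), (6, 6.5), (9, 5), (6, 3.5), (3, 3.5)]
right = [(20, 5), (23, 6.5), (26, 6.5), (29, 5), (26, 3.5), (23, 3.5)]
print(eyes_open(left, right))  # True (EAR ~ 0.33)
```

The math is cheap enough to run onboard, which is the point of (A) below: nothing has to leave the car.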

As for being okay with it... there are two situations I can think of where a driver would have an issue with it:

A) Privacy: On the Model 3, a camera already exists and you don't know what it is doing. That said, eye tracking can be done onboard, so as long as you trust Tesla, that issue can be put to bed.

B) Nuisance: No more so than the steering wheel torque nag, just harder to defeat. And if that's a concern, offer an option to disable it behind a legal disclaimer, and default it to on...
 
Personally, I would like a camera-based driver attention system because it would allow hands-free driving in those instances where AP/NOA is good enough, while still making sure the driver is paying attention.
Yes, I'd like that too, mainly because the car can't figure out when I have my hands on the wheel and needs a gentle tug. I end up moving the volume wheel every time it nags.

But, going back to the original point, I don't know whether a visual attention check is any more reliable than the current nag, i.e., will a visual nag be able to detect that someone is not attentive even when they have their hands on the wheel (assuming no cheat devices)? Because the original point was for Tesla to make sure people are paying attention.
 

Interestingly, the white paper I read about the safety of autonomous vehicles mentions that driver attention systems like cameras or steering wheel nags have different pros and cons, so the best safety would probably come from a combination of both rather than one or the other. If Tesla had both systems, the car could use the camera and/or the nag depending on the situation. For example, on an open highway the car could use just the camera with no nags, but when it sees it is approaching a construction zone, busy traffic, or a highway transition, it could notify the driver to hold the wheel and use both the camera and the nags at the same time for maximum safety.
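
Just to make that concrete, here's a rough sketch of what such a combined policy could look like. Everything here (the scene flags, the attention score, the 0.5 threshold) is invented for illustration; it's not anything Tesla has described.

```
# Hypothetical policy combining a camera attention score with the wheel nag.
# All names and thresholds are illustrative, not Tesla's.
from dataclasses import dataclass

@dataclass
class Scene:
    construction_zone: bool
    heavy_traffic: bool
    highway_transition: bool

def monitoring_mode(scene: Scene, attention_score: float) -> str:
    high_risk = (scene.construction_zone or scene.heavy_traffic
                 or scene.highway_transition)
    if high_risk:
        return "camera + wheel nag"   # belt and suspenders near tricky sections
    if attention_score < 0.5:         # camera thinks the driver looked away too long
        return "camera + wheel nag"
    return "camera only"              # open highway, driver clearly attentive

print(monitoring_mode(Scene(False, False, False), attention_score=0.9))  # camera only
```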
 
The chances of a fatal accident are much lower than of other kinds of crashes, so the result would more likely be small fender benders than fatal accidents.

If the commute is 20 miles, 500 commutes is 10k miles. One L4+ crash per 10k miles is the human average.

Now, if Tesla sends City NOA to 500,000 cars, and even if only 100k of them use City NOA once a day for a 20-mile trip, that would be 100k commutes/trips (2 million miles) every day. If people are not paying attention when the reliability is 4 nines (1 crash per 10k miles, or 500 20-mile commutes), you would expect around 200 crashes every day ;)
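
Quick sanity check on those numbers, using the post's own convention that "n nines" means 1 crash per 10^n miles:

```
# Back-of-the-envelope check of the figures above.
cars_using_city_noa = 100_000
trip_miles = 20
daily_miles = cars_using_city_noa * trip_miles   # 2,000,000 miles per day
human_crash_rate = 1 / 10_000                    # 1 L4+ crash per 10k miles
print(daily_miles * human_crash_rate)            # -> 200.0 crashes per day

# Each extra 9 of reliability buys another factor of 10 in miles per crash:
for nines in range(3, 7):
    print(f"{nines} nines -> 1 crash per {10**nines:,} miles")
```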


I could not find definitions for the different L numbers for car crashes. Apparently there's a band with "crash" in its name, and L1, L2, etc. make Google think I'm asking about vertebrae, so I could not find anything. The Wikipedia article on car accidents didn't mention L-anything that I could find on a quick skim of the very long article.

Or were you referring to the number of crashes expected at each autonomy level?
 
Here is the post with links.

Tesla, TSLA & the Investment World: the 2019 Investors' Roundtable

It is the definition used by Virginia Tech. They run large-scale data collection of driving and crash patterns. Both Waymo and Cruise use their research, definitions, etc.
 
Interesting. It does seem like it's very hard to get those last few 9s.

It's funny there's a forum on Reddit that seems to have only one user/moderator aggregating articles that crap on AVs.
Cruise’s Secret ‘Apollo’ Robotaxi Milestones (paywalled; see the comments)
SelfDrivingCarsLie • r/SelfDrivingCarsLie
 
Is there any indication that any of the Early Access Participants actually received the feature complete software?


Note: Usual reminder from a Software Developer that "Feature Complete" is a pretty useless milestone which normally just means all development teams think they've written the software to the specification and it compiles, builds and passes some basic tests. Don't expect too much from it.
 
Yes. In my experience, with any kind of larger feature, app, or software there's a 1:3 relationship between the first working iteration (feature complete) and the mostly finished production code. If it took 2 weeks to make something work, it'll probably take 4-6 more weeks of iteration until it's good enough to release.

It's mostly true at any timescale, and probably for Tesla too (Smart Summon). It took a while from when Elon first saw it working and talked about releasing it until it actually went wide.
 
Regarding Smart Summon, what worries me is that it doesn't seem to improve over time. The very latest FW version seems to handle it worse than the first public version in the exact same place and conditions, according to my own tests. Steering is more jerky, the car tends to swerve toward oncoming traffic (dangerous!), and random stops are much more frequent.
Not a good sign for FSD. With the iterative approach described, I expected it to be almost perfect by now, but it's still mostly unusable.
 
Yes. In my experience, with any kind of larger feature, app, or software there's a 1:3 relationship between the first working iteration (feature complete) and the mostly finished production code. If it took 2 weeks to make something work, it'll probably take 4-6 more weeks of iteration until it's good enough to release.

It's mostly true at any timescale, and probably for Tesla too (Smart Summon). It took a while from when Elon first saw it working and talked about releasing it until it actually went wide.

This, too, assuming Tesla’s concept of “feature complete” is exactly the same as what we have in our heads. “Feature complete” is a meaningless term without full context on what the features are and how they’re intended to work, all of which is likely never to be shared beyond the teams building this software.
 
Note: Usual reminder from a Software Developer that "Feature Complete" is a pretty useless milestone which normally just means all development teams think they've written the software to the specification and it compiles, builds and passes some basic tests. Don't expect too much from it.

And they haven't even rolled this out to anybody in 2019 as promised.

But I actually see this as a good thing: Tesla will miss deadline after deadline and break promise after promise rather than release an unsafe product to market. Ford sold the Pinto knowing that it would explode and kill people. And I don't trust GM as far as I could throw one of their cars. When Tesla finally does sell a car with FSD (maybe in a decade) it will be a good product that works.
 
Regarding Smart Summon, what worries me is that it doesn't seem to improve over time. The very latest FW version seems to handle it worse than the first public version in the exact same place and conditions, according to my own tests. Steering is more jerky, the car tends to swerve toward oncoming traffic (dangerous!), and random stops are much more frequent.
Not a good sign for FSD. With the iterative approach described, I expected it to be almost perfect by now, but it's still mostly unusable.

Go to comments about Nav on AP from a year or 18 months ago and compare that to your experiences today.

Auto lane change

Ping ponging in AP on HW 2 or 2.5

Auto wipers

Auto high beams

AP on exit ramps

It gets better. We just forget that awesome features weren't always great, and whine about whatever they're trying to improve now (street lights, Smart Summon, traffic circles, etc.).
 
When Tesla finally does sell a car with FSD (maybe in a decade) it will be a good product that works.
Exactly. Smart Summon was six months late, but when Tesla finally released it, it was a good product that works.

Seriously though, it wouldn't surprise me if no FSD (pre-Feb 2019 definition) features are released this year. I am looking forward to Smart Park, assuming they improve the curb recognition over what's in Smart Summon.
 