Autonomous Car Progress

As FSD Beta gets better, it will absolutely get more dangerous before it gets less dangerous. Usage will go up, and trust in it will go up, but there will still be a lot of cases where mistakes can happen.

One thing I've seen time and time again with FSD Beta is that my safety threshold is one thing, and the car's is another.

Too close to the curb = disengagement, but the teenager (the car's computer) is probably telling me "it was fine".
Too fast in a residential area = disengagement, but the teenager is telling me "you told me to go the speed limit" and has no understanding that I want to do 5 under in tight residential areas.
Too slow during a maneuver = take over, but the elder (the car's computer) is telling me to have patience with its old bones.
I'm a bit less pessimistic than you, though I agree about the over-confidence factor. So far, my experience with FSD has been that it's very timid... it tends to mostly drive at safe speeds regardless of the set limit (it did it on the way home today, when it held at 29 even though the limit was 35 because it was unhappy about the weather). Also, if I'm honest, a lot of the time when I take over because (say) the car appears to be cutting things fine, in fact the car was just cutting it closer than a human (me) would be comfy with, but it's no big deal to the car.

But yes, I agree that complacency by the driver is dangerous (it's already happened with plain old TACC/NoA). Perhaps Tesla should program in a random "TAKE OVER" panic even when things are safe, to make sure the driver is paying attention (not really, such things might actually cause accidents). My guess is the eye monitoring should take care of most of that, though perhaps I'm being over-optimistic.
 

In my experience, a lot of the turns at intersections are wildly out of the norm, where it's simply not performing the turn correctly. The only reason I ever allow it to continue with a turn is out of curiosity. That curiosity could cost me a rim, but so far I haven't had any bumps.

The other thing I noticed is that after the recent snow storm, a couple of things happened.

They sanded the roads during the storm, so now there is sand on the shoulders and in areas where people don't commonly drive. Yet I constantly find FSD Beta driving on the sand, so I'm correcting for that, mostly in residential areas and on turns at intersections.

On Bothell-Everett Highway going north toward Mill Creek, the snow plowing removed the bolts/dots (or whatever the heck they're called) that divide the lanes, so the car only shows one lane, and the pointer shows it wanting to move to the center (I opted not to use FSD Beta here because of it).

Even as FSD Beta gets better, I can't see myself ever consciously trusting it. I strongly believe they took the wrong approach and have the wrong sensor suite, so I'm always going to have the same pessimism toward it.

But, like most any human, I am prone to subconscious trust. The problem with subconscious trust is that driver monitoring isn't going to be effective. I'd simply be staring ahead while daydreaming about something else.

I think we'll see an uptick in accidents involving accident-free folks who just lose situational awareness and get caught out by a quick event.

It will be interesting, as some folks will want to shut the whole thing down, and other folks will want to ride it out to see if it becomes 2x safer than the average driver or hits some other metric (I'm in the "2x safer than a good driver" camp).
 
On Bothell-Everett Highway going north toward Mill Creek, the snow plowing removed the bolts/dots (or whatever the heck they're called) that divide the lanes, so the car only shows one lane, and the pointer shows it wanting to move to the center (I opted not to use FSD Beta here because of it).
Interesting, I'll have to try that road tomorrow. I've had FSD drive me up and down that road a few times with no trouble, so it will be interesting to see how that has changed.
 
It will be interesting, as some folks will want to shut the whole thing down, and other folks will want to ride it out to see if it becomes 2x safer than the average driver or hits some other metric (I'm in the "2x safer than a good driver" camp).
Well the "shut it down .. ban it!! .. something bad happened!" is always the knee-jerk hysterical reaction, driven by our wonderful press. I agree there will be some accidents, perhaps a significant number, but I wonder what they will be like .. I suspect (with little evidence atm) that they will mostly be minor fender-benders, with the occasional much more serious event as the car drives straight into a lamp post etc.

So far I've not had to "save" FSD Beta from anything approaching a serious accident... mostly it's just cutting things a bit finer than I like on some turns, or taking them a bit too fast (I don't use Assertive mode). Occasionally it's over-protective (as it was tonight, when it swerved to the right edge of the road as an oncoming car drifted slightly over the lane line).
 
I'd like to speculate a bit about Tesla's FSD, attempting to foresee possible ways into the future.

One possible scenario is that Tesla realises that Full Self Driving, say SAE Level 3, cannot be achieved by the currently employed methods within 3, or even 5, years. What could Tesla do to make the best use of what has already been achieved?

FSD Beta is impressive, but not really close to full self-driving. It would be sad to throw it away or to keep running large beta tests for years without achieving the desired result. Raising its price could well lead to fewer people buying it.

How could a not-quite-full self-driving capability be released to the public and how could it be made useful and safe? One of the problems is that, if "FSD" gets relatively close to Level 3 without achieving it, drivers will become over-confident and over-reliant, causing some nasty accidents.

One possibility could be to enrich FSD with the ability to foresee difficult situations, perhaps by marking relatively small critical zones in its maps and forcing the driver to take over until the critical zone is passed. This would work if FSD could reliably reach Level 3 everywhere except these known zones. A critical zone could be, for example, a difficult crossroads.
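Just to make the idea concrete, here's a rough sketch of what such a critical-zone check could look like, assuming the zones are simply circles stored alongside the map data (the zone list, coordinates, and function names are all made up for illustration):

```python
import math

# Hypothetical illustration of the "critical zone" idea above: small map
# regions where the system would hand control back to the driver.
# Zone format, coordinates and names are invented for this sketch.
CRITICAL_ZONES = [
    # (latitude, longitude, radius in meters, description)
    (47.8601, -122.2043, 150.0, "difficult intersection"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters; fine at city scale."""
    m_per_deg = 111_320.0
    dx = (lon2 - lon1) * m_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * m_per_deg
    return math.hypot(dx, dy)

def must_hand_over(lat, lon):
    """Return the zone description if the car is inside a critical zone."""
    for zlat, zlon, radius, desc in CRITICAL_ZONES:
        if distance_m(lat, lon, zlat, zlon) <= radius:
            return desc
    return None

# Example: poll the current position and warn the driver early enough
# to take over before the zone is reached.
if (reason := must_hand_over(47.8605, -122.2040)) is not None:
    print(f"TAKE OVER: approaching {reason}")
```

The hard part, of course, is deciding where the zones go and how far in advance to warn the driver, not the lookup itself.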

Are there other possible solutions?
 

On this trajectory, I don't see how Tesla will achieve collision avoidance capability for a long time: maybe another 6 years, and then another 6 years after that or more, counting from its first AP2 production in 2016.

It's safe as long as drivers can take over from the system at any moment.

 

...nothing in that video even suggests that FSD was running for their test, though, so I'm not sure how it's useful at all in describing the current state of FSD.

We already know the "old code" stationary object detection isn't great, but that's not what's being used going forward.
 
...nothing in that video even suggests that FSD was running for their test, though, so I'm not sure how it's useful at all in describing the current state of FSD.
Correct! It does not say the Model Y was running FSD.

However, since 10/2014 all Teslas have used radar and a monocular forward camera for Automatic Emergency Braking. The hardware was further enhanced with a tri-focal forward camera in 10/2016. Recently, in 2021, North American Model 3 and Model Y switched to radarless Pure Vision.

So at CES 2022, the Pure Vision AEB performed pretty much the same way as Tesla's 2014 AEB: it would still hit the dummy.

...that's not what's being used going forward...
And that's the progress report: since the difficult task of sensor fusion was simplified to Pure Vision in May 2021, we can now wait for reports of a new Tesla dummy collision test tomorrow, next month, next year, in the next 6 years, or more...
 


Except that, for all we know, that was just a human with his foot to the floor to make the LIDAR look good.


When Tesla's AEB is actually tested, properly, by government agencies, they find it works not just fine, but generally better than other cars' systems:



Euro NCAP released its latest batch of crash test results, and the Tesla Model X was the “stand-out performer.”

and

In one of the more impressive tests, the Model X comes from 60 km/h to a stop on its own when it detects a pedestrian crossing the street in the dark.


In fact, it appears they even tested a situation like the one the LIDAR advertisers claim the Tesla didn't slow down for at all...

Another interesting test involved simulating a small child crossing the street from behind cars.

It gives only a second of reaction time to try to avoid an accident. It’s safe to assume that it would result in an accident more often than not with human drivers without driver-assist features.

But Tesla’s AEB performed fairly well. It stopped just in time at 25 km/h and while it didn’t stop in time at 30 km/h, it still detected the child and reduced the speed before impact



Now: do we believe objective testing from a government safety agency, or what is basically ad copy from a LIDAR maker's CES publicity stunt? TOUGH CHOICE!
 
...Now: do we believe objective testing from a government safety agency, or what is basically ad copy from a LIDAR maker's CES publicity stunt? TOUGH CHOICE!...

They are both valid tests.

Tesla's AEB has a better chance if the speed is slower rather than faster. It also has a better chance when the obstacle is moving rather than stationary.

Both the CES demo and the government tests used factors that affect the collision results. All those results, bad and good, are valid.
 
They are both valid tests.

They're really not though.

One is objective, by a neutral government safety agency, with a specific, published, testable and repeatable procedure.


One is a CES publicity demo for a company whose entire financial interest here is making Tesla look bad, and who offered zero actual data besides a brief stated test description and a PR video.



If you think those are both valid and equal, I'm not sure there's much point in further discussion, though.



Tesla's AEB has a better chance if the speed is slower rather than faster.

So does all AEB, because that's how physics works.
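Some back-of-the-envelope numbers (mine, not from either test) show why: stopping distance grows with the square of speed once you add reaction time. A quick sketch, assuming 1 second of reaction (as in the Euro NCAP quote above) and roughly 8 m/s² of braking on dry pavement:

```python
# Rough illustration: stopping distance = reaction distance + braking distance,
# where braking distance is v^2 / (2 * deceleration). The 1 s reaction time and
# 8 m/s^2 deceleration are assumed values for this sketch.
def stopping_distance_m(speed_kmh, reaction_s=1.0, decel_mps2=8.0):
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v**2 / (2 * decel_mps2)

for kmh in (25, 30, 60):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.1f} m")
# 25 km/h -> ~10.0 m, 30 km/h -> ~12.7 m, 60 km/h -> ~34.0 m
```

So the difference between stopping in time at 25 km/h and not quite at 30 km/h is exactly what you'd expect from any system, radar, vision, or LIDAR.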

Further, as even your own link notes:

Your link said:
To be clear, Luminar's example here isn't demoing any sort of self-driving suite


For all we can tell, they just hooked up a crazy expensive LIDAR array with a wire to the braking system telling it "if you think you see anything at all, regardless of degree of certainty, SLAM on the brakes."


It might well be a system that is absolutely horrible in real world use, but looks great in a PR demo in a couple of specific set-up scenarios.



Again, when an objective test was done across actual real-life vehicle control systems, Tesla scored among the best on the market.
 
I am excited for this! I've been wondering about the progress of IEEE P2846 for a while, since the first draft was due to be finished by now. Well, there is going to be a free workshop to discuss the IEEE P2846 safety standard.

Is this related to the RSS that Mobileye talks about?

This workshop will provide an overview of the IEEE P2846 standard. This standard describes the minimum set of reasonable assumptions used in foreseeable scenarios to be considered for road vehicles in the development of safety-related models that are part of automated driving systems (ADS).
 

Yes. Mobileye is part of the working group drafting IEEE P2846. Mobileye contributed their RSS to the standard. So IEEE P2846 is more than just RSS, but RSS is part of it.
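For anyone curious, the core of RSS (as I understand it from Mobileye's published paper, not from the P2846 draft itself) is a rule for the minimum safe longitudinal gap. A sketch of that rule; the parameter values below are illustrative guesses, not numbers from the standard:

```python
# Sketch of the RSS "safe longitudinal distance" rule Mobileye published.
# Parameter values are illustrative assumptions, not from IEEE P2846.
def rss_safe_distance_m(v_rear, v_front, rho=0.5,
                        a_accel_max=3.0, b_brake_min=4.0, b_brake_max=8.0):
    """Minimum gap (m) so the rear car can always stop in time, assuming the
    worst case: the front car brakes at b_brake_max while the rear car keeps
    accelerating at a_accel_max for the response time rho, then brakes at
    only b_brake_min."""
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho**2
         + v_rear_after**2 / (2 * b_brake_min)
         - v_front**2 / (2 * b_brake_max))
    return max(d, 0.0)

# Example: both cars doing 25 m/s (90 km/h) -> roughly a 60 m gap required.
print(f"{rss_safe_distance_m(25.0, 25.0):.1f} m")
```

P2846 is broader (it's about the set of reasonable worst-case assumptions in general), but this gives a flavor of the kind of model it standardizes.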
 
Note that even pulling to a stop in a traffic lane is an acceptable fallback according to SAE, and pulling over to the side is definitely considered acceptable (in the same way it is when a human is driving). I don't think there will be different engagement and disengagement ODDs. Weather predictions unfortunately may not be reliable (as many found out this winter), so the car still needs to be designed to disengage "safely" when things fall outside the ODD.

It has to be an acceptable fallback because it simply isn't realistic for that not to be a fallback. It's just not going to be an acceptable fallback in actual practice; it's only a fallback for a critical failure, something that should be extremely rare.

I don't have any disagreement on whether there are one or two ODDs. In engineering practice it's pretty common to use two different parameters for engagement and disengagement (see the sketch below).

I myself have two sets of ODDs for when I drive.

Now, partly it's because of my own stubborn nature, but it also makes sense to me.
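Here's the kind of two-threshold behaviour I mean, sometimes called hysteresis: the bar for engaging is higher than the bar for staying engaged, so the system doesn't flap on and off right at the boundary. The visibility metric and the numbers are invented for illustration:

```python
# Toy illustration of separate engagement/disengagement thresholds (hysteresis).
# The "visibility" score and threshold values are made up for this sketch.
ENGAGE_MIN_VISIBILITY = 0.8      # only engage when conditions are clearly good
DISENGAGE_MIN_VISIBILITY = 0.6   # once engaged, tolerate some degradation

def update_state(engaged, visibility):
    if not engaged:
        return visibility >= ENGAGE_MIN_VISIBILITY
    return visibility >= DISENGAGE_MIN_VISIBILITY

engaged = False
for vis in (0.75, 0.85, 0.70, 0.55):
    engaged = update_state(engaged, vis)
    print(f"visibility={vis:.2f} engaged={engaged}")
# stays off at 0.75, turns on at 0.85, stays on at 0.70, turns off at 0.55
```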
 
Yes. Mobileye is part of the working group drafting IEEE P2846. Mobileye contributed their RSS to the standard. So IEEE P2846 is more than just RSS, but RSS is part of it.
Hmm, you can be sure that P2846 will be nicely in line with what Mobileye sees as its business advantage, then. That's what standards like this are usually about (having been involved in several and watched the partisan bickering and jostling first-hand).
 