FSD rewrite will go out on Oct 20 to limited beta

They are deep questions.

Do you not already have electronic mirrors? There are several implementations from various manufacturers in Europe.

Remember 'your entertainment system failing (MCU1) is not a safety issue', except when it affects demisting, wiper control, light control, the backup camera, etc.?

Personally I think there is a good case for retaining old-fashioned reflective mirrors for as long as there could be a human driver. The rear-view mirrors with integrated monitors that I have seen do have some advantages, but also drawbacks that on balance leave the old-fashioned mirror with plenty of value.

Of course there is no harm at all in adding bird's-eye views and other cool display / camera features, but with cameras still prone to problems like B-pillar cam condensation, I don't want to drive a car that relies on them.

Many of the US automotive laws were enacted many, many decades ago and have not been updated. Compared to Europe and Asia we are at least two decades behind.

It’s only been a few years since an exception was granted for the inside rear-view mirror to allow an LCD and camera. These systems are better because your view isn't blocked by passengers or the C-pillar. They often have a wider field of view than the mirror and can be placed higher up, closer to the roof. BUT the US only allowed it IF you could flip a switch and it turned back into a regular mirror. Better than nothing.

For side mirrors, I would think we could switch to the smallest legal size allowed as a backup and use an interior HUD or LCD display as the primary.
 
I don't think this is correct. Machine learning requires a certain amount of data in order for the NN to get it right. One occurrence will not be enough data for the NN to solve it.
Driving policy can't all be neural nets; it is also traditional programming. In traditional programming, one example (or even zero) can be enough to solve a problem.

Btw Musk said that most fixes in FSD are just fixing silly bugs.
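
As a toy illustration of that difference (purely hypothetical Python, nothing to do with Tesla's actual stack): a hand-written rule can be changed from a single bug report, while a learned component only improves once enough labeled examples exist.

```python
# Hypothetical sketch: hand-coded driving policy vs. a learned component.
# None of these names or thresholds come from Tesla; they are for illustration only.

def policy_speed_mph(zone: str, debris_ahead: bool) -> float:
    """Traditional programming: one bug report is enough to add a new branch."""
    if debris_ahead:       # rule added after a single observed failure
        return 0.0         # stop for debris
    if zone == "school":
        return 25.0
    return 45.0

def classifier_can_improve(num_labeled_samples: int) -> bool:
    """A neural net only changes its behaviour once it has enough training data."""
    MIN_SAMPLES = 1_000    # assumed threshold, purely illustrative
    return num_labeled_samples >= MIN_SAMPLES

print(policy_speed_mph("school", debris_ahead=False))  # 25.0
print(classifier_can_improve(1))                        # False: one example is not enough
```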
 
Why not let FSD do a road test to get a driver’s licence like everyone else? If it passes it is good enough..

Well, you might find it funny, but the more I think about it the more I like it. If something happens, then the authorities can point to the fact that FSD took a road test like everyone else and was therefore qualified. That an accident eventually happens is inevitable, just as it is for any other driver.
 
Well, you might find it funny, but the more I think about it the more I like it. If something happens, then the authorities can point to the fact that FSD took a road test like everyone else and was therefore qualified. That an accident eventually happens is inevitable, just as it is for any other driver.
If passing a driving test were enough, we would already have self-driving cars. Self-driving cars could pass driving tests back in 2007 (DARPA Grand Challenge (2007) - Wikipedia).
I think making a standardized test for autonomous vehicles is impossible. The issue is that machines do not have general intelligence like humans, so you can't extrapolate performance from a driving test.
 
Why not let FSD do a road test to get a driver’s licence like everyone else? If it passes it is good enough..

Well, you might find it funny, but the more I think about it the more I like it. If something happens, then the authorities can point to the fact that FSD took a road test like everyone else and was therefore qualified. That an accident eventually happens is inevitable, just as it is for any other driver.

The truth is that we don't really test for safety when we give a person a driver's license. We just test for basic competencies and knowledge of the rules of the road. That makes sense for people since we can't have a person drive like 10M miles to see if they are safe enough. But the result is that we give driver's licenses to a lot of people who should not get one. There are a lot of drivers on the road today that might have basic competencies but are not safe drivers.

If we applied your idea to AVs, we'd get the same problem. We'd get AVs on the road that might demonstrate basic competencies but might not necessarily be safe at all. So we'd get a lot of avoidable accidents with this approach.

And your approach is not necessary for AVs, since we can collect millions of real-world and simulated miles to know statistically whether an AV is safe enough.
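
For a rough sense of the scale involved (my own back-of-the-envelope numbers, not anything Tesla has published): with zero observed events, the statistical "rule of three" puts the 95% upper confidence bound on the event rate at roughly 3 divided by the miles driven.

```python
# Back-of-the-envelope sketch; the human fatality rate below is an assumed ballpark,
# not a figure from this thread.

HUMAN_FATAL_RATE = 1 / 100_000_000  # assume ~1 fatality per 100M miles

def miles_needed_for_95pct_bound(target_rate: float) -> float:
    """Failure-free miles needed so the 95% upper bound (rule of three) beats target_rate."""
    return 3 / target_rate

print(f"{miles_needed_for_95pct_bound(HUMAN_FATAL_RATE):,.0f} failure-free miles")
# -> 300,000,000 failure-free miles, which is why "millions of miles" is the relevant scale
```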
 
The issue is that machines to do not general {intelligence} like humans so you can't extrapolate performance from a driving test.

{} added


But surely AI / NN is very much about 'doing general'. My (many) concerns, doubts and over-generalized negativity stem from viewing each driving situation as unique and uniquely difficult, while the whole point of the NN is to use common, identifiable similarities to build a framework for recognition and appropriate responses.

I agree that passing a finite test in no way proves that the car can handle anything other than the specific road and traffic conditions encountered. But having a standard test for automated vehicles to pass seems perfectly sensible imo.

Edit:
And your approach is not necessary for AVs, since we can collect millions of real-world and simulated miles to know statistically whether an AV is safe enough.

Sure, but local conditions could still be taken into account by a local test. It wouldn't carry the same significance as a human driving test at all. And yes, the manufacturer's testing and evidence should go a long way toward making further testing unnecessary.
 
{} added


But surely AI / NN is very much about 'doing general'. My (many) concerns, doubts and over-generalized negativity stem from viewing each driving situation as unique and uniquely difficult, while the whole point of the NN is to use common, identifiable similarities to build a framework for recognition and appropriate responses.

I agree that passing a finite test in no way proves that the car can handle anything other than the specific road and traffic conditions encountered. But having a standard test for automated vehicles to pass seems perfectly sensible imo.
Sure, my point was that that's not the way humans drive so using a human test will not work.
Maybe someone could come up with a test for AVs but no one has yet and I doubt it's possible. It's also completely unnecessary because to verify that the test works you would have to collect real world data anyway. Unless there is some point in the future where we're testing a million different self driving car designs it seems like a waste of time.
 
It already has a name: it's SAE Level 3.
You could add a description of the operational design domain. For example, Audi calls their non-existent system "Traffic Jam Pilot" because it only works in slow traffic on the highway.

That's why it's important for engineers to go back and simulate each disengagement and try to figure out what would have happened. Right now the disengagement rate seems far too high for that to be practical. They need to focus on making the car drive smoothly and predictably first, before they start trying to measure theoretical driverless safety.

I think if they measure it, then it ceases to be "theoretical" :)
 
That's why it's important for engineers to go back and simulate each disengagement and try to figure out what would have happened. Right now the disengagement rate seems far too high for that to be practical.

Not sure why you think that? Disengagements will be triaged by Tesla, and they will focus on the ones that involve dangerous situations. Later, they will look at lower-priority issues as the major hurdles are cleared. And they won't simulate the disengagement; they will look at the car's predictions and strategy at the moment of disengagement to find out why the car was doing what it was doing.
 
Not sure why you think that? Disengagements will be triaged by Tesla, and they will focus on the ones that involve dangerous situations. Later, they will look at lower-priority issues as the major hurdles are cleared. And they won't simulate the disengagement; they will look at the car's predictions and strategy at the moment of disengagement to find out why the car was doing what it was doing.
I was just saying that the sheer number of disengagements right now makes it impractical. The majority of disengagements right now don't require counterfactual simulations, they're just the car doing the wrong thing. Counterfactual simulations are required when the safety driver isn't sure whether or not the car will do the right thing and disengages to be safe.
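
A rough sketch of the triage split I mean (hypothetical code, not Tesla's actual tooling): clear-cut failures just get logged as bugs, and only the precautionary disengagements get a counterfactual replay.

```python
# Hypothetical disengagement triage; every name and the stubbed replay are illustrative.
from dataclasses import dataclass

@dataclass
class Disengagement:
    log_id: str
    clearly_wrong: bool   # car was obviously doing the wrong thing (e.g. missed the turn)
    precautionary: bool   # safety driver took over "just in case"

def replay_in_sim(event: Disengagement) -> bool:
    """Stub for a counterfactual replay; True means the car would have coped."""
    return True

def triage(events: list[Disengagement]) -> None:
    for e in events:
        if e.clearly_wrong:
            print(f"{e.log_id}: file a planning/perception bug, no simulation needed")
        elif e.precautionary:
            outcome = "safe" if replay_in_sim(e) else "would have failed"
            print(f"{e.log_id}: counterfactual replay -> {outcome}")

triage([Disengagement("d1", clearly_wrong=True, precautionary=False),
        Disengagement("d2", clearly_wrong=False, precautionary=True)])
```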
 
I was just saying that the sheer number of disengagements right now makes it impractical. The majority of disengagements right now don't require counterfactual simulations, they're just the car doing the wrong thing. Counterfactual simulations are required when the safety driver isn't sure whether or not the car will do the right thing and disengages to be safe.

My point was about your assertion that the number makes it impractical. Do you know what that number is? Or how Tesla triages them?

And how can you know what sort of disengagement you have until you triage it? (And I would assert that no disengagement is counterfactual by definition.)
 
My point was about your assertion that the number makes it impractical. Do you know what that number is? Or how Tesla triages them?

And how can you know what sort of disengagement you have until you triage it? (And I would assert that no disengagement is counterfactual by definition.)
All I'm saying is that I doubt they're looking at something like this and simulating whether or not the car would have hit the parked truck and at what speed (it probably wouldn't have). They're trying to figure out why the car didn't make the turn correctly in the first place! It is true that I am making assumptions about their priorities.
 