Can confirm as an EU resident having driven in at least 15 EU countries. Road signage in Europe has huge overlap. If Tesla has a plan for the UK (driving on the left), I'm sure they can figure the EU out.

The toughest edge case, in my opinion, is on narrow streets where you have to honk or flash your lights before a corner to announce your presence, and listen for honks to know the coast is clear. (Examples can be found in many busy Italian cities.)
I think we might have to accept that driving through medieval town centers might not get covered by FSDb...and roundabouts
 
I think we might have to accept that driving through medieval town centers might not get covered by FSDb...and roundabouts
I guess the software will avoid those types of roads when traveling from A to B, but if one of those streets is the destination, I don't see why FSD V12 couldn't be trained on it. Will it be hard? Yes. But humans also get stuck there sometimes (for example, the road is too narrow to pass an oncoming vehicle, so one of them has to reverse to the next junction. FSD V12 should be able to reverse if necessary).

Mind you, this will take a lot of time, but as a father of young children I'm thinking: will this capability exist in 10 years? And given the AI developments of the past 10 years, I'm inclined to say 'yes'.
 
Honestly, I don't see your argument. He is technically right in his statement, as it's not needed. It may add value, but that doesn't mean it's needed. Can a car run exclusively with cameras? Yes. Can a car run exclusively with only LiDAR? No. So by that definition cameras are superior. All your arguments and rebuttals are fine, but that doesn't make his statement wrong.

It depends on whether you want your self-driving to need supervision or not, and how safe you want it to be. Elon is correct that vision-only is sufficient just to do self-driving. So if you are doing self-driving with supervision, no, you do not need radar or lidar. BUT if your goal is safe and unsupervised self-driving (i.e. "eyes off"), and Elon has talked about L4/L5, which implies "eyes off", then I think you do need at least radar and probably lidar too. That's because to remove supervision, you will need a much higher mean time between failures (MTBF) than vision-only can provide. You need much more robustness and reliability in your system if you are going to remove the human driver as a backup. That's where lidar will absolutely help, for the reasons I mentioned before.

Now, I will say that with the next-gen, high-resolution radar, it is possible that lidar will become optional. But my basic point is that some other sensor (radar and/or lidar) is needed to achieve safe "eyes off". Vision-only is fine for supervised self-driving, but you need some other sensor to back up vision if you want to take the self-driving unsupervised.

It really all boils down to what MTBF you think "eyes off" needs. Based on US highway fatality stats, to be 2-3x safer than humans, the MTBF needs to be about 10M hours of driving per failure. So if you want to do safe "eyes off", then your system, with no human supervision, needs to achieve 10M hours of driving between safety-critical failures. Can vision-only achieve that MTBF of 10M hours of driving per safety-critical failure? Most AV companies say no. They are not against vision-only; they use vision-only for L2 or supervised self-driving. They are just doubtful that it can achieve that high an MTBF alone, with no human supervision. Right now, the best vision-only systems are nowhere near 10M hours per failure. That is why they add radar and lidar: to help increase the MTBF beyond what vision-only can do, to try to get to that 10M hours per failure goal. Elon seems to think that E2E with video training on the best human drivers is the missing key to getting vision-only to the high MTBF needed for safe "eyes off". We will certainly see if Elon is right.
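As a rough sanity check on that 10M-hour figure, here's a back-of-envelope calculation. The inputs are assumptions (roughly 1.35 fatalities per 100M vehicle miles and a blended average speed of ~25 mph), not official numbers:

```python
# Rough check of the ~10M-hour MTBF target; every input here is an assumption.
fatalities_per_100m_miles = 1.35    # assumed approximate US rate per 100M vehicle miles
avg_speed_mph = 25                  # assumed blended city/highway average speed

miles_per_fatality = 100e6 / fatalities_per_100m_miles    # ~74M miles per fatality
hours_per_fatality = miles_per_fatality / avg_speed_mph   # ~3M hours per fatality

for safety_multiple in (2, 3):
    target_hours = hours_per_fatality * safety_multiple
    print(f"{safety_multiple}x safer than the average human: "
          f"~{target_hours / 1e6:.0f}M hours between fatal-level failures")
```

That lands in the 6-9M hour range, the same ballpark as the ~10M figure above.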
 
It depends on whether you want your self-driving to need supervision or not, and how safe you want it to be. Elon is correct that vision-only is sufficient just to do self-driving. So if you are doing self-driving with supervision, no, you do not need radar or lidar. BUT if your goal is safe and unsupervised self-driving (i.e. "eyes off"), and Elon has talked about L4/L5, which implies "eyes off", then I think you do need at least radar and probably lidar too. That's because to remove supervision, you will need a much higher mean time between failures (MTBF) than vision-only can provide. You need much more robustness and reliability in your system if you are going to remove the human driver as a backup. That's where lidar will absolutely help, for the reasons I mentioned before.

Now, I will say that with the next-gen, high-resolution radar, it is possible that lidar will become optional. But my basic point is that some other sensor (radar and/or lidar) is needed to achieve safe "eyes off". Vision-only is fine for supervised self-driving, but you need some other sensor to back up vision if you want to take the self-driving unsupervised.

It really all boils down to what MTBF you think "eyes off" needs. Based on US highway fatality stats, to be 2-3x safer than humans, the MTBF needs to be about 10M hours of driving per failure. So if you want to do safe "eyes off", then your system, with no human supervision, needs to achieve 10M hours of driving between safety-critical failures. Can vision-only achieve that MTBF of 10M hours of driving per safety-critical failure? Most AV companies say no. They are not against vision-only; they use vision-only for L2 or supervised self-driving. They are just doubtful that it can achieve that high an MTBF alone, with no human supervision. Right now, the best vision-only systems are nowhere near 10M hours per failure. That is why they add radar and lidar: to help increase the MTBF beyond what vision-only can do, to try to get to that 10M hours per failure goal. Elon seems to think that E2E with video training on the best human drivers is the missing key to getting vision-only to the high MTBF needed for safe "eyes off". We will certainly see if Elon is right.
The problem with these discussions is that what is needed for "eyes off" is unknown at this point, and entirely up to regulators.

If regulators allow eyes off the moment FSD is statistically on par with a human driver in terms of crash rate and severity, then HW3 might prove sufficient.

If regulators set a higher bar (which I believe they will) then it is up in the air what HW suite will allow unsupervised autonomy.

But right now there is no "right and wrong" in this argument since the legal definitions are not yet outlined.
 
The problem with these discussions is that what is needed for "eyes off" is unknown at this point, and entirely up to regulators.

If regulators allow eyes off the moment FSD is statistically on par with a human driver in terms of crash rate and severity, then HW3 might prove sufficient.

If regulators set a higher bar (which I believe they will) then it is up in the air what HW suite will allow unsupervised autonomy.

But right now there is no "right and wrong" in this argument since the legal definitions are not yet outlined.

That's a fair point. As I said, vision-only is certainly enough to do self-driving. But it is unclear whether vision-only will meet the safety bar set by regulators. I agree that if the bar is set low, Tesla's approach may work great. And maybe Elon is hoping to get by with the cheapest sensors and that regulators will accept their safety record. That would be a win for Tesla. Of course, if regulators require higher safety, Tesla may be forced to add more sensors. So the strategy could fail. But since there is no definitive answer yet, I just think it is premature for Elon to declare lidar dead.

And consider this scenario: say you have two FSD systems that are roughly equal in terms of capabilities and ODD, but one FSD is vision-only and it kills a pedestrian, as in the Uber crash. It is discovered in the accident investigation that the vision failed to respond quickly enough to avoid the collision due to poor lighting or whatever. The second FSD has a front lidar and avoids the collision. Regulators could require all FSD to have a front radar or lidar to avoid these accidents in the future. Radar or lidar could also be a strong PR point: "You can trust our FSD to be driverless because it can avoid these collisions! Don't trust the vision-only competition that will kill pedestrians!" Hyperbole perhaps, but that's how PR can work.

My point is that regulation, safety, and marketing may play a big role in sensor choices. It is not as simple as just what you think is enough to do self-driving. Elon can say vision-only is enough all he wants, but at the end of the day, there may be other considerations that force Tesla to add radar or even lidar.
 
Elon also tweeted a few days ago about how roads are made for vision, so lidar is not needed. He does not seem to understand the purpose of lidar.
I would bet rather a lot of money that not only does Elon understand the purpose of LIDAR, but that after a decade or so of hiring and working with some of the world's top experts in autonomous vehicle development, he probably has a much better handle on things like redundancy and sensor fusion and whatnot than any of us do.

My personal theory-- which is worth every penny you paid for it-- is that Elon likes to cut costs. Chrome trim to black? It's cheaper. Elimination of things like turn signal stalks and other physical controls? It's cheaper. My 2013 Model S had a smorgasbord of options to choose from-- remember when the power rear hatch and fancy sound system were options?-- while my 2023 Model S LR had paint color, interior color, and wheels as choices, and that's it. Because it's cheaper to build cars when you have fewer options. People may gripe about the passenger seat in their Model 3 not having power lumbar support, but Tesla may be the only company in the world making a profit selling EVs. Ford, GM, etc. sure aren't. ANYWAY...

It's also cheaper not to have LIDAR and radar. I dunno if Elon walked into the FSD engineering room one day and said "We're doing it all with just cameras", then turned around and walked out, or whether he had weeks of intense technical discussions on whether a camera-only system was feasible, because if it was, they could save $XX per car. Maybe it wasn't even his idea. We'll never know.

As we know, Elon will go for the snappy, off-the-cuff sound bite like "roads are designed for vision", even if it doesn't really make that much sense in context. Remember, this is the same guy who said the new Roadster (hey, does anyone remember the new Roadster?) would be able to physically fly.

Will Tesla actually manage to deploy an FSD system that's vision-only? I dunno. I'd think that if 11.4.7, which my car just got, had been described a couple of years ago as a vision-only system, it would have been widely decried as "Impossible!", yet here we are. Years behind Elon's ever-evolving due dates for "true" FSD, granted, but in the 18+ months I've had it, it's been getting incrementally better almost every release. What we have now would have been considered a technological miracle two years ago, weird edge cases notwithstanding. I use it every day and have a pretty good mental map of its capabilities, and of when I should expect to intervene or simply turn it off and drive myself. I've driven in heavy city traffic and cross-country with FSD, and I'm...optimistic about its future. Even without LIDAR.
 
That's a fair point. As I said, vision-only is certainly enough to do self-driving. But it is unclear whether vision-only will meet the safety bar set by regulators. I agree that if the bar is set low, Tesla's approach may work great. And maybe Elon is hoping to get by with the cheapest sensors and that regulators will accept their safety record.
"Regulators", he spat. God save us from "regulators".

Regulators stuck us with inferior sealed beam headlights decades after the rest of the world had moved on to brighter, safer replaceable halogen bulbs. Regulators decreed that all cars must have a "center high mounted stop light", aka "third brake light", and were unable, a decade later, to provide any statistics showing they had reduced rear-end crashes. Regulators gave us the "chicken tax" on imported pickups.

There are so many more examples. Google "stupid auto regulations".

Are there actually any Federal regulations at all concerning advanced driver assistance systems? I don't think so, vague and outdated SAE "standards" notwithstanding.
 
In regard to redundancy, would the main regulator concern not be how the car handles itself, and what it does, in the event of sensor failure?

Reading the above posts, I get the impression there is an assumption that the regulator may require full ongoing self-driving ability in the event of sensor failure. However, the reality may be that the regulators will only require that the vehicle has sufficient ability remaining to pull over to a safe place and stop.

For most situations involving a single camera failure, I would have thought a Tesla should be perfectly able to do this relying on as little as one front camera, using the hazard lights to warn other road users, combined with immediate deceleration and lower speeds.

Even in the extremely rare event of all three (or both, for HW4) front cameras failing, it might still have enough data in memory and map data about what was ahead prior to the failure to take immediate emergency action.
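To make the "degrade and pull over" idea concrete, here's a minimal sketch of how such a fallback policy could be structured. The mode names and thresholds are hypothetical; this is just an illustration, not how Tesla actually implements it:

```python
from enum import Enum, auto

class DriveMode(Enum):
    NOMINAL = auto()        # all cameras healthy: drive normally
    DEGRADED = auto()       # partial camera loss: hazards on, decelerate, lower speed
    MINIMAL_RISK = auto()   # total front-camera loss: pull over and stop

def select_mode(front_cameras_ok: int, front_cameras_total: int) -> DriveMode:
    """Hypothetical policy for responding to front-camera failures."""
    if front_cameras_ok == front_cameras_total:
        return DriveMode.NOMINAL
    if front_cameras_ok >= 1:
        # Assumes one working front camera is enough to continue at reduced
        # speed with hazards on while searching for a safe place to stop.
        return DriveMode.DEGRADED
    # No front cameras left: rely on the last known scene and map data
    # to execute an immediate, controlled stop.
    return DriveMode.MINIMAL_RISK
```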
 
Thought experiment for current end-to-end AI systems:

Imagine a world in which, in 99.999% of places, you may turn right on red UNLESS there is a sign that says "No right turn on red." In the other 0.001%, it is illegal to turn right on red unless there is a sign that explicitly allows you to.

Wouldn't we need a fine-tuned model for this imaginary 0.001% part of the world that trains the car to hold at a red light and not turn right?

Genuinely asking the question...
 
Thought experiment for current end-to-end AI systems:

Imagine a world in which, in 99.999% of places, you may turn right on red UNLESS there is a sign that says "No right turn on red." In the other 0.001%, it is illegal to turn right on red unless there is a sign that explicitly allows you to.

Wouldn't we need a fine-tuned model for this imaginary 0.001% part of the world that trains the car to hold at a red light and not turn right?

Genuinely asking the question...
Yeah obviously this would not work.

They’ll use the HD maps they are already using, and read all signs (some cars, of course, will need better cameras). It’ll all be a mish-mash.

Don’t read too much into Elon’s statements. He’s a salesman - he does not need to communicate accurately what they are actually doing or planning to do.
 
It really all boils down to what MTBF you think "eyes off" needs. Based on US highway fatality stats, to be 2-3x safer than humans, the MTBF needs to be about 10M hours of driving per failure

I'd counter-argue that not all failures need to mean there's an incident. It would be acceptable, especially in the L3-L4 case where there's a capable driver who's just reading a book or whatever, that some of the failures you're including in your MTBF are of the kind where the system may not be confident in how to proceed with the driving task, but can still safely pull over and politely ask the driver to take over.
 
Wouldn't we need a fine-tuned model for this imaginary 0.001% part of the world that trains the car to hold at a red light and not turn right?
I don't know how training sets go into neural networks, but conceptually it doesn't matter if the sign is present for 0.001% of cases or 99.999%. If the system has to behave a certain way in the presence of that sign, then it's just as mainstream as any other behavior encoded into the network. I would assume the only real difference would be in collecting enough training data. If the system has a million cases of cars turning right without a sign and only ten cases of cars stopping for the sign, is the network going to produce the desired control outputs? Simulated cases may be required to ensure that the reaction to the sign forms properly in the network.
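For what it's worth, that "million normal clips vs. ten sign clips" imbalance is a standard problem in ML training, and weighted resampling is one common way to handle it. A minimal sketch in PyTorch, with toy stand-in data (a generic technique, not a claim about Tesla's pipeline):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in: 1,000,000 "no sign, turn right" clips vs. 10 "sign, hold" clips.
labels = torch.cat([torch.zeros(1_000_000, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])
features = torch.randn(len(labels), 8)   # placeholder for real clip embeddings

# Weight each sample inversely to its class frequency so the rare sign cases
# show up in roughly every other batch instead of once per ~100k samples.
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=256, sampler=sampler)
```

Simulation, as you say, is the other common lever: generate extra synthetic clips of the rare sign so the network sees the behavior often enough for it to form.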
 
I'd counter-argue that not all failures need to mean there's an incident. It would be acceptable, especially in the L3-L4 case where there's a capable driver who's just reading a book or whatever, that some of the failures you're including in your MTBF are of the kind where the system may not be confident in how to proceed with the driving task, but can still safely pull over and politely ask the driver to take over.

That is why I specified safety-critical failures. I am only talking about serious failures that are likely to cause an accident.

But you also need to minimize the failures that require a pull-over if you want to do reliable "eyes off" or L4. The whole idea in those systems is that the human is a passenger and does not need to do any driving tasks. So you don't want a system that requires the human to take over a lot. Instances that require the car to pull over should be extremely rare.
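The accounting distinction matters in practice: pull-over requests and safety-critical failures are two separate rates, and only the latter feeds the ~10M-hour target. A trivial illustration with made-up log data:

```python
# Hypothetical fleet log: track the two failure types as separate rates.
total_hours = 10_000
events = [
    {"hours": 1_200, "type": "pullover"},
    {"hours": 4_800, "type": "safety_critical"},
    {"hours": 9_500, "type": "pullover"},
]

def mean_hours_between(events, total_hours, event_type):
    n = sum(1 for e in events if e["type"] == event_type)
    return float("inf") if n == 0 else total_hours / n

print("Hours per safety-critical failure:", mean_hours_between(events, total_hours, "safety_critical"))
print("Hours per pull-over request:", mean_hours_between(events, total_hours, "pullover"))
```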
 
Yeah obviously this would not work.

They’ll use the HD maps they are already using, and read all signs (some cars, of course, will need better cameras). It’ll all be a mish-mash.

Don’t read too much into Elon’s statements. He’s a salesman - he does not need to communicate accurately what they are actually doing or planning to do.
If I understand it correctly, V12 will see map data removed from the planning function of FSD; it will be NN planning only from then on. Elon has talked about this before, saying that under V12 you could just enter GPS co-ordinates and the car would then work it out.

Would be good to get confirmation on this though.

Also, the Tesla under V12 doesn't actually read the sign; instead, it simply associates an image with an action. It just needs to know from training what action to take when it sees a sign it recognises - it doesn't matter to the car what is written on it; it doesn't care.
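To illustrate what "associates an image with an action" means, here's a toy end-to-end policy: camera frames in, control values out, with no explicit sign-reading stage anywhere in between. It's a minimal sketch for illustration only, not Tesla's actual architecture:

```python
import torch
import torch.nn as nn

class TinyEndToEndPolicy(nn.Module):
    """Toy 'image in, control out' network with no explicit sign-reading step."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)   # e.g. [steering, acceleration]

    def forward(self, frames):         # frames: (batch, 3, H, W) camera images
        return self.head(self.encoder(frames))

# The sign is never parsed as text. If training clips containing that sign
# consistently show the car holding at the light, the weights end up encoding
# "image with this sign -> brake" purely from the association.
policy = TinyEndToEndPolicy()
controls = policy(torch.randn(1, 3, 128, 128))
```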
 
If I understand it correctly, V12 will see map data removed from the planning function of FSD; it will be NN planning only from then on. Elon has talked about this before, saying that under V12 you could just enter GPS co-ordinates and the car would then work it out.

Would be good to get confirmation on this though.

Also, the Tesla under V12 doesn't actually read the sign; instead, it simply associates an image with an action. It just needs to know from training what action to take when it sees a sign it recognises - it doesn't matter to the car what is written on it; it doesn't care.
Right, and that's my point: most of the time, a sign will tell you what you can't do. But sometimes it's the absence of a sign that tells you what you can't do. How does the model figure out which is which?
 
How does the model figure out which is which?
Context, just like your brain does. One set of inputs contains a sign and one does not. The inputs that contain the sign cause the car to wait. The inputs that don't contain the sign cause the car to turn. Context also prevents the car from turning right to drive over a pedestrian. How all that stuff is integrated into a single neural network is the magic of these things.
 
Right, and that's my point: most of the time, a sign will tell you what you can't do. But sometimes it's the absence of a sign that tells you what you can't do. How does the model figure out which is which?
Just like the absence of any other sign on any other road, I guess: just training.

If you train it at junctions with and without the sign, then it will learn the necessary actions.
 
I wish I were in the special club of no-hands driving. Is that for people who have driven more than 10K miles on FSD?
No, it has nothing to do with mileage. The "club" isn't supposed to exist, hence the investigation.