Welcome to Tesla Motors Club

Autonomous Car Progress

I suppose the question I have is, will the car be able to get to better than average human safety with the current sensor suite? Clearly, we're not getting Lidar in our Teslas.
Who knows. The problem is in the computer, and given that I believe we need general-intelligence-level AI for the car to be good at actual autonomy, don't hold your breath.

-But how long should it take them to get to a higher (more precise) resolution?
there's no way for me to know this

-If they do data fusion on outputs anyway, doesn't that make Karpathy's claim that fusion was "barking up the wrong tree" a bit strange?
Yes and no. On one hand, it's much easier to do fusion between camera-based outputs, because you always know which parts correspond to the same area of the image, so there are no mistakes (except for partially occluded objects, I guess). On the other hand, yes, because other sensors provide reliable data that you cannot get in other ways.

Overall - see above. I think fixation on the sensor suite is somewhat wrong, in the sense that it's the brains that matter, and the brains clearly don't exist yet.
 
Who knows. The problem is in the computer, and given that I believe we need general-intelligence-level AI for the car to be good at actual autonomy, don't hold your breath.

My guess is whatever gets implemented will make stupid mistakes any experienced human driver wouldn't make, leading to stupid accidents (like running into parked/overturned trucks and uncommon objects), but will also avoid a lot of common accidents humans make, thanks to the 360° view and 100% attention (merging into cars in blind spots, rear-ending typical cars). Which one will outweigh the other is anyone's guess.
 
I think fixation on the sensor suite is somewhat wrong, in the sense that it's the brains that matter, and the brains clearly don't exist yet.

As a strong proponent of Intelligent Roads, V2X, speed control limitations (like what Europe is rolling out), L4-only, and a slowly growing ODD, I think the obsession with general intelligence and L5 driving hinders very real progress that could be made.

The obsession with GA and L5 is exactly why Tesla has had very limited progress with AP/FSD. They locked the sensor suite and told the engineers to make it happen with the existing sensors. Then, after years of failing to fuse the radar sensor data, they opted to just toss it.

I don't know about anyone else, but I could not safely drive the car if all I had to go on was the Tesla Vision data. There would be too many situations where the data would be unclear as to what was happening around the vehicle. It would be especially problematic in dark rain in a poorly lit parking lot.

I'm pretty confident I'll get an HW4 computer for free as part of an FSD-wide upgrade, but I don't have much confidence that the sensors will ever get upgraded.

Long term, my expectation is there'll be less obsession with sensor suites, as I expect SPAD-like sensors that will fuse the data themselves.
 
My guess is whatever gets implemented will make stupid mistakes any experienced human driver wouldn't make, leading to stupid accidents (like running into parked/overturned trucks and uncommon objects), but will also avoid a lot of common accidents humans make, thanks to the 360° view and 100% attention (merging into cars in blind spots, rear-ending typical cars). Which one will outweigh the other is anyone's guess.

My expectation is that Waymo/Cruise/etc. implementations of L4 vehicles won't be running into anything.

Instead, they'll have unpredictable behavior at times, causing madness for the human drivers around them. This will likely hinder the rollout as fleet companies try to balance the need to roll out vehicles with increasing acceptance of autonomous fleet vehicles.

What will really push autonomous driving will be cargo delivery of all sorts. Everything from food delivery carts at college campuses to freight movement on the interstate.
 
@verygreen If I understand you correctly, they need to "fuse" output from this special NN with output from the others anyway. The output seems kind of low-res.
The res is already very good. Others pointed out the state-of-the-art 4D radar (which supposedly Tesla has evaluated in the past) gets 500k PPS. This method gets 691k PPS (19,200 per frame × 36 fps).
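The arithmetic checks out; a quick sanity check using the figures quoted above:

```python
# Point rate of the camera-based depth output, per the figures in the post.
points_per_frame = 19_200   # depth points output per frame
frames_per_second = 36      # frame rate

pps = points_per_frame * frames_per_second
print(f"{pps:,} points per second")  # 691,200 -> the ~691k PPS figure

# For comparison, the state-of-the-art 4D imaging radar mentioned above
# is quoted at roughly 500k PPS, so this edges it out.
print(pps > 500_000)  # True
```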
-But how long should it take them to get to a higher (more precise) resolution?
-If they do data fusion on outputs anyway, doesn't that make Karpathy's claim that fusion was "barking up the wrong tree" a bit strange?
As already pointed out, given the data is coming from set pixels on the same sensor, there isn't any fusion to speak of. This is different from when they were using radar where they had to match various points to the image sensor data.
I know they have a "no sensor change in any Tesla since late 2016" paradigm, but I get a feeling this is a risk for software progress in 2023: they could be going down the wrong path. Or will software progress be reusable when they eventually change the sensor suite?
I believe there was a presentation by Waymo posted upthread that mentioned this challenge of how to make previous data still useful across a sensor change.
 
As a strong proponent of Intelligent Roads, V2X, speed control limitations (like what Europe is rolling out), L4-only, and a slowly growing ODD, I think the obsession with general intelligence and L5 driving hinders very real progress that could be made.
You are missing that as long as you have wildlife and people on the roads, you still need close to GA.

If you take away those actors, various autonomous trains on closed tracks in airports and the like are already a thing, though ;)
 
I suppose the question I have is, will the car be able to get to better than average human safety with the current sensor suite? Clearly, we're not getting Lidar in our Teslas.

Elon seems determined to stick with the current hardware. The whole schtick about "... the car has all the needed hardware; all we need to do is finish the software and get it approved..." I don't think I'm the only person who believes you can't get there from here. (Though I know some think they can.)

My guess is whatever gets implemented will make stupid mistakes any experienced human driver wouldn't make, leading to stupid accidents (like running into parked/overturned trucks and uncommon objects), but will also avoid a lot of common accidents humans make, thanks to the 360° view and 100% attention (merging into cars in blind spots, rear-ending typical cars). Which one will outweigh the other is anyone's guess.

Modern chess software can beat most human players, but when it makes mistakes, they are very different from the kinds of mistakes human players make. Autonomous cars will make mistakes also, and those, too, will be very different from the kinds of mistakes human drivers make. There will be deaths in situations where a human driver would not have had an accident. But they will avoid many of the accidents that human drivers cause.

Such cars will not be authorized for use until their overall safety is better than human drivers. So the autonomous cars will be safer than human-driven cars because until they are safer, they won't be allowed. When that time comes is unknown.
 
Such cars will not be authorized for use until their overall safety is better than human drivers. So the autonomous cars will be safer than human-driven cars because until they are safer, they won't be allowed. When that time comes is unknown.

Might depend on the state. Looking at Nevada's DMV form for public use autonomous vehicles, it looks to be "self-certification." Just check off the box that says "Is capable of operating in compliance with all applicable motor vehicle laws and traffic laws of this State" along with a bunch of other bits on a 2 page form and e-mail it in.
 
You are missing that as long as you have wildlife and people on the roads, you still need close to GA.

If you take away those actors, various autonomous trains on closed tracks in airports and the like are already a thing, though ;)

Wildlife is notoriously difficult for humans to predict, so I'm not expecting GA to handle it any better than evolving algorithms for handling wildlife. I think we simply have to accept that from time to time wildlife will get hit, and do our best to mitigate it through other controls.

Extremely limited L4 is definitely already a thing, but I see no reason why rest stop to rest stop L4 can't be a thing a long time before GA.

You only really need GA or close to GA for L5.
 
My expectation is that Waymo/Cruise/etc. implementations of L4 vehicles won't be running into anything.

Instead, they'll have unpredictable behavior at times, causing madness for the human drivers around them. This will likely hinder the rollout as fleet companies try to balance the need to roll out vehicles with increasing acceptance of autonomous fleet vehicles.

I think you are basically describing the current state of autonomous driving. AV companies like Waymo/Cruise have very good perception, so the cars can reliably detect objects. They are very unlikely to hit anything. And they have pretty good planning that allows the cars to navigate many common driving scenarios. Hence why Cruise has demos of navigating SF with zero disengagements and why Waymo is doing driverless rides in Chandler.

But the decision-making still needs work. The current self-driving AI is not as smart or adaptable as humans. There are driving scenarios that are easy for humans but baffle current AI. There are also driving scenarios that self-driving AI does not handle wrongly per se, but does not handle the same way a human would. As a result, AVs can get stuck in unfamiliar situations or act "robotically". And like you said, this can be annoying to human drivers.

I think solving that decision-making part is really the holy grail of autonomous driving. And I think it is one big reason why Waymo is focused so much on simulations. They put the Waymo Driver in lots of different situations, or replay situations where it got stuck, to hopefully improve the decision-making.
 
Wildlife is notoriously difficult for humans to predict, so I'm not expecting GA to handle it any better than evolving algorithms for handling wildlife. I think we simply have to accept that from time to time wildlife will get hit, and do our best to mitigate it through other controls.

Extremely limited L4 is definitely already a thing, but I see no reason why rest stop to rest stop L4 can't be a thing a long time before GA.

You only really need GA or close to GA for L5.
Of course wildlife will not broadcast its own location, and neither will dropped boxes and car parts on the road. No system can ensure accident-free operation if an animal or a person darts in front of your high-speed vehicle at the last moment.

But what people seem to overlook is the advantage of V2V/V2X in scouting ahead. In traffic, and increasingly in urban and then suburban areas, the vehicle can receive alerts of such hazards. This is a very powerful supplement to any system, no matter how smart, that can otherwise only gather unmappable info from its own local sensors.
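To make the scouting-ahead idea concrete, here is a minimal sketch of how a receiving car might treat such a hazard alert. All names and fields here are hypothetical for illustration; real V2X stacks (e.g. the DSRC/C-V2X message sets) define their own formats.

```python
from dataclasses import dataclass

@dataclass
class HazardAlert:
    """Hypothetical V2X hazard broadcast: a vehicle that spots a road
    hazard (debris, animal, stalled car) relays it to vehicles behind."""
    hazard_type: str   # e.g. "debris", "animal", "stalled_vehicle"
    lat: float         # latitude of the hazard
    lon: float         # longitude of the hazard
    lane: int          # affected lane index, 0 = rightmost
    timestamp: float   # epoch seconds when the hazard was observed

def is_actionable(alert: HazardAlert, now: float, max_age_s: float = 30.0) -> bool:
    """A receiving car should discard stale alerts: curated or delayed
    data is far less useful than a real-time report from just ahead."""
    return (now - alert.timestamp) <= max_age_s

alert = HazardAlert("debris", 36.17, -115.14, lane=1, timestamp=1000.0)
print(is_actionable(alert, now=1010.0))  # True: 10 s old, still fresh
print(is_actionable(alert, now=1100.0))  # False: 100 s old, stale
```

The freshness check is the key design point: an alert like this supplements the car's own sensors only while it still reflects the road as it is now.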
 
Of course wildlife will not broadcast its own location, and neither will dropped boxes and car parts on the road. No system can ensure accident-free operation if an animal or a person darts in front of your high-speed vehicle at the last moment.

But what people seem to overlook is the advantage of V2V/V2X in scouting ahead. In traffic, and increasingly in urban and then suburban areas, the vehicle can receive alerts of such hazards. This is a very powerful supplement to any system, no matter how smart, that can otherwise only gather unmappable info from its own local sensors.
But the core thing is it's still a supplement. You can't get to safe L5 without solving the GA problem, because there will still be plenty of vehicles on the road without any V2V or V2X, and such scouting ahead isn't much use for predicting their actions. And "safe" doesn't mean accident-free, just something that's as safe as humans.

Also, the way the tech is being implemented so far (communicating road features instead of the vehicles' own data) means such data is curated, so it's not necessarily real-time.

Anyways, the way I see it, there is a possibility that L5 is never achieved and we end up with the best being L4 that is slowly expanded by meticulously mapping one area at a time. If that is the case, Tesla's approach may be barking up the wrong tree, and they may be stuck with end-to-end L2 for a long time (and may have to switch gears to approaches similar to the others': load up with all kinds of supplementary sensors and rely a lot more on HD maps).
 
Wildlife is notoriously difficult for humans to predict, so I'm not expecting GA to handle it any better than evolving algorithms for handling wildlife. I think we simply have to accept that from time to time wildlife will get hit, and do our best to mitigate it through other controls.
The problem is, some wildlife drives wild trucks and other heavy machinery.
Some wildlife is heavy, rolls up to you, and just won't get out of the way.

 
...Anyways, the way I see it, there is a possibility that L5 is never achieved and we end up with the best being L4 that is slowly expanded by meticulously mapping one area at a time.
I think a lot of people suspect L5 is not realistic for a long time, but widespread L4 is nearly as good and I'd happily buy that.
If that is the case, Tesla's approach may be barking up the wrong tree, and they may be stuck with end-to-end L2 for a long time (and may have to switch gears to approaches similar to the others': load up with all kinds of supplementary sensors and rely a lot more on HD maps).
I think everyone would love to know the thought process of the best-informed people at Tesla. I am not anti-Elon, nor do I think he's clearly wrong, but he is clearly putting a lot of eggs in the sensor-light basket. If that turns out to be feasible, it will pay off very nicely for the company and present owners. Of course I hope this is the case.

But if not, and setting aside the anger/embarrassment factor of further major slips, I think they have a couple more years to achieve unsupervised self-driving and still reap major market-leader benefits. So, platform-wise, fish-or-cut-bait time is approaching. They just rolled out the S/X refresh with basically no change to the suite. If they wait much longer to decide that a new computer and a better sensor package are needed to confidently schedule FSD success, the Tesla lead will evaporate.
 

Published 9 July 2020
Tesla will be able to make its vehicles completely autonomous by the end of this year, founder Elon Musk has said.
It was already "very close" to achieving the basic requirements of this "level-five" autonomy, which requires no driver input, he said.
Tesla's current, level-two Autopilot requires the driver to remain alert and ready to act, with hands on the wheel.
But a future software update could activate level-five autonomy in the cars - with no new hardware, he said.

"I remain confident that we will have the basic functionality for level five autonomy complete this year.
"There are no fundamental challenges remaining.

A year ago today.