Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla Autopilot HW3

And if you change the gravitational mass of the earth you don't need an airplane to fly, but that doesn't make it a practical solution. Remember that this is the guy who has already failed to deliver on his optimistic proclamations about AI and self-driving cars.

But if Waymo et al. don’t solve vision, then self-driving cars will never happen, because you need to solve vision in order to handle depthless features of the environment. Traffic lights, signs, lane lines, brake lights, turn signals, etc.

“Airplane vs. anti-gravity” is an either-or framing, but we’ve already established that you can’t do autonomous driving without solving vision. An airplane is by itself sufficient to fly. Lidar is not by itself sufficient to drive.

Elon is not alone in his thinking. Mobileye says the same thing.

Recently, Anthony Levandowski — who played a key role at Waymo for years — has made similar comments about lidar and cameras.

With regard to optimistic proclamations... Elon is probably the most prominent example of an overoptimistic technologist, but Waymo has fallen short of its goal to have a commercial ride-hailing service with no safety drivers in 2018. Years ago, Sergey Brin predicted that Waymo would offer a real, public service by now that anyone could use. Chris Urmson set the goal that Waymo would solve autonomous driving in 2019, before his kid turned 16 and became eligible for a driver’s license. But Waymo doesn't seem close, and recently the CEO has been downplaying the long-term viability of the technology.

It seems pretty common in Silicon Valley and for software/hard tech/biotech companies to set ambitious targets and then totally miss them. Elon, Tesla, and SpaceX are just more exaggerated and well-known examples of this trend.

P.S. Kuro86k, rather than engaging with folks on here who use insulting language, I recommend reporting them and putting them on ignore. I still don’t think the autonomous vehicles subforum has a dedicated moderator, so there seems to be a lot more “snippiness” (as TMC calls it) here than in other subforums. Until we get more moderator love, I think we have to self-police and just not reward rude behaviour with our attention.
 
First of all, affordable automotive-grade lidar is already here THIS YEAR (end of 2019). A startup could easily put it in their car for well under $1k. Traditional automakers are different in that they want their system complete when it ships with the car, while a startup wants the hardware shipped first and then updated via OTA.


250 m range
25 FPS
120° x 25° FOV
0.1° x 0.1° angular resolution
<3 cm depth accuracy
7.5M pixels/sec
 
250 m range
25 FPS
120° x 25° FOV
0.1° x 0.1° angular resolution
<3 cm depth accuracy
7.5M pixels/sec

0.1 degrees at 250 m is 43.6 cm, or about 17 inches.
3 cm depth accuracy: at 60 mph and 25 FPS, the vehicle moves 107 cm, or 3.5 ft, between/during scans.
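For anyone who wants to check the arithmetic, here's a quick back-of-envelope sketch; the constants are just the spec figures quoted above, not from any datasheet:

```python
import math

RANGE_M = 250.0      # quoted lidar range
ANG_RES_DEG = 0.1    # quoted angular resolution
SPEED_MPH = 60.0
FPS = 25.0

# Lateral spacing between adjacent beams at maximum range
spacing_m = RANGE_M * math.tan(math.radians(ANG_RES_DEG))

# Distance the vehicle covers between successive scans
speed_ms = SPEED_MPH * 1609.344 / 3600.0
per_scan_m = speed_ms / FPS

print(f"beam spacing at {RANGE_M:.0f} m: {spacing_m * 100:.1f} cm")         # ~43.6 cm
print(f"travel per scan at {SPEED_MPH:.0f} mph: {per_scan_m * 100:.0f} cm")  # ~107 cm
```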

The long-range Tesla forward camera is 1280x720 (with 4 sub-pixels, but let's ignore that), with a field of view of 35 degrees (more vertical range than the lidar) and a range of 250 m. 35/720 = 0.05 degrees of resolution vertically and 35/1280 = 0.03 degrees horizontally, so 2 to 3 times the resolution. It also runs at up to 60 fps, or 2 to 3 times as fast. Total data rate: up to 55M pixels/sec. Price << $1k.
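The same back-of-envelope in code for the camera figures; note that the 35-degree FOV is applied to both axes here, matching the arithmetic in the text (a simplification):

```python
# Camera figures from the paragraph above
H_PIX, V_PIX = 1280, 720
FOV_DEG = 35.0
MAX_FPS = 60

v_res_deg = FOV_DEG / V_PIX              # ~0.049 deg/pixel vertically
h_res_deg = FOV_DEG / H_PIX              # ~0.027 deg/pixel horizontally
pixel_rate = H_PIX * V_PIX * MAX_FPS     # ~55.3M pixels/sec at full frame rate

print(f"{v_res_deg:.3f} deg vertical, {h_res_deg:.3f} deg horizontal")
print(f"{pixel_rate / 1e6:.1f}M pixels/sec")
```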

However, the camera doesn't do depth natively. But at $1k per sensor, the total system could...
 
But if Waymo et al. don’t solve vision, then self-driving cars will never happen, because you need to solve vision in order to handle depthless features of the environment. Traffic lights, signs, lane lines, brake lights, turn signals, etc.

I'm not sure why this is so hard to understand. Waymo have solved vision. They solved it to a much greater degree than Tesla, in fact, as their recent demonstration video of recognizing traffic-cop gestures shows.

The point is that if you rely on cameras alone you have to do a lot more work to gather all of the information needed for self-driving.

So the question is always whether lidar gets cheap first or Tesla manages to get its vision-only system up to scratch first. The former seems more likely to me, given the number of companies working on lidar, the progress being made, and Tesla's failures thus far, which suggest it vastly underestimated the difficulty of the task.
 
0.1 degrees at 250 m is 43.6 cm, or about 17 inches.
3 cm depth accuracy: at 60 mph and 25 FPS, the vehicle moves 107 cm, or 3.5 ft, between/during scans.

The speed of light is pretty fast, though, and you get a timestamp for each scan; given an IMU with a very high update rate, you can just shift the point cloud to the same timestamp your perception algorithms are running at. I guess one issue is using the lidar data to train the camera networks, but if you have some frames you can probably extrapolate where moving objects would be at the camera timestamps.
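A minimal sketch of that point-cloud shifting, assuming a static world and constant ego velocity over the scan; a real pipeline would integrate the full IMU pose, and moving objects would still need the extrapolation mentioned above:

```python
def deskew(points, timestamps, t_ref, ego_velocity):
    """Shift each lidar point (x, y, z) to where it would appear in the
    sensor frame at t_ref. Static world assumed: as the ego moves forward,
    earlier-captured points slide backward relative to the sensor."""
    out = []
    for (x, y, z), t in zip(points, timestamps):
        dt = t_ref - t  # seconds between capture and reference time
        out.append((x - dt * ego_velocity[0],
                    y - dt * ego_velocity[1],
                    z - dt * ego_velocity[2]))
    return out

# Toy example: two points from one 25 Hz scan, ego doing 60 mph (~26.8 m/s)
pts = [(10.0, 0.0, 0.0),   # captured at scan start
       (50.0, 0.0, 0.0)]   # captured at scan end
ts = [0.00, 0.04]
ego_v = (26.8, 0.0, 0.0)

out = deskew(pts, ts, t_ref=0.04, ego_velocity=ego_v)
# First point shifts ~1.07 m closer; second is already at t_ref and stays put.
```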
 
The speed of light is pretty fast, though, and you get a timestamp for each scan; given an IMU with a very high update rate, you can just shift the point cloud to the same timestamp your perception algorithms are running at. I guess one issue is using the lidar data to train the camera networks, but if you have some frames you can probably extrapolate where moving objects would be at the camera timestamps.

Yah, light is fast. If the lidar creates all points simultaneously, then skew is not an issue. If it uses a scanning approach (mirror or otherwise), then skew due to scan time can be. It is not unsolvable, just a little weird (to me) in terms of specs.
 
I'm not sure why this is so hard to understand. Waymo have solved vision. They solved it to a much greater degree than Tesla, in fact, as their recent demonstration video of recognizing traffic-cop gestures shows.
They have a fully generalized, worldwide-capable system that operates without the need for pre-mapping? It can handle my dirt road, driveway, and garage?

The point is that if you rely on cameras alone you have to do a lot more work to gather all of the information needed for self-driving.

The data you need to gather is similar, and the effort to gather it is also similar (unless you don't collect multiple data sets to validate repeatability/reliability). The development of the recognition system and its training is more involved, though.

So the question is always whether lidar gets cheap first or Tesla manages to get its vision-only system up to scratch first. The former seems more likely to me, given the number of companies working on lidar, the progress being made, and Tesla's failures thus far, which suggest it vastly underestimated the difficulty of the task.

If someone makes a lidar system first, Tesla doesn't lose. If they make a lidar, get it integrated into a consumer vehicle, collect enough road miles to make regulators happy and enough time in use to validate survivability, and then sell the feature to the public, that would put pressure on Tesla. However, given the other groups' focus on TaS, not consumer dual-use (or private) cars, it may still not be much pressure.

Many companies working in parallel on their own only go as fast as the fastest company.
 
They have a fully generalized, worldwide-capable system that operates without the need for pre-mapping? It can handle my dirt road, driveway, and garage?

And Tesla does? You do realize that NOA is simply better adaptive cruise control and lane keeping?

The data you need to gather is similar, and the effort to gather it is also similar (unless you don't collect multiple data sets to validate repeatability/reliability). The development of the recognition system and its training is more involved, though.

I have already proven this not to be true, but people don't like math.

If someone makes a lidar system first, Tesla doesn't lose. If they make a lidar, get it integrated into a consumer vehicle, collect enough road miles to make regulators happy and enough time in use to validate survivability, and then sell the feature to the public, that would put pressure on Tesla. However, given the other groups' focus on TaS, not consumer dual-use (or private) cars, it may still not be much pressure.
There are already plans in place. The problem is you believe anything Elon says, so having a discussion is pointless. You can't have a discussion with someone who has believed that Level 5 is two years away for the last 5 years. In 2020, when Elon pushes it back for another 2 years, you will totally believe him AGAIN. So it's pointless.

You knock Waymo down now, but even though their system isn't good enough to take the driver out, they do have the most complete system. Lots of systems don't even drive in parking lots, and Tesla has nothing but lane keeping and adaptive cruise control.
 
And Tesla does? You do realize that NOA is simply better adaptive cruise control and lane keeping?
Tesla has nothing to do with the claim:
Waymo have solved vision.

There are already plans in place. The problem is you believe anything Elon says, so having a discussion is pointless. You can't have a discussion with someone who has believed that Level 5 is two years away for the last 5 years. In 2020, when Elon pushes it back for another 2 years, you will totally believe him AGAIN. So it's pointless.

Plans are good, what is the implementation timeline for an OEM to integrate Lidar? Is it longer that instantaneously? If so, that shifts the lidar solved versus Tesla vision solved goal posts.


You knock Waymo down now, but even though their system isn't good enough to take the driver out, they do have the most complete system. Lots of systems don't even drive in parking lots, and Tesla has nothing but lane keeping and adaptive cruise control.

I did not knock Waymo at all. I asked for clarification on the "vision solved" claim, and also about the path from solution to implementation.
 
They have a fully generalized, worldwide-capable system that operates without the need for pre-mapping? It can handle my dirt road, driveway, and garage?

No, no-one has that. But they do have more than Tesla right now, and a very clear and proven path forward. Their tech works, and aside from solving the same corner cases that Tesla will eventually face, the only thing keeping it from consumer vehicles is price. There is a clear path to lower-cost lidar, with multiple companies working on it and making good progress.

Moreover, most efforts toward FSD are using lidar.

So while it's not impossible for Tesla to catch up, if you were playing the odds you would have to say that cheap lidar and proven tech are the more likely bet.
 
Slightly off topic, but all this Lidar talk got me wondering... How loud are the spinning Lidar units we see on top of these cars? Loud enough to be heard in the cabin? I assume the solid state ones mentioned have no moving parts so they would be silent beyond electrical hum.

I've never been able to hear them. Even outside the vehicle, it's little more than a slight whir.
 
I'm not sure why this is so hard to understand.

Don’t be rude.

Waymo have solved vision. They solved it to a much greater degree than Tesla, in fact, as their recent demonstration video of recognizing traffic-cop gestures shows.

A neural net making an accurate classification once doesn’t prove that it makes an accurate classification 99.999%+ of the time, which is the standard for “solving vision” in autonomous driving. If you accurately classify only 99% of red lights, your system will be unsafe to deploy (without a safety driver).
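To make that gap concrete, a back-of-envelope calculation; the 20-lights-per-drive figure is purely illustrative, and independence between encounters is assumed:

```python
def p_at_least_one_miss(per_event_accuracy, n_events):
    """Probability of at least one misclassification over n independent events."""
    return 1.0 - per_event_accuracy ** n_events

LIGHTS_PER_DRIVE = 20  # illustrative commute, not a measured figure

# 99% per-light accuracy compounds to a missed light on roughly 1 in 5 drives;
# 99.999% brings that down to roughly 1 in 5,000 drives.
print(p_at_least_one_miss(0.99, LIGHTS_PER_DRIVE))      # ~0.18
print(p_at_least_one_miss(0.99999, LIGHTS_PER_DRIVE))   # ~0.0002
```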

Last year, with HW2, Autopilot was able to classify cars, pedestrians, driveable roadway, etc. to a high degree of accuracy. But I wouldn’t say this proves Tesla has solved vision, since the devil is in the 0.01% of failure cases, which can’t be seen from a short video.


Vision may or may not be solved by one company or multiple companies. It is hard to guess what is true given that companies don’t release statistics on the accuracy of their vision systems.

But they do have more than Tesla right now, and a very clear and proven path forward.

I don’t think there is a proven path forward for conventional, hand-coded software solving complex robotics problems involving human interaction. There are some proofs of concept for imitation learning and deep reinforcement learning. But not for conventional software.

That’s why I see more promise in Tesla’s production fleet learning approach than in Waymo’s small-scale testing approach. Waymo can’t collect enough data on the state-action pairs of human driving to do imitation learning across all driving subtasks. If anyone can collect enough data, it’s Tesla.

If you think driving is a machine learning problem rather than a conventional software problem, Tesla has the most promising approach. Machine learning requires large datasets; Tesla has access to them, and Waymo doesn’t.
 
Saw on Reddit that these 2 vehicles popped up on TeslaFi.
2 Model S 100D vehicles in CA were updated from 2019.5.2.c21b3a5 to terminal/HW3-mileage. Anyone have any idea what these are? I highly doubt Tesla Employee FSD test vehicles are allowed to be subscribed to TeslaFi.

Someone managed to nab a screenshot before they were removed from the site:

xqDQolm.jpg
 
Saw on Reddit that these 2 vehicles popped up on TeslaFi.
2 Model S 100D vehicles in CA were updated from 2019.5.2.c21b3a5 to terminal/HW3-mileage. Anyone have any idea what these are? I highly doubt Tesla Employee FSD test vehicles are allowed to be subscribed to TeslaFi.

Someone managed to nab a screenshot before they were removed from the site:

xqDQolm.jpg
My guess is that whatever script TeslaFi is using to parse version data from the car got confused. It means something changed in the data format in some unexpected way. Perhaps it's as simple as a different looking version number. So the script returned something weird and that terminal/HW3-mileage string was the result.

So this tells us two things: 1) something in the format changed; and 2) HW3 is mentioned in the car's data. Me, if I were a TeslaFi subscriber I'd go looking to see what those records look like now that TeslaFi has probably fixed its script. Should still have the same time stamps, and other info.
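To illustrate that guess, here's a sketch of how such a parser might fall back; the version pattern and field names are my assumptions, not TeslaFi's actual code:

```python
import re

# Tesla firmware versions seen in this thread look like "2019.5.2.c21b3a5":
# year.week[.patch] plus a short git hash. Hypothetical pattern, not TeslaFi's.
VERSION_RE = re.compile(r"^(\d{4})\.(\d+)(?:\.(\d+))?\.([0-9a-f]{6,10})$")

def parse_version(raw):
    """Return parsed fields, or fall back to the raw string when the format
    is unexpected, which would surface odd strings like
    'terminal/HW3-mileage' verbatim."""
    m = VERSION_RE.match(raw)
    if not m:
        return {"raw": raw, "parsed": False}
    year, week, patch, git_hash = m.groups()
    return {"year": int(year), "week": int(week),
            "patch": int(patch) if patch else None,
            "hash": git_hash, "parsed": True}

print(parse_version("2019.5.2.c21b3a5")["parsed"])      # True
print(parse_version("terminal/HW3-mileage")["parsed"])  # False
```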
 
My guess is that whatever script TeslaFi is using to parse version data from the car got confused. It means something changed in the data format in some unexpected way. Perhaps it's as simple as a different looking version number. So the script returned something weird and that terminal/HW3-mileage string was the result.

So this tells us two things: 1) something in the format changed; and 2) HW3 is mentioned in the car's data. Me, if I were a TeslaFi subscriber I'd go looking to see what those records look like now that TeslaFi has probably fixed its script. Should still have the same time stamps, and other info.

No, this really is a firmware version. We’ve seen ones with dev/ prefixes too that don’t have versions or git hashes. Someone previously explained that “terminal” is the last station before the car gets delivered, so it’s basically factory firmware used pre-delivery.
 
NIO 2020, Lucid Motors 2020, BMW 2021, FCA 2021
What do you mean by "Is it longer that instantaneously?"

Mostly means I'm bad at typo-checking.

The point I was trying to make is that, assuming Tesla's cameras and radar are sufficient, the worst-case rollout for FSD, when it exists, would be an AP computer swap. If HW3 is sufficient, then it is only an OTA update for current Model 3 production.

For anyone else, the cars do not currently have the hardware on them. So, once a system is proven on test vehicles, that hardware needs to be integrated into the vehicle, the manufacturing process, and the supply chain, likely aligned to a model-year change. So that is a longer delay after they solve FSD than what Tesla may have.
 
A neural net making an accurate classification once doesn’t prove that it makes an accurate classification 99.999%+ of the time, which is the standard for “solving vision” in autonomous driving. If you accurately classify only 99% of red lights, your system will be unsafe to deploy (without a safety driver).

Last year, with HW2, Autopilot was able to classify cars, pedestrians, driveable roadway, etc. to a high degree of accuracy. But I wouldn’t say this proves Tesla has solved vision, since the devil is in the 0.01% of failure cases, which can’t be seen from a short video.

High degree? That video is filled with errors and false positives. Plus it's missing traffic lights, traffic signs, road markings, road signs, stop lines, intersection markings, road profile, animals, hazards, general object detection, etc., which are all essential to driving. Insinuating that the current NN has solved vision is ridiculous.