Welcome to Tesla Motors Club

Tesla autopilot HW3

My point in commenting was that for items like sharp turns maps are sufficient, if they have the data.

I prefer live data to maps. In your case, the map tile does not have the needed data, nor is the system anticipating the turn. With a good fleet-based, self-updating map system, the car would not need signs after the first time any Tesla drove through. However, there is always a first time, and conditions change (construction), so relying on maps is brittle.
I've been through these same turns multiple times since owning my car. Are you suggesting that it should be updating the tile data after just one car going through the turn one time?
 
I don't think we need sign recognition so much as a better way to localize the car with respect to the road, so something less than HD maps but more than regular maps; map data can only help so much if the car doesn't realize it's about to take an exit or an on-ramp that may have different geometry from the rest of the road.

(I assume this will happen outside freeways in order to enable NoA for city driving)
 
Are you saying Elon lied on the 22nd?
What I got from Elon's comments is that they are not planning to use HD maps as a primary input to the driving software (e.g., path planning or determining drivable area). Obviously, they will continue to use maps (HD or otherwise) for navigation/route planning and, until road sign reading gets more robust, they can use maps to get speed limit info, identify carpool lanes, one-way streets, dead ends, train crossings, etc. I imagine they will also use maps to "see" around blind curves/corners to know how much to slow down (of course, roads should be designed to avoid this, but in the real world we are not always so lucky). Maps can also include "real-time" traffic data which can help with route planning, but might also be useful to change lanes to avoid a blockage that you can't yet see.

Elon's point (which I agree with) is that you cannot rely on maps (no matter how precise) as primary input, because they are always out of date. You have to solve vision to be able to handle all of the exceptions and dynamic situations that occur. We know vision is both sufficient and necessary, because that is what human drivers use (plus a bit of hearing, which is not essential*), and because roads are designed for sighted humans. Lidar and radar cannot read signs. (*Plus, there is a microphone in the car, which could theoretically be used to detect sirens, etc.) If you "solve" vision, you don't need anything else to be at least as good as humans. It remains to be seen how hard this problem is to solve (or what exactly that means). I think recent advancements in software (Neural Nets/Machine Learning) and hardware (faster, cheaper and more domain-specific CPU/GPU/ASIC) are helping to accelerate progress, and we are close to an inflection point where a lot of progress can be made in a short time. Clearly, Elon is betting on this.

Then there is the question of sensor redundancy. I think there are at least two facets to this: 1) handling hardware failures or blocked sensors and 2) providing a supplement or backup for limited capabilities.

As far as 1) goes, Tesla has chosen to have multiple cameras covering the frontward view (the main direction of travel) with not much overlap on the sides and back. Arguments for more cameras: they are cheap, and any one of them can be blocked by dirt/debris/precipitation. Arguments against: cameras don't fail often, so redundancy isn't needed. I am much more worried about occluded cameras than failures. Some partial remedy might come from hydrophobic/oleophobic coatings, shielding, heating elements, and/or wipers. Coatings and shielding could even be added as aftermarket products.

As for 2), obviously autonomous cars will need to drive in some level of rain, snow and fog. Cameras and lidar are not great in these conditions due to reliance on visible light (or infrared). Lidar also has fairly poor temporal resolution, although spatial resolution is good. Radar can see right through precipitation, and has good temporal resolution, but spatial resolution is not great. Elon said in the Autonomy Day presentation that he preferred radar since it uses a different frequency spectrum (i.e., not visible light) and therefore covers cases lidar can't. Tesla has decided a single front-facing radar is good enough. What I got from the presentation is that moving forward, they are planning to use vision as the main sensor input and radar as a supplement in cases where cameras struggle (inclement weather, and maybe poor lighting). They also talked about using radar to help train (as a ground truth cross check) new software to use only the cameras to accurately supply distance information and build a 3D point cloud view of the world (akin to lidar). This seems promising, but is not a done deal. Again, Elon is betting on this.

I think they are deliberately avoiding having a system with multiple sensor types as primary inputs (aka sensor fusion). If you have conflicting inputs from multiple sensor types, deciding which one is right can be tricky. Maybe this will be solvable with software 2.0 (NN/ML), but I notice that Mobileye's fully autonomous prototype car is using only cameras.
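To illustrate why conflicting inputs are tricky, here is a toy sketch (not any manufacturer's actual method) of inverse-variance fusion of a camera range estimate and a radar range estimate; all names and noise figures are hypothetical:

```python
import math

def fuse_ranges(cam_m, cam_sigma, radar_m, radar_sigma, gate_sigmas=3.0):
    """Inverse-variance fusion of two range estimates (meters).

    Returns (fused_range, fused_sigma, agree). If the readings differ by
    more than `gate_sigmas` combined standard deviations, they are flagged
    as conflicting -- and deciding which sensor to trust is the hard part.
    """
    combined_sigma = math.hypot(cam_sigma, radar_sigma)
    agree = abs(cam_m - radar_m) <= gate_sigmas * combined_sigma
    w_cam = 1.0 / cam_sigma**2
    w_radar = 1.0 / radar_sigma**2
    fused = (w_cam * cam_m + w_radar * radar_m) / (w_cam + w_radar)
    fused_sigma = math.sqrt(1.0 / (w_cam + w_radar))
    return fused, fused_sigma, agree

# Agreeing sensors: fusion tightens the estimate (camera noisier than radar).
r, s, ok = fuse_ranges(50.0, 2.0, 49.0, 1.0)
# Conflicting sensors: the math still averages, but `bad` is False and the
# system must decide which input (if either) to believe.
r2, s2, bad = fuse_ranges(50.0, 2.0, 20.0, 1.0)
```

When the sensors agree, fusion gives a tighter estimate than either alone; when they conflict, the averaged number is meaningless, which is exactly the arbitration problem described above.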

TL;DR Elon's fundamental point is that vision is both essential and sufficient to solve FSD. Thus, focusing on other sensor types (lidar) and inputs (HD maps) for the driving task is just a distraction or a crutch to help overcome (relatively?) short-term software limitations.

I think his overall reasoning is sound. Of course, the devil is in the details, so there are lots of potential stumbling blocks.
  1. The FSD computer (HW3) plus Karpathy's neural nets may not be good enough, or may take too long to get there. Predicting software development progress (especially in a rapidly changing area like machine learning) is hard, and Elon has a poor track record estimating timelines. In the next 2-3 months, we should start to see HW3-specific FSD features rolling out. The pace and quality of that rollout will give a good indication of how well they are progressing. My gut feeling is that, since the hardware is well designed and Karpathy is a smart and well-respected domain expert, we will indeed see some impressive progress shipping to actual customer cars in that time frame.
    If progress is too slow, they may lose mind and market share to the HD maps and/or lidar proponents, who will continue to make progress for some time. Tesla has to make it through the short term in order to survive long term as a company.
  2. There could be a breakthrough in solid-state lidar making it cheap enough to use widely and soon in production cars. This will help some developers/manufacturers stay (or jump) ahead of Tesla, at least in the short term. I am having a hard time not comparing this supposedly imminent breakthrough to the weekly announcements of battery breakthroughs that rarely pan out, but we shall see.
  3. Another car manufacturer could start taking FSD software development seriously. I don't think they are doing so now, even though some want you to think they are. Ultimately, autonomous cars mean the world will need fewer cars, so the manufacturers have a strong disincentive to make that happen quickly. Instead, they will continue to be conservative (slow) in their approach citing safety, the need for more testing, and maybe even (perish the thought) regulation(!). Take a look at how the automakers are dragging their feet incorporating Mobileye's newer chips and software, or geofencing (Super)Cruise, or restarting with new alliances (Germans), or doing very little (Japanese/Koreans, well maybe not Nissan). A couple of them will get antsy and forge ahead. Several will not survive. New companies will probably emerge (maybe Chinese automakers partnering with US software).
  4. Tesla could fail to manage the transition from assisted driving to FSD carefully. This is tricky. People will start making too many assumptions about what the car can do. Depending on the performance of the software and the design of the user interface, it might be hard for drivers to intervene safely when the car makes mistakes, which it will. A few high-profile fatalities (children, celebrities) could cause a huge backlash from regulators and/or the public against Tesla or FSD in general. Responses will probably (certainly) not be rational.
  5. Tesla could just not make it as a company for other reasons. The FUDsters could still win. It could be something with the SEC or battery fires or too many "Autopilot crashes". I think this is a lot less likely to happen now than 3-4 years ago, but there are many vested interests betting against them.
Autonomous vehicles will be an extinction level event in the transportation industry. The dinosaurs will die off, and the auto sapiens will emerge. Elon and Tesla will probably stumble along the way, but I'm betting on them stumbling forward when they do.
 
My AP1 slows nicely before tight turns based on map info.

There is a specific route that I take each and every workday (going to work - funny that), where on AP the car will slow down AFTER the turn. It has a 35 MPH turn sign, and just prior to the turn the speed limit is 55 MPH. The car will enter the turn at full speed (55 MPH) and only then start to slow down.
I would love to know how to get this changed.
 
There is a specific route that I take each and every workday (going to work - funny that), where on AP the car will slow down AFTER the turn. It has a 35 MPH turn sign, and just prior to the turn the speed limit is 55 MPH. The car will enter the turn at full speed (55 MPH) and only then start to slow down.
I would love to know how to get this changed.
This, to me, is the classic example of why we want the car to read road signs. The 35 MPH turn sign isn't a speed limit sign, so it's not in the map database to trigger slowing down. If the car could read that sign, it would start slowing well before the turn and then resume normal speed after exiting it. Right now it slows down after the turn because it was too slow to figure out that it was going too fast.
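For a back-of-envelope sense of why the sign needs to be read well before the turn, basic kinematics (d = (v0^2 - v1^2) / 2a) gives the distance needed to shed 55 -> 35 MPH; the 2 m/s^2 comfort deceleration here is my assumption, not a Tesla figure:

```python
# Back-of-envelope: distance needed to slow from 55 to 35 MPH at a
# comfortable deceleration (~2 m/s^2 -- an assumed value, not Tesla's).
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def braking_distance_m(v_from_mph, v_to_mph, decel_ms2=2.0):
    """Kinematics: d = (v0^2 - v1^2) / (2 * a)."""
    v0 = v_from_mph * MPH_TO_MS
    v1 = v_to_mph * MPH_TO_MS
    return (v0**2 - v1**2) / (2.0 * decel_ms2)

d = braking_distance_m(55, 35)   # roughly 90 m
```

Roughly 90 m, which is about 3.7 seconds of warning at 55 MPH, so a system that only reacts once it is already in the curve has missed the window entirely.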
 
I think they are deliberately avoiding having a system with multiple sensor types as primary inputs (aka sensor fusion). If you have conflicting inputs from multiple sensor types, deciding which one is right can be tricky. Maybe this will be solvable with software 2.0 (NN/ML), but I notice that Mobileye's fully autonomous prototype car is using only cameras.
Well yes, Mobileye's entire "claim to fame" when they started out was that they were able to do ADAS with just a cheap camera, which was later extended to multiple cameras. Quite possibly this is what made Elon, with Tesla being a Mobileye customer, believe in the approach. However, also note this passage in the article you linked:

"That dozen cameras, plus of course GPS and a couple decades' worth of learning from the company's various imaging-based driver assistance systems, is enough to make the car drive itself.
Except that it isn't. At least, it won't be when it comes to delivering the kind of on-road redundancy that Mobileye's engineers demand. And that's why Mobileye is now working on a lidar and radar-based solution."

As Lex Fridman said in a recent lecture, vision-only systems rely on the driver as failsafe, while vision+lidar systems use the lidar (and potentially HD maps) as failsafe. If true, this would imply that the vision-only system would never go beyond level 2 autonomy.
 
As Lex Fridman said in a recent lecture, vision-only systems rely on the driver as failsafe, while vision+lidar systems use the lidar (and potentially HD maps) as failsafe. If true, this would imply that the vision-only system would never go beyond level 2 autonomy.

I think that's why Tesla was pointing out the blue and green sides, and the camera overlaps. Tesla seems to believe they can safely operate the car with either half of the camera network, driven by either of the chips on their independent power systems.
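As a purely hypothetical sketch of that two-sided redundancy idea (Tesla has not published how its two FSD-computer nodes actually arbitrate), a supervisor could compare the plans computed independently by each half and degrade gracefully on disagreement:

```python
def arbitrate(plan_a, plan_b, tolerance=0.5):
    """Toy arbitration between two independently computed steering plans
    (each a steering angle in degrees, or None if that side has failed).
    Illustrative only -- the function name, tolerance, and fallback policy
    are all assumptions, not Tesla's published design.
    """
    if plan_a is None and plan_b is None:
        return ("safe_stop", None)          # both sides down
    if plan_a is None or plan_b is None:
        return ("degraded", plan_a if plan_a is not None else plan_b)
    if abs(plan_a - plan_b) <= tolerance:
        return ("nominal", (plan_a + plan_b) / 2.0)
    return ("safe_stop", None)              # disagreement: don't guess
```

Note that, as the next reply points out, this only guards against one side failing or the two sides diverging; if both halves share the same neural-net flaw, they will agree on the same wrong answer.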
 
Are you saying Elon lied on the 22nd?
Well, they discontinued autopilot maps v2 used up to 19.4.x and replaced it with autopilot maps v3 introduced in 19.8.x

You decide if he lied or was just selective about what he said.

If that's their sole angle of attack on this issue then, more often than not, they are going to be out of date before they even update their database
last I checked road curvature changes pretty rarely. same for bridges/overpasses and the like.

Yes, speed limits go out of date, but it's the road curvature that's important for determining safe speed, and that's not going out of date as much as you imply.

Could they collect more data? Eventually they could, I guess. And this is not a replacement for realtime speed limit recognition, but for the purpose you requested - namely "see if it's a sharp turn to drop speed" - their solution is clearly superior.

A map will not tell you if an accident just happened
Ever heard of... I dunno, Waze? ;)

Real time (sign reading) data is better
it's not. The sign does not tell you the road curvature or a safe speed.
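That point, that curvature rather than the sign is what determines a safe speed, is just the lateral-acceleration limit v = sqrt(a_lat * R). A minimal sketch, with an assumed 3 m/s^2 comfort limit (not any manufacturer's actual tuning):

```python
import math

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def safe_curve_speed_mph(radius_m, max_lat_accel_ms2=3.0):
    """v = sqrt(a_lat * R): the speed at which lateral acceleration on a
    curve of the given radius reaches the chosen comfort limit. The
    3 m/s^2 default is an assumed comfort value, not a Tesla spec.
    """
    v_ms = math.sqrt(max_lat_accel_ms2 * radius_m)
    return v_ms / MPH_TO_MS
```

With map-derived geometry, a 100 m radius curve tolerates roughly 39 MPH at that limit, while a 40 m radius curve only allows about 24-25 MPH, regardless of what any advisory sign happens to say.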
 
I think that's why Tesla was pointing out the blue and green sides, and the camera overlaps. Tesla seems to believe they can safely operate the car with either half of the camera network, driven by either of the chips on their independent power systems.
That provides redundancy against some types of failures in the compute module, but is not a failsafe for failing or misdetecting sensors or flaws in the vision neural networks themselves.
 
Well, they discontinued autopilot maps v2 used up to 19.4.x and replaced it with autopilot maps v3 introduced in 19.8.x

You decide if he lied or was just selective about what he said.


last I checked road curvature changes pretty rarely. same for bridges/overpasses and the like.

Yes, speed limits go out of date, but it's the road curvature that's important for determining safe speed, and that's not going out of date as much as you imply.

Could they collect more data? Eventually they could, I guess. And this is not a replacement for realtime speed limit recognition, but for the purpose you requested - namely "see if it's a sharp turn to drop speed" - their solution is clearly superior.


Ever heard of... I dunno, Waze? ;)


it's not. The sign does not tell you the road curvature or a safe speed.
If the curve as displayed on the map is enough to tell the car the right speed to attack the curve, then why aren't they already doing this? GPS maps have been around for a very long time. Road signs do a lot more than tell the driver how fast to travel through a curve simply based on the sharpness of the curve.

And given the option I'd always rather multiple data points instead of one.
 
Hmmm... Who is going to report the accident? You after hitting the wrecked car on the road? :rolleyes:
the first driver (loosely used term) that sees it will report it.

Not to mention I've never heard Waze can direct you to steer through an accident. Remember we are talking about no driver in the car.
The driver should be well-enough aware of the surroundings to navigate the roadways. if they are not - they are unfit to drive and their driving license (loosely used term) should be revoked until improvements are made.
Remember that we cannot be talking about no drivers in the cars because there must be a driver to actually control the car, whatever form that driver takes. Uncontrolled cars are generally a bad idea.

If the curve as displayed on the map is enough to tell the car the right speed to attack the curve then why aren't they already doing this?
they do.

And given the option I'd always rather multiple data points instead of one.
yeah, this is a good idea of course.
 
the first driver (loosely used term) that sees it will report it.


The driver should be well-enough aware of the surroundings to navigate the roadways. if they are not - they are unfit to drive and their driving license (loosely used term) should be revoked until improvements are made.
Remember that we cannot be talking about no drivers in the cars because there must be a driver to actually control the car, whatever form that driver takes. Uncontrolled cars are generally a bad idea.


they do.


yeah, this is a good idea of course.
No. Actually they do not. I could video about 15 examples for you, but YT is already littered with them, so check there first.
 
Just to double check:

It is now May 2019 and there are NO features out there that require the FSD package (i.e., above and beyond EAP), right? So nothing in August 2018, just like we got nothing in July 2017, the first FSD feature promise from Musk.

It actually seems plausible that many of the AP2 cars will hit three years old before the first FSD feature reaches them.
What we actually do have is by far the best ADAS on the planet.

Weird that you don’t follow autonomy enough to know that.

Are you new here, or just don’t read much?
 
What we actually do have is by far the best ADAS on the planet.
Are there statistics to back that up? I'm not convinced that Tesla's system has a better safety record than Mobileye's system.
Are you new here, or just don’t read much?
Anyway, most people here are more interested in level 3-5 autonomous vehicles which no one has deployed yet.
 