Autonomous Car Progress

(general silliness, not directed at you)

Huh, Tesla needs more pedestrian data?? I wonder where they could collect that? Where on earth could they find a lot of instances of cars and people in close quarters, and how could they get people to identify them while accepting low-speed travel (just in case)? :D

But, this perfectly illustrates the general frustration I have with Tesla.

They have a massive install base, but they barely take advantage of it.

With Smart Summon path planning they rely on OpenStreetMap, which is perfectly fine, but they don't have any mechanism to update those maps. The car doesn't update the maps based on what it sees when it attempts to drive the path; it will take the same path no matter how many times you try it.

If you update OpenStreetMap yourself, then within 72 hours (or so) the car has much better path planning. So updating OpenStreetMap clearly works, and it worked for a lot of us, including me. It's the only way I could get Smart Summon to work halfway decently.

So why didn't Tesla incorporate any way of updating OpenStreetMap themselves? Especially considering the night-and-day difference it makes?
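To make the map dependence concrete, here's a toy sketch (mine, not Tesla's actual planner; all node names are invented) of why a missing aisle in the map data dooms the pre-planned path: the planner can only route over edges the map actually contains, and a single map edit adds the direct route.

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first search over an undirected graph of map nodes."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Sparse map: only the perimeter road is mapped, so the planned
# route loops the long way around the lot.
sparse = [("car", "exit_a"), ("exit_a", "perimeter"), ("perimeter", "you")]
print(shortest_path(sparse, "car", "you"))   # 4-node detour

# After a mapped center aisle is added (e.g. via an OSM edit),
# the same query finds the direct route.
updated = sparse + [("car", "aisle"), ("aisle", "you")]
print(shortest_path(updated, "car", "you"))  # 3-node direct path
```

This is why hand-editing the map changes the pre-planned path within days: the planner isn't inventing routes, it's searching whatever graph the map gives it.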

We can infer from the failure of pedestrian detection that they haven't been using fleet learning to improve their pedestrian detection system; if they had, it would perform better than the typical ADAS cars on the road.

We also know they have problems with maps that they don't seem to be using fleet learning to fix.

An example of this would be the speed limit in various areas. I think it's pretty easy to deduce that a mapped speed limit is wrong when the speed people actually travel is vastly different from what the map says.
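A crude sketch of that deduction (my own toy heuristic, not anything Tesla has described; the tolerance value is invented):

```python
from statistics import median

def flag_suspect_limit(mapped_limit_mph, observed_speeds_mph,
                       tolerance_mph=10):
    """Flag a mapped speed limit as suspect when the typical
    fleet speed at that location diverges from it by more than
    the tolerance. Illustrative heuristic only."""
    typical = median(observed_speeds_mph)
    return abs(typical - mapped_limit_mph) > tolerance_mph

# Map says 25 mph, but the fleet routinely drives ~55: suspect.
print(flag_suspect_limit(25, [52, 55, 57, 54, 56]))  # True
# Map says 55 and the fleet drives ~55: consistent.
print(flag_suspect_limit(55, [52, 55, 57, 54, 56]))  # False
```

A real version would obviously need to account for traffic, weather, and the fact that people speed, but the median-vs-map comparison is the core idea.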

To me it doesn't feel like Tesla is doing the very things that Karpathy talks about.

Now granted, some of them are going to be kind of hard. To collect data from the fleet you have to have a trigger for the car to collect this data. How can you create a trigger for something the car doesn't know how to detect?

You might also be able to collect data at certain levels of uncertainty, like "I'm 50% sure this is a kid." But it's entirely possible you'll never get data from the fleet for exactly what you're looking for. In that case you need a lot of humans reviewing footage, looking for whatever you're trying to improve, where that footage was triggered by other events, like when the car thinks the path forward is fine but the driver slams on the brakes.
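That uncertainty-band idea can be sketched in a few lines (purely illustrative; the band edges are invented, not anything from Tesla):

```python
def should_capture(confidence, low=0.3, high=0.7):
    """Trigger a data snapshot when the detector is unsure.
    Confident yes/no cases are cheap; the ambiguous middle band
    ("I'm 50% sure this is a kid") is what labelers need to see.
    Band edges are invented for illustration."""
    return low <= confidence <= high

print(should_capture(0.95))  # False: confident detection, no upload
print(should_capture(0.50))  # True: ambiguous, worth uploading
print(should_capture(0.02))  # False: a confident miss stays invisible
```

Note the last case: a confident false negative never trips this trigger, which is exactly the gap that forces you back to human review of footage triggered by other events.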

It does look like Tesla is trying to improve their data collection. Recently I've seen (though not in my car) screenshots of Tesla asking the user what caused a disengagement.

The problem with disengagements is that sometimes it's because the car really did something wrong, and other times the driver simply wants to take over. Over 50% of my takeover events are simply because I wanted to take over and was too lazy to bother doing it properly.
 
It's not like you are letting them die; it is not intentional.

Is it more or less immoral to let the 40k die so that you are not responsible for the 20k dying?

It becomes immoral when you kill good drivers in order to save bad drivers.

Which is a huge problem considering the fact that the majority of accidents are caused by bad drivers, and not good drivers.

So at a minimum they have to be better than good drivers by a factor of 2 or more. Especially since good drivers will likely be the first to transition as they tend to be older with more money.
 
It becomes immoral when you kill good drivers in order to save bad drivers.

Which is a huge problem considering the fact that the majority of accidents are caused by bad drivers, and not good drivers.

So at a minimum they have to be better than good drivers by a factor of 2 or more. Especially since good drivers will likely be the first to transition as they tend to be older with more money.
I have experience with good drivers. They drive every day, never a ticket or accident. They are paranoid in a good way. Because of this, I doubt they will accept the tech until it has proven itself without a doubt. The drunks, the tired, etc. will be quick to give it a go.
 
With Smart Summon path planning they rely on OpenStreetMap, which is perfectly fine, but they don't have any mechanism to update those maps. The car doesn't update the maps based on what it sees when it attempts to drive the path; it will take the same path no matter how many times you try it.

If you update OpenStreetMap yourself, then within 72 hours (or so) the car has much better path planning. So updating OpenStreetMap clearly works, and it worked for a lot of us, including me. It's the only way I could get Smart Summon to work halfway decently.

So why didn't Tesla incorporate any way of updating OpenStreetMap themselves? Especially considering the night-and-day difference it makes?

Where is the feedback system for this, though? It would need to come from real drivers, since Summon only knows that you canceled it. They could use vehicle tracking to guess at traffic flow and parking locations, but I would be very wary of automatically pushing that to the general data repository.

I have experience with good drivers. They drive every day, never a ticket or accident. They are paranoid in a good way. Because of this, I doubt they will accept the tech until it has proven itself without a doubt. The drunks, the tired, etc. will be quick to give it a go.

And that will help save the good drivers.
 
So why didn't Tesla incorporate any way of updating OpenStreetMap themselves? Especially considering the night-and-day difference it makes?

Wait, I thought Tesla was the pioneer of map-less driving? But it actually depends heavily on OpenStreetMaps????? Hahahaha. That is ******* hysterical to me!

How can you create a trigger for something the car doesn't know how to detect?

This is a very good observation, and one of the most potent arguments against "fleet learning." False negatives (FNs) are among the hardest problems in the AV space. Fleet learning doesn't help FNs, and in fact causes harm.

If the car doesn't detect an object, it won't send anything to the mothership to be included in the training set. The next time the net is trained, guess what? It will now be penalized if it does detect that object. Neural nets are still pretty stupid: they are taught to detect positive objects, but also, implicitly, to not detect anything else.

The FN issue here actually snowballs with time. Each time the net is retrained, it will be less likely to detect the FNs than it was before.

Guess what everyone else does to fix the FN problem? They use lidar. Lidar has essentially 100% recall of rare and unknown objects. This is the one super-important quality that is not provided by any other sensor. If you do not have at least one sensor with 100% recall of rare and unknown objects, you cannot make a safe AV.

To me it doesn't feel like Tesla is doing the very things that Karpathy talks about.

The things Karpathy talked about wouldn't actually work well in practice. The whole Autonomy Day presentation was a bunch of marketing gimmicks. What did you expect from a pitch to retail investors, many of whom were already fanboys?
 
Elon said high-resolution (3D) maps are a fool's errand. Tesla makes plenty of use of 2D maps.

The arguments that Tesla fans make against maps are:

1) The effort required to make a map is staggering and insurmountable.

2) Construction, police activity, etc. constantly change the state of the world, and no mapmaker could keep up.

Neither of these arguments makes a distinction between 2D and 3D maps.

Let me be clear on this: a map of a parking lot or home driveway, with people manually drawing in the lanes of travel or whatever... that's an HD map, yo.
 
The arguments that Tesla fans make against maps are:

1) The effort required to make a map is staggering and insurmountable.

2) Construction, police activity, etc. constantly change the state of the world, and no mapmaker could keep up.

Neither of these arguments makes a distinction between 2D and 3D maps.

Let me be clear on this: a map of a parking lot or home driveway, with people manually drawing in the lanes of travel or whatever... that's an HD map, yo.

Maps are also used in cruddy ways for NoA, and they are NoA's main point of failure at this point. I wouldn't call the NoA or Summon map use 'HD' maps, though. The maps simply show lane lines; they don't show details of drivable space. Cadillac is using something much closer to HD maps, with laser-scanned roads.

As with everything, the answer is in between. An AV can't rely on a good map, as there will always be situations that don't align with the map data. However, maps will likely always be a useful input to tell the AV the general idea of where it's going, the same way a human uses a map.

I'd say Tesla is too reliant on maps right now, but I suspect their roadmap (see what I did there, lol) is to make maps less and less important as they add sign and road-marking reading. OSM is probably a stopgap so they can recognize revenue while they keep working on a more intelligent way of navigating.

In the last 6 months they added recognition of different vehicle types and traffic cones, so road signs should be only 6-8 years away :) It's crazy to me that I can make a Raspberry Pi read road signs (well, during the day anyway) and AP2+ still hasn't bothered to read anything. I guess they only have so many hands available and it hasn't made the cut yet.
 
Let me be clear on this: a map of a parking lot or home driveway, with people manually drawing in the lanes of travel or whatever... that's an HD map, yo.

Oh come on, you're going to equate a Rand McNally atlas to a lidar map?
Classifying part of your dirt yard as parking and part as not is not the same as an HD map. Tagging an aisle as one-way is not the same as an HD map.

Say you want a car to drive from Detroit to Chicago. With no maps, the car won't even know what direction to head. With low-res maps, the car will know where I-94 is and what roads connect to it, and the car will determine and pick its own lane. With a system that requires high-res maps, it will take the GPS spline refined in its data set and get annoyed if the current conditions do not match its database.
 
Wait, I thought Tesla was the pioneer of map-less driving? But it actually depends heavily on OpenStreetMaps????? Hahahaha. That is ******* hysterical to me!
Humans can't see from one end of a parking lot to another point that is not in direct line of sight, so something with camera views can't either. It makes sense that there is a mapping of the parking lot. Think of walking around a corn maze with just your eyes versus walking around it with the help of your phone and a satellite view looking down on it, with a blue dot showing your location.
 
Ok, champ, tell our viewers what exactly you believe qualifies a map as “HD?”

I'm a champ? Cool! I was really starting to get worried about my place in the universe and whether I was destined to only be a pawn in the game of life.


As to HD vs low res vs none, read the rest of the post you quoted.

Or to reiterate, since it is not the maps so much as the system that needs them:
Cross-country orienteering (roads? where we're going we don't need roads (but we do still need a heading))
Tesla AP (looks like a road going the way I want to, I'll drive down it)
SuperCruise (is this one of the super well mapped roads in my database?)
 
Cross-country orienteering (roads? where we're going we don't need roads (but we do still need a heading))
Tesla AP (looks like a road going the way I want to, I'll drive down it)
SuperCruise (is this one of the super well mapped roads in my database?)

What a fascinating non-answer. I guess you’re not a champ after all. I gave you too much credit. Sad trombone.

I’ll ask again: what specific features do you think make a map “HD”?

What exactly makes it... “super well mapped” vs. just “regular old mapped”?
 
What a fascinating non-answer. I guess you’re not a champ after all. I gave you too much credit. Sad trombone.

I’ll ask again: what specific features do you think make a map “HD”?

Does the car need the lanes, medians, and curbs defined or not? If not, it's low res; if yes, high res.

If it can drive on an unmapped dirt road, that is below low res, but not necessarily helpful for route planning.
 
lanes, medians, and curbs defined

Yes! Lanes are features of an HD map.

The people in this thread are talking about editing OpenStreetMap to add lanes and obstacles and such to make Smart Summon work. To make it work at all, in fact.

After all this bullshit about map-less driving, it turns out that Smart Summon uses an HD map, of the very lowest quality, edited by anyone on the internet.

What could go wrong??
 
Yes! Lanes are features of an HD map.

The people in this thread are talking about editing OpenStreetMap to add lanes and obstacles and such to make Smart Summon work. To make it work at all, in fact.

After all this bullshit about map-less driving, it turns out that Smart Summon uses an HD map, of the very lowest quality, edited by anyone on the internet.

What could go wrong??

Yes, we have to edit openstreetmaps to improve the path.

But we're editing low-resolution 2D maps. When you edit a map within OpenStreetMap, all you're doing is adding the parking lot lanes, since quite a few parking lots won't have them.

There is no adding obstacles or curbs or anything like that.

It's all very low-resolution stuff, and you're typically using the satellite view (I think it's provided by Bing).

The car ONLY uses this to generate the pre-planned path: the path it shows you before it even attempts to drive it. It also doesn't rely on this completely. You can't purposely make the car run into a building just by drawing a lane line into one.

It's just a low-resolution guide that its vision system will override.

An HD map like what GM uses for their Super Cruise system is entirely different. They use lidar to generate a very precise map of a freeway, so the car knows precisely where things like overhead signs are.

The drawback for them is if road construction or some event changes where things are.

Tesla, on the other hand, has no information about this and has to use its sensors to figure out where everything is. In practice that doesn't work so well, which is exactly why the car will sometimes falsely brake under things like overhead signs (not usually, but sometimes it will).
 
Waymo describes their high-resolution maps as accurate to a centimeter. The locations of traffic signs, traffic signals, fire hydrants, curbs, and building structures are all mapped several times in 3-dimensional space. Because of the map, the computer can predict when a lidar signal should return if there is nothing in that 3D region; without the map, the computation is more challenging. Consequently, if the laser signal returns earlier than expected, Waymo knows there is something in that region.
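The expected-return check can be sketched like this (a one-beam simplification of my own; the tolerance value is invented, not Waymo's actual figure):

```python
def unexpected_return(expected_range_m, measured_range_m,
                      tolerance_m=0.05):
    """With a centimeter-accurate prior map, the expected lidar
    return range for each beam is known in advance. A return
    arriving meaningfully earlier than the map predicts implies
    something new occupies that space. Simplified to one beam;
    the tolerance is illustrative."""
    return measured_range_m < expected_range_m - tolerance_m

# Map predicts a building facade 42.0 m away along this beam.
print(unexpected_return(42.0, 41.98))  # False: matches the map
print(unexpected_return(42.0, 17.5))   # True: obstacle in the gap
```

This is why the map helps with exactly the rare-object recall problem discussed earlier: the detector doesn't need to classify the object, only notice that the beam came back early.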
 
Location of traffic signs, traffic signals, fire hydrants, curbs, building structures are all mapped

Agreed, those are also part of an HD map.

But do you see that Tesla now uses maps that have hand-edited lane lines? Their maps now also include traffic lights and overpasses.

It’s a very slippery slope, my friend. Tesla is the map-less company that just keeps adding more and more dependence on maps.

Imagine going back a year in time and telling Tesla fans that “full self-driving” and “automatically drive anywhere connected by land” actually meant they’d soon be hand-editing lane lines in their local parking lots!
 
@mongo @DanCar @SandiaGrunt @scottf200 @S4WRXTTCS

I get the notion of using maps for path-planning being different from HD maps.

However, there is no proof that Tesla is actually using maps only for path-planning, despite their suggestions of this. In practice, our Teslas already today rely on maps in several non-path-planning ways in Autopilot 2+:

1) Matching Autopilot top speed to mapped speed limits (indeed it sounds like this could remain the case even with Automatic city driving)

2) Fleet speed, matching Autopilot speed to fleet data based on current location

3) Traffic light recognition reportedly using map data to know when to react to traffic lights

None of these fit into the “only path-planning” paradigm, and all risk being confused if the reality on the road changes from whatever has been pre-mapped.

There could be more ways maps are used for non-path-planning, given that there are continuous anecdotal reports of Autopilot working better in some areas than others (eg California working surprisingly well compared to RoW).
 
Wait, I thought Tesla was the pioneer of map-less driving? But it actually depends heavily on OpenStreetMaps????? Hahahaha. That is ******* hysterical to me!



This is a very good observation, and one of the most potent arguments against "fleet learning." False negatives (FNs) are among the hardest problems in the AV space. Fleet learning doesn't help FNs, and in fact causes harm.

If the car doesn't detect an object, it won't send anything to the mothership to be included in the training set. The next time the net is trained, guess what? It will now be penalized if it does detect that object. Neural nets are still pretty stupid: they are taught to detect positive objects, but also, implicitly, to not detect anything else.

The FN issue here actually snowballs with time. Each time the net is retrained, it will be less likely to detect the FNs than it was before.

Guess what everyone else does to fix the FN problem? They use lidar. Lidar has essentially 100% recall of rare and unknown objects. This is the one super-important quality that is not provided by any other sensor. If you do not have at least one sensor with 100% recall of rare and unknown objects, you cannot make a safe AV.

I don’t think this characterization is correct. Classification is binary only after thresholding; Tesla could set their snapshot system to be extremely permissive when initially capturing data. Tesla can also learn false negatives, because they effectively have a massive number of test drivers producing interventions when the system generates false negatives. Rare objects can be found by a network that only distinguishes an obstacle from a non-obstacle. All the techniques Tesla is using are only possible with a massive fleet. Everyone else uses lidar because they have no choice; a massive fleet is only possible without lidar.
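The permissive-capture idea can be sketched with two thresholds (both values invented for illustration; this is my toy, not Tesla's actual snapshot system):

```python
def classify_and_maybe_capture(score, act_threshold=0.9,
                               capture_threshold=0.1):
    """Two-threshold sketch: the car only acts on high-confidence
    detections, but uploads a training snapshot for anything
    scoring above a much lower bar, so weak near-misses still
    reach the labelers. Threshold values are invented."""
    act = score >= act_threshold
    capture = (not act) and score >= capture_threshold
    return act, capture

print(classify_and_maybe_capture(0.95))  # (True, False): act on it
print(classify_and_maybe_capture(0.30))  # (False, True): upload it
print(classify_and_maybe_capture(0.02))  # (False, False): still missed
```

The last case shows the residual gap both sides of this argument are circling: even a very permissive capture threshold never sees the detections that score near zero, which is where driver interventions have to fill in.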
 