HW2.5 capabilities

IMO, "HD" maps are a crutch for propping up for poor software, and a nice plump revenue stream for those who generate & maintain them.

Poor is probably an overstatement here. I would assume even great software with current sensors (ones that can go into volume products, anyway) will require HD maps for a long time to come to reach Level 5 etc. at a sufficient reliability level.

I think calling HD maps "a crutch for propping up our current and near-term realistic technology limitations" would be a more accurate way of putting this.
 
If road conditions were static then HD maps would be great, but they aren't. When driving you can't rely on data that was generated months, days or even hours ago. Roads change, surroundings change. L5 will require all fine-level road data to be sensed in real time, not based on a historic HD map.
 
If road conditions were static then HD maps would be great, but they aren't. When driving you can't rely on data that was generated months, days or even hours ago. Roads change, surroundings change. L5 will require all fine-level road data to be sensed in real time, not based on a historic HD map.

Then again, isn't this based a little on the old fallacy that redundancy doesn't help if you don't know who is right? Obviously redundancy would have a priority-list, based on conditions. E.g. radar and lidar and maps and vision constantly sanity-checking each other - the more data you have pointing the same direction, the more certainly you can rely on it. If there is a discrepancy, then it comes down to good algorithms or good deep learning to decide how that information is acted upon exactly...
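For what it's worth, here's a minimal sketch of the kind of condition-weighted cross-checking being described. Everything in it - the sensor names, the weights, the agreement threshold - is invented for illustration, not anything from a real AP2/AP2.5 stack:

```python
# Illustrative sketch only: priority-weighted sensor cross-checking.
# Sensor names, weights and thresholds are assumptions for this example.

def fuse_lane_offset(estimates, weights, agreement_tol=0.10):
    """Combine per-sensor lane-offset estimates (metres) into one value
    plus a confidence score based on how many sources agree."""
    total_w = sum(weights[s] for s in estimates)
    fused = sum(estimates[s] * weights[s] for s in estimates) / total_w

    # The more (and the more trusted) sources that land near the fused
    # value, the more certainly we can rely on it.
    agreeing = [s for s, v in estimates.items() if abs(v - fused) <= agreement_tol]
    confidence = sum(weights[s] for s in agreeing) / total_w
    return fused, confidence


# The weights are the "priority list, based on conditions": e.g. down-weight
# vision at night, down-weight the map in a marked construction zone.
offset, conf = fuse_lane_offset(
    {"vision": 0.12, "radar": 0.18, "hd_map": 0.10},
    {"vision": 0.5, "radar": 0.2, "hd_map": 0.3},
)
if conf < 0.6:
    print("sources disagree - fall back to the most conservative behaviour")
```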
 
Again, you seem to be using a completely custom definition of HD map which is not in step with what everyone else is using. An HD map is simply a map with better accuracy than your typical GPS maps. It does not have to be 3cm.

I was specifically responding to @mrkisskiss who was claiming 3cm localization with 10kB/km map size. Apparently the claim is actually 3-5cm lateral (in other words, 5cm), 10cm longitudinal... further discussion below

They claim new GPS chips in Android phones would give 30cm accuracy.

Useless in urban environments, degraded when skies are overcast, probably degraded when operating at vehicle speeds, and anyway that 30cm that you get in goldilocks conditions is an order of magnitude off for lane localization. But vehicle-based GPS is already way better than phone GPS, at least in good conditions (better antennas, consistent orientation with respect to ground and heading).
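Rough numbers behind the "order of magnitude off" point (the lane and car widths here are just typical figures I'm assuming, not measurements):

```python
# Back-of-the-envelope only; all figures are assumed typical values.
lane_width = 3.7    # metres, common highway lane
car_width  = 1.9    # metres, roughly a large sedan without mirrors

margin_per_side = (lane_width - car_width) / 2   # ~0.9 m to the lane line
phone_gps_error = 0.30                           # the quoted best-case figure
vision_claim    = 0.03                           # the 3 cm localization being claimed

print(margin_per_side)                 # 0.9
print(phone_gps_error / vision_claim)  # 10.0 - the order-of-magnitude gap
```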


They're getting ~3-5 cm lateral localisation, and at least 10cm longitudinal, from vision only. 10cm is with sparse landmarks every 200m. In a more common, denser environment you get landmarks every 20m or so, which can improve accuracy further.

Add radar into the mix and you may get even better still, but really... it's already more than good enough to drive you safely through mind-bogglingly complex junctions - ones you'd struggle with as a human - in pretty much any weather. No need for GPS, other than basic positioning to optimise the process.

If they only have landmarks every 200m, 3-5cm lateral/10cm longitudinal is aspirational and will only be achieved in goldilocks conditions (at least in the next couple of years). Real environments are punishing to systems like this. You could never build an L4 system that relied on this, and even an L3 system would need to have a very solid backup plan.

Also, going around a tight corner with +/- 10cm longitudinal accuracy is pretty iffy. In some cases you will be in the adjacent lane, or scraping the guardrail.


On the subject of radar: lol. @verygreen, what's the bandwidth between the radar and the CPU/GPU in HW2/2.5? (Trying desperately to talk about at least one on-topic subject... and I'm actually really curious about this...)

The term "HD map" isn't yet well defined. It runs the gamut from a human-readable 'slightly better than normal' map to centimetre accurate laser-scanned maps from Google and HERE, to slightly less accurate guide-rail ADAS maps like Mapbox Drive. It's a wide term at the moment!

That much is quite clear -- which is why I was specifically talking about the claim for vision-only maps at 10kB/km achieving 3cm localization without GPS (in anything other than goldilocks conditions). I'll believe it when I see it.

Tesla has been very, very quiet on the map front. I'm hopeful that they've been hard at work here, and are ready to deploy in some areas soon. When they do, our cars will seemingly get some new superpowers overnight - regardless of whether they're HW2 or HW2.5 :)

I think maps are going to improve AP2 systems (and probably AP1) substantially. But there are still substantial challenges to getting to an L3 system using this hardware. Maps aren't the hardest part of L3. But better maps will make for a much better, smoother L2 system that doesn't try to murder you once an hour even in goldilocks conditions.
 
Useless in urban environments, degraded when skies are overcast, probably degraded when operating at vehicle speeds, and anyway that 30cm that you get in goldilocks conditions is an order of magnitude off for lane localization. But vehicle-based GPS is already way better than phone GPS, at least in good conditions (better antennas, consistent orientation with respect to ground and heading).
While I agree in principle with what you say, I still feel obliged to say that you can always guarantee that if anyone says anything positive or optimistic on these forums, it will be shat on by someone for some reason.

GPS chips are getting more accurate, that's all I was pointing out. I'll go back to checking this thread once a week like I always do instead.
 
Then again, isn't this based a little on the old fallacy that redundancy doesn't help if you don't know who is right? Obviously redundancy would have a priority-list, based on conditions. E.g. radar and lidar and maps and vision constantly sanity-checking each other - the more data you have pointing the same direction, the more certainly you can rely on it. If there is a discrepancy, then it comes down to good algorithms or good deep learning to decide how that information is acted upon exactly...

Yes. If all sources of information (vision, lidar, radar, HD map) show the same result, then it is very likely true. So the benefit of HD maps during road work or an accident would be to alert the car that something has changed recently. This could e.g. cause the car to slow down even if it doesn't see any particular reason to do so. But the car still needs to be able to drive even when it notices that the map is not up to date.
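As a toy illustration of that "map as a sanity check" idea - the thresholds and caution factors are made up, and a real system would compare far richer features than scalar positions:

```python
# Toy sketch: compare what the HD map says should be there with what the
# car currently perceives, and derive a caution factor for planned speed.

def speed_caution_factor(map_landmarks, perceived_landmarks, match_tol=0.5):
    """Landmarks here are just positions along the road (metres); the
    fraction of map landmarks the car can still confirm drives the factor."""
    matched = sum(
        1 for m in map_landmarks
        if any(abs(m - p) <= match_tol for p in perceived_landmarks)
    )
    agreement = matched / max(len(map_landmarks), 1)

    if agreement > 0.9:
        return 1.0   # map confirmed: drive normally
    if agreement > 0.6:
        return 0.8   # something has changed recently: ease off
    return 0.5       # map clearly stale: slow down, trust live sensors only
```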
 
Yes. If all sources of information (vision, lidar, radar, HD map) show the same result, then it is very likely true. So the benefit of HD maps during road work or an accident would be to alert the car that something has changed recently. This could e.g. cause the car to slow down even if it doesn't see any particular reason to do so. But the car still needs to be able to drive even when it notices that the map is not up to date.

The other thing is, if the map is a living document developed by the cars themselves, only the first few cars will need to sort out what's going on.

With a few million cars on the road contributing to the same map, the odds are high that by the time you reach the construction zone, a few prior cars saw the discrepancy, figured out the right answer, and updated the map to match.

This can even work as a level three/four solution when the software isn't developed enough to correctly sort out the problem areas - the first car to come across it notices that the map and sensor inputs don't match, and tells the driver "hey, I've got a problem" and lets him figure it out - and watches the solution. The next few cars do the same thing. If all of the drivers solve it the same way, the map is updated to their solution and future cars handle it without help...
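A toy version of that crowd-sourced consensus might look like the following (the map schema and the "three agreeing drivers" rule are purely illustrative assumptions):

```python
# Illustrative only: promote a driver-observed resolution into the shared
# map once enough cars have resolved the same discrepancy the same way.
from collections import Counter

def maybe_update_map(segment_id, observed_resolutions, map_db, min_agreeing=3):
    """observed_resolutions: how each car (or its driver) actually handled
    the segment where map and sensors disagreed, e.g. a path signature."""
    counts = Counter(observed_resolutions)
    best, votes = counts.most_common(1)[0]
    if votes >= min_agreeing:
        map_db[segment_id] = best     # later cars handle it without help
        return True
    return False                      # keep flagging the segment as uncertain

map_db = {}
maybe_update_map("I-80_mile_42",
                 ["shift_left", "shift_left", "shift_left", "straight"],
                 map_db)
```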
 
For those claiming you need lidar to win:
comma ai on Twitter

Oh and depth perception:
comma ai on Twitter

Gotta admit, I really love comma.ai giving us a view behind the scenes. I'm probably sooner or later going to buy their Panda for my S and help them out with data.
Eh... what's your point? The 3D lidar image just needs some procedural noise reduction, while the software needs training to even perceive depth at all with the camera image. In your example the lidar is vastly superior... as usual, btw.
 
@BigD0g

I too love comma ai giving us a peek behind the scenes, because it is probably relevant to what Tesla is doing.

However, I do not understand the lidar hate. Nobody ever suggested lidar would replace vision or even radar.

Why would sensor fusion be worse than no fusion? I'm not buying that...

That doesn't mean vision only can't work, of course it can.
 
@BigD0g

I too love comma ai giving us a peek behind the scenes, because it is probably relevant to what Tesla is doing.

However, I do not understand the lidar hate. Nobody ever suggested lidar would replace vision or even radar.

Why would sensor fusion be worse than no fusion? I'm not buying that...

That doesn't mean vision only can't work, of course it can.

I'm all for lidar, but cost is a real concern, as is determining which is the primary sensor. After all, too many cooks...
 
@BigD0g

I too love comma ai giving us a peek behind the scenes, because it is probably relevant to what Tesla is doing.

However, I do not understand the lidar hate. Nobody ever suggested lidar would replace vision or even radar.

Why would sensor fusion be worse than no fusion? I'm not buying that...

That doesn't mean vision only can't work, of course it can.

The only reason I can see for sensor fusion actually being worse is if it creates such a large collection of data that finding the important parts of the data becomes difficult - not really likely under the circumstances, since the whole point of adding LIDAR is to reduce the amount of image processing needed.

The question is whether the cost of LIDAR is justified by the degree to which it simplifies the processing and improves the car's understanding of the environment.

If the flash LIDAR people succeed in driving the price way down, solid-state LIDAR may become an obvious addition to cars simply to save money, even though a pure vision system can work.
 
because they are totally redoing everything and see no sense in fixing the "good enough" model for now

This. New management probably came in and said, "Sorry Elon, but the previous folks sold you a bill of goods. We have to start over." So they stabilized what they could on the piece of *sugar* and now we won't see much from this team until they rebuild everything.

Wake me up when it's summer 2018.
 
For those claiming you need lidar to win:
comma ai on Twitter

Oh and depth perception:
comma ai on Twitter

Gotta admit, I really love comma.ai giving us a view behind the scenes. I'm probably sooner or later going to buy their Panda for my S and help them out with data.

Guys like George and Musk are still working on sensing while Mobileye has already called pencils down!