
Elon: "Feature complete for full self driving this year"

I’m assuming these are 3D maps (or 3D models) that include lane lines and traffic barriers/road dividers?

No, the ones I observed are nothing like that. They are more like the ADAS maps we discussed before, just a bit more detailed.

Hmm... That’s interesting. Do you mind elaborating?

What you once again fail to realize, @strangecosmos, is this:

1) You can’t generate automated mapping without a reliable vision engine (or at least one you can then deploy to consumers)
2) You can’t generate mapping on barriers without a vision engine that actually recognizes barriers and barrier types

This is why MobilEye is capable of doing both on EyeQ4: they have a reliable vision engine, and one that recognizes a multitude of barrier types.

Tesla has a semi-reliable vision engine for Level 2 lane keeping and vehicle detection in its consumer fleet, but one that lacks support for the wide range of barrier-type detection that would be necessary. That is no way to do mapping to the extent you mention. It is probably sufficient to generate some level of lane data, as said, though even there the future use of that data is a big question mark, since relying on it is iffy.

The thing is: Tesla probably has all sorts of features in their labs but they are hampered by the basic reliability issues in their deployment. Without a reliable base it is very hard to deploy anything beyond stuff that is within the immediate control of the driver and/or based on those more reliable bits of the system.

If you had a Tesla of your own, you would know what I am talking about just by looking at the IC and what the cars do there. If the system actually reacted to the cars on the sides or behind you based on vision, every drive would end up in a disaster...

@verygreen is not doing any favors here by making it sound, earlier in this thread, like Tesla’s mapping is more than it is. I am actually once again surprised by how wrong I think he is. The AP2 HW3 retrofit question is another such disagreement.
 
2) You can’t generate mapping on barriers without a vision engine that actually recognizes barriers and barrier types

That is a good point, although automated mapping from production cars isn’t the only way to acquire an HD map of U.S. freeways that includes barriers.

@verygreen is not doing any favors here by making it sound, earlier in this thread, like Tesla’s mapping is more than it is. I am actually once again surprised by how wrong I think he is.

Do you have findings from your own hacking you would like to share? What is the source of your knowledge about Tesla’s maps?
 
I’m assuming these are 3D maps (or 3D models) that include lane lines and traffic barriers/road dividers?

They are not, and this is what @electronblue is alluding to. It's a GPS map, not an HD map.

Watched a few minutes starting at 22:00. Didn’t hear where Amnon says all trajectories driven by all production cars are uploaded. Do you have an exact timecode, down to the second?

I wasn't talking about driving trajectories in reference to that video. I said:

Yes, they record road users and static features; they use part of the data to create the REM Map, and then use insights from the data to provide information for smart cities that covers road users (pedestrians, cars, bikes and cyclists) and their behaviors.

All of this is done automatically, both the REM Map creation and the smart-cities info.
This is then plugged into the HERE map, which is owned by a consortium of German automakers (BMW, Daimler and Audi).

Here is one clip where Amnon talks about it.

22:00

This isn't the first time Amnon has talked about it. The REM Map's purpose wasn't just to feed self-driving cars; it was to be used for much more than that.



I can’t find any language in the patent that says all trajectories of all production cars are uploaded. Can you please quote the exact part of the patent that says this?

"The navigation information may include a trajectory from each of the plurality of vehicles as each vehicle travels over the common road segment."

"Collection of data and generation of sparse map 800 is covered in greater detail below, for example, with respect to FIG. 19. In general, however, sparse map 800 may be generated based on data collected from one or more vehicles as they travel along roadways. For example, using sensors aboard the one or more vehicles (e.g., cameras, speedometers, GPS, accelerometers, etc.), the trajectories that the one or more vehicles travel along a roadway may be recorded, and the polynomial representation of a preferred trajectory for vehicles making subsequent trips along the roadway may be determined based on the collected trajectories travelled by the one or more vehicles. Similarly, data collected by the one or more vehicles may aid in identifying potential landmarks along a particular roadway. Data collected from traversing vehicles may also be used to identify road profile information, such as road width profiles, road roughness profiles, traffic line spacing profiles, road conditions, etc. Using the collected information, sparse map 800 may be generated and distributed (e.g., for local storage or via on-the-fly data transmission) for use in navigating one or more autonomous vehicles. However, in some embodiments, map generation may not end upon initial generation of the map. As will be discussed in greater detail below, sparse map 800 may be continuously or periodically updated based on data collected from vehicles as those vehicles continue to traverse roadways included in sparse map"​

patent
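For what it's worth, the aggregation step that patent language describes (many noisy per-vehicle traces collapsed into one compact polynomial "preferred trajectory" per road segment) is conceptually simple. Here is a minimal Python sketch; the segment parameterisation, noise level and polynomial degree are all invented for illustration, not taken from the patent:

```python
import numpy as np

def preferred_trajectory(traces, degree=3):
    """Fit one polynomial 'preferred trajectory' from many recorded traces.

    traces: list of (N_i, 2) arrays of (distance along segment, lateral offset)
    samples from different vehicles driving the same road segment.
    Returns polynomial coefficients, highest power first.
    """
    pts = np.vstack(traces)                           # pool samples from all vehicles
    return np.polyfit(pts[:, 0], pts[:, 1], degree)   # least-squares fit

# Toy example: three vehicles drive the same 100 m segment with GPS-like noise.
rng = np.random.default_rng(0)
s = np.linspace(0.0, 100.0, 200)
true_path = 0.0005 * (s - 50.0) ** 2                  # a gentle curve
traces = [np.column_stack((s, true_path + rng.normal(0.0, 0.3, s.size)))
          for _ in range(3)]

coeffs = preferred_trajectory(traces)
print(np.poly1d(coeffs))
```

The point is just that a few coefficients per segment, plus landmark positions, is a tiny amount of data compared to the raw traces, which is what makes the "sparse map" idea workable over a cellular connection.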


There are three distinct goals a company could have:
1) Get to superhuman autonomous driving with imitation learning.
2) Use imitation learning to bootstrap reinforcement learning, and get to superhuman autonomous driving with reinforcement learning.
3) Do (1), then do (2) to make it even better.

For (1), I don’t think Nvidia’s vehicle would qualify. For (2), maybe, but I’m skeptical. For instance, how does it do on crowded streets in a city? How does it handle rare edge cases on the highway?

What Nvidia did was prove that you only need 72 hours (3k miles) of data to create an NN agent that can drive like a human in a lot of cases, which you can then use to do #2.

I don't understand why you would be skeptical of #2. AlphaStar is literally #2. They used imitation learning to bootstrap RL. #3 and #2 are identical.

Also, while the imitation-only agent was able to beat bots (which the early RL-only bot also did, by the way), it still made stupid mistakes. This is why #1, as it pertains to driving, is virtually impossible.

Lastly, the only problem with RL-only is literally finding the right rewards. I'm 100% certain that after AlphaStar they will do an AlphaStar Zero. It's basically a guarantee.
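To make the "use imitation learning to bootstrap RL" idea concrete, here is a deliberately tiny sketch (nothing like AlphaStar or any real driving stack): a linear steering policy on a toy 1-D lane-keeping task is first fit to "demonstrations" by least squares (behaviour cloning) and then nudged further with a REINFORCE-style update. The environment, controller gains, reward and noise figures are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(offset, action):
    """Toy 1-D lane keeping: state = lateral offset, action = steering correction."""
    new_offset = offset + 0.5 * action + rng.normal(0.0, 0.02)
    return new_offset, -new_offset ** 2               # reward: stay near the lane centre

# --- Stage 1: imitation learning (behaviour cloning) ------------------------
# "Human" demonstrations from a proportional controller stand in for logged
# driving data.
demo_s = rng.uniform(-1.0, 1.0, 2000)
demo_a = -0.8 * demo_s + rng.normal(0.0, 0.05, 2000)
w = np.sum(demo_s * demo_a) / np.sum(demo_s ** 2)     # least-squares fit of policy a = w*s

# --- Stage 2: reinforcement learning fine-tuning (REINFORCE) ----------------
sigma, lr = 0.2, 0.01
for _ in range(500):
    s, grads, rewards = rng.uniform(-1.0, 1.0), [], []
    for _ in range(20):
        a = w * s + rng.normal(0.0, sigma)            # stochastic policy for exploration
        grads.append((a - w * s) * s / sigma ** 2)    # d log pi(a|s) / d w
        s, r = step(s, a)
        rewards.append(r)
    returns = np.cumsum(rewards[::-1])[::-1]          # reward-to-go
    advantage = returns - returns.mean()              # crude baseline
    w += lr * np.mean(np.array(grads) * advantage)

print("policy gain after cloning + RL fine-tuning:", w)
```

The only point is the two-stage structure; whether a cloned driving policy is a good enough starting point for RL in a realistic simulator is exactly the open question being argued here.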
 
I’m assuming these are 3D maps (or 3D models) that include lane lines and traffic barriers/road dividers?

This was actually my area of expertise. I made maps with barriers etc. and talked with third-party suppliers. I haven't worked with this for the last two years, but I know what was available back then.

TomTom for example are offering barriers explicitly in their HD vector map and implicitly in their RoadDNA.
TomTom HD Map RoadDNA | TomTom Automotive

Almost all the commercial HD maps have lane markers in 3D.

If you want to see some pretty decent results, here is a student project that, using only a production lidar, managed to do positioning against barriers with 3 cm average error, 15 cm max error and a 10 kB/km map size.
http://publications.lib.chalmers.se/records/fulltext/241975/241975.pdf
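As a back-of-the-envelope check on why a barrier map can be that compact (the numbers below are purely illustrative and not taken from the thesis):

```python
# Rough, illustrative estimate of barrier-map size per kilometre.
# Assumes the barrier is stored as a 3-D polyline with one vertex every 2 m;
# none of these numbers come from the linked report.
points_per_km = 1000 / 2.0            # one vertex every 2 m
bytes_per_point = 3 * 4               # x, y, z as 32-bit floats
kb_per_km = points_per_km * bytes_per_point / 1024
print(f"{kb_per_km:.1f} kB/km")       # ~5.9 kB/km, same order as the 10 kB/km figure
```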
 
This was actually my area of expertise. I made maps with barriers etc. and talked with third-party suppliers. I haven't worked with this for the last two years, but I know what was available back then.

Very cool! In addition to TomTom and HERE, I know there are a bunch of startups working on HD maps: e.g. DeepMap, Carmera, Mapper, Civil Maps, Mapbox, Nomoko, and lvl5.

I would guess a company like Tesla would probably want to develop its own HD mapping solution in-house to eventually leverage the big production fleet. But in the interim maybe Tesla could outsource HD mapping similar to how it outsources data annotation.

I don't understand why you would be skeptical of #2. AlphaStar is literally #2. They used imitation learning to bootstrap RL. #3 and #2 are identical.

Also, while the imitation-only agent was able to beat bots (which the early RL-only bot also did, by the way), it still made stupid mistakes. This is why #1, as it pertains to driving, is virtually impossible.

The imitation-learning-only version of AlphaStar achieved estimated performance at roughly the median human level. (It didn’t just play against bots; it also played against humans.) This is unlike the Nvidia car.

What Nvidia did was prove that you only need 72 hours (3k miles) of data to create an NN agent that can drive like a human in a lot of cases, which you can then use to do #2.

I’m not skeptical of the general concept of using imitation learning to bootstrap reinforcement learning. I’m skeptical of the specific claim that the Nvidia car truly emulates human driving behaviour well enough to achieve this. Does it react the way a human would react in all situations we would want to train in RL?

Lastly, the only problem with RL-only is literally finding the right rewards. I'm 100% certain that after AlphaStar they will do an AlphaStar Zero. It's basically a guarantee.

Possibly, but this raises the question of why Waymo hasn’t already solved self-driving with pure RL.

People at Waymo seem to indicate that a stumbling block for RL is a lack of smart agents for the simulator:

“...doing RL requires that we accurately model the real-world behavior of other agents in the environment, including other vehicles, pedestrians, and cyclists. For this reason, we focus on a purely supervised learning approach in the present work, keeping in mind that our model can be used to create naturally-behaving “smart-agents” for bootstrapping RL.”​
 
Very cool! In addition to TomTom and HERE, I know there are a bunch of startups working on HD maps: e.g. DeepMap, Carmera, Mapper, Civil Maps, Mapbox, Nomoko, and lvl5.

I would guess a company like Tesla would probably want to develop its own HD mapping solution in-house to eventually leverage the big production fleet. But in the interim maybe Tesla could outsource HD mapping similar to how it outsources data annotation.

I don’t have much insider information about Tesla’s map, and it seems that very little has leaked out. For a while Autopilot 1.0 would not activate on some roads, but a few runs later it would. This made people speculate that some mapping was being done by the vehicles. Exactly what, we don’t know: it could be where cars drive on average to get the center of the lanes, false radar detections, feature points for positioning, or barriers. Since then I assume their mapping has increased greatly, but my intuition is that it is still very much a work in progress. And I think Karpathy will force more deep learning into the mapping; instead of SIFT they might use some learned features, etc. But I don’t think they will just “learn mapping”; instead I think they are using some GraphSLAM with local tiles and some form of particle filter for localization.
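To illustrate what "some form of particle filter for localization" against map tiles can look like in the simplest terms, here is a one-dimensional sketch. This is speculation about the general technique, not Tesla's code; the map, the sensor model and the noise figures are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny "map tile": positions (metres along the road) of mapped landmarks,
# e.g. sign posts or barrier ends. Entirely made up for the example.
landmarks = np.array([12.0, 47.0, 81.0, 130.0])

def dist_to_next_landmark(position):
    ahead = landmarks[landmarks >= position]
    return float(ahead[0] - position) if ahead.size else 1e9

# Particle filter: each particle is a hypothesised position along the road.
n = 500
particles = rng.uniform(0.0, 140.0, n)

true_pos, speed, dt = 5.0, 10.0, 0.1
odom_noise, meas_noise = 0.3, 0.5

for _ in range(80):
    # Motion update: propagate particles with noisy odometry.
    true_pos += speed * dt
    particles += speed * dt + rng.normal(0.0, odom_noise, n)

    # Measurement update: the car observes a noisy distance to the next mapped
    # landmark; weight particles by how well they explain that observation.
    z = dist_to_next_landmark(true_pos) + rng.normal(0.0, meas_noise)
    pred = np.array([dist_to_next_landmark(p) for p in particles])
    weights = np.exp(-0.5 * ((z - pred) / meas_noise) ** 2) + 1e-12
    weights /= weights.sum()

    # Resample (simple multinomial resampling; real systems do better).
    particles = rng.choice(particles, size=n, p=weights)

print(f"true position {true_pos:.1f} m, estimate {particles.mean():.1f} m")
```

In a real car the particles would live in 2-D or 3-D inside a local tile, and the measurement model would compare camera or radar detections against mapped lane lines, signs and barriers, but the filter structure is the same.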

Remapping roads is messy and expensive; Tesla, with their fleet, have a golden opportunity here. At some point Tesla might be able to monetize all the information their cars are gathering. The value of this asset should not be underestimated. It might not be in their mission, but soon they will be able to monitor the world very extensively, and plenty of companies and military agencies would pay for this information.
 
Possibly, but this raises the question of why Waymo hasn’t already solved self-driving with pure RL.

Bladerskb, is your belief that Mobileye will solve full autonomy with a little bit of imitation learning (under 10 million miles of manual driving data) and mostly reinforcement learning in simulation?

If that is your belief, do you think Mobileye will solve full autonomy before Waymo?

If you do think Mobileye will get there before Waymo, why? Isn’t Waymo (and Google, and DeepMind) presumably better at developing and applying reinforcement learning algorithms? What is Mobileye’s advantage over Waymo?

It’s not compute — Google has more. It likely isn’t software infrastructure — Google created TensorFlow. As I said, I doubt that it’s algorithms — Google and DeepMind are world leaders.

Real world (i.e. non-simulated) training data? That isn’t relevant for reinforcement learning in simulation. Waymo can collect a few million miles of real world manual driving data to bootstrap RL with imitation learning just as easily as Mobileye can. (Waymo’s safety drivers do ~1 million miles/month in autonomous mode. Waymo could just get most of them to switch to manual mode if the data were more valuable.)

The simulation itself? Is there any evidence that Mobileye has a better driving simulator than Waymo’s Carcraft?

So, what is Mobileye’s advantage?

I’m wondering what your theory is here.
 
@strangecosmos

Personally I do not think MobilEye necessarily has an advantage over Waymo, though I would agree this is a matter of views and debate, of course. Waymo is impressive for all the right reasons.

But MobilEye is impressive too. And the reason why both are impressive, and likely have an edge over many latecomers, is this: this will very likely not be solved by IL/RL alone. The end result will be a combination of techniques, experiences, datasets and collaborations, and it is with those that established players like Waymo and MobilEye have an edge over others. And there are some differences between the two as well.

The biggest difference between Waymo and MobilEye on a strategic level, for me, is a matter of focus, though. Waymo got there first with the self-driving taxi and may well make more and more strides in this sphere. MobilEye is the more likely one to be deployed for car-responsible driving in consumer-available vehicles.
 
(attached screenshot)
 
It just speaks to the immaturity and low level of testing of the system if they are catching issues while deploying (!) the thing... Such glaring issues should perhaps have been fixed earlier, or in the infamous shadow driving...

I think maybe you are being a bit harsh. Musk just referred to "rare corner cases". That's a far cry from "glaring issues". You have no idea what the issues are, so how can you say that they should have been fixed sooner? Plus, software development never catches 100% of issues before release; it's why every software release involves post-release patches. That would be even more true for a feature like NOA without confirmation, which needs to work on millions of miles of roads. No matter how much testing Tesla did, they would never catch every single issue before release. It's why Tesla staggers releases over several updates instead of releasing the update to everybody all at once.
 
I think maybe you are being a bit harsh. Musk just referred to "rare corner cases". That's a far cry from "glaring issues". You have no idea what the issues are, so how can you say that they should have been fixed sooner? Plus, software development never catches 100% of issues before release; it's why every software release involves post-release patches. That would be even more true for a feature like NOA without confirmation, which needs to work on millions of miles of roads. No matter how much testing Tesla did, they would never catch every single issue before release. It's why Tesla staggers releases over several updates instead of releasing the update to everybody all at once.

Debating this is pointless because we come from very different places.

I happen to believe I have a more realistic and experienced view of what Musk is saying and especially why he is saying it, so my view is radically different from yours. It is not because I don’t understand software development. It is because I do believe I understand this particular software development case.

Think of it this way: Musk first said advanced Summon would come out six weeks from November 2018. Mmm. And now they are catching ”corner cases” that allegedly surfaced on a consumer deployment that started on March 15th. Mmm.

It is fine to disagree, but some of us just don’t believe him. And even if we do believe him: an issue is rather large if it stops deployment instead of simply being patched later.

If something really was in public deployment on the 15th we’d know about it on TMC and Reddit even if it weren’t deployed to everyone.
 
It just speaks to the immaturity and low level of testing of the system if they are catching issues while deploying (!) the thing... Such glaring issues should perhaps have been fixed earlier, or in the infamous shadow driving...
Or it shows the maturity of the system that they can do a staged rollout and receive feedback data from the field to validate the software.
Corner cases != glaring issue (unless you happened to be testing against that exact corner case, which you will in the future because you integrate it into your test suite).

Ya know that intersection in that little town in the middle of Vermont with the statue of the marching soldier in the median that registers as a pedestrian? Yeah, Tesla didn't either, till car X53984 drove past it three different times without stopping.
Or how Google navigation tells you the wrong lane for the State Street & Ellsworth roundabout? Good thing there are at least 4 different Teslas that take that route every day.
 
@mongo We all believe what we believe of course. But would Occam’s Razor really be ”maturity of the system” with Autopilot 2+?

I personally can’t believe an AP update really started in public on the 15th and was halted due to an exotic corner case being reported back.

The idea that it was all wrapped up, as good as it can be, ready for the real world... and then something was reported back in the first day or two of a consumer release and the release halted... no, that does not ring likely to me.

The more likely reason, for me, is that the immaturity of the system became visible in some other testing, and/or the public release has not actually started yet.
 