
HW2.5 capabilities

It works for humans because we can pivot our heads and watch the lanes and markings, but the cameras all face straight ahead, so how does this get solved?
This is basically solved.

Back in July, I made this diagram:
[diagram: hill_tops.jpg]


On page four of this thread we put our heads together and came up with alternative solutions.
 
I drove up a very small hill, no more than 3-5’ in elevation, and as you crest it (let’s call it a hill) the road immediately turns hard left, but the cameras don’t see crap as they are effectively pointing at the sky. AP freaks out and dives to the left because it has no tracking. Even lidar, unless it’s on a gimbal of sorts, would be screwed, since lidar isn’t used for lane lines.

In a word: maps. It will know that bend is coming. It will slow down. It needs to slow down even if in principle it knows exactly where the road is, because it also needs to confirm that there are no objects on the road, and it can't confirm that until it gets a good look over the hill. Really, a human driver ought to slow down to be safe in that situation also -- you have no idea what might be on the road over the hill today, even if it's been clear every time you've crested that hill during your daily commute for the past 20 years. Anyway, you won't mind that it slows down, because you'll be watching Iron Man 4 (starring Elon Musk) while the car drives itself, right? ;)
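To make the "maps plus limited sight distance" point concrete, here's a toy sketch -- not anything Tesla has described -- of how a planner could cap speed at a blind crest: one limit from the map's curvature ahead, one from how much road the cameras can actually confirm is clear. All function names and numbers are invented for illustration.

```python
# Hypothetical sketch of "slow down for a blind crest using map data".
# Names and numbers are illustrative, not Tesla's implementation.
import math

def curve_speed_limit(radius_m, max_lat_accel=3.0):
    """Max speed (m/s) that keeps lateral acceleration under a comfort limit."""
    return math.sqrt(max_lat_accel * radius_m)

def sight_distance_speed_limit(visible_m, max_decel=4.0, reaction_s=0.5):
    """Max speed (m/s) at which we can still stop within the road we can see.
    Solves visible = v*reaction + v^2 / (2*decel) for v."""
    a, b, c = 1.0 / (2.0 * max_decel), reaction_s, -visible_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def target_speed(map_radius_ahead_m, visible_road_m, posted_limit_ms):
    return min(posted_limit_ms,
               curve_speed_limit(map_radius_ahead_m),
               sight_distance_speed_limit(visible_road_m))

# Cresting a small hill with a hard left just past it: the map says the
# radius ahead is tight, and the cameras can only confirm ~20 m of road.
print(target_speed(map_radius_ahead_m=30.0, visible_road_m=20.0,
                   posted_limit_ms=25.0))   # well below the posted limit
```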

LIDAR systems don't need a gimbal; they can be built with whatever vertical field of view (FOV) you're willing to pay for. In principle, draw a line from where the LIDAR sits down to the front hood/bumper and that's the limit -- same as with cameras, actually, if you put the right lens on them (at the cost of long-range resolution). But that extra FOV is wasted 99% of the time, so to save costs you may not get the maximum. Still, you will note that all the big players in self-driving cars put both LIDAR and cameras way up high on the roof, where they can get a good look at everything and see over the car itself to minimize blind spots.
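For what it's worth, the "line from the sensor down to the hood edge" limit is just trigonometry. Below is an illustrative calculation; the mounting heights, hood distances, and FOV numbers are made up, not any real car's.

```python
# Illustrative trig for the "line from the sensor down to the hood edge" point
# above. All mounting heights, distances, and FOV numbers are made up.
import math

def blind_patch_past_hood(sensor_height_m, hood_height_m, hood_distance_m,
                          downward_fov_deg):
    """Distance past the hood edge before the sensor can see the road surface.

    The steepest usable downward ray is limited either by the sensor's own
    vertical FOV or by the hood getting in the way, whichever comes first.
    """
    hood_limit = math.atan2(sensor_height_m - hood_height_m, hood_distance_m)
    down = min(math.radians(downward_fov_deg), hood_limit)
    ground_hit = sensor_height_m / math.tan(down)   # measured from the sensor
    return max(0.0, ground_hit - hood_distance_m)

# Windshield-height sensor: the hood is the limiting factor (~6 m blind patch).
print(round(blind_patch_past_hood(1.3, 1.0, 1.8, downward_fov_deg=20), 1))
# Roof pod, mounted higher: the hood matters less, so the same 20 degree
# vertical FOV leaves only ~3 m blind past the hood.
print(round(blind_patch_past_hood(2.0, 1.0, 2.5, downward_fov_deg=20), 1))
```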

But setting that digression aside, I'll just say it again in one word: maps.
 
Perhaps it can. There's a whole academic field devoted to doing so. Problems become more difficult the longer you look at them. One of Chomsky's books relates an anecdote about the state of AI in the 1940s or '50s: someone assigned the task of replicating vision to an undergrad researcher as a summer project.



You don't think Tesla is making good crystal? They've been at it only 12 months - I think they've made amazing crystal for how short a time they've been doing this. See my attached photo. Remember AP1 - they're progressing far faster than that system did. The AP1 hardware was a surprise release in Oct 2014, and a full year passed before autosteer was released. A FULL YEAR - and Tesla was working with pre-trained neural nets.

AP2 is more reliable now than the first release of AP1 was in October 2015. AND - this time Tesla had to start from zero; with AP1 they had a huge head start. I understand folks are frustrated, but looked at objectively this progress is stunningly fast. See my little chart below, which I update whenever a new release comes along so I can remind myself how fast this development really is.

AP1 -> 12 months from hardware to autosteer, using a pre-trained neural net from Mobileye.
AP2 -> 3 months from hardware to autosteer, using a blank neural net that knew absolutely nothing.

20 days from now will be the equivalent of the AP1 hardware-to-software delay - and AP2 is already better than AP1 7.0 was in October 2015.

AP1 has gotten better since then - it's a moving target Tesla is chasing with AP2 (and Tesla is catching up). Remember, the AP1 we love so much is a full 2 years post-release.

AP2 has progressed faster, and we have every reason to believe its progress will only accelerate. Tesla has had only one quarter of AP2 video uploads going so far, and I have already seen my car definitely learn in one trouble spot between releases. Not perfect, but it's dramatic learning. We know these video uploads are being used for reinforcement learning on the neural nets.



No, I think you are spot on. Tesla is taking a more difficult approach than, say, Cadillac, which "cheated" by simply mapping the roads with lidar. But in the long run Tesla's system should be extremely robust.

[Attachment 250686]
 
I was wondering about the firmware updates mentioned in this thread, which seem to stop at 2017.36 ib27c6d, while my MX 100D has 2017.38 f87c64d5. Does anyone else have this more recent version and know what may have changed? I have visited several other Tesla forums, and no one ever mentions the 2017.38 that I have.
 
Ok back on topic!!

I have an interesting observation from a drive yesterday, and I wonder how HW2/2.5, or really even lidar, would solve it currently.

I drove up a very small hill, no more than 3-5’ in elevation, and as you crest it (let’s call it a hill) the road immediately turns hard left, but the cameras don’t see crap as they are effectively pointing at the sky. AP freaks out and dives to the left because it has no tracking. Even lidar, unless it’s on a gimbal of sorts, would be screwed, since lidar isn’t used for lane lines.

It works for humans because we can pivot our heads and watch the lanes and markings, but the cameras all face straight ahead, so how does this get solved?

That's an interesting example because it's a lot more subtle than it appears at first glance. When you're cresting a small hill on a curving road, what should you be planning for after the hill? In the case of lane following on a hill, the boundary between what the system knows about the road and what it's guessing is the crest-of-the-hill horizon of the visible road surface. Imagine as a human that you were wearing blinders and could only see the road surface, and had to predict what to do based only on the lines you could see on the road - maybe something like what happens when you're driving in fog at night and can see the reflective paint on the road but not much else. Most people slow down in that situation, because only seeing the road tells them a lot less than they would like for as fast as they normally drive. A person can normally see other things like trees or street lights or power poles past the crest of the hill that give you info about where the road goes next. You might see the tops of cars or trucks that have already crossed the hill. You might be able to guess a lot from the lay of the land (high in the mountains? driving alongside a river? maybe a hilly suburb?). All that extra context gives you confidence to go faster than you would if you could only see the road surface. Most of that context is probably out of reach for current nav systems, although they do have HD maps when GPS is good.

I've noticed that AP2 fails more often in situations like the one you describe than AP1 does, which got me wondering what kinds of info AP1 was using to supplement the road lines for predicting the road 5 or 10 seconds ahead, when you can't see the lane markings that far out. The obvious thing is to use maps and map following as much as possible when the GPS signal is good. But hilly areas are exactly where GPS accuracy will drop, due to reduced sky visibility and signal multipath issues. GPS error gets big enough that the car really needs those lane markings to know where it is on the road.

Of course, AP1 and AP2 should both have the same HD maps and comparable GPS, so why is AP2 failing more? It's possible that the map integration is not as good in AP2 but that seems unlikely. My guess is that AP2 vision is still sketchier than AP1 in corner cases like cresting a hill, but most of the time you don't see this when GPS is good. Occasionally GPS error is on the high side and you get to see the difference in accuracy, which the car demonstrates by scaring the crap out of you.
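Purely as a toy illustration of that trade-off -- not a claim about how AP1 or AP2 actually fuse anything -- here's what happens if you combine a map/GPS lane estimate with a camera lane estimate by weighting each by its confidence: once GPS error balloons in hilly terrain, its term stops mattering and you're riding on whatever the vision stack says. All numbers are invented.

```python
# Toy inverse-variance fusion of two estimates of lateral position in the lane
# (meters from lane center): one from map + GPS, one from camera lane lines.
# Purely illustrative -- not a claim about how AP1/AP2 actually work.
def fuse(gps_est, gps_sigma, vision_est, vision_sigma):
    w_gps, w_vis = 1.0 / gps_sigma**2, 1.0 / vision_sigma**2
    fused = (w_gps * gps_est + w_vis * vision_est) / (w_gps + w_vis)
    fused_sigma = (w_gps + w_vis) ** -0.5
    return round(fused, 2), round(fused_sigma, 2)

# Open sky: GPS is tight and can prop up a shaky vision estimate.
print(fuse(gps_est=0.1, gps_sigma=0.5, vision_est=0.4, vision_sigma=0.3))
# Hilly terrain with multipath: GPS sigma balloons, so the answer is almost
# entirely whatever the cameras say -- for better or worse.
print(fuse(gps_est=0.1, gps_sigma=5.0, vision_est=0.4, vision_sigma=0.3))
```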

And now for the neural network dork digression:

Something about training neural networks on road data which might not be obvious: you have a lot more training examples of long straight roads than you do of the moments just before you crest a hill. You can get thousands of long-straight-road training samples in just a few miles of driving, but if you want a comparable number of unique samples of cresting a hill you need a lot of hills, which takes a lot more time. So naturally you just end up with fewer of those hill examples, which can easily make the network a lot weaker in those situations. Depending on how well your network generalizes, it could take an awful lot of hill data before you've got the 'cresting a hill' situation completely covered. I expect Tesla probably pulls in a lot of data, but I wonder if maybe they still don't have good coverage of important corner cases. That seems like exactly the kind of situation where a hand-made network (which is what I understand Mobileye used in their AP1 hardware) would have a big advantage over a more conventional network trained directly on raw driving data.
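For the curious, here's a minimal sketch of one standard way people deal with that kind of imbalance, assuming you can tag which frames are the rare scenario: oversample the rare frames when drawing training batches. The scenario labels and counts below are invented.

```python
# A minimal sketch of one common fix for scenario imbalance: oversample the
# rare scenario when drawing training batches. Labels and ratios are invented.
import random
from collections import Counter

frames = [{"scene": "straight"}] * 9800 + [{"scene": "hill_crest"}] * 200

# Weight each frame inversely to how common its scenario is.
counts = Counter(f["scene"] for f in frames)
weights = [1.0 / counts[f["scene"]] for f in frames]

batch = random.choices(frames, weights=weights, k=64)
print(Counter(f["scene"] for f in batch))  # roughly half hill_crest now
```

The catch, of course, is that you're now showing the network the same few hundred hill frames over and over, which helps coverage in the batch statistics but doesn't substitute for actually collecting more hills.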
 
Of course, AP1 and AP2 should both have the same HD maps and comparable GPS, so why is AP2 failing more? It's possible that the map integration is not as good in AP2 but that seems unlikely. My guess is that AP2 vision is still sketchier than AP1 in corner cases like cresting a hill, but most of the time you don't see this when GPS is good. Occasionally GPS error is on the high side and you get to see the difference in accuracy, which the car demonstrates by scaring the crap out of you.

What makes you think that (a) Tesla has HD maps of any sort deployed and being used in production, and (b) either AP1 or AP2 are using maps? I think they don't (yet) have HD maps (or anything close to it outside of their R&D labs) and if APx is using maps/GPS at all (beyond speed limit and road classification) I think it's a pretty primitive attempt to slow down in advance of a curve, and I think even that is speculation at this point.

I would be thrilled to be shown to be wrong on any of those points.
 
What makes you think that (a) Tesla has HD maps of any sort deployed and being used in production, and (b) either AP1 or AP2 are using maps? I think they don't (yet) have HD maps (or anything close to it outside of their R&D labs) and if APx is using maps/GPS at all (beyond speed limit and road classification) I think it's a pretty primitive attempt to slow down in advance of a curve, and I think even that is speculation at this point.

I would be thrilled to be shown to be wrong on any of those points.

I believe @verygreen can speak to Tesla's mapping ambitions that will be rolling out soon. There's also speculation that the camera snapshots that are being taken in cars have to do with an upcoming HD mapping roll-out.
 
What makes you think that (a) Tesla has HD maps of any sort deployed and being used in production, and (b) either AP1 or AP2 are using maps? I think they don't (yet) have HD maps (or anything close to it outside of their R&D labs) and if APx is using maps/GPS at all (beyond speed limit and road classification) I think it's a pretty primitive attempt to slow down in advance of a curve, and I think even that is speculation at this point.

I would be thrilled to be shown to be wrong on any of those points.
This has been going on since AP1:
Tesla is mapping out every lane on Earth to guide self-driving cars

Will have to dig up the old posts, but there was discussion of how cars download map tiles with that information for AP1.
 
And now for the neural network dork digression:

Something about training neural networks on road data which might not be obvious: you have a lot more training examples of long straight roads than you do of the moments just before you crest a hill. You can get thousands of long-straight-road training samples in just a few miles of driving, but if you want a comparable number of unique samples of cresting a hill you need a lot of hills, which takes a lot more time. So naturally you just end up with fewer of those hill examples, which can easily make the network a lot weaker in those situations. Depending on how well your network generalizes, it could take an awful lot of hill data before you've got the 'cresting a hill' situation completely covered. I expect Tesla probably pulls in a lot of data, but I wonder if maybe they still don't have good coverage of important corner cases. That seems like exactly the kind of situation where a hand-made network (which is what I understand Mobileye used in their AP1 hardware) would have a big advantage over a more conventional network trained directly on raw driving data.
Nvidia specifically said they chose curvy sections of road more than straight sections to keep the net from "learning" to just go straight. All those cues you mention about trees, phone poles, etc. are some of the data I think give NNs such an advantage in these situations. It's in many ways akin to the problems with natural language translation that NNs solved: there are so many quirky things to deal with that you can't if-then-else or switch-case your way through them all.

But I'm sure Tesla has a good reason to have the cars veer dangerously, unpredictably and eagerly off at some angle when they lose the road. :eek: Seriously, instead of "panic! turn the wheel!" as the car does at the top of the hill, would it be better if it slowed down but continued straight? In your concrete example, is there enough time to slow down and have the pitch of the car level with the road enough to see the turn upon cresting the hill?
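Just to make the "slow down but continue straight" suggestion concrete, here's a toy fallback-policy sketch; the thresholds and numbers are invented and this is not anything Tesla has described. The idea: when lane-tracking confidence collapses, ignore the low-confidence steering estimate, hold the last good steering command, and shed speed until tracking returns.

```python
# A toy sketch of the fallback the post suggests: when lane-tracking
# confidence collapses (e.g. cresting a hill), stop steering toward a
# low-confidence estimate, hold the last good steering command, and bleed
# off speed until tracking comes back. Thresholds are invented.
def control_step(lane_confidence, lane_steer_cmd, last_good_steer,
                 speed, min_confidence=0.6, decel_step=0.5):
    if lane_confidence >= min_confidence:
        return lane_steer_cmd, speed, lane_steer_cmd  # trust the tracker
    # Lost the lane: keep the wheel where it was, slow down, wait it out.
    return last_good_steer, max(speed - decel_step, 0.0), last_good_steer

steer, speed, last_good = 0.0, 25.0, 0.0
# (confidence, tracker steering command) over five control ticks; the two
# low-confidence ticks are the tracker trying to "dive left" at the crest.
for conf, cmd in [(0.9, 0.02), (0.8, 0.01), (0.1, 0.35), (0.1, 0.40), (0.9, 0.03)]:
    steer, speed, last_good = control_step(conf, cmd, last_good, speed)
    print(round(steer, 2), round(speed, 1))
```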
 
I believe @verygreen can speak to Tesla's mapping ambitions that will be rolling out soon. There's also speculation that the camera snapshots that are being taken in cars have to do with an upcoming HD mapping roll-out.

Yeah, I've been following the data uploads news in that other thread (and watching the uploads from my own car). I'm not sure it's about HD maps in the short term as much as it's about just getting more training data, but clearly Tesla has plans to roll out their own maps in the long run. But I don't think any of that is *currently* being used in the deployed systems whereas the post I was replying to implied that not only was it already being used, but that it has been in use for quite some time, even going back to AP1 cars (which may or may not ever be able to make use of the maps Tesla is currently building -- which are highly unlikely to be truly "HD" maps anyway, at least nothing approaching what the big boys like Waymo are using).
 
What makes you think that (a) Tesla has HD maps of any sort deployed and being used in production, and (b) either AP1 or AP2 are using maps? I think they don't (yet) have HD maps (or anything close to it outside of their R&D labs) and if APx is using maps/GPS at all (beyond speed limit and road classification) I think it's a pretty primitive attempt to slow down in advance of a curve, and I think even that is speculation at this point.

I would be thrilled to be shown to be wrong on any of those points.
Elon is on the record saying HD GPS lane mapping in AP1 is what allowed them to get the cars to stay in lane on poorly marked sections of the 405 in Los Angeles. I am too lazy to find it right now, but it's on the record. The skeptics claim this wasn't / isn't possible because of technical limitations, but there are theories of how Tesla could have used GPS systems with confidence intervals wider than a lane to produce maps with single-lane accuracy. On the map-making side: if, for example, the confidence interval for position reports is plus or minus 20 feet, and the sampling errors are uniformly distributed across that range, then one could average the reported positions to arrive at a more precise estimate that maps out lanes. Then on the driving side, again if the samples are frequent enough, the computer could average the samples to arrive at a more precise picture of where it is in the lane, and combine this with visual information to arrive at a more confident and accurate decision on whether it is in the lane. This kind of temporal sampling to remove noise is used in other fields too. For example, software can now increase the effective resolution of 8mm film by comparing successive frames of an image to remove noise (because the noise/grain - analogous to a sampling error - is not in the same part of each frame) and construct a synthetic image that is more detailed than any individual film frame.
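A quick back-of-the-envelope check on that averaging argument, assuming the per-fix errors are independent and roughly zero-mean (I use Gaussian noise here; the uniform case behaves the same way for averaging purposes). With N independent fixes the error of the average shrinks roughly as 1/sqrt(N), so a few hundred passes can turn a +/- 20 ft fix into sub-foot lane placement. A systematic bias (e.g. consistent multipath in the same spot) would not average out, though. All numbers below are made up.

```python
# Quick illustration of the averaging argument above: independent position
# fixes with a wide error band average down to a much tighter estimate
# (standard error shrinks roughly as 1/sqrt(N)). Numbers are made up.
import random, statistics

true_lateral_m = 1.75          # actual lateral offset of the lane being mapped
sigma_m = 6.0                  # per-fix GPS error, roughly +/- 20 ft

def averaged_fix(n_passes):
    fixes = [random.gauss(true_lateral_m, sigma_m) for _ in range(n_passes)]
    return statistics.mean(fixes)

single = [abs(random.gauss(true_lateral_m, sigma_m) - true_lateral_m)
          for _ in range(1000)]
print(statistics.mean(single))   # roughly 4.8 m error for a single fix

avg = [abs(averaged_fix(400) - true_lateral_m) for _ in range(1000)]
print(statistics.mean(avg))      # roughly 0.24 m after averaging 400 fixes
```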
 
Continuing this - Elon, when talking about the 405 freeway problem, went into detail about how they solved it and claimed it happened during AP1 development, prior to the 7.0 release of October 2015. He specifically mentioned that part of the 405 (he lives near that freeway and would probably take it to work on occasion) was not usable with autopilot until Tesla had a test driver make multiple passes through each lane so the GPS could gather data to place the car in lane. In fact he said the system worked so well that the first time they constructed these maps at that freeway section, the test driver himself got the lanes wrong (the 405 had a lot of work going on and the markings were crazy bad) - and then the Tesla followed his bad driving precisely in the "fake lanes" his driving had constructed. So they had to "wipe" the memory and have him drive the section again, this time in the correct position for the lanes.

Living in So Cal, all I know personally is that the freeway markings are garbage (budget problems) - and over the months each release of AP1 handled poor markings and high glare situations better than the last until finally it performed admirably even driving directly into the sun on low-contrast sections of freeway that are challenging even for a human in those conditions. I found it quite shocking how good it got. It took months but it happened. Either this was evidence of HD mapping at work - or the visual neural network lane recognition was getting better - or both.
 
...the maps Tesla is currently building ...are highly unlikely to be truly "HD" maps anyway, at least nothing approaching what the big boys like Waymo are using).

Why do you say that? Mobileye claims to be building low-bandwidth maps crowd-sourced from cameras only (no lidar) - an ingenious solution (Amnon's lecture about this is online). There is no reason Tesla could not use the same technique. The idea is for the car to take pictures of known physical landmarks (a billboard, for example) that can then be put in a database with their precise lat/long coordinates. Future cars driving that road will recognize the landmark in their cameras, the database will be consulted, the coordinates looked up - and then successive image frames compared by the computer to triangulate the moving car's position against the physical landmark. The maps will have successive landmarks every X feet so that some landmark is always in view. Amnon claims that even the sparsest highways have enough identifiable fixed points that a continuous map can be built this way. He was claiming accuracy of +/- 1 foot or so for the maps, and said that combined with image-recognition neural nets you do not need maps in the cm-accuracy range.

Of course Mobileye is a business and Amnon needs to sell cameras because he doesn't sell lidar. Whether this method is good enough - we will see.
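To make the triangulation step a bit more concrete, here's a rough 2D sketch of the geometry. This is my reading of the idea, not Mobileye's or Tesla's actual pipeline: the landmark coordinates, the assumption that heading is known (say, from the map's road direction), and the noise-free bearings are all assumptions for illustration. Two bearings to a surveyed landmark, taken a known travel distance apart, fix the car's position via the law of sines.

```python
# A rough sketch of the landmark-triangulation idea described above: two
# camera bearings to a surveyed landmark, taken a known travel distance
# apart, pin down the car's position. Coordinates and angles are invented;
# this is not Mobileye's (or Tesla's) actual pipeline.
import math

def localize(landmark_xy, heading_rad, travel_m, bearing1_rad, bearing2_rad):
    """Return the car's (x, y) at the second observation.

    heading_rad: direction of travel in the world frame (assumed known).
    Bearings are measured counter-clockwise from the heading to the line of
    sight toward the landmark.
    """
    # Law of sines in the triangle (pos1, pos2, landmark).
    range2 = travel_m * math.sin(bearing1_rad) / math.sin(bearing2_rad - bearing1_rad)
    los = heading_rad + bearing2_rad          # world-frame line of sight at pos2
    lx, ly = landmark_xy
    return lx - range2 * math.cos(los), ly - range2 * math.sin(los)

# Simulated measurements: the car is really at (10, 0) then (20, 0), driving
# along +x toward a billboard surveyed at (100, 30), so the two bearings can
# be generated exactly and the answer should come back as ~(20, 0).
b1 = math.atan2(30, 90)   # bearing seen from (10, 0)
b2 = math.atan2(30, 80)   # bearing seen from (20, 0)
print(localize((100.0, 30.0), heading_rad=0.0, travel_m=10.0,
               bearing1_rad=b1, bearing2_rad=b2))
```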
 