
Seeing the world in autopilot, part deux

You might have noticed the different colors at the edges of the drivable space around objects. They seem to mean different things (one color per value, but we don't know what the values mean; perhaps one means "this is a car front/side/whatever"?).

Are you talking about the red swizzle line? That's semantic free space showing you that there is an object at its boundary.
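For readers wondering what that could look like in data terms, here is a minimal, purely hypothetical sketch of a per-point class-to-color mapping for a free-space boundary. The class IDs, names and colors are assumptions for illustration, not values extracted from Autopilot.

```python
# Hypothetical sketch: how a "semantic free space" boundary could be color-coded.
# Class IDs, names and colors are assumptions, not Tesla's actual values.

BOUNDARY_CLASS_COLORS = {
    0: ("unknown",      (128, 128, 128)),
    1: ("curb",         (0, 255, 0)),
    2: ("vehicle_edge", (255, 0, 0)),
    3: ("pedestrian",   (255, 255, 0)),
}

def color_boundary(points_with_class):
    """Map each free-space boundary point to a display color by its class ID."""
    colored = []
    for (x, y), class_id in points_with_class:
        name, color = BOUNDARY_CLASS_COLORS.get(class_id, BOUNDARY_CLASS_COLORS[0])
        colored.append(((x, y), name, color))
    return colored

# Example: two boundary points, one on a curb and one on a vehicle edge.
print(color_boundary([((12.0, 3.5), 1), ((8.2, -1.1), 2)]))
```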

The 2D bounding box might just be a debug aid for a different team, for all we know, and not used by the actual driving algorithm, which uses the interpreted data in the drivable space. That's why I say the 3D bounding box, while neat eye candy, might have the same data expressed in other ways.

The bounding box is a representation of the actual detection data, and that data, the vectors, is what gets used in a self-driving system.
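As a rough illustration of that point (the box being just one rendering of the underlying detection vector), here is a hypothetical sketch; the field names and units are assumptions, not the actual Autopilot data layout.

```python
# Illustrative sketch only: a detection as a plain vector of values, which can be
# rendered as a 2D box for debugging or consumed directly by a planner.
# Field names and units are assumptions, not the actual Autopilot data layout.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # longitudinal distance to object, meters
    y: float          # lateral offset, meters
    width: float      # meters
    length: float     # meters
    heading: float    # radians
    rel_speed: float  # m/s, relative to ego

    def as_debug_box(self):
        """2D box corners for a debug overlay; a planner would use the raw fields instead."""
        half_w, half_l = self.width / 2, self.length / 2
        return [(self.x - half_l, self.y - half_w), (self.x + half_l, self.y + half_w)]

d = Detection(x=25.0, y=-1.8, width=1.9, length=4.7, heading=0.0, rel_speed=-2.5)
print(d.as_debug_box())
```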

Mobileye is able to create an accurate 3D environmental model because of their 3DVD system.

26 mins 45 seconds

There is a huge difference on hilly roads. HUGE. Also, I have no way of extracting data like this from EyeQ3 ;)

Yeah, but you are trying to compare 2014 tech with 2018 tech.
That in and of itself speaks volumes. But especially the fact that, based on your analysis and performance comparisons by other Tesla owners, Tesla hasn't surpassed the four-year-old EyeQ3. That is alarming.

But we will see when v9 drops. Hopefully you can get your hands on it.

But what you posted is where the industry is at, other than Mobileye. For example, look at the Nvidia Drive platform, which is similar to where Tesla is (although Nvidia has a poor man's inaccurate 3DVD compared to ME, but at least it has one).
Mobileye seems to me light years ahead of everyone in computer vision.
 
It's kind of interesting to me that they have all this information about drivable areas, yet the current system won't let you change lanes into or out of an HOV lane in Washington, because we use a solid white line and AP treats those as the edge of the road, no matter what. It's both amazing that they can tell the difference between dashed and solid lines, and disappointing that they can't tell the difference between a white and a yellow line, or that an HOV lane is an acceptable place to go since it's fully bounded by two lane lines. Pre-2018.10 I could go in/out of HOV lanes, but it knew the outer side of the HOV lane was the edge of the road, so this is a regression.

It's possible this is what KP is referring to as the "hard" solution, and it's being trained now; these aspects are the data gathering and labeling needed to build out the NN stack and decision engines, versus the "easy" approach that is basically if/then statements. I suspect they are currently running on the if/then statements, based on the networks telling them the lanes available and localization data through GPS telling them the actions permitted, à la no local lane changes, for example.

The 2.0 stack KP refers to doesn't just need to learn the drivable space; it would also need to learn all the rules of the road and where it can drive in that space...

No, I suspect this riddle will be solved with a bit of both approaches, with HW3 interpreting more data for decisioning, for more accurate and defined if/then statements.
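For what it's worth, the "easy" if/then approach described above could look something like this minimal sketch; the function and its inputs are hypothetical, for illustration only, not Tesla's actual logic.

```python
# Hedged sketch of an if/then gate on lane changes: combine what the vision network
# reports with a coarse map/GPS permission flag. Names are hypothetical.

def lane_change_allowed(adjacent_lane_detected: bool,
                        adjacent_line_type: str,
                        on_limited_access_highway: bool) -> bool:
    # Vision: is there a drivable adjacent lane at all?
    if not adjacent_lane_detected:
        return False
    # Vision: never cross a solid line in this simple rule set.
    if adjacent_line_type in ("solid", "double_solid"):
        return False
    # Localization/map: e.g. "no lane changes on local roads."
    if not on_limited_access_highway:
        return False
    return True

print(lane_change_allowed(True, "dashed", True))  # True
print(lane_change_allowed(True, "solid", True))   # False, matches the behavior owners report
```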
 
Perhaps they are using data from the map to ensure that they do not allow crossing into lanes like this.

If they are to enable automatic lane changing... they will need to implement this capability anyway, likely by using map data.

This is not map data. It always changes about 3 seconds down the road after it goes dashed. One of the roads is an express lane that changes direction in the middle of the day, and depending which way you go, the lane is dashed on your left or on your right. It always gets this, where a map would be totally lost. It also works in tunnels where GPS is dead.

If they could get this much data so perfect about lane lines in maps, why can't they get speed limits right?

They don't need maps to know exact lane lines to enable lane changing. All they do is get into the rightmost lane 1 mile before the exit, and then follow the exit off when it's time. You don't need a map to pick a lane either.
 
Mobileye seems to me light years ahead of everyone in computer vision.

Yeah, my Mobileye-based 3DVD performs awesomely in my brand new car.

Oh, wait... Crap... There isn't a car with it or anything close to it, and I had to get another Tesla because it's still the only game in town. Your much-loved L3 Audi A8 scampered away from the US market with its tail between its legs.

I'm not sure why you make assumptions about where a company is at based on hackers trying to figure out how things work. Their conclusions might not be complete, and they're most definitely not working with the latest developer builds.

I can certainly understand challenging their conclusions about how something works, and helping them figure out what's there. But why twist it around to try to attack Tesla with it?

Can you imagine working on something, getting it to the point where it can be used to log events in a shadow mode, and then someone tries to analyze it without understanding its limits/intention? When I develop something, odds are I'll have something that works well enough to gather data, and then I'll have some other branch with my latest stuff.

What you're comparing it with is all carefully crafted and selected by solution providers like Mobileye. You're not getting the kind of unfiltered data that we see in the OP's post.

It's pretty obvious to any owner such as myself that Tesla has a long way to go. I really wish I could help out, because I'm curious why there have been so many reports of false braking with AP 2.5. I'd love to see what the car sees when a false positive happens. What's really throwing it off?

I'm not sure comparisons to other vendor solutions are really right for this thread. This is just seeing something these guys figured out how to get access to, and I'm grateful for their efforts.
 
I suspect they are currently running on the if/then statements, based on the networks telling them the lanes available and localization data through GPS telling them the actions permitted, à la no local lane changes, for example.

If they are doing this, their logic is very broken, because crossing this line is perfectly legal and expected in WA (it's the only way in/out of an HOV lane), yet since 2018.10 I have never once been able to use AP to get in/out of an HOV lane. As far as I can tell, they have very simple logic of "do not cross a solid line, period," which goes against the federal guidance for lane lines.
 
If they are doing this, their logic is very broken, because crossing this line is perfectly legal and expected in WA (it's the only way in/out of an HOV lane), yet since 2018.10 I have never once been able to use AP to get in/out of an HOV lane. As far as I can tell, they have very simple logic of "do not cross a solid line, period," which goes against the federal guidance for lane lines.

There has been a ton of reported activity with downloading new maps recently. So there is some speculation that the new maps will be required for V9.

But I have no clue how they're going to handle HOV lanes, especially the ones on 405. You can cross a single line, but not a double line. Is it going to know what the double line even is? Is it going to know it's free after 7pm?

Now I don't really expect it to know. It probably won't cross them like it currently doesn't.
 
This is not map data. It always changes about 3 seconds down the road after it goes dashed. One of the roads is an express lane that changes direction in the middle of the day, and depending which way you go, the lane is dashed on your left or on your right. It always gets this, where a map would be totally lost. It also works in tunnels where GPS is dead.

If they could get this much data so perfect about lane lines in maps, why can't they get speed limits right?

They don't need maps to know exact lane lines to enable lane changing. All they do is get into the rightmost lane 1 mile before the exit, and then follow the exit off when it's time. You don't need a map to pick a lane either.


I do not know if Tesla is using HD maps for this application or not.

But I do know that the system I work on does use HD maps for this application, and it works in tunnels and in the situations you describe, with lane changes in the middle of the day.

"It also works in tunnels where GPS is dead."

GPS is dead, but when a self-driving car is using HD maps for autonomous driving, it does not use GPS for localization.
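To make the GPS-free localization point concrete, here is a minimal sketch of correcting lateral position against an HD map using only a camera-measured offset to a lane line. It illustrates the general idea, not any particular vendor's algorithm, and the names and gain are assumptions.

```python
# A minimal sketch of GPS-free lateral localization against an HD map, assuming the
# car can measure its offset to the lane marking it sees with the camera.
# Illustration of the general idea only, not any vendor's actual algorithm.

def correct_lateral_position(estimated_y: float,
                             measured_offset_to_left_line: float,
                             map_left_line_y: float,
                             gain: float = 0.5) -> float:
    """Blend a dead-reckoned lateral position with a camera-vs-map correction."""
    # Where the map says we would be, given the measured offset to the left line.
    y_from_map = map_left_line_y - measured_offset_to_left_line
    # Simple complementary-filter style update (a full system would use a Kalman filter).
    return estimated_y + gain * (y_from_map - estimated_y)

# Example: odometry drifted to y = 1.9 m, but camera + map imply y = 1.55 m.
print(correct_lateral_position(1.9, 1.75, 3.3))  # 1.725, nudged toward the map-implied 1.55
```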

"why can't they get speed limits right?"

Typically, speed limit data does not come from HD maps in ADAS systems. Also, speed limit data in maps is not precise, nor safety critical.

There has been a ton of reported activity with downloading new maps recently. So there is some speculation that the new maps will be required for V9.

That is the natural assumption.
 
This is awesome! Clearly a lot of work went into creating this visualization, and it’s the most visceral insight we have so far into what Autopilot currently sees. It’s really fun to watch the Tesla driving around Paris and the crazy amount of bounding boxes.

I’m really excited for Autopilot v9. I hope it lives up to the hype.
 
"why can't they get speed limits right?"

Typically, speed limit data does not come from HD maps in ADAS systems. Also, speed limit data in maps is not precise, nor safety critical.

Can you explain this further? In most places, exceeding the speed limit is prima facie evidence that you were being unsafe. So if a self driving car doesn't get the speed limit right, and goes too fast, it is safety critical.

Also, are you saying that the HD maps in ADAS systems are safety critical and if they are wrong the vehicle will crash?
 
But especially the fact that, based on your analysis and performance comparisons by other Tesla owners, Tesla hasn't surpassed the four-year-old EyeQ3
I had an AP1 loaner for a week and it was absolutely unusable on a windy, hilly road I have by my house. It works perfectly on AP2, though. AP1 has other benefits, though, like speed sign recognition, I guess.
Mobileye seems to me light years ahead of everyone in computer vision.
Well, we mostly have those unverifiable demos from other industry members, though. If you've ever worked on demos yourself, you know the difference between a tech demo video vs. a live tech demo vs. a technology demo you let others try vs. a technology you just let others use at any time.
So I don't think they are lying or anything, but there's a good chance the footage being shown is selected from many examples and is the best possible case on a lucky day, and the car was eaten by a grue 10 seconds after the footage stopped being shown ;)
 
Bladerskb said:
But especially the fact that, based on your analysis and performance comparisons by other Tesla owners, Tesla hasn't surpassed the four-year-old EyeQ3
I had an AP1 loaner for a week and it was absolutely unusable on a windy, hilly road I have by my house. It works perfectly on AP2, though. AP1 has other benefits, though, like speed sign recognition, I guess.

Apples and oranges, though.

You are comparing EyeQ3 being fed with one narrow camera (AP1) to AP2 being fed several cameras (including a fisheye). EyeQ3 also supports several camera inputs, but Tesla does not provide them to it in AP1. It would be no surprise that a single narrow-FoV camera like the one on AP1 would lose sight of things on a hilly road, for the obvious reason that such a camera may only/mostly see sky.

Remember that Tesla did work on a two-camera EyeQ3 system (the "AP 1.5"), but shipped only the frame for it in the AP1 Model X...
 
I understand there are multiple "steps" (I think of them as a pipeline of steps) that go, say, from sensing (camera, LIDAR, whatever) to acting (turning wheels, accelerating, etc.). What I'm unclear on is this: when I look at the footage above (congrats, btw!), it seems that if a vehicle (say, on the highway) that has been properly identified by the camera goes behind another vehicle (like a truck), the system seems to not "track" it anymore, i.e. the colored box just disappears. Where in that pipeline does the system keep "inferring" that, given the laws of physics, this object most likely didn't just pop out of the ether, nor did it fly above the vehicles before/after the ones it is stuck in between? I.e., is there a layer in the pipeline that creates "assumptions" about where vehicles are likely to be, given past observations? This is what we normally do as drivers continuously, and yet I don't see any of it in the Tesla footage (either because it doesn't exist or because it can't be observed at the layer that has been analyzed).
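For what it's worth, the "assumption layer" being asked about is usually a tracker that coasts an occluded object forward with a motion model instead of dropping it the moment the camera loses sight of it. Here is a hedged, minimal sketch of that idea; we can't tell from this footage whether or where Autopilot does this, and the class, gains and thresholds are illustrative assumptions.

```python
# Minimal sketch of tracking through occlusion with a constant-velocity model.
# Purely illustrative; not Autopilot's actual tracker.

class Track:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy
        self.frames_unseen = 0

    def predict(self, dt):
        """Roll the state forward even with no detection (constant velocity)."""
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.frames_unseen += 1

    def update(self, meas_x, meas_y, alpha=0.6):
        """Blend a fresh detection back in and reset the occlusion counter."""
        self.x += alpha * (meas_x - self.x)
        self.y += alpha * (meas_y - self.y)
        self.frames_unseen = 0

    def alive(self, max_unseen=30):
        """Keep the track for up to ~1 s of occlusion at 30 fps before dropping it."""
        return self.frames_unseen <= max_unseen

# A car disappears behind a truck for 10 frames, then reappears roughly where predicted.
t = Track(x=40.0, y=3.5, vx=-1.0, vy=0.0)
for _ in range(10):
    t.predict(dt=1 / 30)
print(round(t.x, 2), t.alive())  # ~39.67, True: the track survived the occlusion
t.update(39.6, 3.5)
```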
 
It's kind of interesting to me that they have all this information about drivable areas, yet the current system won't let you change lanes into or out of an HOV lane in Washington, because we use a solid white line and AP treats those as the edge of the road, no matter what. It's both amazing that they can tell the difference between dashed and solid lines, and disappointing that they can't tell the difference between a white and a yellow line, or that an HOV lane is an acceptable place to go since it's fully bounded by two lane lines. Pre-2018.10 I could go in/out of HOV lanes, but it knew the outer side of the HOV lane was the edge of the road, so this is a regression.

That may be by design and not an error. Our HOV lanes have particular areas where you can enter and exit the HOV/PPU lanes, and if you cross the solid line there is a HUGE $$ fine. You can only enter and exit where the lines become dashes. I would NOT like them to change this, as automatic lane change could result in heavy fines if the Nav/EAP decided to change lanes (when available, lol).
 
That may be by design and not an error. Our HOV lanes have particular areas where you can enter and exit the HOV/PPU lanes, and if you cross the solid line there is a HUGE $$ fine. You can only enter and exit where the lines become dashes. I would NOT like them to change this, as automatic lane change could result in heavy fines if the Nav/EAP decided to change lanes (when available, lol).

I just looked, and the GDOT page says that HOV lanes use double white lines to indicate no crossing, not just a single line. This follows the federal guidance around lines: double white prohibits crossing, single white restricts crossing. For Tesla to obey the laws, they need to track whether it's a single or a double line.

I agree that auto lane change can't cross a solid white line all by itself, as it doesn't understand the restriction. I just want it to cross when I ask it to, instead of pretending that lane doesn't exist at all.
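Here is a sketch of the distinction being asked for, following the convention cited above (double white prohibits crossing, single white restricts it); this is what the post wishes the logic did, not Tesla's actual implementation, and the names are hypothetical.

```python
# Illustrative lane-crossing rule: prohibit double white, allow single white only
# on an explicit driver request, allow dashed freely. Not Tesla's actual logic.

def may_cross(line_type: str, driver_requested: bool) -> bool:
    """Return whether a lane change across this marking should be allowed."""
    if line_type == "double_white":
        return False               # crossing prohibited
    if line_type == "single_white":
        return driver_requested    # restricted: only on an explicit driver request
    if line_type == "dashed":
        return True                # normal lane change
    return False                   # unknown marking: be conservative

print(may_cross("single_white", driver_requested=True))  # True: driver-confirmed HOV entry
print(may_cross("double_white", driver_requested=True))  # False
```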
 
Can you explain this further? In most places, exceeding the speed limit is prima facie evidence that you were being unsafe. So if a self driving car doesn't get the speed limit right, and goes too fast, it is safety critical.

Also, are you saying that the HD maps in ADAS systems are safety critical and if they are wrong the vehicle will crash?

First, we need to separate if we are talking about self driving cars here or ADAS systems.

In an ADAS system, the driver is responsible for ensuring safety and the proper speed, so if the system makes mistakes it is not safety critical... For example, automatic emergency braking that only works 50% of the time is not a safety-critical issue.

For SDCs, there is a lot more being taken into account than just posted speed limits when choosing speed, even when they are the only car on the road. They would be using a different kind of map to get target speeds for various segments, rather than what ADAS systems use for posted speed limits. And even in these cases the map data is not safety critical.
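To illustrate that point, a self-driving stack's commanded speed can be thought of as the minimum over several constraints, of which a posted limit is only one input; the constraint names in this sketch are illustrative assumptions, not any particular system's design.

```python
# Hedged sketch: commanded speed as the most restrictive of several constraints.
# Constraint names are illustrative assumptions.

def target_speed(posted_limit: float,
                 map_segment_target: float,
                 curvature_limit: float,
                 perception_comfort_limit: float) -> float:
    """Pick the most restrictive of the available speed constraints (m/s)."""
    return min(posted_limit, map_segment_target, curvature_limit, perception_comfort_limit)

# Posted ~29 m/s (~65 mph), but the curve ahead and traffic density cap it lower.
print(target_speed(29.0, 27.0, 22.0, 25.0))  # 22.0
```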

I did not mean to suggest that any map data is safety critical.
 
I had an AP1 loaner for a week and it was absolutely unusable on a windy, hilly road I have by my house. It works perfectly on AP2, though. AP1 has other benefits, though, like speed sign recognition, I guess.

Again, you are comparing a four-year-old product with a 2018 product that only today (with 9.0) is finally able to match/eclipse the four-year-old product in driving output performance and features. And they are still lagging behind in the other detection networks that EyeQ3 has.


Well, we mostly have those unverifiable demos from other industry members, though. If you've ever worked on demos yourself, you know the difference between a tech demo video vs. a live tech demo vs. a technology demo you let others try vs. a technology you just let others use at any time.
So I don't think they are lying or anything, but there's a good chance the footage being shown is selected from many examples and is the best possible case on a lucky day, and the car was eaten by a grue 10 seconds after the footage stopped being shown ;)

Mobileye isn't like the other industry members, though. It would be very disingenuous to lump them together. Every single SDC company is a research lab (including Waymo) other than Mobileye. Why? Mobileye is actually SELLING their products. No other company is selling complete SDC hardware and software today other than Mobileye. Let that sink in.

This isn't some demo. This is real! Tesla could just as easily be using EyeQ4 now (with fully finished EAP software ready at the October 2017 AP2 launch, using EyeQ4) if they didn't prefer doing everything in house.

Mobileye products ARE actually REAL! Case in point: AP1 and Super Cruise using the four-year-old EyeQ3.
Think about it: Mobileye was 4 YEARS AGO where Tesla is today.
Do you think Mobileye has just been sitting on its ass for the last four years doing nothing?

Or do you think Mobileye is lying to the Tier 1 companies buying their product? Because that's the only conclusion if you think the footage Mobileye is showing could be lies. Then Nissan, VW, BMW, GM and more are being fooled.

That's like saying you can't trust Samsung's video of their OLED screen because it might be fake, yet Apple uses the same screen in their iPhone. So has Apple been fooled? Or do you think companies keep quiet while Mobileye lies about their chip's capabilities?

Amnon has said time and time again that, unlike other companies, they don't show demoware, they don't show research projects, they don't show anything that doesn't have a production deal. EyeQ4 is IN production. BMW cars are getting it this year. Nissan is saying they will have Level 3 using EyeQ4 in early 2019.

While other companies are toying around with HD maps, Mobileye's crowd-sourced REM maps are actually IN PRODUCTION RIGHT NOW!

This is a huge difference between Mobileye and other companies: they actually have to sell stuff to make money. You can't sell a lie (unless you are Tesla :p), because the person you sell to has already tested your chips and knows their capabilities before they sign a contract.

Mobileye doesn't deal with theories, they deal with production deals!