Musk: All modes AP by end 2019

I can't figure out two critical things. 1) Why can't AP2 read speed limit signs? I underestimated how important that was until this weekend. AP1's sign reading is so clearly superior, and it's something AP2 simply can't do. I can't imagine the software is that hard to write, so I'm truly curious about this gap. 2) Why can't they replicate the smooth behavior of AP1's auto lane change? Again, I underestimated the convenience of the AP1 lane change. I am fine with AP2's lane change, but only when I'm the one driving..... having to engage, then cancel and re-engage the lane change multiple times is a big inconvenience when cars are approaching.

I understand why AP2 may lag in representing adjacent lanes and vehicle types on the IC, since that seems more complicated when you are dealing with a fundamentally different system. But I truly can't understand why they haven't brought speed limit sign reading and lane changing up to AP1's level. I would really love to hear from someone who understands the software side, like @jimmy_d.

I will say, on the flip side, that back in Austin today AP2 does things AP1 can't do.... it handles very tight, complicated spaces with curbs very well here. I see more promise in AP2, but wow, I can't believe they didn't fix these two issues.... correcting the sign reading and the lane change would vastly improve the AP2 experience.

I'm not giving up on AP2 at all, but this update really needs to be solid. I'm sincerely rooting for Tesla to have a good roll-out.
 
So this all but confirms the FSD video was contrived in one way or another. It sure fooled me at the time.

Yes, we have documented this a few times; we now know that video was manufactured / hard-coded at best.

@buttershrimp @TaoJones

I'm going to give this tidbit: the current alpha build is amazing, it really is. I dare say it brings us to parity with AP1, with the exception of reading speed limit signs, which it still doesn't do. It's rock solid on most curves. It's not perfect, but it's 100% better than the current crap in 18.10. Brand new, huge NNs, and really amazing. I don't know why or when it's getting into a GA build, but I assume it will be in place by June, and it's quite good.

As for HD Maps, @buttershrimp, be careful what you wish for! Remember when Apple tried to remake maps to get away from Google, and how great that went? It reminds me a lot of the AP1 -> AP2 debacle. Keep in mind the Model 3 uses the HD Maps. And can we please stop calling it HD Maps and just call it Tesla Maps? Here's a recent example of just how great the new maps are right now: Alma AG on Twitter

Let's not all rush out at once to use them.....
 
So this all but confirms the FSD video was contrived in one way or another. It sure fooled me at the time.
I don't think it's contrived. Look at the second video on the Model S webpage; that's not a video you can fake. I'm optimistic, but to me the AP1 experience is evidence of the challenges the team faces. I do think you can argue they framed the challenge in a way that created unforced scrutiny.
 
I'm going to give this tidbit: the current alpha build is amazing ... I assume it will be in place by June, and it's quite good.

Although that's quite exciting to hear, I certainly hope we don't actually have to wait until June 20XX to see a meaningful version of EAP finally show up. :confused::(
 
Name me one self-driving company, academic institution, or anyone else in the industry that has said, expects, or even alluded to the idea that their self-driving software/tech/car will, by the end of 2019, work in "all modes of driving and be at least 100% to 200% safer than a person" - other than Elon Musk himself.

Name another car company that is taking customers' money for a promised feature that is not available and has no near-term delivery date. Might as well put Tesla on Kickstarter. At some point some of their customers are going to go to court over this.

At least Jobs never showed his crazy product until it was done.
 
If my new P100D scheduled for June delivery is not better than my 3-year-old P85D, I might give them back the $2,500 deposit and tell them to put the money toward AP2 development and call me at the end of next year, or whenever the latest tweet says.

And this is coming from a huge fanboy and Musk follower...
You do know you can delay paying for FSD (or is it FDS?). It costs an extra $1,000 if you buy it later, what, years from now??
Buy some stock; by the time you need to pay the extra $1,000 you might have made it back on your stock purchase.

Or, never buy a Tesla again. Your money, you actually get to decide.
 
[QUOTE="Brando, post: 2610739, member: 52957"]You do know you can delay paying for FDS (or is it FSD). cost an extra $1000, what years later??
Buy some stock, by the time you need to pay the $1000 you might have made it on your stock purchase.

Or, never buy a Tesla again. Your money, you actually get to decide.[/QUOTE]


That is fine for buyers today (I did NOT buy it on our 12-22-17 X75D), but because of the promises, the videos, and the expectations Tesla's CEO set, I DID buy (pre-pay for) FSD on my early '17 S100D. Hard-learned lesson.
 
I drove AP1 and AP2 this weekend on the same roads, with the same settings, in otherwise identical cars, and took lots of video. @TaoJones, @BigD0g, @verygreen, @lunitiks, and @jimmy_d are the folks I feel obligated to share this info with.

Everyone on the forum knows I'm a big AP2 fan, but I have to give credit where credit is due.

I would post the video, but it would be a lot of work right before a critical AP2 update is likely to arrive.

IMHO, right now AP1 is quite a bit more usable than AP2 for two reasons: speed limit sign recognition, and lane changing that is available on far more roads.... AP2 gets brutalized every time a construction lane takes you off the beaten path. The lane changing of AP1 in particular is really a lot better; I was shocked by that. @BLKTSLA had some good videos on this.

AP1 is not perfect; however, its limitations are more predictable. AP2 certainly does several things better, but the problem is that this doesn't convert into a more functional experience..... My main thought is this: until Tesla pushes an update with the ability to read speed limit signs, I don't see how AP2 will surpass AP1 in the majority of situations.

I'm very optimistic about AP2, but I'll be honest: if I had owned an AP1 car and then upgraded to AP2, I'd be very frustrated. It pains me to write this, but it is so clear why this would feel like a gigantic step down in functionality for someone who used AP1 and then got AP2. Also, I don't think it's helpful to be angry with the folks at Tesla, who are likely trying very hard to take AP2 to the next level. I look at the discrepancy as evidence of how hard it is to take a more complex system to the next level. Perhaps @lunitiks and his cycloptic cat versus baby multi-eyed alien is the most accurate analogy.

I'm extremely excited to see the two updates coming from Tesla. The HD maps will probably help, but at this point I'm much more interested in when speed limit sign reading comes into play. This is a clear deficiency for AP2 right now. The only other insight I have from this experience is that if one area could be reverse-engineered from AP1, it should be the auto lane change. It's so good that a person using Autopilot for the first time would feel comfortable with it within the first couple of tries.

It's amazing that months ago people attacked anyone who made any statement, or even implied, that AP1 was better than AP2. I'm just glad that people are finally looking at this from an objective point of view rather than from a defensive reaction.

Side-by-side test of AP1 vs AP2 (same road, same time of day, same speed) with disengagement notices.


AP1 outperforms AP2 easily.
 
I can't figure out two critical things. 1) Why can't AP2 read speed limit signs? ... 2) Why can't they replicate the smooth behavior of AP1's auto lane change? ... I'm not giving up on AP2 at all, but this update really needs to be solid.

I'm not a pro on the software side, and I only dabble in neural networks so all I can offer is observations and opinions. But here are a few items:

I put 26k miles on AP1 before switching to AP2 and now have about 7k miles on AP2. As of right now it's not hard to find situations where one or the other clearly wins so I can't really make the case that one is superior. Depending on what you care about and how and where you use it you may well have a strong preference for one or the other. Personally I'm pretty happy with AP2 right now but I can certainly understand people who are deeply unsatisfied with it.

From what I've been able to dig up about Mobileye's vision design in AP1, and what I can determine about how AP2's vision works from its neural network architecture, I think it's fair to describe these two systems as fundamentally different approaches. When ME started development on vision for their system, neural networking for vision wasn't a thing. So of course they designed their silicon to be an efficient accelerator of conventional vision heuristics, and they programmed it conventionally. From comments the CEO made back in 2014 and 2015, I believe they hand-tuned all of the vision kernels for the system that went into AP1. This approach has advantages and disadvantages. On the upside, the kernels are very computationally efficient, so you can run with less hardware, which was really important when they started. But the more important difference is that the kernels are "designed".
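
To make the "designed" vs "learned" distinction concrete, here's a minimal sketch in Python/numpy (my own illustration, not anything from Mobileye or Tesla): a hand-designed Sobel edge kernel whose behavior you can understand by inspection, next to a "learned" kernel that only becomes meaningful after training on data.

```python
import numpy as np

# "Designed": a classic Sobel kernel, chosen by a human to respond to
# vertical edges. Its behavior is fully understood by inspection.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# "Learned": a kernel that starts as random numbers and only becomes useful
# after gradient descent on labeled images. Nobody designs its final values,
# so nobody can point to the entry that "causes" a misdetection.
rng = np.random.default_rng(0)
learned_kernel = rng.normal(scale=0.1, size=(3, 3))

def convolve2d(image, kernel):
    """Valid-mode 2D convolution, to show both kernels are applied identically."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = rng.random((8, 8))                    # stand-in for a camera frame
edges = convolve2d(image, sobel_x)            # predictable, inspectable response
features = convolve2d(image, learned_kernel)  # meaning emerges only from training
```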

When something is "designed" it tends to be fairly well understood. If you look at any particular situation where it isn't working you can figure out why and how to fix it. If you take a well defined use case and design a solution for it you can come up with something that works reliably within that use case. I think it's fair to say that ME developed their use case and designed a machine that functioned predictably within that use case. So that's great, but it means you need a well defined use case and it means you have to explicitly design the machine for that use case. And within that use case you'll get predictable behavior. (As an aside, I think Tesla pushed AP1 outside of ME's use case and it's not hard to see why that would be upsetting to ME. They didn't want to see any accidents on Tesla's closely watched vehicle being attributed unfairly to a failure of ME's vision system.)

Now, for driving in the real world, a single overall use case isn't feasible, so you break the problem down into elements and scenarios, design a solution for each one, and then combine them all. It's labor intensive. VERY intensive of VERY expensive expert labor. Google started with similar limitations and a similar approach and has been throwing enormous resources at the problem for over a decade and still doesn't have a production system. They might have one soon, or they might not. Rodney Brooks - one of the pioneering luminaries in this field - has predicted that they are still 15 years away (his prediction is that it won't be a real thing for real people any sooner than 2032).

So the rapid advance of neural networks - which were almost entirely ignored until the last 5 years - allows for a different approach. Instead of "designing" the vision system, you give it lots of data and create a process that allows the vision system to "learn" what it needs to do. This has some downsides compared to explicitly designed systems. For one thing, when it's not working you don't know why, explicitly. Just as there isn't one neuron in a crazy person's brain causing the problem, there isn't one line of code in a neural network that's responsible for why a particular sign wasn't recognized in a particular use case. The system's knowledge wasn't created by the designers, and it isn't organized in ways that allow the designers to tease out the causes of particular behaviors. This 'black box' aspect of neural networks is a major challenge to people who work with them.

So why use neural networks if they have this really ugly flaw? In a word, it's because they scale well. If I need 50 designers for 5 years to design a system that works well in 1 use case and I have 10,000 use cases then I need something like 500,000 designers for 5 years to do all 10,000 use cases. Or more likely 50,000 people for 50 years. With neural networks the problem is data and computers per use case rather than people and years per use case. So I need 10,000x as much data rather than 10,000x as many people. And to the extent that this simplistic analogy is true this second example is feasible within 5 or 10 years and the first one is not.
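
Just to make the back-of-envelope arithmetic explicit (using the illustrative numbers above, not real estimates):

```python
# The "designed" path: expert labor scales with the number of use cases.
designers_per_use_case = 50
years_per_use_case = 5
use_cases = 10_000

designer_years = designers_per_use_case * years_per_use_case * use_cases
print(f"{designer_years:,} designer-years in total")        # 2,500,000
print(f"{designer_years // 5:,} designers for 5 years")     # 500,000
print(f"{designer_years // 50:,} designers for 50 years")   # 50,000

# The "learned" path: roughly use_cases times more labeled data and compute
# instead of use_cases times more people, which fleets and datacenters can
# actually deliver on a 5-10 year horizon.
print(f"{use_cases:,}x the data rather than {use_cases:,}x the staff")
```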

In this manner of thinking about AP1 and AP2, Tesla Vision is more of a 'learned' system than a 'designed' system, whereas the ME parts of AP1 are in the 'designed' category. And AP2 is still an immature 'learned' system at that - the training of it isn't yet properly sorted out. But the promise is that once they have the process for training the system worked out, it will scale up to handle the enormous variety of the real world much faster than a 'designed' system could scale up its workforce to deal with those thousands of use cases.

Ok, so this is a very roundabout answer to your question of why can't AP2 do simple things that AP1 could do years ago. And my grossly oversimplified response is that AP1 and AP2 are made different ways and those different methods have very different strengths and weaknesses. Tesla started over with a different approach because they need the ability to scale up the ability of AP without having to hire a vast army of people who don't even exist yet. Elon clearly believes that this tech is going to scale very rapidly once they have the formula worked out as his public pronouncements have consistently shown.

And in the meantime there are situations that AP2 doesn't handle that AP1 does.

As an aside - I prefer the AP2 lane change over that of AP1. Maybe this is geography or a matter of taste rather than code? And as for the speed limit signs - I agree that reading signs is not a particularly hard problem. My guess is that they decided not to rely on reading signs rather than that they can't do it. Maybe because of a focus on using map annotations instead, or perhaps there's some subtle failure mode that relying on speed signs can lead to.

I recently sat through a lecture by Waymo's head of development and he was describing all these crazy and kind of scary things that they run into. One example was about seeing an overhead sign reflected in the rear window glass of the car ahead. It only happens in rare situations, but since both lidar and vision reflect off glass, both sensors see a big street sign lying in the road ahead of the car, and their car wants to swerve or brake to avoid the 'sign' in the road. It's a really obscure but serious failure, and it's much harder to deal with than it first seems. They get similar weird events when driving past glass-fronted buildings and big shiny buses. Even standing water on the road can do crazy things in the right situation. They have all these cases that they have to carefully test for, write code to fix, and then go out and test again. Heuristic approaches like the ones that Waymo uses work perfectly when they are working, but they are brittle - they fail spectacularly and suddenly, and the designers have to compensate for that. They make it easy to be overconfident because you can't see the failure coming, which I'm sure is one of the things that led Chris Urmson to commit to nothing less than full Level 5 - because ordinary users can't be relied upon to respect the limitations of a system that they don't experience until it's too late.

I note that their initial test deployment service is going into Chandler AZ; a suburban development with few overhead signs, glass fronted buildings, or big shiny buses roaming the streets. And not a lot of standing water. I wonder if that's a coincidence.

Neural networks get wonky as they approach a failure point, and if you use them much you'll find that you can see a failure coming. My sense of AP1 and AP2 mirrors this - AP1 gives me perfect confidence even in places where it might be driving right along the edge of a gross failure. That makes AP1 more 'comfortable' because it's hiding its limitations, in a sense. AP2 conveys its lack of confidence to me by getting wobbly or moving outside the perfect center of my comfort zone. So depending on what you expect, that can make you not want to use it. I like it, but I understand why other people do not.
 
As for HD Maps @buttershrimp be careful what you wish for! ... can we please stop calling it HD Maps and just call it Tesla Maps?

There's a big difference between how the new nav picks routes and what can be done with the data in the tiles for Autopilot, though. Someone smarter than me will likely correct this, but I feel that if the car has a better understanding of an upcoming curve via the tiles, it can prepare for it instead of doing the late-entry-and-hugging-the-outside thing it tends to do right now.
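
A rough sketch of that idea (my own illustration; the tile contents and point format here are assumptions, not Tesla's actual data): given a handful of upcoming lane-center points, estimate the curvature ahead and cap speed before entering the curve rather than reacting inside it.

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature (1/radius) of the circle through three (x, y) points."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # twice the triangle area via the cross product
    area2 = abs((p2[0]-p1[0])*(p3[1]-p1[1]) - (p2[1]-p1[1])*(p3[0]-p1[0]))
    if a * b * c == 0:
        return 0.0
    return 2.0 * area2 / (a * b * c)

def comfortable_speed(waypoints, a_lat_max=2.0, v_max=29.0):
    """Cap speed so lateral acceleration stays under a_lat_max (m/s^2)."""
    worst = max((menger_curvature(*waypoints[i:i+3])
                 for i in range(len(waypoints) - 2)), default=0.0)
    if worst == 0.0:
        return v_max
    return min(v_max, math.sqrt(a_lat_max / worst))

# Hypothetical lane-center points (meters) from an upcoming map tile:
tile_points = [(0, 0), (20, 0), (40, 2), (60, 8), (80, 18)]
print(f"target speed: {comfortable_speed(tile_points):.1f} m/s")
```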
 
As an aside - I prefer the AP2 lane change over that of AP1. ... AP2 conveys its lack of confidence to me by getting wobbly or moving outside the perfect center of my comfort zone.

This post perfectly describes my experience. Predictable failures and easier engagement, thanks to speed limit signs and lane change availability, are the real difference with AP1... it's not that AP1 is truly better, like some are suggesting. So in my videos, I noticed that my AP2 car could traverse the entire winding road without a disengagement. It did this in part because it had incorrect information from a speed limit sign and was set 10 mph below where it should have been, whereas AP1 failed this turn. So AP2 lucked out a bit. When AP1 was forced to keep the same speed as AP2 on my second attempt, it handled the turn well. The way AP1 hugged the inside of the lane on a turn impressed me. Still, a lot of folks on this forum will take my observations the wrong way, as if I'm saying AP1 is better.... I'm not saying that. I'm saying it is more comfortable and functional in daily use, more along the lines of @BLKTSLA's conclusions in his video. AP2's lane change won't engage reliably when I need it to.... when it is working, it feels like a more sophisticated lane change.... more like a human. AP1, on the other hand, seems to have solved this problem by standardizing the rate of entry into the adjacent lane. It seems a solution will likely involve integrating AP1's simplistic approaches with the neural net, probably something much harder than it sounds.
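
For what it's worth, here's a tiny sketch of what "standardizing the rate of entry" could look like (purely hypothetical on my part, not anything we know about AP1's actual controller): the lateral offset always follows the same smooth profile over a fixed duration, so the maneuver feels identical every time.

```python
import math

LANE_WIDTH_M = 3.7   # assumed lane width
DURATION_S = 5.0     # assumed fixed maneuver time

def lateral_offset(t):
    """Smooth 0 -> LANE_WIDTH_M offset using a cosine ease-in/ease-out."""
    t = min(max(t, 0.0), DURATION_S)
    return LANE_WIDTH_M * 0.5 * (1.0 - math.cos(math.pi * t / DURATION_S))

for t in (0.0, 1.25, 2.5, 3.75, 5.0):
    print(f"t={t:>4}s  offset={lateral_offset(t):.2f} m")
```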

I also want to be clear that I don't think HD maps will solve the problem. I don't see how Tesla solves the AP2 problem without reading speed limit signs, because of construction zones and map errors. It seems like a huge disadvantage not to treat the physical speed limit sign as more authoritative than the map data.... To me, prioritizing the physical sign is the best approach, with map data as a backup for stretches where someone enters a roadway before a speed limit sign has been seen. I'm sure they are having these conversations.
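
Something like this minimal sketch is what I have in mind (my own assumption about the logic, nothing confirmed by Tesla): trust a recently observed physical sign first, and fall back to map data until the next sign is seen.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignObservation:
    limit_mph: int
    age_s: float          # how long ago the camera last saw a speed limit sign

def effective_speed_limit(sign: Optional[SignObservation],
                          map_limit_mph: Optional[int],
                          max_sign_age_s: float = 600.0) -> Optional[int]:
    """Prefer the physical sign; use the map as a backup (e.g. just after
    entering a road, before any sign has been passed)."""
    if sign is not None and sign.age_s <= max_sign_age_s:
        return sign.limit_mph          # handles construction zones, recent changes
    return map_limit_mph               # backup when no recent sign is available

print(effective_speed_limit(SignObservation(45, age_s=30.0), map_limit_mph=55))  # 45
print(effective_speed_limit(None, map_limit_mph=55))                             # 55
```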

To me, Tesla's failure to communicate what jimmy explained is a problem. I think they could do this while remaining somewhat vague. When people don't understand the reasons why something is less functional than expected, anger and frustration follow.

I've always said that I hope to see significant improvement in AP2 by June of 2018 to figure out whether it's on track.... this date is somewhat arbitrary, and before the AP1 tests I conducted I felt more confident that we were on track with AP2, because I've been pleased with the progress of late. However, once I experienced real AP1 functionality this weekend, my expectations changed. I now see it as very important that the two become functionally similar very soon.

One thing is for sure.... I disagree with the P85D being better than a P100D.... totally disagree... @TaoJones mentioned the road noise and other small factors that are clearly better in the most recent builds..... I wouldn't trade my car for an AP1 car at all.... but I definitely understand the visceral frustration of those who upgraded from AP1 to AP2 much better than I did before. I'd be really frustrated.... The problem is that people tend to crash the forum when you are objective, and to spike the football when you admit to being wrong. So we are less likely to be honest with each other about our concerns.

The next updates will be key.... if they don't do something neat, I'll put together a video comparison for folks.
 
It's amazing that months ago people attacked anyone who even implied that AP1 was better than AP2. ... Side-by-side test of AP1 vs AP2 (same road, same time of day, same speed): AP1 outperforms AP2 easily.
My video is better, same time of day... I don't think that video can be used quite yet; it's an apples-to-oranges kind of thing. I don't have the time to do a split-screen video.... not after wearing the puppet hands.... this shrimp is resisting Final Cut Pro as long as possible.
 
I'm not a pro on the software side ... my grossly oversimplified response is that AP1 and AP2 are made in different ways, and those different methods have very different strengths and weaknesses.
Can I nominate this explanation for post of the year? Really helpful.