Agreed. This is basically EAP with some poor decision making, some odd regressions, and confusing behavior added on top.

I actually disagree with this point somewhat. I have an offramp I've experimented on before, and EAP could navigate it acceptably well but needed to be monitored carefully. NoA made it significantly worse, to the point where I do not trust it whatsoever. Other offramps simply demand you take over immediately, before you've completely exited the highway. While I appreciate the early warning that the system can't handle what's about to happen, it does make me think there's some level of HD map data involved instead of relying solely on the sensors.

This is all pretty wonky, though. With the aggressive changes in speed as the car approaches an exit, I've had people attempt to speed past me while they take the exit, not realizing my car is about to try to maneuver into the exit lane. At most exits I've attempted with NoA, it moved over as soon as the exit lane began, but there were one or two where it didn't attempt to move into the exit lane until halfway down it.

I didn't have NoA suggest I merge into anybody, so that's pretty interesting. But my testing was done at night on a mostly empty highway.

I've not trusted EAP with ending lanes yet. I'm just not confident enough that it will do the right thing while trying to find the middle of a lane that doesn't exist. I will say that when passing an onramp that doesn't have a dashed line, EAP will violently jerk right to try to find the middle of what is two lanes narrowing into one. This is especially fun on a curved section of highway, where it feels way too aggressive. If that happened in the winter, my car could easily have spun out and caused a major pile-up. Not NoA specific, but an EAP flaw for sure.

Again, I didn't see any of that in my test, but that's an interesting data point for sure.

This was the most frustrating part of my testing.
On a clear, two lane highway, it constantly prompted and dinged to get my attention. Even on a three lane highway, it always wanted to move left until it was a mile or so from a planned exit. I have no idea why it wanted to move left while traffic was regularly passing me on the left, but I do know that it has no concept of being the cause of traffic slowing down behind it. This is a really fundamental rule of the road, and the fact that it's missing is a bit concerning. Then again, it probably prevents the car from wildly seeking the middle of a lane that's merging from the right, so maybe it's a blessing for people who live where 8 lane highways are the norm.

This, to me, is the most infuriating EAP experience. Here in the northeast US, we have undulating highways, and they seem to cause EAP to really lose its mind and smash the brakes. That's pretty inconvenient for everybody around me when traffic is accelerating to 75+ and my car suddenly wants to go 45. It is completely unacceptable.

Even worse, there are areas around me where highways merge for several miles. The posted speed limit is the normal highway speed, but EAP insists on traveling at least 5 under the posted limit no matter what I do to adjust the speed it selects. If I cancel and resume EAP, it will drive the speed I choose. That's utter nonsense: I'm either on a speed-enforced interchange, or I'm on a highway with two route designations. Make up your mind, EAP.

I'm probably one of the few people who thinks reading speed limit signs is an awful idea. It's common near me for people to vandalize speed limit signs and either remove digits or modify existing digits to make them read something else, for example spray painting a sign that says 65 to make it read 85. A human can easily detect the difference, but I flat out do not trust a neural network to do the same. Another neat practice is to add a "1" prefix to a sign so it reads 170 instead of 70.
I'd rather the system flag an event for review by Tesla when it reads a sign that says one thing but the tile data says something else. The obvious exception is construction zones, where temporarily reduced speed limits are common. But in those cases, I frankly don't think people should be using EAP in the first place. It's far too unpredictable.

This is why I think EAP through a construction zone is too much of a hazard. Imagine a worker steps partially into a lane of traffic. Do we know what EAP will do? My honest fear is that it would hit that person and be none the wiser. There's just too much going on at road work sites to trust a line-follower robot to do what needs to be done.

I will say that the deceleration starts earlier and happens much more smoothly than it did in 32.2 for me. I appreciate that, because before it was like getting seasick and whiplash at the same time. So that's a very welcome improvement.

There was a situation where the car was traveling slower than my set speed but still faster than the speed limit, and it just stuck behind someone. Eventually it did ask me to change lanes, but I couldn't tell if that was the normal desire to not be in the right-most lane or a real speed adjustment suggestion. They both say the same thing on the screen.

I've had a new experience where I'll put my blinker on fully (Model 3: push the stalk all the way up or down until it can't move further), it will blink a random number of times, the car will do nothing at all, and then it turns the blinker off. I was convinced that I wasn't fully pushing the stalk to where it needed to be, but it continues to happen. Auto lane change behavior in general seems wonky. I'm not sure if this is because Tesla is starting to rely more on AI to make driving decisions and relaxing some of the manually coded ones, but it's a minor yet frustrating point.
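The sign-versus-tile cross-check described at the top of this post could look something like the sketch below. To be clear, this is purely illustrative: the function names, the plausibility set, and the tolerance threshold are all my own assumptions, not anything from Tesla's actual software.

```python
# Hypothetical sketch: reconcile a camera-read speed limit sign with
# map ("tile") data, and flag disagreements for human review instead
# of silently trusting the camera.

from dataclasses import dataclass

# Speed limits that plausibly appear on US highway signs (mph).
PLAUSIBLE_LIMITS = {25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85}


@dataclass
class SignReading:
    limit_mph: int      # what the vision system thinks the sign says
    confidence: float   # classifier confidence, 0..1


def reconcile_speed_limit(reading: SignReading, map_limit_mph: int,
                          tolerance_mph: int = 10):
    """Return (limit_to_use, flag_for_review)."""
    if reading.limit_mph not in PLAUSIBLE_LIMITS:
        # e.g. a vandalized "170" sign: not a real limit, ignore it.
        return map_limit_mph, True
    if abs(reading.limit_mph - map_limit_mph) > tolerance_mph:
        # Camera and map disagree badly (e.g. "85" painted over "65"):
        # keep the map value and flag the event for review.
        return map_limit_mph, True
    return reading.limit_mph, False


# A vandalized 65 -> 85 sign: fall back to the map's 65 and flag it.
print(reconcile_speed_limit(SignReading(85, 0.97), map_limit_mph=65))
```

The design choice here is simply that the map data wins any serious disagreement, which matches the vandalism concern above: a spray-painted digit fools a classifier far more easily than it corrupts a mapping database.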
As for features that used to work, I find NoA is introducing flat out wrong choices in some situations. When exiting a highway on an offramp that splits, the navigation knows to stay right, it indicates that it will stay right, and then NoA/EAP absolutely wrenches the car left to attempt to find the middle of a lane that is quickly becoming a granite-block-protected island. This would be fine if it were just EAP and the system didn't claim to understand how to handle offramps, because I'd know it would try to crash. But the fact that it still says I have a hundred feet before needing to take over is probably going to lull someone into a false sense of security and cause a very serious accident.

This is the only place I deeply disagree with you. MobilEye (now Intel) had over a decade of lead time on everybody in the industry. It wasn't until Intel bought them that any real, solid plans to release their newer platforms came to fruition, and we still have to wait and see what actually comes from them in 2021. I think MobilEye has better automatic high beam control than AP2.x vehicles seem to have, and obviously sign recognition is something that exists on the EyeQ platform, but I'm not sure how much of that is actually the EyeQ systems and how much is existing componentry sold by Continental. In either event, I find sign reading to be of little or no value in practice.

As for Tesla versus MobilEye, I think it's good to keep in mind that within the span of 2-2.5 years, Tesla has built a system that effectively meets EyeQ3's feature set and implements some of EyeQ4's. I'll actually go so far as to say that MobilEye's claims of Level 3 and 4 autonomy in the EyeQ4 and EyeQ5 platforms are bullshit. Frankly, I've seen some of the demos they've done with Audi of their "Level 3" driving, and I just don't see it being much better than EAP plus a third revision of NoA.
In fact, it's such BS that Audi has basically removed the claim from the A8 and appears to only still include it in marketing materials. They made a very big deal of how amazing it was and how much of a leap it would be, and then they realized they couldn't convince any legislatures to allow it, and that it didn't quite work in the autonomous manner they initially believed. So now they're calling it a driver assistance package instead of autonomous driving.

I will grant this, though: MobilEye is much more on board with using multiple types of sensors to augment their vision platform, not just cameras. That is a massive, massive improvement over what Tesla is trying to do with only vision systems plus a single front-facing mid-range radar. At least having some level of surround radar would be an improvement, and I'm actually a pretty big proponent of lidar on top of that, but it seems that war has been won by the "no" votes within Tesla.
I have no technical understanding about this topic. But is it possible to replicate your mods in any way?
That quote was a minor aside; it is not his main post on v8 and now v9 neural networks. That post, plus the recent podcast, has a lot of details; both are linked in my previous post. He has street cred as well. Just as you don't like people blowing you off, you shouldn't blow off others who have proven themselves (obviously not talking about me, but him and some others). What about the 4 other questions?
I have to rate Autopilot by the metric of "how comfortable would I be falling asleep behind the wheel?" Elon is promising Full Self-Driving by next year; if Teslas can't fully self-drive on limited access highways, what chance do they have in crowded cities? When will Tesla release any Level 4 cars? Audi, Cadillac, Waymo and others are ready to take responsibility for their cars' actions; how long can Tesla wait?
«Relax your eyes, yes, you're falling asleep now, yes, now press the Informative-button»
Jason, good to see you back on TMC. Out of interest... what would happen if you built on top of Nvidia's existing DriveWorks stack? All the stuff in v9 seems to have been a part of DriveWorks (or whatever it's called these days) for quite some time... anyway I'm sure you've already thought of it, but just wondered if it'd be a quick way to get a launchpad for some new mods
I find your thoughts highly accurate and your work unmatched. Please, please consider selling your AP1 upgrade to us who would appreciate it! I’m out of warranty anyway!
Other than tweaking some of the lane change logic, I think Nav on AP is a very solid step forward. I have noticed some poorer basic lane holding performance than the last build though.
Please open source your solution so Tesla will take it, claim it as their own, and give it to all customers. Oh please, please open source this. I suspect a ton of the community would chip in to help as well, after your 24-hour "you guys suck" video, of course!
I think the biggest thing that might be missed here: it doesn't seem like AP 2.0 and AP 2.5 are Tesla's primary software focus right now. We know they are supposed to be implementing a new computer early next year, and that there are larger networks behind Tesla's cameras that can't run on the current computers. It would not surprise me if the team putting out Nav on AP is the B team, whereas the team trying to get FSD working on HW3 is the A team.
Please ignore the people here who like to kneejerk fanboy and don't know who you are. Those of us who do know your pedigree take your feedback very, very seriously.

Ughhhhh, much of this totally jibes with my experience. Phantom braking. Oh-god-the-phantom-braking. Phantom lane change abandonments for no reason. Squirrely suggestions to change lanes. The car has no concept of a car two lanes over attempting to change into the lane you're also attempting to change into, and I've had to panic bail to prevent side-swiping another car when NavAP was trying to change lanes. And my favorite: you can tell it you'd prefer to use HOV lanes for routing... which it will do at all costs. (I enter a freeway with 3 miles to go before the next interchange that offers both HOV and standard, far-right, I'm-already-in-this-lane exits, and it'll try to cut me across 5 lanes of traffic to use the HOV ramp. AND THEN NOT CHANGE INTO THE HOV LANE SINCE IT'S A SOLID LANE LINE!)

Regarding a lane ending and gradually merging, I think it's handling things better, but it tends to aim dead center of the new "super roomy" lane, instead of just aliasing to the line on the left like it should. I have not experienced the attempt to route into objects it detects at all; it seems pretty reliable at detecting an object and immediately red-lining the lane. I do like that it picks the correct interchange. I think the blind spot detection is better than you say, but only just barely. Swimming cars, colliding cars, ghost buses, etc. are all super, super common and something that I hope is relatively easy to smooth out. I also wonder if the car has a bit of a blind spot right at the A pillar... it sometimes shows cars popping in and out and back again when they overtake me.
Feel sorry for both A and B teams. Anyway don't forget C team, which is by far the best and I hope they're still working on it--remember the FSD video in 2016?
I don't see this as likely. Regardless of the computer change, Tesla should be utilizing the same software on both EAP and FSD. My understanding is that the new computer allows for more processing of camera data but is not an overhaul of the NN. NoA should not use different code than FSD, and this is what I think is concerning to many: NoA is likely an indication of the maturity of FSD.
No, there are other TMC posts where they talk about a completely different sensor fusion net that can be run. The higher level software that decides what to do with the data may be similar, and you are right that it needs improvement. But there's no reason the net is the same if the sensor data is going to be vastly different in FSD.
First, AVs become legal; then they congest things; then comes "we must have V2V"; and finally, "it's the human drivers stopping Level 5". I hope I'm not underestimating the "progress" of firms with more market capital than a number of TBTF banks?
Actually, I had an interesting failure mode: when your lane ends (left lane; does it work better with right lane endings, or is there something strange with NC lanes that causes this?), the car is all too happy to continue on the shoulder before suddenly aborting Autosteer with a gong. I only have one bad augmented sample of this from the days when I was still figuring out how to properly collect this data, though, and I don't feel like driving there again just to recapture the footage.
OP, I tend to use new features only when I'm solo in the car. I do this primarily for passenger comfort reasons. But I do try to use the new features when I'm solo, despite their drawbacks and bugs, because I like to think that by using them, I'm contributing real world data to the NN which will ultimately help all of us and all future owners. Am I kidding myself? If so, I won't bother using the new features until the kinks have been better worked out (6-12 months). Thanks very much for your comments; I'm much less eager now to have the NoA update appear in my vehicle.