
Where is the enhanced autopilot?

The P85D doesn't produce anything near 700hp anywhere. The official Elon excuse was "the motors are capable of 691hp, they're just limited to 463hp by other components." If you buy that, then every Tesla with any AP hardware is capable of FSD, just limited by sensors, software, redundancy, whatever. Oh, and as for 0-60, Tesla totally fudged the numbers with the P85D: they compared 85D times without rollout to P85D times with rollout to make it look better than it really was.

Sorry for cutting him a little slack, but you say it doesn't make anywhere near those numbers, and then you say he said it did make somewhere near those numbers, just only at the motors. Like, my stereo totally makes 1000 watts. I dropped some crazy engine into my car that makes a thousand horsepower, and? I still don't understand why it matters so much. He rattled off some numbers he was thinking about as far as what the motors can produce, and he was wrong(?), okay. The 0-60 numbers I'm not sure of, and I don't know what rollout numbers are. I did have a guy come out with a laptop and drop my 0-60 by about a second. Technology and specs are always a mess, but I don't get the feeling he was doing anything malicious, yet.
 
Traffic sign recognition has to be one of the easiest vision problems. After all, traffic signs are standardised. It must just be a prioritisation thing that it is not implemented yet.
With different standards in every country, and millions of possible deviations once you bring assholes with spray cans, vegetation and damage into play.

Don't underestimate this.
 
AP1 has it, my Audi Q7 had it, my Volvo V90 has it, everybody knows how to do this, so why is Tesla being lazy and using a TomTom traffic sign database?
As indicated before, the moment other manufacturers come out with 100% electric, high-HP cars, nobody will accept Tesla's attitude anymore. Promise a lot, deliver little......
 
No sign recognition is 100% accurate, so much for the spray cans and vegetation etc...
On the other hand, you have international standardization bodies, and Tesla only delivers its products in limited, precisely defined markets which offer perfect sign references. Every region could easily use a specific neural net trained for it.
It's functionality that had been available from traditional manufacturers for YEARS before AP1 appeared. So please don't push that "sign recognition is really hard" narrative.
 
Totally agree. My €35k Volvo V40 from years ago had pretty accurate speed limit sign recognition which handled most signs without any issues. And that's actually not the only thing that car did better than my €120k Model S, unfortunately.

I know it's quite a repetitive narrative, but still... it's disappointing.
 
U.S. sign shapes are standardized across the U.S. In fact, it used to be part of the driver's test to identify a sign by its shape. I don't know if it still is, as I haven't taken a written test in years. If there is software that can figure out what a captcha says when it is intentionally obfuscated, then it should be able to read signs with graffiti or overgrowth, even though those are not that common. It knows what kind of sign it is based on its shape, and that part should be trivial.
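Just to illustrate the shape argument, and only that: here is a toy sketch using OpenCV's contour approximation to guess a U.S. sign class from the vertex count of its outline. It assumes the sign has already been segmented into a binary mask; a real stack would use trained neural nets plus colour, orientation and text, so treat this as a discussion aid, not anyone's production approach.

[CODE]
# Toy sketch: guess a U.S. sign class from the vertex count of its outline.
# Assumes a pre-segmented binary mask (uint8, single channel). Real systems
# also need colour, orientation and text, and use trained neural nets.
import cv2

VERTICES_TO_SIGN = {
    3: "yield (triangle)",
    4: "regulatory or warning (rectangle/diamond)",  # ambiguous without orientation
    8: "stop (octagon)",
}

def classify_by_shape(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "unknown"
    outline = max(contours, key=cv2.contourArea)   # largest blob = the sign
    approx = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    return VERTICES_TO_SIGN.get(len(approx), "unknown")
[/CODE]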
 
There are some misconceptions about how self-driving works. If we say a human driver is 99.9% reliable (not a real number, just to illustrate), level 4/5 FSD needs to reach 99.99% to be acceptable. One way to get there is redundancy: two independent 99% systems give you your 99.99% (see the sketch after these bullets).
If we simplify the problem to level 2 versus level 4/5, level 2 reliability is the conjunction of the automated driving plus the human driver, while level 4/5 is the automation alone.
Level 2 is very tricky, because the better the system works, the more complacent and less reliable the human driver gets.
- At 95% with a lot of disengagements, as was the case before Tesla, there is not much risk of that; the human stays at 99.9%, so overall safety is the same or better.
- At 99%, which is roughly the level of Tesla Autopilot on the highway at the moment, if the human can stay at 99.9% it is much safer; but unfortunately the number of recent accidents shows the human can drop very low (less than 90%), ending with worse safety when using the system.
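Here is the arithmetic behind that redundancy claim, as a quick sketch. All reliability figures are the made-up illustration numbers from above, not measured data.

[CODE]
# Arithmetic behind the redundancy claim. All figures are the post's
# illustrative numbers, not real measurements.

def combined(p_a, p_b):
    """Probability that at least one of two independent systems is right."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

human = 0.999      # hypothetical attentive human driver
system = 0.99      # hypothetical level 2 automation

print(combined(system, system))  # 0.9999  -> two independent 99% systems
print(combined(human, system))   # 0.99999 -> attentive human + level 2
print(combined(0.85, system))    # 0.9985  -> complacent human: now WORSE than 0.999 alone
[/CODE]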

Among the self-driving players there are two strategies to get to the 99.99%: Waymo, Cruise, Drive.ai... aiming directly at level 4/5, and Mobileye/Tesla trying to go from ADAS/level 2 to full autonomy.

The steps are always the same: situational awareness (what is around me) and driving policy (what are the others expected to do, and what should I do).
If we leave aside the driving policy (perhaps easier and more programmatic), you need near-perfect situational awareness to have a chance at a reliable system.
In theory you should be able to do that with vision alone and without a map, since humans can, but in practice we are very far from it.

Waymo is already at level 4, as it is rolling out its autonomous cars to the public. It reached this 99.99% thanks to sensor fusion (vision + lidar + radar) and high-definition maps (I know how the world is supposed to be, so I only have to work on what has changed, and thanks to landmarks I know where I am to the centimetre). Waymo's technology is there, but it will not necessarily translate to wide deployment outside of fleets.
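A minimal sketch of the landmark trick, with invented numbers: if the map stores each landmark's absolute position and the car measures where that landmark sits relative to itself, every match yields one position fix, and averaging many independent fixes pushes the error down toward cm level.

[CODE]
# Minimal sketch of landmark-based localization; all coordinates invented.
import numpy as np

map_landmarks = np.array([[105.2, 44.1], [98.7, 46.5], [110.0, 41.3]])  # map frame (m)
observed_rel  = np.array([[  5.1,  4.2], [-1.3,  6.7], [  9.9,  1.5]])  # measured from car

fixes = map_landmarks - observed_rel     # one position estimate per landmark
print(fixes.mean(axis=0))                # ~[100.07, 39.83]; averaging cancels noise
[/CODE]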

Mobileye had a very different and coherent approach for years: crack the vision problem on a small chip that can be deployed in large-scale ADAS systems (the reliability requirement is much lower there, since ADAS is mainly a last-resort action). That is where we are now with the EyeQ3: used by many manufacturers for ADAS (AEB, LDS, auto-parking...) and by Tesla for AP1. A good vision system, not dependent on maps or a network, but far from enough to reach even level 3 without maps and perfect object detection.

Coming next are the EyeQ4 and EyeQ5, still deployed in ADAS systems and as the basis for some limited level 2/3 features, but they will also be used to build, as a background task, the all-important HD map, thanks to smart, very lightweight landmarks.

Mobileye's impressive Jerusalem demo shows that they are not far from the Grail: pure-vision (plus HD map) level 4. In parallel they are working on a pure-lidar system, so once they get there they will have the combo to go from ADAS to level 5 with the best map system, and a good chance of becoming the Windows of self-driving.
The jury is still out on whether a pure-vision level 4 system will be reliable enough and accepted by regulators. But if you have two independent, close-to-level-4 systems (vision and lidar), you are all but certain to have the best level 4/5. And with vision alone you get an excellent level 3 and, even more importantly, an incredibly effective ADAS system that will save many lives, acting as a supplementary pilot ready to save you from yourself and others.

And where is Tesla in all this? After working with Mobileye and learning a lot from each other, they chose to go their own way. It is somewhat coherent not to be dependent on the Windows of self-driving when their own culture is more Apple-like, with total control from the battery to the charging network to the self-driving technology. But contrary to the Apple/Windows analogy, it is Tesla who needs to copy and replicate the original Mobileye design, as it is clearly the best and only way to get from level 2 to level 4/5.

I don't know enough about the internal reasons for the split from Mobileye, but Elon Musk's communication and AP2's development seem to imply that they had too much faith in NNs and data, and thought they could use NN tricks to shortcut part of the HD map and driving policy development, and that the vision system would be easier. Now they have to go the long and difficult way and pretty much replicate Mobileye's strategy, with fewer resources and late in the game.

All that said, it is not necessarily a bad decision, as they will be fully independent in their development and can freely release a very convenient, even if not safer, Autopilot, which is an excellent sales argument and margin maker. But they need to match their communication with reality: pure-vision FSD (even with radar) is far, far away, and if it ever arrives it will be an all-new system rather than a slow evolution of the current one.
The realistic goal (FSD not being possible yet) is delivering a real level 3 system on the highway, which implies a huge improvement in vision and cm-level precision against an HD map (not mapped to the cm, but knowing to cm accuracy where you are on the map, thanks to landmarks).

At the moment, Autopilot's situational awareness is quite poor: it doesn't really know where it is (it builds its picture mainly from its own instantaneous vision), and it is not sure where all the roads and intersections are, or where all the moving objects are (what each one is, which lane it is in, where it is going...). Radar adds information, but not precisely enough to be 100% sure of all the surroundings. It works impressively well, but to be clear, it is a very focused driver doing its best with poor eyesight (5/10).

The idea is not to bash Tesla. Autopilot is an incredible achievement, one that only Tesla was able to deliver at such a scale thanks to its OTA capability. People and regulators accept an imperfect system because they know it will get better, but if Tesla doesn't want to spoil this asset, it is getting really urgent to match their communication with reality and give a clear vision of where they are going (they need to talk about level 3 and/or pilot monitoring), even at the price of acknowledging the safety deficiencies of the system and the over-optimism around FSD.
 
Sorry for cutting him a little slack, but you say it doesn't make anywhere near those numbers, and then you say he said it did make somewhere near those numbers, just only at the motors. Like, my stereo totally makes 1000 watts. I dropped some crazy engine into my car that makes a thousand horsepower, and? I still don't understand why it matters so much. He rattled off some numbers he was thinking about as far as what the motors can produce, and he was wrong(?), okay. The 0-60 numbers I'm not sure of, and I don't know what rollout numbers are. I did have a guy come out with a laptop and drop my 0-60 by about a second. Technology and specs are always a mess, but I don't get the feeling he was doing anything malicious, yet.
I know @whitex can get a little spun up and unfiltered, but he is spot-on with this one. You've got a lot of catching up to do if you're not familiar with this scandal. He didn't just randomly rattle off some numbers, as you say. This was on Tesla's website when you ordered the car. Your analogy doesn't come close, unfortunately.

Here's a much better analogy for what happened... The manufacturer tells you that the car makes 700hp. You dyno it and see that it only produces 450hp at the crank. You ask the manufacturer to explain, and they tell you: well, you see, the forged internals (crank, rods, etc.) are capable of 700hp, but the rest of the components needed to create such an explosion (high-volume fuel pump, high-pressure fuel injectors, high-flow air intake, etc.) are nowhere near capable of delivering the potential energy to create that kind of power.

In our case, yes, the electric motors are built well and could clearly produce 691hp combined. However, it was always known that the energy delivery system (battery, contactors, safety quick-disconnect fuse, cabling, cooling, etc.) could never handle it. So advertising it as a 691hp car was very misleading.
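A back-of-envelope sketch of the point: electrical power is volts times amps, so the delivery chain caps the power no matter what the motors could handle. The sagged pack voltage and current limit below are my illustrative assumptions, not Tesla specs.

[CODE]
# Back-of-envelope: power = volts * amps. The 350 V (sagged pack voltage)
# and 1300 A (fuse/contactor limit) figures are illustrative assumptions,
# NOT official Tesla numbers.
WATTS_PER_HP = 745.7

amps_needed = 691 * WATTS_PER_HP / 350        # ~1472 A required to actually make 691 hp
hp_available = 350 * 1300 / WATTS_PER_HP      # ~610 hp at the pack, before losses

print(amps_needed, hp_available)
# Inverter/drivetrain losses then pull the usable figure down further,
# toward the mid-400s hp numbers people measured.
[/CODE]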

It's kind of like the ICE that Ford put in the Mustang GT from 2005 to 2010. It was clearly built strong enough to handle over 400hp with better fuel delivery, larger injectors and forced induction, but Ford didn't advertise it as a 400hp+ car. It was advertised as a 300hp car, because that's what it produced from the factory.
 
Sorry for cutting him a little slack, but you say it doesn't make anywhere near those numbers, and then you say he said it did make somewhere near those numbers, just only at the motors. Like, my stereo totally makes 1000 watts. I dropped some crazy engine into my car that makes a thousand horsepower, and? I still don't understand why it matters so much. He rattled off some numbers he was thinking about as far as what the motors can produce, and he was wrong(?), okay. The 0-60 numbers I'm not sure of, and I don't know what rollout numbers are. I did have a guy come out with a laptop and drop my 0-60 by about a second. Technology and specs are always a mess, but I don't get the feeling he was doing anything malicious, yet.
You misunderstood. The motors don't make the spec'd horsepower; they would need to make 50% more to hit the spec. They are "capable" only in the sense that they could do it if the car had a different battery, wiring, and power electronics (which sadly Tesla did not include with the P85D, nor offer to retrofit; you have to buy a brand-new P100D to get those parts). It's as if Elon came out and said he had already delivered FSD, because the steering and brakes are capable of FSD, if only they were connected to a system that could navigate the car through the world. So would you give him a little slack and say, good job Elon on delivering full self-driving cars? Btw, sadly, such an excuse from Elon is totally within the realm of possibility: one day, when AP4.0 or later actually gets there, FSD owners on all previous hardware versions will get exactly that excuse, though I bet it will involve something along the lines of "the government will not let us release FSD for older AP hardware, but it's totally capable."
 
If you have ever been the lead designer (imagineer) of a large software development project, you will recognize the truths in the statements that follow:
1. It can be very difficult to communicate your idea and have it embraced in the way you envisioned it.
2. Every programmer thinks the code they write is the best, so they don't want to use anyone else's code in their program, even if that code works.
3. Every programmer will tell you that if you just let them rewrite the code they have delivered, it will be much better and far less buggy.
4. Every program is 90%-95% done, and they just need a little more time to complete it.
5. As the scheduled due date approaches (a schedule they either created or agreed upon), they just need another week or two.
6. The performance is not there yet, but just let us refactor and clean up the code and it will be.
7. When shown bugs in the code: "We didn't expect anyone to do that, so of course we have this bug." Subtext: "What kind of idiot does that?"

Some of this is a little exaggerated, but not by much. I guess my point is that Elon may have fallen victim to his own optimism and listened to what his programmers said they could do, without requiring them to demonstrate that they could actually do it. I'm guessing (I don't have any direct knowledge) that his programmers, looking at Mobileye's capability, thought they could replicate the functionality very easily, especially with better hardware to work with. However, programming is more art than science, and what looks deceptively easy after someone else has done it may turn out to be exceedingly difficult when you try to do it yourself.

I'm reminded of a smart Navy Captain that I asked to fund an idea I had for a shipboard system. He asked me how much it would cost and how long to build a demo version without all of the full capability and features. He said he would pay for half of that and if we delivered when I said we would and it did what I said it would do he would provide the balance of the funding I needed. I had asked for $2M, he put up $100K. We delivered early and with more capability than we promised and he kept his side of the deal. He is now a 3 star admiral and the system was a huge success.
 
There are some misconceptions about how self-driving works. If we say a human driver is 99.9% reliable (not a real number, just to illustrate), level 4/5 FSD needs to reach 99.99% to be acceptable. One way to get there is redundancy: two independent 99% systems give you your 99.99%.

There are also gross misconceptions about what system safety actually is and how it plays into a performance system like auto-driving. Aviation has already worked out safety concepts for navigation, but I haven't seen any of it in enthusiast discourse. I'm just not sure that people understand what they're dealing with when they toss around simple statistics to argue safety.

The 99.9% reliability number quoted refers to accuracy, i.e. the navigation solution's correctness in nominal conditions. This is only one part of a much bigger problem. Just to give a taste: let's say, as proposed, there are two independent navigation systems in use, each of which is correct (i.e. good to within an error tolerance) on 99% of driving segments. It is true that a cross-comparison of the solutions will protect you 99.99% of the time in nominal conditions. Some major issues:
  1. If the conditions aren't nominal, e.g. one of the systems has a hardware fault and some equipment is producing erroneous outputs, then you no longer have a 99.99% chance of catching a problem. You have to rely on the remaining system for fault detection, which is only good 99% of the time. That is still better than an unmonitored system, but not the 99.99% promised.
  2. System availability is a problem. Suppose one of the systems produces a bad navigation solution and the cross-check catches it. There is no way to determine which one is correct, so the auto-drive system must be stopped for safety. There is a 2 * 0.99 * 0.01 = 0.0198 = 1.98% chance of this occurring. For the sake of argument, if our probabilities are scaled per minute, then for every 50 minutes of driving we would expect almost one of those minutes to experience a system outage (see the sketch after this list). Is that actually good enough for a level 4 or 5 system? Autonomous driving isn't my realm of expertise, but it sounds pretty unacceptable to me.
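Here is the arithmetic behind point 2, under my per-minute scaling and the simplification that a disagreement means exactly one system is wrong. All numbers are still illustrative.

[CODE]
# Arithmetic behind point 2; per-minute scaling, illustrative numbers only.
p = 0.99                         # each system correct in a given minute

p_disagree = 2 * p * (1 - p)     # exactly one wrong -> cross-check trips, car must stop
p_both_bad = (1 - p) ** 2        # both wrong -> the undetected, dangerous case

print(p_disagree)                # 0.0198 -> ~1 forced-stop minute per 50 minutes
print(1 / p_disagree)            # ~50.5  -> mean minutes between outages
print(p_both_bad)                # 0.0001 -> complement of the 99.99% figure
[/CODE]
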
There are other areas that I won't discuss, like specific hardware fault detection, but those areas are also complicated. (And to be clear, a hardware fault does not necessarily mean total failure. It can mean that equipment produces subtly incorrect data, which is much more dangerous than total failure.) The point here is that many more design problems exist than just simple fault detection, and the numbers for such things are pretty big and troublesome to real engineers.
 
Sign recognition requires a lot of contextual information. For example, school zones have time-of-day limits, and some have flashing signs indicating when the limit is in effect. Some signs apply only to certain classes of vehicles (e.g. different speeds for trucks vs. cars); the sketch below shows the kind of logic involved. So I think sign reading is less useful, and focusing on HD maps and L3 highway driving would be best for Tesla.
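To make the point concrete, a toy sketch of the context logic a raw sign reading still needs. Every rule and number here is invented for illustration, not any jurisdiction's actual rules.

[CODE]
# Toy sketch: the effective limit depends on context, not just the sign.
# All rules and numbers are invented for illustration.
from datetime import time

def effective_limit(posted_mph, vehicle, now, school_zone=False, truck_plaque_mph=None):
    limit = posted_mph
    if vehicle == "truck" and truck_plaque_mph is not None:
        limit = truck_plaque_mph                      # class-specific plaque
    if school_zone and time(7, 0) <= now <= time(16, 0):
        limit = min(limit, 25)                        # time-of-day school zone
    return limit

print(effective_limit(55, "car", time(8, 30), school_zone=True))      # 25
print(effective_limit(55, "truck", time(20, 0), truck_plaque_mph=45)) # 45
[/CODE]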
 
When it comes to safety, I think the approach Mobileye is taking makes sense. It stays away from statistical approaches, because the data needed for them is unwieldy and too subject to the games statisticians play. Their mathematically backed RSS model makes sense in that it better mimics human common sense: first define what a dangerous situation is in rigorous terms, then define what causes it, and finally define how to respond to it. This is a nice PDF paper explaining the concept: https://www.mobileye.com/responsibility-sensitive-safety/rss_on_nhtsa.pdf
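The heart of RSS is a closed-form minimum safe following distance. Here is a sketch of the longitudinal rule from that paper; the parameter values below are illustrative defaults I picked, not Mobileye's calibrated ones.

[CODE]
# Sketch of RSS's minimum safe longitudinal distance (longitudinal rule
# from the linked paper). Parameter values are illustrative only.

def rss_min_gap(v_rear, v_front, rho=0.5, a_accel_max=3.0,
                a_brake_min=4.0, a_brake_max=8.0):
    """Minimum gap (m): assume the front car brakes as hard as a_brake_max
    while the rear car accelerates during the response time rho, then
    brakes at only a_brake_min. Speeds in m/s, accelerations in m/s^2."""
    v_resp = v_rear + rho * a_accel_max              # rear speed after response time
    d_rear = v_rear * rho + 0.5 * a_accel_max * rho ** 2 + v_resp ** 2 / (2 * a_brake_min)
    d_front = v_front ** 2 / (2 * a_brake_max)       # front car's stopping distance
    return max(0.0, d_rear - d_front)

print(rss_min_gap(30.0, 30.0))   # ~83 m safe gap with both cars at ~108 km/h
[/CODE]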
 