
What are the chances of Autopilot 3.0?

What makes you think that the fact that the rest of the industry is doing something differently (and has been for longer) serves as evidence that the rest of the industry is somehow doing things in a better way than Tesla?

Mine was a response to this:

Also worth remembering that the AP2 solution wasn't designed one afternoon by Musk on the back of a Starbucks napkin, but is the work of many skilled engineers, who would have been able to verify common scenarios like how the system would handle your Volvo blindspot example.

The point was that tons of other skilled engineers have come to a different conclusion about self-driving sensor suites, including many within both Silicon Valley (e.g. Google) and the traditional automotive industry (e.g. Audi/VW and Volvo). I was simply pointing out that these systems, too, are the work of many skilled engineers, who have thought about and been able to verify various common scenarios - likely to a far larger extent, given how much longer they have been working on this.

Look. None of us knows what the optimal FSD sensor suite will be. Neither of us - as in you or me - is an expert in this area, nor even a pundit; we're just guys paying some attention and taking an interest in it. The future will have to inform us.

The second point I'm making is this: an "AP3" sensor suite may be based on considerations quite different from AP2's, which means it may, for example, include more sensors.

I think several factors contributed to Tesla's selection of the AP2 suite, including the need to port legacy features from AP1 (hence the radar, the ultrasonics and the attempted MobilEye integration in AP2 - the board even has room for it). The rest of the suite seems to include the bare minimum of what is required for vision-based FSD, with very little redundancy and with visual blind spots, especially around the lower nose. The number of cameras was thus probably dictated in part by cost and expected computing performance/capability.

With AP1 he built the most functional semi-autonomous system on the road using the fewest sensors. That alone should tell us "it's the software, stupid" - or at least "let's withhold judgment."

He - well, Tesla - did. However, that was standing on the shoulders of a giant in its own right: the MobilEye chip. Since then we have learned a thing or two about Tesla's internal software prowess in the form of the adventure that is AP2. We have also learned just how much - and probably how dangerously - Tesla was pushing the AP1 envelope, seeing how they scaled the autonomy back through nags and disengagements. MobilEye had an opinion about that as well.

What I personally view AP2 as is the bare minimum of what Tesla needed to carry the AP1 work forward (the legacy part that still runs things like TACC, blind spot detection and auto-parking) and to put a bare-minimum visual platform on the market that they can start working on, deploying and researching full self-driving with. This is where Tesla's aggressive "do more with less and deploy it before it is ready" strategy shows. For that purpose they probably felt the need to limit how much redundancy etc. they would build into the hardware, considering its use was possibly years away.

Here's the thing: I don't doubt the AP2 suite can do good-weather FSD, especially with an upgraded CPU/GPU. I have made that clear and, barring a company failure, expect Tesla to deliver.

But regarding this thread: it is hard to see how the AP2 suite would be so optimal that, once the above-mentioned considerations for Tesla change and the work progresses on the software side, it wouldn't - possibly quickly - be followed by an upgraded sensor suite.

And it is hard to see how the AP2 sensor suite would somehow be superior to the far more robust suites others have been working on for much longer.

Is there some track record you can point to from 2008 'til now that shows the rest of the industry has been getting things right/better (with propulsion systems, sales channels, autopilot systems) with what they bring to market vs Tesla?

There are many things the rest of the industry does better than Tesla, but most importantly, Tesla's successes in some areas do not automatically translate into others. Of the ones you mentioned, I will obviously agree on Tesla's world-changing work on BEVs. They changed the world. AP1 was a formidable achievement too, no doubt. You did not mention the Supercharger network, but I will also chalk that up to Tesla. Sales channels - well, there are so many issues with the Tesla model that some refuse to see, and I would hardly call it automatically better. I've certainly had better service through the traditional model than through Tesla.

But again, the thing is: we are talking about the AP2 suite and what/when might be in AP3. Even as you list AP1 as a major achievement, you know it was eventually replaced by AP2. You also know that not all AP1 features made it in the end. You know the history of Tesla failing to meet Performance specs with the P85D and P90DL Vx, which eventually meant new products being introduced to finally meet those missed specs. There is plenty of history there as well.

So the idea that an AP3 sensor suite would add more sensors - e.g. more cameras for redundancy and/or blind spots and/or depth vision, or more radar coverage for blind-spot or cross-traffic monitoring... or even lidar, if the rest of the industry made the better bet - all seems quite plausible to me. And given Tesla's missed specs in the past, the idea that an AP3 suite might even fix mistakes in AP2 is possible.

What Tesla shipped in AP2 is IMO a bare minimum and sort of a first effort. What they ship once costs come down, the software matures and they learn more from what they have been doing might be - and IMO likely is - different.
 
So you believe the demo is the production version and it's ready to go lvl 5? I don't think so.

IMO, of course not. But those things speak to the software development history - and that can explain why some hardware choices were made, too.

I think there is some reason to believe the ultrasonics and radar are in AP2 partly for legacy reasons, to allow porting of AP1-era features (the AP1 parity stuff, auto-parking, early blind-spot monitoring etc.). The 7+1 cameras are there to let Tesla start working on a vision-only system in parallel to that (the FSD part).

How these eventually merge is of course another question; I expect them to make use of all the sensors as long as they are there. When/what sensors they might include in an AP3 - which still IS the topic of this thread, people seem to forget - can be different, though, because Tesla doesn't have to start from an AP1 legacy position where the use of vision was limited to the forward direction.

Personally I would expect AP3 to include more radars rather than fewer, and the ultrasonics are probably not going anywhere, but I guess anything is possible...
 
Found one - obviously partial - slide about the various technologies, FWIW:

[Slide image: "Why LiDAR - most accurate perception sensor", comparing LiDAR, radar and camera across range and other criteria]
 
Wrong, because even with that, the performance is NOT good if the lane markers are not raised at least 2 mm.
It's not wrong... see the papers on the subject. In addition, many lidar units do not have that kind of vertical resolution.

This is/was $75,000, and its vertical resolution is only 0.4 degrees - are you going to say it can see 2 mm at 10 m+?
HDL-64E
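
Quick back-of-envelope (just trigonometry on the quoted 0.4° figure - the unit's actual beam layout will differ):

```python
import math

# Vertical gap between adjacent beams at range r, assuming the quoted
# 0.4 degree vertical angular resolution (illustrative only).
RES_DEG = 0.4
for r in (10, 30, 50):  # metres
    gap_mm = r * math.tan(math.radians(RES_DEG)) * 1000
    print(f"at {r:>2} m: ~{gap_mm:.0f} mm between beams")

# at 10 m: ~70 mm between beams
# at 30 m: ~209 mm between beams
# at 50 m: ~349 mm between beams
# All orders of magnitude coarser than a 2 mm raised marking.
```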
 
That table is a little misleading as rain, snow, and dust can block lasers...

I don't think the slide is impartial at all, so it probably has some marketing to it. Just something I came across.

It does indeed suggest lidar is roughly equal to radar in rain/snow/dust, though not in fog. I am no expert, so I won't analyze it further.

However, FWIW, it does seem to suggest roughly the following:

1) Radar is best in fog and at measuring range rate.
2) Vision is best at reading signs and seeing color.
3) Lidar, they suggest, is best at the rest.
 
That table is a little misleading, as rain, snow, and dust can block lasers. Object recognition at long range is shown as ++, but cameras actually have longer range than the lidar units you find on most cars.


You clearly have a lot of catching up to do. There is nothing misleading about that chart.

Driverless cars have a new way to navigate in rain or snow

Everything in that chart literally repeats everything I said in my post, which you disagreed with.
Now who is the clueless one?
 
You clearly have a lot of catching up to do. There is nothing misleading about that chart.

Driverless cars have a new way to navigate in rain or snow

Everything in that chart literally repeats everything I said in my post, which you disagreed with.
Now who is the clueless one?
Read the actual article... they mention that lidar cannot see past the obstacles. They are using an algorithm to ignore those data points. In addition, the article mentions the difficulties with snow.
 
It's not wrong... see the papers on the subject. In addition, many lidar units do not have that kind of vertical resolution.

"It is noted that this algorithm works best on roads with dark asphalt with highly contrasting lane markings raised approximately 2mm above the road's surface, typical to those lane markings found in Germany. The worst road surface for this algorithm is concrete"
 
Radar cannot see small objects, and it can't differentiate objects. To it, a bike and a street sign can be exactly the same thing.
A parked car and a wall can be exactly the same thing.

This is, again, why manufacturers ignore objects that have no velocity.

C'mon, you can do better than that.

Relative velocity.

A parked car and a wall will both have relative velocity when you drive past them, so neither would be ignored by the radar.
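
To put rough numbers on that (a simplified 2D sketch of my own, not any real tracker):

```python
import math

def closing_speed_of_stationary_object(ego_speed_ms, azimuth_deg):
    """Closing (Doppler) speed a forward radar measures for a stationary
    object in a simplified 2D model: the projection of the ego car's own
    motion onto the line of sight. azimuth_deg = 0 means dead ahead."""
    return ego_speed_ms * math.cos(math.radians(azimuth_deg))

print(closing_speed_of_stationary_object(25, 0))   # wall dead ahead at 25 m/s: 25.0
print(closing_speed_of_stationary_object(25, 45))  # parked car 45 deg off: ~17.7
```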
 
The thing about Tesla's approach, IMO, is that they are really trying to take the easy way into FSD.

I'm not saying easy as in "FSD is easy", but easy as in going basically vision-only, compared to the triple-redundant vision/radar/lidar fusion of others (I would ignore the ultrasonics for most of this; they have a very specific low-speed application only).

This has allowed Tesla to ship an FSD suite sooner than the rest of the industry and to keep the cost of a 360° suite much lower than doing triple, or even just double, sensor fusion all around the car. It also means the CPU/GPU handling this stream of information has less to work with: not just fewer types of data to merge, but simply less data volume to handle.
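
To put some rough numbers on the data-volume point (every figure below is my own illustrative assumption, not Tesla's or any supplier's actual spec):

```python
# Back-of-envelope sensor stream rates. All figures are assumed,
# illustrative specs - not Tesla's or any supplier's actual numbers.
cameras = 8 * 1280 * 960 * 30   # 8 cams, ~1.2 MP, 30 fps, 1 byte/px (mono)
lidar   = 1_300_000 * 16        # ~1.3M points/s, ~16 bytes per point
radars  = 5 * 100 * 64 * 20     # 5 radars, ~100 targets x 64 bytes, 20 Hz

for name, rate in (("cameras alone", cameras),
                   ("adding lidar", lidar),
                   ("adding radars", radars)):
    print(f"{name}: ~{rate / 1e6:.1f} MB/s")
```

The absolute numbers matter less than the point: every extra modality is one more stream to ingest, time-align and cross-check against the others.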

All this comes down to Tesla being able to ship some level of FSD-capable hardware and software quite possibly sooner than the rest of the industry. It means that while others are putting their efforts into shipping Level 3+ motorway driving with triple-redundant sensor fusion in the front, Tesla could in theory - and at least to some extent in practice - already work on urban FSD, since their suite looks all around: limited as it may be, but still covering wider ground.

At the same time, IMO, this does not necessarily tell us much about what the eventual optimal FSD platform is, or what might be in an AP suite beyond AP2. It just tells us what is probably the fastest and easiest way to get there. It does not tell us what the best way is.
 
Read the actual article... they mention that lidar cannot see past the obstacles. They are using an algorithm to ignore those data points. In addition, the article mentions the difficulties with snow.

No, you read the article without those Tesla-tinted lenses. Can you see past a snowflake? Exactly - you ignore it.

If I send 1,000 beams and 100 of them hit snowflakes, guess what? My 3D image of a pedestrian is still intact and not affected at all.

Snow, rain and dust have been solved for lidar. Plain and simple.
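
What's being described is, in spirit, plain outlier removal on the point cloud. A toy sketch (my own, not the algorithm from the linked article):

```python
import numpy as np
from scipy.spatial import cKDTree

def drop_sparse_returns(points, radius=0.15, min_neighbors=4):
    """Toy de-noising in the spirit described above: a snowflake hit is an
    isolated return, while a pedestrian produces a dense cluster of them,
    so discard returns with too few neighbors within `radius` metres.
    points: (N, 3) array of x, y, z lidar returns."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, r=radius)
    counts = np.array([len(n) for n in neighbor_lists])  # includes the point itself
    return points[counts >= min_neighbors]
```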
 
"It is noted that this algorithm works best on roads with dark asphalt with highly contrasting lane markings raised approximately 2mm above the road's surface, typical to those lane markings found in Germany. The worst road surface for this algorithm is concrete"

It's all about the contrast, and I just pointed to the very lidar they are using: it cannot resolve 2 mm vertically unless the car is right on top of the marking.
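
FWIW, the contrast point can be made concrete with a toy sketch of my own (not the paper's algorithm) that picks markings out of ground returns by reflectance rather than height:

```python
import numpy as np

def marking_candidates(ground_points, intensity, contrast=2.0):
    """Toy illustration of the contrast argument: paint is far more
    retroreflective than dark asphalt, so lane markings can be picked
    out of ground-plane lidar returns by intensity alone - no need to
    resolve a 2 mm height step.
    ground_points: (N, 3) returns already classified as road surface.
    intensity: (N,) per-return reflectance values."""
    asphalt = np.median(intensity)  # typical road-surface reflectance
    return ground_points[intensity > contrast * asphalt]
```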

Can you see past a snowflake? Exactly - you ignore it.
Radar can ;)
 
C'mon, you can do better than that.

Relative velocity.

A parked car and a wall will both have relative velocity when you drive past them, so neither would be ignored by the radar.

I don't think you are understanding. It's not that radar won't see it.
It's that:

1) Radar can't differentiate them.
2) Manufacturers ignore all radar readings without a delta, which is why you see the warnings that cruise control "will not see stationary objects".
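
In code terms, the kind of rule I mean might look like this (a generic ACC-style sketch, not any manufacturer's actual logic):

```python
import math

def overground_closing_speed(measured_closing, azimuth_deg, ego_speed_ms):
    """Target motion over the ground along the line of sight: the measured
    closing speed (positive = approaching) minus what a stationary object
    at that bearing would show from ego motion alone (simplified 2D model,
    as in the earlier sketch)."""
    stationary_would_show = ego_speed_ms * math.cos(math.radians(azimuth_deg))
    return measured_closing - stationary_would_show

def keep_radar_target(measured_closing, azimuth_deg, ego_speed_ms, min_delta=0.5):
    """Drop returns with no 'delta', i.e. Doppler consistent with a
    stationary object - radar alone can't tell a sign or bridge from a
    stopped car in-lane, hence the cruise-control warnings."""
    delta = overground_closing_speed(measured_closing, azimuth_deg, ego_speed_ms)
    return abs(delta) > min_delta
```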
 
IMO @Bladerskb and @JeffK, I can see your points about the lane markings - I have no idea who is right, but it seems you may both be right in some ways.

The end result is the same: lidar can see lane markings.

2) Manufacturers ignore all radar readings without a delta, which is why you see the warnings that cruise control "will not see stationary objects".

Well, Tesla doesn't ignore all of them. :) Hence AP2 ghost braking...