
What are the chances of Autopilot 3.0?

No, camera + lidar is FAR superior to camera + radar. It's not even comparable.
Neither lidar nor cameras can see through water/fog ... radar can. Enough said.

Mobileye never promoted a vision-only system.
They've actually demoed one. Check their YouTube channel.

Wait, how in the world do you not understand? Do you drive? Do you park and unpark? Serious question.
Sit in any parking lot and count the number of people who walk all the way around their car. 360 degrees. Haha, it'll be an incredibly small percentage, if anyone at all.

If you don't have a backup camera and a toddler walks behind your car as you are backing out ... you think you will magically see it? Do you think you'd randomly get out of the car, walk around, and check behind your back bumper as you are backing out?
 
No, camera + lidar is FAR superior to camera + radar. It's not even comparable.

Yeah, sure. But I guess there is something to radar seeing through visual obstacles. That said, I don't see anyone doing just vision + lidar; everyone but Tesla seems to be going for vision + lidar + radar + ultrasonics... that sounds, IMO, like the safest and most robust approach.

Mobileye never promoted a vision-only system. They offered vision-only software but have always maintained that a Level 3+ car would need triple redundancy.

In fact, they called Tesla's AP2 sensor configuration a beefed-up Level 2.

Agreed that this is the Mobileye and overall industry approach. Whether or not they've demoed a vision-only system, I don't know.

I agree Mobileye was not impressed with the way Tesla was pushing the envelope of current systems. Their roadmap is much more conservative and robust in that sense.

Finally, I do think vision-only FSD is possible. Tesla will probably get something done on the AP2 suite some day. I don't think it will be very robust against environmental factors, nor do I think the vision suite in AP2 is sufficient for all scenarios. I would expect more vision around the nose and more redundancy in the rear and in the overall 360° coverage for a more robust vision-based solution. Perhaps 3D vision all around... and better automated cleaning for all cameras.

All that said, vision + lidar + radar would obviously be better. I see absolutely no downside to redundancy. Sensor fusion seems like an infinitely more solvable problem than having no redundancy (or only one type of redundancy) and running into the expected problems...
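
To put a number on why redundancy helps rather than hurts, here is a rough Python sketch of inverse-variance fusion (my own toy illustration; the sensor names and noise figures are invented, not from any real suite):

```python
import math

def fuse_ranges(estimates):
    """Inverse-variance weighted fusion of independent range estimates.

    `estimates` is a list of (distance_m, std_dev_m) pairs, one per
    sensor type (e.g. camera, lidar, radar). All numbers here are
    made up purely to illustrate the statistics.
    """
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    fused = sum(w * d for w, (d, _) in zip(weights, estimates)) / total
    fused_sigma = math.sqrt(1.0 / total)
    return fused, fused_sigma

# Camera is noisy at range, lidar is precise, radar sits in between.
distance, sigma = fuse_ranges([(42.0, 3.0), (40.5, 0.3), (41.2, 1.0)])
print(round(distance, 2), round(sigma, 2))  # ~40.57, ~0.29
# The fused estimate hugs the lidar value, and the fused sigma is
# always <= the smallest input sigma: an extra sensor never makes
# the combined estimate worse.
```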
 
Sit in any parking lot and count the number of people who walk all the way around their car. 360 degrees. Haha, it'll be an incredibly small percentage, if anyone at all.

If you don't have a backup camera and a toddler walks behind your car as you are backing out ... you think you will magically see it? Do you think you'd randomly get out of the car, walk around, and check behind your back bumper as you are backing out?

To be fair, @Bladerskb is not claiming people walk around their cars in car parks; he is claiming a person has more situational awareness from walking to the car than an automated system might have, if that system has blind spots. Visually checking the area where you will be driving will often happen automatically, simply because people often walk over and around the area where they will soon be driving, something a driverless FSD car is not able to do.

IMO it is a fair point. A driverless car thus needs more "sensors" than a human has in order to replicate a human in some scenarios, simply because humans have this additional mobility... For instance, I was parking a car yesterday and stepped out to see how close to the obstacle I really was. If a pole is blocking the view from one vantage point, I can move my head, and so forth. If the ultrasonics scream at me and I see nothing, I will step out and take a look. What will FSD do if this is a blind spot for its other sensors?
 
because people often walk over and around the area where they will soon be driving
I can think of a number of parking scenarios where people do not naturally do this.

What will FSD do if this is a blind spot for its other sensors?
It will simply not move. That is the safest action, just as an elevator won't start moving if it detects something preventing the doors from closing.
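
As a toy illustration of that elevator-style interlock (the sensor names and clearance threshold are made up, not anyone's actual logic):

```python
def safe_to_move(sensor_readings, clearance_m=0.5):
    """Elevator-style interlock: refuse to move unless every sensor
    covering the intended path positively reports enough clearance.

    `sensor_readings` maps a (hypothetical) sensor name to measured
    free distance in metres, or None when the view is blocked or the
    reading is unknown. Unknown is treated the same as obstructed.
    """
    return all(
        distance is not None and distance >= clearance_m
        for distance in sensor_readings.values()
    )

# Ultrasonics report 2 m of clearance, but the nose camera's view
# is blocked (None) -- so the car stays put.
print(safe_to_move({"ultrasonic_front": 2.0, "camera_nose": None}))  # False
print(safe_to_move({"ultrasonic_front": 2.0, "camera_nose": 1.5}))   # True
```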

As far as situational awareness goes, humans suck at it. Which is why, as you mentioned, ultrasonics are extremely helpful.

We have to ask ourselves which is better:

  • Human only
  • Human plus sensors
  • Human plus sensors plus ADAS
  • Human plus sensors plus ADAS plus FSD
Note: FSD doesn't have to mean Level 4 or 5.
 
I can think of a number of parking scenarios where people do not naturally do this.

Of course! I am not trying to claim this is what always happens, I am trying to discuss what kind of things affect autonomous suites and what improvements might be necessary in the future.

It will simply not move. That is the safest action.

That may be. However, we are talking about a car that Tesla advertised as coming to pick you up without a driver. A failsafe that relies on technology as limited as ultrasonics in certain visual blind spots seems like a troublesome recipe.

As far as situational awareness goes, humans suck at it. Which is why, as you mentioned, ultrasonics are extremely helpful.

Ultrasonics are extremely helpful when combined with other types of sensing. If the ultrasonics do something I do not expect, which they sometimes do, that can and must be complemented by (my) vision and mobility as needed. If an autonomous car could combine ultrasonic data with lidar, radar and vision, that would allow for quite a bit of redundancy... or even just one additional sensor type...
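
Roughly the kind of cross-check I mean, sketched in Python (the sensor pairing and tolerance are invented for illustration):

```python
def cross_check(ultrasonic_m, lidar_m, tolerance_m=0.3):
    """Compare two independent range measurements of the same obstacle.

    If they disagree by more than `tolerance_m`, trust neither and
    fall back to the more conservative (shorter) distance -- roughly
    the machine equivalent of me stepping out for a look.
    """
    if abs(ultrasonic_m - lidar_m) > tolerance_m:
        return min(ultrasonic_m, lidar_m), "disagreement: assume worst case"
    return (ultrasonic_m + lidar_m) / 2.0, "agreement: average"

print(cross_check(0.4, 1.8))  # (0.4, 'disagreement: assume worst case')
print(cross_check(1.0, 1.1))  # (1.05, 'agreement: average')
```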

But if it only has ultrasonics in the nose area, like Tesla, and a side-blocked view of front cross-traffic... that seems problematic.
 
What perplexes me is the decision to make the FSD claim when AP2 was announced. I understand the hardware choice. I don't understand the claim.

There doesn't appear to be an upside to doing AP 3.0 hardware now: 1) Tesla AP seems software-constrained, and 2) the really good stuff, like cheap lidar, hasn't arrived yet.

AP 2.5 as a processor bump doesn't seem like Tesla's style.
 
You mean, like human beings?

FSD really is a software problem. Adding sensors on top of sensors makes the software harder without necessarily improving the outcome.

AI and 7 stationary non-3D cameras are not a human. But I do believe they can do good-weather FSD.

As for more sensors: fewer certainly makes the software simpler, but I am confident Tesla is far from the optimal sensor suite yet.

I doubt anyone knows what is optimal. But I am sure the first production suite is not. Just common sense.

I think Tesla shipped the bare minimum for FSD vision (7 cameras) and the bare minimum to port legacy and AP1 features (ultrasonics and radar; they would have liked to include Mobileye too).

Once software matures the eventual real suite may be quite different.
 
8 cameras... and I agree that the final "industry standard" solution for FSD is still a way off.

Also worth remembering that the AP2 solution wasn't designed one afternoon by Musk on the back of a Starbucks napkin, but is the work of many skilled engineers, who would have been able to verify common scenarios like how the system would handle your Volvo blindspot example.

This design work would have been in progress whilst they still had a good working relationship with ME, probably as AP3 with EyeQ5 (with AP2 planned as the dual cam/2 radar solution that almost arrived with the MX).
 
Note: FSD doesn't have to mean Level 4 or 5.
Elon is on record* -- voice at the beginning of conf call / announcement (I forget which) -- saying Level 5 explicitly. So from a "contractual obligation" perspective, I think it does have to be L5.

Whether I think they'll deliver or not (and when) is somewhat irrelevant; I'm sure I'm not the only purchaser of the FSD feature who had Elon's voice from that recording involved in the purchase decision.


* Here's the quote:
The basic news is that all Tesla vehicles exiting from the factory have the hardware necessary for Level 5 autonomy ....
 
Elon is on record* -- voice at the beginning of conf call / announcement (I forget which) -- saying Level 5 explicitly. So from a "contractual obligation" perspective, I think it does have to be L5.
You're misinterpreting my post. I'm saying that with AP2 sensors, even a limited FSD capability would be safer than all lesser versions such as simple ADAS, driver warnings, or the driver alone.

There's no doubt Tesla is aiming for Level 5.
 
8 cameras... and I agree that the final "industry standard" solution for FSD is still a way off.

Also worth remembering that the AP2 solution wasn't designed one afternoon by Musk on the back of a Starbucks napkin, but is the work of many skilled engineers, who would have been able to verify common scenarios like how the system would handle your Volvo blindspot example.

Yes, 7+1 cameras. But my point wasn't the number; it was the stationary, fixed nature of the cameras vs. a moving human with hands and feet...

As for the designs: the rest of the industry has had brilliant minds on this for much longer and has been gravitating towards much more robust suites. Any reason to believe Tesla got it better?
 
What makes you think that Tesla hasn't had the benefit of those brilliant minds? Do you think they just bought parts from Mobileye?

Of course they have had their share of brilliant minds, no doubt.

But my question remains: when the rest of the industry has been working on this much longer and is gravitating towards much larger sensor suites, what makes you think Tesla got it better?
 
Of course they have had their share of brilliant minds, no doubt.

But my question remains: when the rest of the industry has been working on this much longer and is gravitating towards much larger sensor suites, what makes you think Tesla got it better?

What makes you think that the rest of the industry doing something differently (and for longer) is evidence that it is doing things better than Tesla? Is there some track record you can point to, from 2008 until now, showing the rest of the industry getting things right or better than Tesla with what it brings to market (propulsion systems, sales channels, autopilot systems)?

Normally I would say you are right: if 99 people do things one way, that is some evidence that the 1 person doing things differently is wrong. But with Tesla nothing is ever normal. Elon's entire business history has been one of doing things against the tide of the rest of the industry, and coming out on top.

So personally I think that at this point in his business history, with both SpaceX and Tesla, his track record of doing more with less is successful enough that I am willing to give him the benefit of the doubt, at least for a time.

With AP1 he built the most functional semi-autonomous system on the road using the fewest sensors. That alone should tell us "it's the software, stupid", or at least "let's withhold judgment."
 
Neither lidar nor cameras can see through water/fog ... radar can. Enough said.
Is this a joke?

Microbats have echolocation (sonar) and can see through water/fog, but does that make their vision better than humans'? No.
That's exactly how laughable your statement is.

Radar actually returns only single values and has a very small FOV; it is basically 1D, or at best 2D, while lidar is 3D.

This is why every car manufacturer's manual tells you that radar-based cruise control CAN'T stop for stopped cars.
Why? Because radar returns single values, like a horizontal line. You can't tell if that line is part of the environment or an actual car, whether it is a stop sign or a parked car, or whether it is a pop can on the road or a tractor trailer.

This is why all manufacturers ignore returns that have no delta (speed).
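
For the curious, here is a toy sketch of that stationary-target filtering (the data layout and threshold are invented for illustration, not any manufacturer's actual logic):

```python
def filter_radar_returns(returns, ego_speed_mps, min_speed_mps=1.0):
    """Drop radar returns that look stationary relative to the world.

    Each return is a (range_m, relative_velocity_mps) pair, where
    relative velocity is negative when the target is closing. A
    stationary object closes at exactly -ego_speed, the same Doppler
    signature as a bridge, overhead sign, or pop can -- which is why
    ACC-style systems discard such returns.
    """
    moving = []
    for range_m, rel_vel_mps in returns:
        absolute_speed = ego_speed_mps + rel_vel_mps
        if abs(absolute_speed) >= min_speed_mps:
            moving.append((range_m, rel_vel_mps))
    return moving

# At 25 m/s, a return closing at -25 m/s has zero absolute speed and
# is discarded -- whether it is an overhead sign or a stopped car.
print(filter_radar_returns([(60.0, -25.0), (40.0, -5.0)], 25.0))
# -> [(40.0, -5.0)]  only the moving lead car survives
```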

Lidar on self-driving cars, however, has this view. Literally nothing can hide from it, and it works in sunlight, in pitch-black darkness, rain, snow, and dust:

[image: lidar point-cloud view]


Literally nothing.

[Luminar lidar animation]



Lidar can:
  • classify objects (cars, pedestrians, cyclists, street signs, bottles, helmets, poles, traffic lights, pop cans, anything)
  • give precise measurements and 3D dimensions
  • give distance and velocity

Radar can:
  • give distance and velocity



Radar cannot see small objects, and it can't differentiate between objects. To it, a bike and a street sign can be exactly the same thing.
A parked car and a wall can be exactly the same thing.

This is why, again, manufacturers ignore objects that have no velocity.

Compare that to lidar, where Google's "...newest sensors can keep track of other vehicles, cyclists, and pedestrians out to a distance of nearly two football fields."

In fact, they can "see a football helmet two football fields away," Krafcik said, and classify it.

That's stunning.

In Conclusion

A lidar-only car can drive itself.
A camera-only car can drive itself.
A radar-only car CANNOT drive itself.

Just because radar can see through rain/fog doesn't make it a primary sensor; on its own it is just as useless.

Besides all of the above, lidar with advanced software CAN now see through rain, snow and dust. Technology is advancing at a rapid pace.

Driverless cars have a new way to navigate in rain or snow

Radar doesn't even sniff Lidar, enough said.
 
Radar doesn't even sniff Lidar, enough said.
Name a single company that is using a lidar only or even a lidar plus camera system without radar.... that's right, there are none. You are the only one who feels this way.
 
Name a single company that is using a lidar only or even a lidar plus camera system without radar.... that's right, there are none. You are the only one who feels this way.

If you disagree with basic facts, you don't belong in this sub-forum.
I like that this autonomy forum is about technology and evidence; I don't want it to become about someone whose love for a company has clouded their judgement.

Goodbye.