Welcome to Tesla Motors Club

Can level 5 Autonomy be achieved with Hardware suite 2.0?

My major question is: If Tesla determines that AP 2.0 hardware is inadequate for level 5 in the next year, will they adjust the hardware setup before the Model 3 is released or will those of us with pre-orders get screwed...
I think what you really mean is will there be a better alternative available within a certain time period after which you won't feel like you were screwed? There will always be better hardware required to improve a car's driving ability for the scope of "full autonomy". It's an evolution and all the problems to solve have not even been identified yet. You might want to get used to that fact sooner, rather than later.
 
Given the rate at which Nvidia advances its technology, the hardware will be much cheaper and less power hungry, and it will make sense to upgrade the hardware again in two years, along with the cameras and sensors, in order to achieve truly full L5.

Hardware is not the critical part anyway; the software AI is. For that, I think Tesla has to transform itself into an AI powerhouse with more talented engineers first.
 
I can't even tell if that hallway is a drivable road or not (as a human). As electracity mentioned, as a human you'd have to pull out pretty far to see around that corner... the difference between your head and the B pillar is what, a few inches?

It's a shame that they put the forward looking side cameras in the B pillars. I often have to pull forward a bit past the car along side me and lean my head forward to see cross-traffic while pulling out onto a street. The B pillars are probably two feet or more behind my head when I lean forward. I wonder why they didn't put them in the A pillars.
 
It's a shame that they put the forward looking side cameras in the B pillars. I often have to pull forward a bit past the car along side me and lean my head forward to see cross-traffic while pulling out onto a street. The B pillars are probably two feet or more behind my head when I lean forward. I wonder why they didn't put them in the A pillars.
That's actually a good question. Seeing around the car next to you is a technique humans have mastered. We move our heads, we look through the car next to us if we can, we creep up if it helps, etc. How well will the Tesla system be able to do it? Will it have problems with the camera placement?

I'm not sure where all the cameras are. Three are forward facing; I'm not sure they can see sufficiently to the side. Two are on the B pillars, which might be location challenged. I think there are two in the side emblems; I don't know for sure whether they would be useful. I guess number eight is in the back, looking straight behind.
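For reference, here is a rough sketch of the camera layout Tesla published with the Hardware 2 announcement. The ranges are approximate figures from that announcement, recalled from memory, so treat the exact numbers as assumptions:

```python
# Approximate AP2 ("Hardware 2") camera layout per Tesla's October 2016
# announcement. Ranges in meters; figures are approximate, from memory.
AP2_CAMERAS = {
    "narrow_forward":        {"location": "windshield",         "range_m": 250},
    "main_forward":          {"location": "windshield",         "range_m": 150},
    "wide_forward":          {"location": "windshield",         "range_m": 60},
    "forward_looking_side":  {"location": "B-pillar (x2)",      "range_m": 80},
    "rearward_looking_side": {"location": "front fender (x2)",  "range_m": 100},
    "rear_view":             {"location": "above license plate", "range_m": 50},
}

# That accounts for all eight cameras: 3 forward + 2 + 2 side + 1 rear.
total = 3 + 2 + 2 + 1
print(total)  # 8
```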
 
It's a shame that they put the forward looking side cameras in the B pillars. I often have to pull forward a bit past the car along side me and lean my head forward to see cross-traffic while pulling out onto a street. The B pillars are probably two feet or more behind my head when I lean forward. I wonder why they didn't put them in the A pillars.

Probably to help keep them clean. Also, I suspect that the rear facing cameras have a wide enough view to be somewhat useful. (And they are the ones that are active in EAP, not the forward facing ones.)
 
It's a shame that they put the forward looking side cameras in the B pillars. I often have to pull forward a bit past the car along side me and lean my head forward to see cross-traffic while pulling out onto a street. The B pillars are probably two feet or more behind my head when I lean forward. I wonder why they didn't put them in the A pillars.
It's to give overlapping coverage with the rear facing cameras. (I personally don't lean my head forward... where's my head going to go?) Sometimes, you just can't see and have to wait for the light, I'd guess the car would do the same and be as conservative as possible. I should also mention that unless you're really short the B pillar should be only a few inches behind your head. You must be driving a ginormous vehicle if it's two feet behind your head.
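To put a number on the creep-forward question: to a first approximation, a sensor must itself clear the obstruction before it can see cross-traffic, so the required protrusion into the intersection equals the sensor's setback from the front bumper. A minimal sketch, where all distances are illustrative guesses rather than Tesla specs:

```python
# How far must the car's nose protrude past an obstruction before a given
# sensor can see cross-traffic? Roughly, the sensor itself must clear the
# obstruction, so required protrusion = sensor setback from the bumper.
# All numbers are illustrative guesses, not Tesla specifications.
NOSE_TO_DRIVER_EYE_M = 1.9   # typical sedan, driver leaning forward
NOSE_TO_B_PILLAR_M   = 2.5   # a B-pillar camera sits further back

def extra_creep(sensor_setback_m: float, baseline_setback_m: float) -> float:
    """Extra distance to creep forward versus the baseline vantage point."""
    return sensor_setback_m - baseline_setback_m

# Extra creep required by a B-pillar camera versus a leaning driver's eyes:
print(round(extra_creep(NOSE_TO_B_PILLAR_M, NOSE_TO_DRIVER_EYE_M), 2))  # 0.6
```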

The reason I ask about the hardware is exactly that... what if they decide corner radars are appropriate? I'm not worried about computing power... there's plenty. The future Volta-based PX 3 isn't supposed to be faster (according to rumors), just smaller and more efficient.

I just want to make sure the Model 3 fixes anything that's possibly incorrect on these Model S/X coming out in December with AP 2.0
 
As for adding additional radar or LIDAR: I don't think the existing wiring supports them, so you would need to trade your car in.

I keep reading LIDAR as part of the AP5. I don't think we know that. LIDAR is bulky and expensive at this point, and Tesla says they can do more with phased radar: i.e., see metal, and still see (and see through) soft objects such as pedestrians. This eliminates the need for another hardware item. In the announcement I saw, I didn't see LIDAR. I did see several cameras, sonar, and radar. Eight cameras should be able to see 360 degrees. What's the advantage of LIDAR?
 
I keep reading LIDAR as part of the AP5. I don't think we know that. LIDAR is bulky and expensive at this point, and Tesla says they can do more with phased radar: i.e., see metal, and still see (and see through) soft objects such as pedestrians. This eliminates the need for another hardware item. In the announcement I saw, I didn't see LIDAR. I did see several cameras, sonar, and radar. Eight cameras should be able to see 360 degrees. What's the advantage of LIDAR?
LIDAR has higher resolution and can function in the dark... radar is also fine in the dark, but cameras don't function very well there.
 
It's a shame that they put the forward looking side cameras in the B pillars. I often have to pull forward a bit past the car along side me and lean my head forward to see cross-traffic while pulling out onto a street. The B pillars are probably two feet or more behind my head when I lean forward. I wonder why they didn't put them in the A pillars.
That's only a corner case. In the general case, putting it on the B-pillars gives better forward coverage (it'll have a wider view as it is set back more).
 
LIDAR has higher resolution and can function in the dark... radar is also fine in the dark, but cameras don't function very well there.
I want to correct this notion that LIDAR by itself has better resolution than RADAR. If you compare a front facing LIDAR unit vs a RADAR unit used for ACC, they have pretty much the same resolution (the RADAR unit probably has more given more development for use in front facing units).

It is the expensive (and unsightly/unaerodynamic) top mounted rotating LIDAR units that have better resolution.
 
I want to correct this notion that LIDAR by itself has better resolution than RADAR. If you compare a front facing LIDAR unit vs a RADAR unit used for ACC, they have pretty much the same resolution (the RADAR unit probably has more given more development for use in front facing units).

It is the expensive (and unsightly/unaerodynamic) top mounted rotating LIDAR units that have better resolution.
The Velodyne VLP-16 puck is said to have an accuracy of ±3 cm.
Tesla's Bosch radar is ±10 cm.
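Range accuracy is only part of the resolution story: angular resolution determines how well a sensor separates objects laterally, and this is where the gap is widest. A quick sketch, using typical published figures (VLP-16-class horizontal resolution and a typical automotive radar azimuth resolution, not exact specs for Tesla's unit):

```python
import math

# Lateral distance covered by one angular resolution cell at a given range.
# Angular figures below are typical published values, not exact specs.
def lateral_spread_m(range_m: float, angular_res_deg: float) -> float:
    return range_m * math.tan(math.radians(angular_res_deg))

lidar = lateral_spread_m(100, 0.2)  # VLP-16 class: ~0.35 m at 100 m
radar = lateral_spread_m(100, 4.0)  # typical automotive radar: ~7 m at 100 m
print(round(lidar, 2), round(radar, 2))
```

So at 100 m, a spinning LIDAR can separate objects a few tens of centimeters apart, while a radar's resolution cell spans several meters.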
 
I keep reading LIDAR as part of the AP5. I don't think we know that. LIDAR is bulky and expensive at this point, and Tesla says they can do more with phased radar: ie. see metal and still see, and see through, soft objects such as pedestrians. This eliminates the need for another hardware item. In the announcement I saw, I didn't see LIDAR. I did see several cameras, sonar, and radar. Eight cameras should be able to see 360 degrees. What's the advantage of LIDAR?


I was talking in theory: if L5 fails, what additional hardware, not already present, might Tesla want to beef up the system with?

But remember Tesla's principle, straight from the horse's mouth: LIDAR is "unnecessary."

Everyone ELSE testing driverless cars has been using LIDAR in addition to what Tesla has. It sends out laser light in 360 degrees, receives the rays bouncing back, and reconstructs the whole surrounding in three dimensions in real time.
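The 3-D reconstruction step is simple geometry: each LIDAR return is a (range, azimuth, elevation) triple that converts to a Cartesian point. A minimal sketch (function name is mine, for illustration):

```python
import math

# Minimal sketch of how a spinning LIDAR builds a 3-D picture: each laser
# return is a (range, azimuth, elevation) triple converted to x/y/z.
def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left/right
    z = range_m * math.sin(el)                 # up/down
    return (x, y, z)

# A return from 20 m straight ahead at sensor height:
print(lidar_return_to_xyz(20.0, 0.0, 0.0))  # (20.0, 0.0, 0.0)
```

Millions of such points per second form the point cloud from which the surroundings are reconstructed.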

It knows exactly how big the Florida tractor-trailer is: how far away, how wide, how deep. Radar, by contrast, may not know whether it's an obstacle or a road sign.

Your 8 cameras do not know the depth of an object: is this tractor-trailer only paper thin, something I can tear through?

LIDAR does not have the problem the Tesla blog describes, where a radar can be fooled: "A discarded soda can on the road, with its concave bottom facing towards you can appear to be a large and dangerous obstacle."

Version 8 should have improved on that radar problem.

With the new supercomputer onboard your new Tesla, your radar and cameras report back to the neural net, which then shares what it has learned with your car, and your car can say: Aha! I have been well trained, and I can tell you: this is not a road sign, and this is not a paper-thin tractor-trailer; I can plot it in 3-D for you! All in milliseconds, of course.
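The mechanism Tesla described in the Version 8.0 blog post is a fleet-learned, geocoded whitelist of radar false positives (overhead signs, bridges, and the like). A toy sketch of that idea; all names, coordinates, and the signature labels here are invented for illustration:

```python
# Toy sketch of a geocoded radar whitelist (the fleet-learning idea from
# Tesla's Version 8.0 blog post). All names and data below are invented.
WHITELIST = {
    # (rounded lat, rounded lon) -> radar signatures the fleet has flagged
    # as harmless overhead structures at that spot.
    (37.3948, -122.1503): {"overhead_sign"},
}

def should_brake(lat: float, lon: float, signature: str) -> bool:
    """Ignore a radar return if the fleet has whitelisted it at this location."""
    key = (round(lat, 4), round(lon, 4))
    return signature not in WHITELIST.get(key, set())

print(should_brake(37.3948, -122.1503, "overhead_sign"))    # False: whitelisted
print(should_brake(37.3948, -122.1503, "stopped_vehicle"))  # True: brake
```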

LIDAR critics say it's useless in inclement weather, but LIDAR proponents say algorithms have been written to filter out the noise of inclement weather, as Ford has demonstrated.



It's not that Elon Musk is against LIDAR in general: he uses it for SpaceX.
 
The media keep saying level 5, but it seems Elon only mentioned that as a possibility with the new hardware (not a hard promise). In written text, Tesla promises "self-driving," which may not necessarily be level 5 (however you want to define level 5).
In the conference call, Elon *opened* with "level 5":
Elon Musk said:
The basic news is that all Tesla vehicles exiting the factory have the hardware necessary for level 5 autonomy.
Couple that with the explicit option for "Full Self-Driving Capability" on the order site, distinct from "Enhanced Autopilot." If Tesla doesn't deliver L5 in at least some states, it's reasonable to assume they will face a significant customer and media backlash.
 
The Velodyne VLP-16 puck is said to have an accuracy of ±3 cm.
Tesla's Bosch radar is ±10 cm.
The VLP-16 is the type of expensive, top-mounted rotating LIDAR unit I was talking about. The ones used for ACC are going to be considerably worse (but much cheaper, similar in cost to a radar unit).

I haven't seen the actual numbers for Tesla's radar, but I suspect your 10cm number is for a Bosch LRR (250m range). Tesla uses a MRR (160m range) but I haven't been able to find a datasheet for that.

My point was mainly that it is possible to add a cheap LIDAR sensor (of the type used for ACC) just to claim you have "LIDAR," but that doesn't give you any better resolution than a radar sensor. The general public seems to think all LIDAR is created equal.
 
A puck-style radar/LIDAR is intended to be cheaper than the roof-mounted "baby R2-D2."

Use of a $200 LIDAR will at least help with verification of camera and radar distances, probably resulting in meaningful programming simplification and regulatory satisfaction.

Remember how old PC software optimized for dual drives became roadkill for bloatware that assumed hard drives would be cheap enough. Betting against hardware cost reductions is not a new or smart idea.
 

Your 8 cameras do not know the depth of an object: is this tractor-trailer only paper thin, something I can tear through?

A neural net performs object recognition. It classifies a tractor trailer as a tractor trailer and knows not to try to drive through a tractor trailer. It needs only one camera for that, just as a human with one eye can identify a tractor trailer on sight.

A properly working neural net should also identify a plastic bag or newspaper page floating in the wind as something you can drive through if necessary.

Without object recognition, a self-driving car is impossible to create.
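The decision described above boils down to a per-class judgment after recognition: classify first, then look up whether that class is safe to drive through. A minimal sketch; the class labels and the lookup table are illustrative:

```python
# Sketch of "recognize, then decide": object class -> drive-through safety.
# Class names and entries are illustrative, not from any real system.
DRIVE_THROUGH_OK = {
    "plastic_bag": True,
    "newspaper": True,
    "tractor_trailer": False,
    "pedestrian": False,
}

def can_drive_through(detected_class: str) -> bool:
    # Unknown objects default to "treat as solid" for safety.
    return DRIVE_THROUGH_OK.get(detected_class, False)

print(can_drive_through("plastic_bag"))      # True
print(can_drive_through("tractor_trailer"))  # False
```

The safe default for unrecognized classes is the important design choice: recognition failures degrade to braking, not to driving through.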
 
A neural net performs object recognition. It classifies a tractor trailer as a tractor trailer and knows not to try to drive through a tractor trailer. It needs only one camera for that, just as a human with one eye can identify a tractor trailer on sight.

A properly working neural net should also identify a plastic bag or newspaper page floating in the wind as something you can drive through if necessary.

Without object recognition, a self-driving car is impossible to create.
Yep, that's how the Mobileye system does ACC, and that's how our own brain works too. We can judge depth from just one eye (no stereo image) by object recognition (and other visual cues, like perspective against the road surface).
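One way a single camera turns recognition into depth is the pinhole camera model: once the object class is known, its typical real-world size plus its apparent size in pixels gives a distance estimate. A minimal sketch with illustrative numbers:

```python
# Monocular depth from recognized object size (pinhole camera model):
# distance = focal_length_px * real_height_m / apparent_height_px.
# All numbers below are illustrative.
def distance_m(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    return focal_px * real_height_m / pixel_height

# A tractor trailer ~4 m tall appearing 200 px tall through a lens with a
# 1000 px focal length:
print(distance_m(1000.0, 4.0, 200.0))  # 20.0 (meters)
```

This is why classification comes first: the model needs to know roughly how big the object really is before apparent size says anything about range.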