Sensors (overflow from 2018.39 thread)

A deaf person with one eye is capable of driving. Thus I’d argue that it’s definitely possible to drive a car with one camera (possibly more, as humans can turn their head?). However, that doesn’t mean it would be the best solution, nor the technologically simplest to achieve. I’d say the more sensors the better, but it should definitely be possible to control a car with only the sensors a human has.

The more sensors you need, the less reliable you are (each has an additional failure rate). If the sensors are redundant, you are adding cost for little to no benefit.
Regarding the "best solution" line of thought: radar works better than vision for some cases, but are we shooting for driving in 100% of all conditions with zero risk of accident, or just equivalency to, or better than, humans? You can't go over every hill crest at 5 MPH just because there may be an accident. (Not trying to straw-man, but people drive on the assumption things are going to be fine, which they sometimes aren't.)
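To put rough numbers on the reliability point, here's a back-of-the-envelope Python sketch; it assumes independent failures and an invented 1% per-sensor failure rate, so treat it as illustration only:

```python
# Hypothetical per-sensor failure probability (invented for illustration).
P_FAIL = 0.01

def required_reliability(n_sensors: int) -> float:
    """Every sensor is *required*: a single failure takes the system down."""
    return (1 - P_FAIL) ** n_sensors

def redundant_reliability(n_sensors: int) -> float:
    """Sensors are *redundant*: the system fails only if all of them fail."""
    return 1 - P_FAIL ** n_sensors

for n in (1, 2, 3):
    print(f"{n} sensor(s): required={required_reliability(n):.6f}, "
          f"redundant={redundant_reliability(n):.6f}")
# Each *required* sensor lowers reliability (0.99 -> 0.9801 -> 0.9703),
# while each *redundant* sensor raises it (0.99 -> 0.9999 -> 0.999999).
```

Which of those two regimes you are in is exactly the required-vs-redundant distinction argued over later in the thread.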
 
That's a valid argument when the computer in your dash is equivalent to the computer in your head; it's not.

To some degree you can make up for a shortfall in processing with additional & redundant sensing.
The point, though, is that the minimum *required* set of sensors is two eyes (sometimes one), arguably with some kind of swivel/pivoting system, so it doesn’t follow to say that lidar/radar/a trunk monkey are absolutely required for self driving. That said, point taken that current technology requires more sensors.
 
Nobody has a lidar-only system. Lidar is used in combination with radar and cameras. Sensor diversity is the key ingredient here.
Lidar doesn't help at all with even a little rain or snow, which means their system will be safe in good weather but not in bad weather. Who wants a car that stops or is barely safe when it rains? If they want it to work in all (most) weather conditions, they need cameras and radars, which will work perfectly in good weather too without lidar.
 
Lidar doesn't help at all with even a little rain or snow, which means their system will be safe in good weather but not in bad weather. Who wants a car that stops or is barely safe when it rains? If they want it to work in all (most) weather conditions, they need cameras and radars, which will work perfectly in good weather too without lidar.

You have failed to argue that sensor diversity is not advantageous. In fact you are arguing that it is advantageous, because you are pointing out that every sensing modality has limitations which are balanced by other sensing modalities. Lidar and radar are vastly better than cameras when it is dark out or there are challenging lighting conditions (direct sunlight, entering/exiting tunnels). Radar is better than either lidar or cameras in very heavy snow or fog. And so on.
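To make the "limitations balanced by other modalities" point concrete, here's a toy confidence-weighted fusion sketch in Python; every modality, condition label, and weight below is invented for illustration and doesn't reflect real sensor specs:

```python
# Hypothetical confidence in each modality's range estimate per condition.
# All numbers are made up for illustration.
CONFIDENCE = {
    ("camera", "clear_day"): 0.90, ("camera", "night"): 0.30, ("camera", "fog"): 0.10,
    ("lidar",  "clear_day"): 0.95, ("lidar",  "night"): 0.95, ("lidar",  "fog"): 0.30,
    ("radar",  "clear_day"): 0.70, ("radar",  "night"): 0.70, ("radar",  "fog"): 0.65,
}

def fuse_distance(measurements: dict, condition: str) -> float:
    """Confidence-weighted average of per-sensor distance estimates (meters)."""
    weights = {sensor: CONFIDENCE[(sensor, condition)] for sensor in measurements}
    total = sum(weights.values())
    return sum(measurements[s] * weights[s] for s in measurements) / total

# At night the camera's noisy 48 m estimate barely moves the fused answer,
# because the lidar and radar estimates carry most of the weight.
print(fuse_distance({"camera": 48.0, "lidar": 40.2, "radar": 40.9}, "night"))
# -> ~41.7
```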
 
A deaf person with one eye is capable of driving. Thus I’d argue that it’s definitely possible to drive a car with one camera (possibly more, as humans can turn their head?). However, that doesn’t mean it would be the best solution, nor the technologically simplest to achieve. I’d say the more sensors the better, but it should definitely be possible to control a car with only the sensors a human has.
The number of people who think the eyes are just cameras, and who disregard the massive amount of work the brain behind those eyes does to create a useful picture from the data it gets, is too damn high...
Seriously, you people who bring up this bogus eye-comparison argument should sit down and read up on some medical papers.
 
The number of people who think the eyes are just cameras, and who disregard the massive amount of work the brain behind those eyes does to create a useful picture from the data it gets, is too damn high...
Seriously, you people who bring up this absolutely bogus argument should sit down and read some medical papers.

The eyes are just cameras... They provide the raw data for the processing system to use.
No one is disregarding the need for processing. The point is that, with sufficient processing, vision is sufficient.
 
If we’re reasoning by biological comparisons (dangerous but fun), don’t forget the preprocessing done in the retina: lots of edge detection, motion evaluation, etc. The brain gets help. The really amazing thing for me is that we do it with 20 watts of brain power.
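As a toy illustration of that retinal preprocessing, here's a minimal center-surround edge-detection sketch in Python; the image and kernel are made up for the example and only loosely stand in for what retinal ganglion cells actually do:

```python
# Toy 4x5 grayscale image with a vertical brightness edge (0 -> 9).
image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]

# Center-surround kernel: excitatory center, inhibitory surround.
kernel = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]

def convolve(img, k):
    """Valid-mode 2D convolution; nonzero output marks intensity edges."""
    out = []
    for r in range(1, len(img) - 1):
        row = []
        for c in range(1, len(img[0]) - 1):
            row.append(sum(img[r + i][c + j] * k[i + 1][j + 1]
                           for i in (-1, 0, 1) for j in (-1, 0, 1)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # zero in flat regions, nonzero along the 0/9 boundary
```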

Very cool indeed. Retina - Wikipedia
Also weird that we don't 'see' the hole (the blind spot) in our vision...
 
The eyes are just cameras... They provide the raw data for the processing system to use.
No one is disregarding the need for processing. The point is that, with sufficient processing, vision is sufficient.

You say this as if all we need to do is the same thing machine learning and artificial neural nets currently do, but bigger and faster. This is completely false given what we know about the brain and about what deep learning can and cannot do. We don't just need bigger/faster GPUs; we need different machine learning and AI techniques to match what the human brain can do. These are still very much unsolved problems.
 
You say this as if all we need to do is the same thing machine learning and artificial neural nets currently do, but bigger and faster. This is completely false given what we know about the brain and about what deep learning can and cannot do. We don't just need bigger/faster GPUs; we need different machine learning and AI techniques to match what the human brain can do. These are still very much unsolved problems.

No, I say this as: a system currently exists that performs driving based only on visual inputs (ignoring motion sensing for dynamics).
Therefore, vision is sufficient, given the proper processing system.

Whether or not silicon can replace carbon is a separate issue, but at this point there is no data to say that more than vision is needed for equivalent performance.
 
Therefore, vision is sufficient, given the proper processing system.

OK, this wording ("given the proper processing system") is clearer. Previously you have said things like "given sufficient processing", which can be interpreted many ways, but one obvious way to interpret that is "given big enough and fast enough GPUs".

We need the right kind of processing, and nobody knows how to build that kind of processing yet. I have little doubt it can be done, but nobody even knows what it takes to get there, and we don't yet understand how the brain does it. I think I can safely say "3 months maybe, 6 months definitely" is waaaaaay off the mark.
 
The eyes are just cameras... They provide the raw data for the processing system to use.
Considering how far off any hardware and software still is from actually simulating what the brain does in daily traffic, you're making the "answer" way too simple here.

Basically, I think that LIDAR in the sensor mix is a possibility to get more "CPU"-friendly data at the moment, and that completely disregarding its possibilities while hoping for a swift solution to the vision-based systems' issues is... hmm, narrow-minded.
 
Considering how far off any hardware and software still is from actually simulating what the brain does in daily traffic, you're making the "answer" way too simple here.

Basically, I think that LIDAR in the sensor mix is a possibility to get more "CPU"-friendly data at the moment, and that completely disregarding its possibilities while hoping for a swift solution to the vision-based systems' issues is... hmm, narrow-minded.

I do not disagree that lidar makes certain parts of autonomy a lot easier. However, I feel that lidar at best is redundant to vision, and as such should not take up development time to implement.
 
I do not disagree that lidar makes certain parts of autonomy a lot easier. However, I feel that lidar at best is redundant to vision, and as such should not take up development time to implement.

But redundancy -- better yet, diversity -- is a really nice thing to have, don't you think? It can't possibly make it less safe.

Face it: the reason Teslas don't have lidar has nothing to do with how useful lidar is; it's because lidar is expensive and ugly, and Elon doesn't like either of those things. So he argues that "lidar is a crutch" and everybody laps it up, because he's a genius, don't you know? Meanwhile Waymo is years ahead of Tesla in truly autonomous capabilities... Autopilot is a sophisticated L2 driver-assistance system and always will be (on the current hardware/sensor suite).
 
But redundancy -- better yet, diversity -- is a really nice thing to have, don't you think? It can't possibly make it less safe.

Face it: the reason Teslas don't have lidar has nothing to do with how useful lidar is; it's because lidar is expensive and ugly, and Elon doesn't like either of those things. So he argues that "lidar is a crutch" and everybody laps it up, because he's a genius, don't you know? Meanwhile Waymo is years ahead of Tesla in truly autonomous capabilities... Autopilot is a sophisticated L2 driver-assistance system and always will be (on the current hardware/sensor suite).

If you need both sensors, you don't have redundancy.

I was a proponent of vision without lidar before Elon's crutch comment. Consider: lidar may get you 70+% functionality, but to get to 100% you need fully working vision for everything beyond physical detection. So solving the vision side of things without lidar keeps you limited at first, due to needing vision-based detection algorithms, but once you have those, you are most of the way to the full solution.