
Dr. Mary (Missy) Louise Cummings should be opposed for NOT really being an expert on FSD for cars


Can you provide a link to the tweet? This seems to be a thread response.

Meanwhile, I found this, posted September 22, 2021:
Missy Cummings: I know there’s a whole set of parents out there that are with me. I call myself Eeyore sometimes about the status of self-driving cars in the future.

What I truly want, being the mother of a 14-year-old, is for self-driving cars to be here. I don’t want my 14-year-old daughter behind the wheel of a car ever. I do research in this field. I do understand how bad human drivers are. What I want would be for this technology to work, but understanding my job is to do research in this field, I recognize that it’s not going to happen.

She seems not just cautious but really pessimistic about FSD; granted, more work needs to be done, but it's progressing.

In this post (September 25, 2018) she mentions limitations of radar and LiDAR, but makes no mention of cameras, giving the impression that she finds them not desirable or useful.

I think I know why.
Pilots are very much numbers people. They need to know how fast, how far, how long, how high, etc. Numbers.
Radar and LiDAR provide volumes of discrete data points, while a camera system might be limited to the few attention points specified in the field of view.

It may be that Tesla is not trying to map, but trying to "see" (and I could be wrong on that).
 
Can you provide a link to the tweet? This seems to be a thread response.
Oops, thought I did. Somehow the link doesn't post correctly. Here's an image of the link. haha
[attached image: screenshot of the link]


She just doesn't think perception is the hardest problem to be solved in autonomous driving, so all this drama over sensor suites is silly.
 
Do you have a reference for that?
She was on the board of Veoneer and compensated in stock. They also make vision systems. This whole controversy is so bizarre.
There was supposedly a tweet, but apparently she deleted her account.
Several sites mention, to various degrees, her saying that cameras alone are insufficient, and plenty more from months or years past quote her saying that FSD is extremely dangerous.

So, no, I was unable to find a hard post, just many corroborating reports.

She does seem analytical, but I also feel like she is... slow. As if changes need months or years, not weeks or days.
 
It may be that Tesla is not trying to map, but trying to "see" (and I could be wrong on that).
Uh, a computer can't "see". It is only numbers, whatever sensor it uses. A digital camera is not an eye with a brain. The retina is, in a way, an extension of your brain.

In my experience, pilots need to know the operational limits of their equipment and what happens if those limits are exceeded. It is about preparation, so they can act fast based on the signs. They are good at multitasking, so the human-machine interface is very important.
 
She just doesn't think perception is the hardest problem to be solved in autonomous driving, so all this drama over sensor suites is silly.
Totally agree. Arguing about the benefits of centimeter resolution in LiDAR vs. radar vs. vision is not productive when there are much bigger autonomous driving problems to solve. A range of AI reasoning, machine learning, engineering, human factors, sociological, and regulatory issues still needs to be resolved before FSD is something that will be widely adopted by the public.
 
I wrote it in quotes for a reason; I certainly did not mean see as we perceive, but "see" as FSD does it.
From my impression of the system, FSD identifies objects. LiDAR maps distances to objects, but I wonder if it can identify objects.
FSD uses digital cameras, which produce a 2D grid of pixels; FSD identifies objects by data processing. LiDAR produces a 3D cloud of return points; objects are again identified by data processing. The processing is quite different, but the camera does not have any magic difference that makes identifying objects somehow more obvious.
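To make the "both are just numbers" point concrete, here is a minimal sketch of the two raw-data shapes. The array sizes and the detector names in the comments are illustrative assumptions, not Tesla's or any vendor's actual formats.

Code:
import numpy as np

# A camera frame: a dense 2D grid of color samples.
# Illustrative size: 1280x960 pixels, 3 channels (RGB), 8 bits each.
camera_frame = np.zeros((960, 1280, 3), dtype=np.uint8)

# A LiDAR sweep: a sparse, unordered 3D point cloud.
# Illustrative size: ~100k returns of (x, y, z, intensity), in meters.
lidar_sweep = np.zeros((100_000, 4), dtype=np.float32)

# Neither array "contains" objects. In both cases, object
# identification is a learned function from raw numbers to labels:
#   detect_from_image(camera_frame) -> [(label, 2D box), ...]
#   detect_from_cloud(lidar_sweep)  -> [(label, 3D box), ...]
# The inputs differ in structure (dense grid vs. sparse cloud),
# but both are "just numbers" until processed.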

Here is an article I google-picked at random.

Deep Multi-modal Object Detection for Autonomous Driving
Amal Ennajar, Nadia Khouja, Rémi Boutteau, Fethi Tlili

Cameras and LiDARs have complementary characteristics that make camera-LiDAR combination models more viable and well-known compared to other sensor combination setups (radar-camera, LiDAR-radar, etc.). To be more specific, vision-based recognition frameworks accomplish palatable performance at low cost, regularly beating human experts. Nevertheless, a mono-camera discernment framework cannot give a solid 3D geometry, which is required for self-driving. On the other hand, stereo cameras can give 3D geometry, but do so at a high computational cost and fail in high-occlusion and texture-less situations. Most later sensor combination strategies focus on harnessing LiDAR and camera for 3D object detection. PointFusion [13] is a generic 3D object detection method that exploits both image and 3D point cloud information. It processes the image and LiDAR information using CNN and PointNet architectures, and then generates 3D bounding boxes using the extracted features.
...
LiDARs give precise 3D estimations at near range, but the coming about point cloud gets to be scanty at long extend, decreasing the system capacity to precisely identify far off objects. Cameras offer wealthy appearance characteristics, but are not a great source of data for depth estimation. These extra highlights have made LiDAR-camera sensor combination a theme of investigation in recent years. This combination has been demonstrated to attain high accuracy in 3D object detection for numerous applications, counting autonomous driving, but it has its impediments. Both cameras and LiDARs are touchy to unfavorable climate conditions (e.g. snow, fog, rain), which can radically decrease their perception range and detection capabilities. Moreover, LiDARs and cameras are not able to identify the speed of objects without utilizing temporal data. Estimating the speed of objects may be a necessity to maintain a security distance to avoid collisions in numerous scenarios, and depending on temporal information may not be a doable arrangement in time. For a long time, radars have been utilized in vehicles for ADAS (Advanced Driver Assistance System) applications to avoid collision and control velocity. Compared to LiDARs and cameras, radars are exceptionally strong to unfavorable climate conditions and are able to distinguish objects at exceptionally long extend (up to 200 meters for car radars). Radars utilize the Doppler effect to precisely gauge the speeds of all identified objects, without requiring any temporal data. In addition, compared to LiDARs, radar point clouds require less handling time and recently they can be utilized as object detection results. These highlights and their lower cost compared to LiDARs have made radars a prevalent sensor in autonomous driving applications. Few research has focused on combining radar information with other sensors.
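The camera-LiDAR fusion methods the paper surveys all rest on geometric alignment: knowing where each LiDAR point lands in the camera image. Below is a textbook pinhole-projection sketch of that step; the function name, calibration numbers, and frame conventions are my own illustrative assumptions, not anything from the paper.

Code:
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_from_lidar):
    # points_xyz: (N, 3) LiDAR points in the LiDAR frame, meters.
    # K: 3x3 camera intrinsic matrix.
    # T_cam_from_lidar: 4x4 rigid transform, LiDAR frame -> camera frame.
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])  # (N, 4) homogeneous
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]       # points in camera frame
    in_front = cam[:, 2] > 0.1                        # keep points ahead of the lens
    uv = (K @ cam.T).T                                # homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective divide
    return uv, in_front

# Made-up calibration: 1000 px focal length, 1280x960 image, and
# (for simplicity) LiDAR and camera sharing one frame, z pointing forward.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
points = np.array([[2.0, 0.5, 10.0], [-1.0, 0.0, 5.0]])  # two points ahead
uv, ok = project_lidar_to_image(points, K, T)  # pixel coords per point

Once each point has a pixel coordinate, image features and point features can be associated, which is the starting point for methods like the PointFusion approach mentioned above.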
 
Actually it does have a "magic difference".

Color.
Yes, that's true, and also light brightness, which makes it the best sensor for identifying lights. However, it's poor at velocity, 3D spatial localization, and distance measurement, where radar and LiDAR are better, and radar is better in poor weather. Each has advantages and disadvantages, so vision is not ideal on its own. None of the sensors is a perfect solution on its own, and all require intensive data processing to yield useful results.
 
Yes, that's true, and also light brightness, which makes it the best sensor for identifying lights. However, it's poor at velocity, 3D spatial localization, and distance measurement, where radar and LiDAR are better, and radar is better in poor weather. Each has advantages and disadvantages, so vision is not ideal on its own. None of the sensors is a perfect solution on its own, and all require intensive data processing to yield useful results.
True, but you pointed out that a camera has many advantages over LiDAR and radar.
Add another camera and LiDAR is rendered superfluous, and radar is diminished (its primary advantage being, as you said, detection through rain and dust).
 
True, but you pointed out that a camera has many advantages over LiDAR and radar.
Add another camera and LiDAR is rendered superfluous, and radar is diminished (its primary advantage being, as you said, detection through rain and dust).
Nice try. I am firmly against vision-only, or the use of any other single sensor type. Each sensor has its advantages and disadvantages, and sensor fusion is IMO the way to go for a robust, safe, and redundant autonomous system.
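As a toy illustration of why fused, independent sensors can beat any single one, here is a minimal inverse-variance fusion sketch; the scenario and the numbers are invented for illustration.

Code:
import numpy as np

def fuse_measurements(estimates, variances):
    # Inverse-variance weighted fusion of independent estimates of the
    # same quantity: the simplest static form of sensor fusion.
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Hypothetical range-to-lead-car estimates, in meters:
# camera depth is noisy (variance 4.0), radar range is tight (0.25).
fused, var = fuse_measurements([42.0, 40.5], [4.0, 0.25])
# fused ~= 40.6 m with variance ~= 0.24: better than either sensor
# alone, and the system degrades gracefully if one sensor drops out.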
 
Nice try. I am firmly against vision-only, or the use of any other single sensor type. Each sensor has its advantages and disadvantages, and sensor fusion is IMO the way to go for a robust, safe, and redundant autonomous system.
Clearly Elon disagrees.
And Tesla is on a vision-only development program, so everyone else can argue all day long to no effect.
Place your bet on the other attempts at autonomous driving.
 
Nice try. I am firmly against vision-only, or the use of any other single sensor type. Each sensor has its advantages and disadvantages, and sensor fusion is IMO the way to go for a robust, safe, and redundant autonomous system.
Birds have vision only, operating at extreme speeds in complex, uncontrolled environments (trees), and it works out very well for them.

Humans have done amazing things with the dual Eyeball Mk I too, including flying supersonic jets without radar.

Vision only is more than adequate for FSD; it is the brains behind it that matter most.
Prove me wrong.
 
Birds have vision only, operating at extreme speeds in complex, uncontrolled environments (trees), and it works out very well for them.

Humans have done amazing things with the dual Eyeball Mk I too, including flying supersonic jets without radar.

Vision only is more than adequate for FSD; it is the brains behind it that matter most.
Prove me wrong.
Doesn't work out so well for them and my floor-to-ceiling windows.
Just need to make an artificial human brain and FSD will be solved.
 
Vision only is more than adequate for FSD; it is the brains behind it that matter most.
I think most agree that eventually vision only should be able to do the job. The question I wonder about is: will that be accomplished first or second?

Assuming the resources are available to handle the load of the additional data, it's hard to argue that giving a system more (valid) data should negatively impact the quality of the resulting answer. I get how that can happen, as it's happened to me. But that's a problem or limitation in the implementation, and at worst it should be a wash.

I think Tesla is vision only right now because there's proof that it works, it's cheaper on the per-car side, and perhaps there's some intuitiveness behind it, e.g., asking humans to develop a system that we more easily relate to. Just my opinion there. In the future I can imagine that adding other data types could increase performance. But before doing that, make it basically work first.
 
In the future I can imagine that adding other data types could increase performance. But before doing that, make it basically work first.
Right, Musk has referred to LiDAR as a crutch that helps you get going fast but then you are stuck in Chandler for years.

PS: Personally, I don't think LiDAR is the issue. The problem is reliance on HD maps.
 