Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Dr. Mary (Missy) Louise Cummings should be opposed for NOT really being an expert on FSD for cars

--------------------------------------

From looking at her record, Dr. Mary (Missy) Louise Cummings's primary background is human/user-to-machine interface design (jet pilot to control panel, etc.), aerial autonomous drones, and some AI, BUT she is not a full-time researcher on FSD or on autonomous ground vehicles for city streets. I am sure she knows much more than the average person about these topics, but she is definitely NOT a world expert on either FSD or AI neural networks for autonomous ground vehicles on city streets.


Dr. Mary Cummings should be opposed not only for bias (besides working for a LIDAR company, she also works for Jeff Bezos' Amazon Robotics); she has also been proven wrong, just like flat-earthers, and she refuses to update her views despite clear evidence that she is wrong.
Dr. Missy Cummings claimed FSD would require LIDAR/RADAR and that FSD could not be done using cameras only. (See the video in the next section below.)


--------------------------------------


Time point 8:55: Mary "Missy" Cummings says
1. Tesla dropping radar will cause FSD to kill people.
2. No researcher believes in camera vision only.

Ep.13 Missy Cummings asks: should the US Military use AI weapons?
Jul 12, 2021
The Robot Brains Podcast


Mary "Missy" Cummings is currently a Professor in the Duke University Pratt School of Engineering, the Duke Institute of Brain Sciences, and is the director of the Humans and Autonomy Laboratory and Duke Robotics. Her research interests include human-unmanned vehicle interaction, human-autonomous system collaboration, human-systems engineering, public policy implications of unmanned vehicles, and the ethical and social impact of technology.


--------------------------------------



1. Humans drive using passive optics only (eyes, i.e. biological cameras).

2. Many human drivers with usable vision in only one eye are still permitted and able to drive passenger cars and even commercial trucks.

3. Every day, tens of thousands of mid-teen (15-17) drivers take their first drives with safety drivers who are not professionals (their parents or other family members) and who have no secondary controls (steering or brakes).

4. Dr. Mary "Missy" Cummings claims Elon is the only person who believes in camera-only. That is false.
Elon and Tesla are not the only ones saying that FSD using cameras only is possible.
Outside of Tesla, other FSD researchers and professionals believed camera-only was possible over 4 years ago, many years before Elon and Tesla PROVED it was possible.

5. Elon, Tesla, MobileEye and others have demonstrated that camera-only is possible.


Dr. Mary "Missy" Cummings has the attitude that FSD systems must be perfect from the start, even though hundreds of thousands of human drivers cause crashes every day.
Every year worldwide, human drivers cause accidents resulting in 1.36 million deaths and tens of millions of injuries.
In the USA in 2019 there were about 35,000 deaths, over 190,000 incapacitating injuries, and over 2 million other injuries.


--------------------------------------


Can You Drive with One Eye?

People with monocular vision can legally drive in all 50 U.S. states and in the District of Columbia. If you lose vision in one eye as an adult, you may benefit from visual training activities with an occupational therapist. Learning or relearning to drive with monocular vision is possible. (Mar 1, 2021)

People who grow up with vision in one eye can often judge distance and depth almost as well as a person with vision in both eyes.

If you lose vision in one eye as an adult, you may find it harder to drive, especially at first. If so, you may benefit from working with an occupational therapist or a vision rehabilitation therapist.

With training and practice, many people find that driving and parking are possible and safe.


--------------------------------------


Watch Mobileye’s self-driving car drive through Jerusalem using only cameras
12 cameras, and that’s it
By Andrew J. Hawkins@andyjayhawk Jan 7, 2020


--------------------------------------


Predicting the Future from Monocular Cameras in Bird’s-Eye View
22nd April 2021

.....
We present FIERY: a future instance prediction model in bird’s-eye view from monocular cameras only. Our model predicts future instance segmentation and motion of dynamic agents that can be transformed into non-parametric future trajectories.

Our approach combines the perception, sensor fusion and prediction components of a traditional autonomous driving stack end-to-end, by estimating bird’s-eye-view prediction directly from surround RGB monocular camera inputs. We favour an end-to-end approach as it allows us to directly optimise our representation, rather than decoupling those modules in a multi-stage discrete pipeline of tasks which is prone to cascading errors and high-latency.

Further, classical autonomous driving stacks tackle future prediction by extrapolating the current behaviour of dynamic agents, without taking into account possible interactions. They rely on HD maps and use road connectivity to generate a set of future trajectories. In contrast, FIERY learns to predict future motion of dynamic agents directly from camera driving data in an end-to-end manner, without relying on HD maps or LiDAR sensing. It can reason about the inherent stochastic nature of the future, and predicts multimodal future trajectories.


--------------------------------------


Self-driving cars: Lower-cost navigation system - camera-based system
Date January 14, 2015
Contact: [email protected]

.....
The camera-based system still faces many of the same hurdles as laser-based navigation, including how to adapt to varying weather conditions and light levels, as well as unexpected changes in the road. But it’s a valuable new tool in the still-evolving arsenal of technology that’s moving driver-less cars toward reality.


--------------------------------------


Unmanned ground vehicle perception using thermal infrared cameras
May 2011 Proceedings of SPIE - The International Society for Optical Engineering 8045
DOI:10.1117/12.884349
Authors:
Arturo Rankin
Andres Huertas - California Institute of Technology
Larry Matthies
Max Bajracharya


.....
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception.


--------------------------------------


.....
Snuffy • 2017

This is exactly right. I do believe it is possible for a camera-only system to match and exceed human levels of performance. But we can get there much quicker by adding extra sensors.

Samcrut • 2017

Babies learn to interpret distance by using their eyes and hands. After a while, we learn how to use just our eyes to gauge distance. Put on a strong pair of glasses and look at how fast your hands go up to re-calibrate your distance measurements. I figure that using LIDAR, RADAR, and SONAR help to accelerate learning, but once the visual systems are suitably calibrated to understand distances based on cameras only, then reliance on other sensors will probably be pulled back. Other sensors might be taken off of the production lines.


--------------------------------------


Leading Causes of Nonfatal Injury Reports, 2000 - 2019, USA


Content source: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control

.....
Some Causes of Nonfatal Unintentional Emergency Department Visits
2019, All Races , Both Sexes , Disposition: All Cases, Ages: 0-85

Cause of Injury                Estimated #
Unintentional MV-Occupant        2,107,751
Unintentional Pedal Cyclist        328,094
Unintentional Motorcyclist         206,981
Unintentional Pedestrian           172,605

Definitions (from Surveillance for Fatal and Nonfatal Injuries --- United States, 2001):

Transportation-related:

--- MV-traffic occupant. Injury to a driver or passenger of a motor vehicle caused by a collision, rollover, crash, or other event involving another vehicle, an object, or a pedestrian and occurring on a public highway, street, or road (i.e., originating on, terminating on, or involving a vehicle partially on the highway). This category includes occupants of cars, pickup trucks, vans, heavy transport vehicles, buses, and sport utility vehicles (SUVs). Injuries to occupants of other types of vehicles (e.g., all-terrain vehicles [ATVs], snowmobiles, and go-carts) fall in the other transport category.

--- Motorcyclist: Injury to a driver or passenger of a motorcycle resulting from a collision, loss of control, crash, or other event involving a vehicle, object, or pedestrian. This category includes drivers or passengers of motorcycles (i.e., classic style), sidecars, mopeds, motorized bicycles, and motor-powered scooters.

--- Other transport: Injury to a person boarding, alighting, or riding in or on all other transport vehicles involved in a collision or other event with another vehicle, pedestrian, or animal not described previously. This category includes incidents involving nontraffic or off-road MV collisions, water, air, space, animal, and animal-drawn conveyances (e.g., horseback riding), ATVs, battery-powered carts, ski lifts, and other cable cars not on rails.

--- Pedal cyclist: Injury to a pedal cycle rider from a collision, loss of control, crash, or an event involving a moving vehicle or pedestrian. This category includes riders of unicycles, bicycles, tricycles, and mountain bikes, but excludes injuries unrelated to transportation (i.e., moving) (e.g., repairing a bicycle).

--- Pedestrian (struck by/against a vehicle): Injury to a person involved in a collision, where the person was not at the time of the collision riding in or on an MV, railway train, motorcycle, bicycle, airplane, streetcar, animal-drawn vehicle, or other vehicle. This category includes persons struck by cars, pickup trucks, vans, heavy transport vehicles, buses, and SUVs. This category does not include persons struck by other vehicles (e.g., motorcycles, trains, or bicycles), which are in the other transport category.


--------------------------------------


NHTSA Traffic Safety Facts Annual Report Tables


--------------------------------------


In the crash causal chain, the driver was assigned the critical reason 94 percent (±2.2%) of the time

Critical Reasons for Crashes Investigated in the
National Motor Vehicle Crash Causation Survey
Published by NHTSA’s National Center for Statistics and Analysis
1200 New Jersey Avenue SE., Washington, DC 20590
Traffic Safety Facts Crash Stats, DOT HS 812 115, A Brief Statistical Summary
February 2015

Summary: The National Motor Vehicle Crash Causation Survey (NMVCCS), conducted from 2005 to 2007, was aimed at collecting on-scene information about the events and associated factors leading up to crashes involving light vehicles. Several facets of crash occurrence were investigated during data collection, namely the pre-crash movement, critical pre-crash event, critical reason, and the associated factors. A weighted sample of 5,470 crashes was investigated over a period of two and a half years, which represents an estimated 2,189,000 crashes nationwide. About 4,031,000 vehicles, 3,945,000 drivers, and 1,982,000 passengers were estimated to have been involved in these crashes. The critical reason, which is the last event in the crash causal chain, was assigned to the driver in 94 percent (±2.2%)† of the crashes. In about 2 percent (±0.7%) of the crashes, the critical reason was assigned to a vehicle component's failure or degradation, and in 2 percent (±1.3%) of crashes, it was attributed to the environment (slick roads, weather, etc.). Among an estimated 2,046,000 drivers who were assigned critical reasons, recognition errors accounted for about 41 percent (±2.1%), decision errors 33 percent (±3.7%), and performance errors 11 percent (±2.7%) of the crashes.

.....
Driver-Related Critical Reasons

Critical reason attributed to drivers: The critical reason was assigned to drivers in an estimated 2,046,000 crashes, which comprise 94 percent of the NMVCCS crashes at the national level. However, in none of these cases was the assignment intended to blame the driver for causing the crash. The driver-related critical reasons are broadly classified into recognition errors, decision errors, performance errors, and non-performance errors. Statistics in Table 2 show that recognition error, which included driver's inattention, internal and external distractions, and inadequate surveillance, was the most frequently assigned critical reason (41% ±2.2%). Decision errors such as driving too fast for conditions, too fast for the curve, false assumption of others' actions, illegal maneuvers, and misjudgment of gap or others' speed accounted for about 33 percent (±3.7%) of the crashes. In about 11 percent (±2.7%) of the crashes, the critical reason was a performance error such as overcompensation, poor directional control, etc. Sleep was the most common critical reason among non-performance errors, which accounted for 7 percent (±1.0%) of the crashes. Other driver errors were recorded as critical reasons for about 8 percent (±1.9%) of the drivers.


--------------------------------------


THE RELATIVE FREQUENCY OF UNSAFE DRIVING ACTS IN SERIOUS TRAFFIC CRASHES


SUMMARY OF IMPORTANT FINDINGS

This study was conducted to determine the specific driver behaviors and unsafe driving acts (UDAs) that lead to crashes, and the situational, driver and vehicle characteristics associated with these behaviors. A sample of 723 crashes involving 1284 drivers was investigated at four different sites in the country during the period from April 1, 1996 through April 30, 1997. The crashes were selected using the National Automotive Sampling System (NASS) protocol and provided a fair sample of serious crashes involving passenger vehicles in the United States. In-depth data were collected and evaluated on the condition of the vehicles, the crash scene, roadway conditions, driver behaviors and situational factors at the time of the crash. Investigators used an 11-step process to evaluate the crash, determine the primary cause of each crash, and uncover contributing factors.

Crash causes were attributed to either driver behavior or other causes. In 717 of the 723 crashes investigated (99%), a driver behavioral error caused or contributed to the crash. Of the 1284 drivers involved in these crashes, 732 drivers (57%) contributed in some way to the cause of their crashes. There were six causal factors associated with driver behaviors that occurred at relatively high frequencies for these drivers and accounted for most of the problem behaviors. They were:

DRIVER INATTENTION 22.7%
VEHICLE SPEED 18.7%
ALCOHOL IMPAIRMENT 18.2%
PERCEPTUAL ERRORS (e.g. looked, but didn't see) 15.1%
DECISION ERRORS (e.g. turned with obstructed view) 10.1%
INCAPACITATION (e.g. fell asleep) 6.4%

Problem types in terms of crash configuration and specific behavioral errors were also identified. The following seven crash problem types, when associated with specific behavioral errors, accounted for almost half of the crashes studied where there was a driver behavioral error:

SAME DIRECTION, REAR END - (Driver Inattention Factors) 12.9%
TURN, MERGE, PATH ENCROACHMENT- (Looked, Did Not See, etc.) 12.0%
SINGLE DRIVER, ROADSIDE DEPARTURE - (Speed, Alcohol) 10.3%
INTERSECTING PATHS, STRAIGHT PATHS - (Looked, Did Not See, etc.) 4.1%
SAME TRAFFIC-WAY, OPPOSITE DIRECTION - (Inattention, Speed) 2.6%
BACKING, OTHER, MISCELLANEOUS, ETC. - (Following Too Closely, Speed) 1.3%


--------------------------------------
 
Dr. Mary Cummings should be opposed not only for bias (besides working for LIDAR company, she also works for Jeff Bezos' Amazon Robotics) she has been proven wrong just like flat-earthers and she refuses to update her views despite clear evidence she is wrong.
Agree.
She is completely ignoring technological advancements in camera tech, and the obvious limitations in RADAR and LiDAR.

Mary, despite being ex-Air Force, is surprisingly ignorant of radar in aircraft; she should know about radar resolution. Elon is dead on about the need for 4 cm or less resolution to correctly ID objects.
As for LiDAR, the only value I see there is as a calibration device for the driving cameras: a fixed point ahead to ping off an object for range precision (perhaps there are other benefits).
 
My view, as an expert in AI and human-AI systems and an owner of two Teslas, is that Missy Cummings is correct and justified in her criticisms.
Generally, Radar + Vision is going to be better than vision alone. Vision can do a lot, but multimodal redundant systems will be better. They also cost more. That's a trade-off.

In my work, I push the boundaries of putting AI systems into new areas. But I am also held to building systems that will be trusted by the public. Autonomous systems are held to very high standards when public safety is involved, and those standards are often set so that a system has to perform better than humans. In addition, semi-autonomous systems which require humans to potentially take over have a wide range of human factors and public policy issues that still haven't been fully solved.

I enjoy driving my Teslas and testing the state of the latest features. I would love to say that the current Tesla FSD solution is ready for prime time. But it is not. I welcome Elon's efforts on pushing the boundaries. I also welcome Dr. Cummings' reasoned criticisms to keep things safe. That's how innovation, science, and public policy work. She is highly knowledgeable about the issues and the state of the technology. I see her criticism as coming from trying to improve the technology and make people aware of the actual state of the field. It is not because she is grinding a particular political or economic ax.
 
My view, as an expert in AI and human-AI systems and an owner of two Teslas, is that Missy Cummings is correct and justified in her criticisms.
Generally, Radar + Vision is going to be better than vision alone. Vision can do a lot, but multimodal redundant systems will be better. They also cost more. That's a trade-off.

In my work, I push the boundaries of putting AI systems into new areas. But I am also held to building systems that will be trusted by the public. Autonomous systems are held to very high standards when public safety is involved, and those standards are often set so that a system has to perform better than humans. In addition, semi-autonomous systems which require humans to potentially take over have a wide range of human factors and public policy issues that still haven't been fully solved.

I enjoy driving my Teslas and testing the state of the latest features. I would love to say that the current Tesla FSD solution is ready for prime time. But it is not. I welcome Elon's efforts on pushing the boundaries. I also welcome Dr. Cummings' reasoned criticisms to keep things safe. That's how innovation, science, and public policy work. She is highly knowledgeable about the issues and the state of the technology. I see her criticism as coming from trying to improve the technology and make people aware of the actual state of the field. It is not because she is grinding a particular political or economic ax.
Also work in the field. Could not agree more. Whether humans can drive with vision only is completely beside the point, because any "AI" today is still dumb as a rock and has no understanding of the world, the underlying rules, constraints, physics, etc. Which is why, despite all the valiant efforts of Karpathy and team, FSD beta is still very much a beta and will often do completely inexplicable things like try to drive into oncoming traffic. People who don't work in the field seem to have been hoodwinked into believing the hype around "AI". I'd love for "AI" to be a lot smarter, but literally every day of my job is spent trying to make neural networks and computer vision systems not make stupid mistakes that even my 8-year-old would never make. They are still amazing advances in the scale and speed at which they can operate, but they are still, fundamentally, dumb black-box correlation engines with no smarts to them when all is said and done.

So yes, in the near term, with the "AI" technology we have today, we should absolutely augment vision-only with other sensor modalities for safety and improved performance. Just because humans don't come equipped with Lidar or radar doesn't mean they are bad sensor modalities to augment your FSD system. Sure, you could try to do it all with vision only (and even there, our eyes are way better sensors than the crappy camera sensors with very limited dynamic range), but with the current state of AI it is going to take many, many more years to get to a safe and truly autonomous system without big jumps in the field (which have not been forthcoming despite the best and brightest minds working in this area across academia and industry for over a decade).
 
I'd love for "AI" to be a lot smarter, but literally every day of my job is spent trying to make neural networks and computer vision systems not make stupid mistakes that even my 8-year-old would never make.
That's because the 8-year-old has the advantage of a billion years of evolution.

What you are basically stating is CW/dogma. Tesla is betting on something different - I don't think Karpathy and co. are either dumb or hoodwinking people.
 
That's because the 8-year-old has the advantage of a billion years of evolution.
Yes, the brain has evolved for rapid multi sensory perception and reasoning based on what it takes in. An 8 year old also has the advantage of tens of thousands of hours of training through grounded experience. When a ball is coming at them, they have to duck, or catch it, or get hit by it. Those skills and implicit knowledge of speed, acceleration, the physics of motion, and consequences of actions all transfer to driving. FSD is only trained on driving examples, and so has less of the ability to use broad grounded world experience to generalize to more possible real-world cases.
What you are basically stating is CW/dogma. Tesla is betting on something different - I don't think Karpathy and co. are either dumb or hoodwinking people.
AI continues to be successful for well defined domains. Driving is complex, even though much of the mundane part is well defined. More training with better and faster methods will continue to expand what it can do. But there are still many edge cases where handling unusual complex input and/or decision-making and/or moralistic reasoning all have to be applied.
Karpathy's underlying AI methods aren't that different from what other companies are doing: Large neural networks, picking the right kinds of labeled training data, squashing edge cases. They are not dumb. They are doing really good work. But they may be somewhat naive on how far they have to go to hit a threshold of public acceptability and trustworthiness.
 
Tesla is betting on something different - I don't think Karpathy and co are either dumb or hoodwinking people.
You're misstating what I said. I never said that Karpathy is dumb (I actually really respect him and see him as a no BS, pragmatic person who is one of the best people Tesla could have to lead their efforts), nor that he is intentionally hoodwinking people (Elon on the other hand...).

What I am saying is that the technology for a more general intelligence to reason about more abstract concepts and truths about the world doesn't really exist today and yet is fundamentally critical to what enables us to drive cars and do other complex reasoning tasks in a split second with "vision-only". Tesla isn't betting on something different. While their engineering efforts are amazing and impressive, they have not really made any huge fundamental leaps beyond what others in the field are doing outside of using their massive trove of data to good use training some nice networks. A lot of their approach is very pragmatic and based on a combination of cutting-edge and more established approaches. They have definitely pushed the boundaries of what can be achieved with tech today with impressive engineering and tenacity and I certainly respect how far they have gotten. But they are still a long way away from being remotely close to the promise of FSD that once was and I still don't see them getting there without some more fundamental leaps in the field which aren't necessarily guaranteed.
 
To the two experts above who are of the opinion that having radar in addition to vision is essential:

- You have pointed out the deficiencies and immaturity of the current state of AI. But the question is, how will adding another dataset from radar make the AI any better?

- Karpathy and Musk have maintained that adding radar data to vision only adds more noise and many false positives, and that in the end, getting useful and actionable data from radar to supplement vision is not possible. Do you think they are wrong? If so, why have the other OEMs that use radar not made much progress beyond a simple geofenced area?
 
Radar is a fundamentally different sensor modality, with different associated physics and different information it can contribute to reasoning about the world. The same goes for Lidar. I am not saying that incorporating them will get you to general FSD with no geofencing, but adding other modalities improves the net information *grounded in the real world* that you have to work with to plan your driving strategy. Waymo has already demonstrated driverless experiences using this approach in geofenced areas with a mix of all kinds of driving/roads. Tesla is still very far away from a truly driverless experience, even if they geofenced (with their current strategies).

If you read the recent NYT article based on interviews with lots of current/ex engineers at Tesla, vision-only was primarily pushed by Musk, and he really does not understand the nuts and bolts of the technology or its limitations, and has constantly painted overly optimistic pictures for years and years. Based even on experiences listed in threads here and on Reddit, every person who has owned both a Tesla with radar and a newer vision-only car has stated that phantom braking has been a LOT worse on vision-only. Talk is cheap, and it's easy to publicly state that radar isn't necessary when your CEO has mandated it as a technical approach and your radar supplier woes would prevent you from shipping cars on time if you kept it on your high-volume production cars. The proof is in the pudding, and so far nothing has demonstrated that Tesla has even achieved parity with their own admittedly poor radar sensor + fusion approach, let alone surpassed it. Heck, their max speed limit for AP is still gimped at 80 mph for vision-only.
 
- You have pointed out the deficiencies and immaturity of the current state of AI. But the question is, how will adding another dataset from radar make the AI any better?

- Karpathy and Musk have maintained that adding radar data to vision only adds more noise and many false positives, and that in the end, getting useful and actionable data from radar to supplement vision is not possible. Do you think they are wrong? If so, why have the other OEMs that use radar not made much progress beyond a simple geofenced area?
AI isn't perfect (humans aren't either), and AI development is all about balancing trade-offs: maximizing your hit rate (the AI detects something that is there) and minimizing the false-alarm rate (the AI detects something that isn't really there). If you narrow the scope of the world you are modeling, there are fewer chances to have false alarms because it is simpler to model. On the other hand, a narrow system with fewer inputs and a simpler model of the world can't handle as many exception cases (like a white truck against a white sky). So, adding radar may cause more false alarms. But it may also catch cases that are entirely missed by vision. As a society, we have to decide what level of accuracy we will tolerate versus the benefit of having an autonomous system.
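To make that trade-off concrete, here is a toy sketch. The numbers are illustrative assumptions, not measurements from any real system, and the "report if either sensor fires" OR-fusion rule is a deliberate simplification (no production stack fuses data this naively), but it shows why adding a sensor can raise the hit rate and the false-alarm rate at the same time:

```python
# Toy model of combining two independent detectors with OR fusion:
# report a detection if ANY sensor fires.

def or_fusion(hit_rates, false_alarm_rates):
    """Return (combined hit rate, combined false-alarm rate) assuming independence."""
    miss = 1.0   # probability that every sensor misses a real object
    quiet = 1.0  # probability that no sensor fires on empty road
    for h in hit_rates:
        miss *= (1.0 - h)
    for f in false_alarm_rates:
        quiet *= (1.0 - f)
    return 1.0 - miss, 1.0 - quiet

# Illustrative per-sensor rates (assumptions, not real specs):
vision_hit, vision_fa = 0.95, 0.01
radar_hit, radar_fa = 0.80, 0.05

fused_hit, fused_fa = or_fusion([vision_hit, radar_hit], [vision_fa, radar_fa])
print(f"vision only:  hit={vision_hit:.4f}  false alarm={vision_fa:.4f}")
print(f"vision+radar: hit={fused_hit:.4f}  false alarm={fused_fa:.4f}")
```

With these made-up numbers, fusion lifts the hit rate from 0.95 to 0.99 (radar catches the white-truck-against-white-sky case) but also multiplies the false-alarm rate several-fold, which is the phantom-braking side of the same coin discussed elsewhere in this thread.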

Let's also not lose the bigger picture of the thread. It's not really about Vision vs. Vision + Radar. There are scientifically justified concerns about the overall state of FSD. Some of it is about the state of the technology, some of it is about how new technology fits into a world that was not originally designed for it. Open discussion and criticism is a way to help keep the field advancing.
 
You're misstating what I said. I never said that Karpathy is dumb (I actually really respect him and see him as a no BS, pragmatic person who is one of the best people Tesla could have to lead their efforts), nor that he is intentionally hoodwinking people (Elon on the other hand...).

This is what you said. So, who exactly did the hoodwinking - if not Karpathy?

People who don't work in the field seem to have been hoodwinked into believing the hype around "AI".​
BTW, it’s extremely arrogant to say
- people who don’t work in AI have been hoodwinked.
- you are somehow better than the likes of Karpathy.

I’ve been in projects where some external “experts” comment on my team’s work after thinking about it for 10 minutes. Generally, they are talking about 101-level stuff, whereas my team has been working on cutting-edge stuff for years.
 
This is what you said. So, who exactly did the hoodwinking - if not Karpathy?

People who don't work in the field seem to have been hoodwinked into believing the hype around "AI".​
BTW, it’s extremely arrogant to say
- people who don’t work in AI have been hoodwinked.
- you are somehow better than the likes of Karpathy.

I’ve been in projects where some external “experts” comment on my team’s work after thinking about it for 10 minutes. Generally, they are talking about 101-level stuff, whereas my team has been working on cutting-edge stuff for years.
I mean if you want to twist words and make up a narrative, have at it. I don't think I am better than Karpathy and have mentioned here and in other threads how I have an immense amount of respect for him and that he is a pragmatic, no BS kind of guy. I've also commended Tesla's team for their amazing engineering effort.

And the hoodwinking has been done by all the hype men of "AI" when ML and neural nets got cool again, and certainly Musk is the primary guilty party on that front when it comes to Tesla. Not sure how you got to just assuming I mean Karpathy and then went on a rant about it. It is pretty safe to say that people who paid $5K+ several years ago based on continued promises of robotaxis and coast to coast FSD within the year, who are yet to have any hope of experiencing anything like that within their ownership time horizon of their Tesla vehicles were indeed hoodwinked. If there is any arrogance involved, it is in acting like Tesla and Musk haven't been gaslighting their customer base about FSD for years and years now.

In any case, this isn't constructive in any way, so I'm going to refrain from engaging with you if you are going to make personal attacks instead of engaging in good-faith discussion. Unfortunately, this tends to be what every discussion in this sub-forum degenerates into.
 
My view, as an expert in AI and human-AI systems and an owner of two Teslas, is that Missy Cummings is correct and justified in her criticisms.
Generally, Radar + Vision is going to be better than vision alone. Vision can do a lot, but multimodal redundant systems will be better. They also cost more. That's a trade-off.

A poor-quality sensor is worse than no sensor.
Poor sensors generate garbage data... you know the results.

Elon is correct: radar must have enough resolution to make identifying objects possible (Elon posted 4 cm, about 1.5 inches).
Analogy for a radar: it is a stick with a disk at the end; if that disk touches something, the radar knows something is there. What it cannot discern is angle, size, exact location (unless it is an array or a scanning unit), or color (and I have probably missed a few). Also, car radars are fixed forward, making them near useless on hilly and curving roads, off-road, in dense urban areas, or anywhere with a massive number of objects.
Making a truly useful radar would add thousands to the cost of the car.
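The resolution point can be put on a back-of-envelope footing. Below is a sketch using the idealized diffraction limit (beamwidth roughly wavelength divided by aperture); the 10 cm antenna aperture and 50 m range are illustrative assumptions, not the specs of any actual automotive radar:

```python
# Back-of-envelope cross-range resolution of a radar at a given range,
# using the idealized diffraction limit: beamwidth ~ wavelength / aperture.

C = 299_792_458.0  # speed of light, m/s

def cross_range_resolution(freq_hz, aperture_m, range_m):
    wavelength = C / freq_hz                 # carrier wavelength, metres
    beamwidth_rad = wavelength / aperture_m  # idealized beamwidth, radians
    return range_m * beamwidth_rad           # width of one cross-range cell, metres

# A 77 GHz automotive radar with an assumed ~10 cm antenna, target 50 m ahead:
cell = cross_range_resolution(77e9, 0.10, 50.0)
print(f"cross-range cell at 50 m: {cell:.2f} m")
```

Even under these generous assumptions the cross-range cell at 50 m comes out around 2 m, orders of magnitude coarser than the ~4 cm discussed above, which is why a basic forward radar mostly reports that something is there and how fast it is closing rather than what it is.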

LiDAR is no better than a visible-light camera for visual acuity, is just as vulnerable to occlusion, and installed as a scanning system makes RADAR look like a bargain (not to mention the massive maintenance costs).

The one item that would improve the sensor suite is... another camera: a second "main forward" camera, making four facing front, to improve the stereoscopic view.

(And as I said, perhaps a LiDAR with a narrow forward view as a calibration/verification tool, much like recent iPhones use to help with auto-focus.)

(AI is something I have no knowledge of; just various sensors.)
PS: Don't forget the ultrasonic proximity sensors Tesla uses, which are almost standard in today's cars.

Basically, what Missy completely misses the boat on is... everything.
 