Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Dan is demonstrating something the system is NOT designed to do. The system is designed to meet and exceed the criteria set forth by NHTSA. The standard child VRU target used in that testing is **MUCH** taller than the toddler-sized mannequin he is using. For the system to do more would be great, but Dan is showing what is just beneath the system's ability and then implying that it will hit **ALL** kids, which is demonstrably false, even fraudulent.
Knowing what we know about these systems, the OEDRs, and how the camera + other sensor data are interpreted, I think someone who works or does research in this area could probably put together all sorts of specific scenarios that would capitalize on weaknesses in object detection/response and other facets of the technology.

On the flip side, exposing weaknesses is probably the best way to force improvement
 
Perhaps this is an MMD
Guess it was. The bottom was just 76 cents away from the 200-day moving average to pick up the stop loss orders.
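For anyone unfamiliar with the level being referenced, a minimal sketch of how a 200-day simple moving average is computed (the prices below are made-up illustration data, not TSLA quotes):

```python
# Minimal sketch of a simple moving average (SMA) over the last
# `window` closing prices. Illustration data only, not TSLA quotes.
def sma(prices, window=200):
    """Return the simple moving average of the last `window` closes."""
    if len(prices) < window:
        raise ValueError("not enough data points")
    return sum(prices[-window:]) / window

# Small window so the arithmetic is easy to check:
closes = [10.0, 11.0, 12.0, 13.0, 14.0]
print(sma(closes, window=5))  # → 12.0
```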

 
Slightly OT, but to close the thought that AAA has trucks out there charging stranded EV owners: that program was suspended in 2019.


It’s easier to just tow to a charger. And probably faster.
... and you can charge while you are being towed :D Technically, you only need to be towed half the distance to the Supercharger site.
 
Knowing what we know about these systems, the OEDRs, and how the camera + other sensor data are interpreted, I think someone who works or does research in this area could probably put together all sorts of specific scenarios that would capitalize on weaknesses in object detection/response and other facets of the technology.

On the flip side, exposing weaknesses is probably the best way to force improvement
Strong disagree, as publishing a video that screams "TESLA AP KILLS CHILDREN" is not the way to "force" improvement.
 
Strong disagree, as publishing a video that screams "TESLA AP KILLS CHILDREN" is not the way to "force" improvement.
It's not the most agreeable method, but something tells me addressing that weakness will be moving up Tesla's list of priorities

Very few people have any idea how this technology really works or that changing some size parameters, coloring/patterns, or any number of other things could easily mess with object detection
 
Knowing what we know about these systems, the OEDRs, and how the camera + other sensor data are interpreted, I think someone who works or does research in this area could probably put together all sorts of specific scenarios that would capitalize on weaknesses in object detection/response and other facets of the technology.

On the flip side, exposing weaknesses is probably the best way to force improvement
I prefer white hat hacking, which I believe supports the community, and if you get a reward, then great.

I do not condone or support black hat hacking which is what I believe Dan is doing in a very public and harmful way.
 
It's not the most agreeable method, but something tells me addressing that weakness will be moving up Tesla's list of priorities

Very few people have any idea how this technology really works or that changing some size parameters, coloring/patterns, or any number of other things could easily mess with object detection
VRU detection is already a paramount focus of the Tesla AP team. The "test" performed by Dan was completely fraudulent and of no value to public safety.
 
Not sure if it was mentioned already, but O'Dowd's lies about FSD are now on CNBC. So...do shareholders have standing for a civil suit?
O'Dowd's video is garbage, but the concept that FSD is not safe without very close monitoring is not.

Our only response to O'Dowd should be that the driver is 100% in charge. That the autopilot is still very early in the learning mode and requires the driver to be even more attentive than driving without it. Which, from my experience, is true.

Telling people that it isn't "safe" - but then winking at them saying that it really is - will not go well with juries.

The reason accidents are so low while on FSD is because the drivers are scared half the time. They're worried that the car will randomly swerve into oncoming traffic and are watching things like a hawk.
 
O'Dowd's video is garbage, but the concept that FSD is not safe without very close monitoring is not.

Our only response to O'Dowd should be that the driver is 100% in charge. That the autopilot is still very early in the learning mode and requires the driver to be even more attentive than driving without it. Which, from my experience, is true.

Telling people that it isn't "safe" - but then winking at them saying that it really is - will not go well with juries.

The reason accidents are so low while on FSD is because the drivers are scared half the time. They're worried that the car will randomly swerve into oncoming traffic and are watching things like a hawk.
FSD being safe is questionable. The FSD program Tesla has implemented is very safe, as proven by the data. Tesla provided a year's worth of FSD beta performance videos by YouTubers, the disclaimers tell you precisely what to expect, you get struck out if you are not paying attention, and people had to qualify via the Safety Score. Lastly, Tesla has zero responsibility if the car hits something, as they clearly tell you that you are in control, which prevents complacency.

All these senators, Cummings, and Dan are attacking the FSD program, which has already proven to be safe. We should be expecting ~40 injuries by now just based on the national statistic of 84 injuries per 100M miles driven. However, we have yet to hear of one. They can beat a dead horse all they want, but Tesla has all the receipts.
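For what it's worth, the "~40 injuries expected" figure is a simple back-of-envelope calculation. The 84-injuries-per-100M-miles rate is from the post above; the FSD beta mileage below is an assumed illustrative figure, since the post doesn't state one:

```python
# Back-of-envelope check of the "~40 injuries expected" claim.
# Rate is from the post; the mileage is an assumption for illustration.
injuries_per_100m_miles = 84
assumed_fsd_miles = 48_000_000  # assumed cumulative FSD beta miles

expected_injuries = injuries_per_100m_miles * assumed_fsd_miles / 100_000_000
print(round(expected_injuries, 1))  # → 40.3
```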
 
O'Dowd's video is garbage, but the concept that FSD is not safe without very close monitoring is not.

Our only response to O'Dowd should be that the driver is 100% in charge. That the autopilot is still very early in the learning mode and requires the driver to be even more attentive than driving without it. Which, from my experience, is true.

Telling people that it isn't "safe" - but then winking at them saying that it really is - will not go well with juries.

The reason accidents are so low while on FSD is because the drivers are scared sh2tless half the time. They're worried that the car will randomly swerve into oncoming traffic and are watching things like a hawk.
This completely contradicts the messaging from Elon/Tesla over the years, and it is surely why hyperbole is now coming from the other side: Tesla was posting videos back in 2016 purporting to show a self-driving vehicle in which the driver was in the seat only for legal reasons. And then came the hyperbole that followed, claiming functionality right around the corner that still hasn't materialized years later.


The next question I'd ask here is why the system doesn't detect objects below a certain size right in front of the vehicle. Is it inadequate camera placement? Is the system simply not trained to detect objects that low and that close to the vehicle? If so, why isn't it trained?
 
I prefer white hat hacking, which I believe supports the community, and if you get a reward, then great.

I do not condone or support black hat hacking which is what I believe Dan is doing in a very public and harmful way.
What Dan is doing is not hacking. I could paint street lines that go off a cliff and any lane following system from any manufacturer would drive the car off a cliff. That is not hacking.

What Dan is doing is very disingenuous with an alternate agenda attempting to attack Tesla.
 
Sticker on the back of the BMW iX involved in the accident. Why would a production vehicle have this? More questions.
BMW is on a roll with the fire recall and this L2 crash and fatality.

 
What Dan is doing is not hacking. I could paint street lines that go off a cliff and any lane following system from any manufacturer would drive the car off a cliff. That is not hacking.

What Dan is doing is very disingenuous with an alternate agenda attempting to attack Tesla.


Wile E. agrees!

 
The next question I'd ask here is why the system doesn't detect objects below a certain size right in front of the vehicle. Is it inadequate camera placement? Is the system simply not trained to detect objects that low and that close to the vehicle? If so, why isn't it trained?
Per NHTSA guidelines, the priority is larger objects first.

Tesla is pushing the boundaries of what can be done in a vehicle. Even lidar at slow speeds is challenged by small non-moving objects.

For instance, an empty plastic bag caught by the wind is hard to detect.
 
What Dan is doing is not hacking. I could paint street lines that go off a cliff and any lane following system from any manufacturer would drive the car off a cliff. That is not hacking.

What Dan is doing is very disingenuous with an alternate agenda attempting to attack Tesla.
He is hacking the system IMO, like our best Tesla bear GJ: finding shortcuts and cherry-picking edge data to circumvent its greater value while not showing the actual value. He is not showing, for example, how a slightly taller mannequin is detected 100% of the time.
 
He is hacking the system IMO, like our best Tesla bear GJ: finding shortcuts and cherry-picking edge data to circumvent its greater value while not showing the actual value. He is not showing, for example, how a slightly taller mannequin is detected 100% of the time.
Yeah, the equivalent of claiming Tesla has no demand because of one cherry-picked market like New Zealand.