
The catastrophe of FSD and erosion of trust in Tesla

I am not dogging on Tesla, but my concern is: how can you realistically build an AI system that drives just the way everyone would like it to?

You can't - but there are already three different "style" settings... and as the system gets better I expect there will eventually be more customizations made available.

It won't "drive like" every possible individual person- and good thing, since many individuals are terrible drivers--- but if they ever get past L2 you'll care a lot less because you'll be watching a movie, reading a book, playing a game, etc... (or sleeping if they ever get to L4).


As humans we can anticipate situations better. For example, while FSD may have access to all these cameras and a 360-degree view, it doesn't know that in order to make a left at the light ahead, it needs to start getting over two lanes to the left much sooner. I can see that there is a bus ahead that is going to stop to pick up passengers, so I anticipate it and get around the bus much sooner. Will FSD do such things? That is where I struggle with this system.

Up until fairly recently in its development, the AI/NN/ML stuff was only used for perception (i.e., what do I see around/ahead/behind me).

All planning and execution was handled by traditional code, so it was pretty "dumb" and rigid.

Only much more recently have they started to move other parts of the stack away from fixed/traditional coding. As Elon himself has noted, this is going to have weird results in the early stages, including making some things worse initially, but it will be much better in the long term as the training and models improve.
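For illustration, here's a minimal sketch of that split, with entirely invented names (this is not Tesla's actual code): a learned model handles perception, while planning stays as fixed, hand-written rules.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # "car", "pedestrian", "lane_line", ...
    distance_m: float

def perceive(camera_frames):
    # In the real stack this would be a neural network consuming the camera
    # feeds; here we just pretend it returned two detections.
    return [Detection("car", 42.0), Detection("pedestrian", 15.0)]

def plan(detections):
    # "Traditional code": fixed if/else rules, no learning. This is the
    # rigid part the post says is gradually being replaced by NNs.
    if any(d.kind == "pedestrian" and d.distance_m < 20 for d in detections):
        return "brake"
    if any(d.kind == "car" and d.distance_m < 30 for d in detections):
        return "slow_down"
    return "maintain_speed"

print(plan(perceive(camera_frames=None)))   # -> "brake"

Swapping the body of plan() for a learned model, while keeping the same interface, is roughly what "moving parts of the stack over" means here.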




The point of the non-FSD regressions is that they show they don't really do this, at least for the non-FSD parts. Since that's an easier problem, I'm skeptical that they do much better on the harder FSD problem either.

Again though, entirely different teams.

There isn't one "software department" run by one guy at the top of a vast hierarchy and using one standard set of operational rules, like at most traditional companies doing software.

One of the reasons Tesla has been as successful as it has is its relatively independent teams, with flat management structures, working on different things under different rules.

This ALSO has some downsides, of course (see the V11 UI as an example) - it might mean ONE team produces something less good than if they'd done it another way. But generally, when breaking new ground, the advantages are massively bigger than the downsides.
 
The fact that you need a 90+ Safety Score to get FSD indicates that Tesla AI wants to be trained by the safest drivers.
I think this is all about avoiding PR-catastrophe-generating accidents while beta testing.

Driving with the beta does not really train the NN. Training is done internally at Tesla. While the training data may include segments recorded at drivers' disengagement points as input, I doubt the beta drivers' driving behavior is used for training at all.
 
I think this is all about avoiding PR-catastrophe-generating accidents while beta testing.

Driving with the beta does not really train the NN. Training is done internally at Tesla. While the training data may include segments recorded at drivers' disengagement points as input, I doubt the beta drivers' driving behavior is used for training at all.
I agree on the PR aspect.

That said, training the AI needs to use datasets. What better dataset than the one from the safest drivers?

The training on this curated dataset would happen on a scheduled basis at the Tesla labs, but not using their own drivers' data; that would be FOOLISH at best and IDIOTIC at worst.
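To make the "curated dataset" idea concrete, here is a rough sketch; the field names and the use of the 90-point cutoff as a filter are illustrative assumptions, not Tesla's schema. Keep only clips from drivers above the Safety Score threshold, with disengagement clips prioritized, since those mark spots where the system needed a human correction.

SAFETY_SCORE_CUTOFF = 90

fleet_clips = [
    {"clip_id": "a1", "driver_safety_score": 97, "disengagement": True},
    {"clip_id": "b2", "driver_safety_score": 71, "disengagement": True},
    {"clip_id": "c3", "driver_safety_score": 93, "disengagement": False},
]

# Curate: safe drivers only, disengagement clips first.
curated = [
    c for c in fleet_clips
    if c["driver_safety_score"] >= SAFETY_SCORE_CUTOFF
]
curated.sort(key=lambda c: not c["disengagement"])

print([c["clip_id"] for c in curated])   # -> ['a1', 'c3']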
 
A bit of positivity for the "FSD is a catastrophe" thread:

Autopilot is great. I use it constantly. It is now included in the car's purchase price, which is great. With my next car being something other than a Tesla (due to not trusting Tesla any more), giving up Autopilot will probably be the most painful part, alongside the ubiquitous charging network.

(The EAP I got is a badly working gimmick and thus was a waste of money. The FSD I paid for was never delivered to my car in any form, so I cannot judge its quality directly.)
 
That said, training the AI needs to use datasets. What better dataset than the one from the safest drivers?

The training on this curated dataset would happen on a scheduled basis at the Tesla labs, but not using their own drivers' data; that would be FOOLISH at best and IDIOTIC at worst.

What exactly do you think the dataset Tesla gets from beta cars would be, and how do you see it being used to train the NN?
 
What exactly do you think the dataset Tesla gets from beta cars would be, and how do you see it being used to train the NN?
Seems like you have never worked in the field of AI.
The dataset will be used to build an algorithm: instead of reading millions of records and fields, you end up with just an algorithm, a formula, which can and will run faster and faster.
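If "algorithm, a formula" means learned parameters (my assumption), a toy example makes the point: training digests all the records into a couple of numbers once, offline, and at runtime the car only evaluates the formula; it never re-reads the records.

records = [(x, 2.0 * x + 1.0) for x in range(1_000)]   # stand-in for millions of logged samples

# "Training": least-squares fit of y = w*x + b over all records (done once, offline).
n = len(records)
sx  = sum(x for x, _ in records)
sy  = sum(y for _, y in records)
sxx = sum(x * x for x, _ in records)
sxy = sum(x * y for x, y in records)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

# "Inference": the whole dataset is now just two numbers.
predict = lambda x: w * x + b
print(round(predict(10.0), 3))   # -> 21.0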
 
I agree on the PR aspect.

That said, training the AI needs to use datasets. What better dataset than the one from the safest drivers?

The training on this curated dataset would happen on a scheduled basis at the Tesla labs, but not using their own drivers' data; that would be FOOLISH at best and IDIOTIC at worst.
Wouldn't training data from the safest drivers only be available when they are not using FSD Beta?
Or are you saying that the safest drivers generate the best disengagement data? Though it seems like people who are inclined to disengage FSD Beta for every minor error would probably just stop using it...
 
Wouldn't training data from the safest drivers only be available when they are not using FSD Beta?
Or are you saying that the safest drivers generate the best disengagement data? Though it seems like people who are inclined to disengage FSD Beta for every minor error would probably just stop using it...
Now you are getting there!
The safest drivers will take over every time a correction needs to be applied, providing a valuable dataset that, combined with the rest of the visual camera data, gives a complete picture of how and when to be safe.

There is a lot to learn from unsafe drivers too, but that is an expensive price to pay (deadly collisions) from a PR perspective.
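A hedged sketch of how such takeover data could look as training examples; all names here are invented, and this is imitation-learning shaped rather than any known Tesla format: each takeover pairs "what the cameras saw" with "what a careful human did about it".

def build_correction_examples(camera_log, takeovers):
    # camera_log: {timestamp: scene_features}
    # takeovers:  [(timestamp, steering_input, brake_input)] from human corrections
    examples = []
    for t, steering, brake in takeovers:
        if t in camera_log:
            examples.append({"scene": camera_log[t],
                             "label": {"steering": steering, "brake": brake}})
    return examples

log = {10.0: "features@10s", 12.5: "features@12.5s"}
print(build_correction_examples(log, [(12.5, -0.3, 0.7)]))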
 
Seems like you have never worked in the field of AI.
The dataset will be used to build an algorithm. So instead of reading millions of records and fields, it is now just an algorithm, a formula, which can and will work faster and faster.
Please do not assume :)

Really curious how exactly you see data from beta cars being used in training. Questions like: How do you choose which segments of camera data to transfer? How do you transfer camera feed data, and how do you store it before transfer? How do you compress it, and how do you deal with compression loss (vs. the original data used by the in-car NN)? How would you use driver output after a disengagement, given that a properly working FSD would never put the car in such a situation? Would you use driver output while the car is not in FSD mode? ...
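One plausible shape for the on-car side of those questions - purely a guess, not Tesla's pipeline: keep a rolling buffer of recent frames, and on a disengagement freeze the last N seconds, compress them, and queue them for upload. Compression loss versus the raw feed the in-car NN saw is exactly the trade-off the questions point at.

import zlib
from collections import deque

BUFFER_FRAMES = 300              # ~10 s at 30 fps (assumed numbers)
ring = deque(maxlen=BUFFER_FRAMES)

def on_camera_frame(frame_bytes):
    ring.append(frame_bytes)     # oldest frames silently fall off the back

def on_disengagement():
    # Freeze the window around the event and compress it for transfer.
    snapshot = b"".join(ring)
    return zlib.compress(snapshot, 6)   # lossless here; real video codecs aren't

for i in range(1000):
    on_camera_frame(bytes([i % 256]) * 100)   # fake 100-byte "frames"
packet = on_disengagement()
print(len(packet), "bytes queued for upload")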
 
Please do not assume :)

Really curious how exactly you see data from beta cars being used in training. Questions like: How do you choose which segments of camera data to transfer? How do you transfer camera feed data, and how do you store it before transfer? How do you compress it, and how do you deal with compression loss (vs. the original data used by the in-car NN)? How would you use driver output after a disengagement, given that a properly working FSD would never put the car in such a situation? Would you use driver output while the car is not in FSD mode? ...
The short answer is that we don't know. We don't know how Tesla collects and processes data from the FSD testers. When I press the record button because the car does something incorrectly, I know that a short clip and additional data are sent to Tesla (according to the language we all read when we entered the beta), but we don't know what that data is, or how much of it there is. Since the NNs convert visual data into other forms (I love the term "bag of points"), perhaps what's being sent to Tesla is the NN's view of the world plus GPS data, and not the raw video. They could take those data sets, feed them into a simulator (given the GPS data), see what the car did, then smooth it out in the simulator and feed the resulting training data back to the cars. Again, I'm guessing - and everyone here is just guessing. :)
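As a sketch of that guess (every structure below is invented for illustration): the car uploads the NN's compact view of the world plus GPS rather than raw video, and a trainer steps through it in a simulator.

from dataclasses import dataclass

@dataclass
class SceneSnapshot:
    t: float       # seconds since clip start
    gps: tuple     # (lat, lon)
    points: list   # the NN's 3-D "bag of points" for this instant

def replay_in_simulator(clip):
    # A trainer could step through the snapshots, rebuild the scene, compare
    # what the planner did against a smoothed ideal path, and train on that.
    for snap in clip:
        print(f"t={snap.t:.1f}s gps={snap.gps} points={len(snap.points)}")

clip = [
    SceneSnapshot(0.0, (37.39, -122.15), [(1.0, 2.0, 0.0)] * 128),
    SceneSnapshot(0.1, (37.39, -122.15), [(1.1, 2.0, 0.0)] * 131),
]
replay_in_simulator(clip)

A few hundred points per frame is orders of magnitude smaller than raw multi-camera video, which would make over-the-air upload far more practical.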
 
6 THOUSAND views for this thread, 1 thousand for this one:

AI/ML analysis by DOJO (without LIDAR/RADAR) shows that the stats would be reversed if FSD actually worked as promised by the CEO.
 
I mean, if I were writing the FSD (or rather directing a large team that was working on it)...

I gather Autopilot/FSD does not build on your personal driving - for example, it does not learn the quirks of your local route, either for your own driving or for the greater public. Every time down a road is like a brand-new encounter for the system. I'm not sure that is the greatest approach, but it works (obviously), especially when it *is* your first time down that road; subsequent trips just aren't much different.

But perhaps another way to look at it is that "by the book" is not always the ideal way to handle life's situations. Your AP/FSD does things "by the book", whereas humans learn simpler and better shortcuts. On the other hand, since it also eliminates distractions and mistaken impressions, perhaps it has some advantages.

I should also point out that several other manufacturers are selling cars with what could be called "autopilot" or "traffic-aware cruise control": autosteer within the lane, safe following distances, pedestrian warnings, etc.
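Purely hypothetical sketch of the "learn your local route" alternative mused about a couple of paragraphs up - by all accounts the shipping system does NOT do this: cache per-segment observations and use them as a prior the next time through.

route_memory = {}

def segment_key(lat, lon):
    # Quantize GPS to ~100 m cells so repeat drives hit the same key.
    return f"{round(lat, 3)},{round(lon, 3)}"

def record_pass(lat, lon, observed_speed):
    mem = route_memory.setdefault(segment_key(lat, lon),
                                  {"passes": 0, "avg_speed": 0.0})
    mem["avg_speed"] = (mem["avg_speed"] * mem["passes"] + observed_speed) / (mem["passes"] + 1)
    mem["passes"] += 1

record_pass(37.3901, -122.1501, 18.0)
record_pass(37.3902, -122.1502, 22.0)
print(route_memory)   # both passes land in the same ~100 m cell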
 
I mean, if I were writing the FSD (or rather directing a large team that was working on it)...

I gather Autopilot/FSD does not build on your personal driving - for example, it does not learn the quirks of your local route, either for your own driving or for the greater public. Every time down a road is like a brand-new encounter for the system. ...
Oh, it gets better. I used Smart Summon previously at a local Costco parking lot. Same scenario multiple times - meaning same parking spot, parked the same direction, away from other cars, etc. One would think that if it worked, say, the first 5 times successfully and took the same route, it would "learn" from that success and keep being successful, all factors remaining equal.

Wrong. On the 6th time? Up and onto a curb, for NO reason.


(enter the "but you dont understand..the FSD is a different stack and different ML/AI/DOJO than smart summon. Its easier to land a rocket right side up than to not have the code drive up onto a curb" fanbois, most likely from the Bay area or Portland).