Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Is there any news on the investigation of the accident that’s delaying the V12 deployment?
FSD Beta update information has never been so quiet... what's going on with this investigation, and is there any news or updates?
My hope is that it was the driver's fault and FSD was not engaged... AND that Tesla is working on training V12 for a broader deployment before April.
Fingers crossed!
 
FSD Beta update information has never been so quiet... what's going on with this investigation, and is there any news or updates?
My hope is that it was the driver's fault and FSD was not engaged... AND that Tesla is working on training V12 for a broader deployment before April.
Fingers crossed!
Even if it was the driver's fault, they still need to figure out why the car allowed it to happen, given their end-game target. The last thing they want is to release this wide and have hundreds of "maybe the driver's fault" incidents to investigate.
 
FSD Beta update information has never been so quiet... what's going on with this investigation, and is there any news or updates?
My hope is that it was the driver's fault and FSD was not engaged... AND that Tesla is working on training V12 for a broader deployment before April.
Fingers crossed!
My guess as to why it's taking so long to release a fix for the FSD accident (based on my rudimentary layman's knowledge): first they have to look at the data and logs and run some simulations to determine why the computer made the error. Then they have to find hundreds or thousands of similar situations in their video database and retrain the system with them. This is time-consuming, but it will be faster in the future as more compute power is brought online. Then they probably test it in the simulator, and if it looks good, it goes out to internal testers. Once that's working, it is released to the select proletariat and someday to the great society.
This is more time-consuming than writing some new code or adjusting some variables, as was done in the past.
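The "find hundreds or thousands of similar situations" step is often done as a nearest-neighbor search over clip embeddings. This is purely an illustrative sketch, not Tesla's actual pipeline; the encoder that would produce the embeddings is assumed and faked here with random vectors.

```python
import numpy as np

def find_similar_clips(query_embedding, clip_embeddings, top_k=5):
    """Return indices of the top_k clips most similar to the query,
    ranked by cosine similarity. clip_embeddings is an (n_clips, dim)
    array of per-clip feature vectors from some video encoder
    (hypothetical -- any encoder producing fixed-size vectors works)."""
    q = query_embedding / np.linalg.norm(query_embedding)
    db = clip_embeddings / np.linalg.norm(clip_embeddings, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity of each clip to the query
    return np.argsort(-sims)[:top_k]   # best matches first

# Toy usage: 1000 fake clip embeddings; search for clips resembling a "failure" clip.
rng = np.random.default_rng(0)
clips = rng.normal(size=(1000, 64))
failure = clips[42] + rng.normal(scale=0.01, size=64)  # near-duplicate of clip 42
matches = find_similar_clips(failure, clips)
print(matches[0])  # clip 42 should rank first
```

At fleet scale this would use an approximate nearest-neighbor index rather than a brute-force matrix product, but the retrieval idea is the same.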
 
v12.2.1 is not reliable in parking lots.

Yesterday my car attempted to move into the exit lane, which was painted with a big, long white arrow pointing in the opposite direction, while it was entering the parking lot.

Many times v12.2.1 has moved into any open space in the parking lot, regardless of whether it's entering or exiting.
 
v12.2.1 is not reliable in parking lots.

Yesterday my car attempted to move into the exit lane, which was painted with a big, long white arrow pointing in the opposite direction, while it was entering the parking lot.

Many times v12.2.1 has moved into any open space in the parking lot, regardless of whether it's entering or exiting.

At least 12.2.1 is very consistent in its mistakes. This is a good sign :)
 
What happens if a human makes a mistake and submits a bad behavior into the training system as a good behavior?
Tesla has had quite a few human labelers who previously annotated vehicles, objects, etc., where high quality was also needed for perception. This has since transitioned to auto-labeling, but there were likely checks to ensure correctness, such as multiple people labeling the same data, as well as the automation that became auto-labeling. The training data is also curated over time, so during training, examples the network has trouble learning could point to problematic data: either the network can't learn that pattern, or the example was actually labeled incorrectly. Presumably each label is associated with its labeler, to allow for improvements to the labeling system whether it was human- or computer-generated.
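The curation idea described above — "examples the network has trouble learning could be mislabeled" — is often implemented by flagging outliers in per-example training loss. A minimal sketch, with an invented function name and a made-up loss list; a real system would track losses across epochs and route flagged examples to human review:

```python
import numpy as np

def flag_suspect_labels(losses, z_thresh=3.0):
    """Flag examples whose per-example training loss is a statistical outlier.

    High, persistent loss can mean either a genuinely hard example or a
    wrong label; both are worth sending back to a reviewer (hypothetical
    workflow, not any specific vendor's)."""
    losses = np.asarray(losses, dtype=float)
    z = (losses - losses.mean()) / losses.std()  # z-score of each loss
    return np.where(z > z_thresh)[0]

# Toy usage: most examples fit well; index 7 was mislabeled and never fits.
per_example_loss = [0.1, 0.12, 0.09, 0.11, 0.1, 0.13, 0.08, 2.5, 0.1, 0.12]
print(flag_suspect_labels(per_example_loss, z_thresh=2.0))  # → [7]
```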

In general, neural networks are able to learn even with noisy data, but that's also part of the reason they need so much of it: each example only contributes a small incremental adjustment. With chess self-play without human knowledge, there were tons and tons of "bad" data, either because the neural network hadn't developed enough understanding yet or because it was forced to play more randomly to discover new understanding, yet it was still able to achieve superhuman quality because plenty of other training data was consistent with good behavior. So similarly, one-off mistakes in human labels of "good" and "bad" end-to-end driving examples will effectively be filtered out through the training and data-curation process.
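The "learning survives noisy labels" point above can be shown on a toy problem. This is a deliberately simple sketch, not end-to-end driving: plain logistic regression trained on linearly separable data where 10% of the labels have been flipped still recovers a decision boundary that is nearly correct against the *true* labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable toy data: true label is 1 when x0 + x1 > 0.
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)

# Flip 10% of the labels to simulate noisy/mistaken annotations.
y_noisy = y_true.copy()
flip = rng.random(2000) < 0.10
y_noisy[flip] = 1 - y_noisy[flip]

# Plain logistic regression via gradient descent on the NOISY labels.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))           # predicted probabilities
    w -= 0.1 * X.T @ (p - y_noisy) / len(X)  # gradient step on cross-entropy

# Evaluate against the TRUE labels: the consistent majority dominates.
accuracy = ((X @ w > 0) == (y_true > 0)).mean()
print(round(accuracy, 2))  # close to 1.0 despite 10% label noise
```

The flipped labels pull in random directions and largely cancel, while the 90% of consistent labels all pull the weights the same way — the same averaging effect the paragraph describes.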
 
The problem with disengagements is that, by definition, something went wrong to cause the disengagement, so they would be ruled out as training data.
Maybe I'm using FSD Beta differently from you, but I was paying extra attention to my 11.x disengagements yesterday, and quite a few of them were preemptive: either I knew 11.x would have trouble, such as completing multiple lane changes smoothly, or I prevented 11.x from making an unnecessary lane change by keeping my hands on the wheel so it couldn't turn. Video clips sent back covering the seconds after a disengagement would capture a good example of what FSD Beta should have done. Looking at the last 30 days of uploads to Tesla, it's almost 1TB, so maybe that's from continued 11.x usage / disengagements / voice drive-notes to keep collecting examples for 12.x training?
 
Uninformed interpretation:
May have thought there was a cross bar, two potential positions:
SmartSelect_20240307_085307_Firefox.jpg
 
v12.2.1 is not reliable in parking lots.

Yesterday my car attempted to move into the exit lane, which was painted with a big, long white arrow pointing in the opposite direction, while it was entering the parking lot.

Many times v12.2.1 has moved into any open space in the parking lot, regardless of whether it's entering or exiting.
Honestly, I was never expecting parking lots from FSD anyway. It'd be nice, but parking lots are probably some of the most difficult driving you can do, not to mention all the other issues (no mapping, knowing where the door is, deciding which door to park by, etc.).
 
My guess as to why it's taking so long to release a fix for the FSD accident (based on my rudimentary layman's knowledge): first they have to look at the data and logs and run some simulations to determine why the computer made the error. Then they have to find hundreds or thousands of similar situations in their video database and retrain the system with them. This is time-consuming, but it will be faster in the future as more compute power is brought online. Then they probably test it in the simulator, and if it looks good, it goes out to internal testers. Once that's working, it is released to the select proletariat and someday to the great society.
This is more time-consuming than writing some new code or adjusting some variables, as was done in the past.
Something else I'd like to know is how the object perception network is coded and integrated into the system as a whole. If object detection is not part of the end-to-end AI, but rather performed separately and then fed to the AI code, the accident may well have been an issue with "Tesla Vision." Garbage in, garbage out, right?
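The architectural question above can be sketched in a few lines. Everything here is invented for illustration — the names, stages, and outputs are not Tesla's actual design — but it shows why the two wirings fail differently: in the modular pipeline, planning only ever sees perception's object list, so a perception miss is invisible downstream.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    kind: str          # e.g. "car", "pedestrian" (illustrative)
    distance_m: float  # distance ahead, in meters

def perception(camera_frames: List[str]) -> List[DetectedObject]:
    """Modular stage 1: a separate network turns pixels into an object list.
    Stub standing in for a detector; pretend it found one car 12 m ahead."""
    return [DetectedObject("car", 12.0)]

def planner(objects: List[DetectedObject]) -> str:
    """Modular stage 2: planning consumes ONLY the object list.
    If perception drops or mislabels an object, planning never sees it --
    the "garbage in, garbage out" failure mode the post describes."""
    return "brake" if any(o.distance_m < 15 for o in objects) else "cruise"

def end_to_end(camera_frames: List[str]) -> str:
    """End-to-end alternative: one network maps pixels straight to a control
    decision, with no hand-defined object list in between. Stub output."""
    return "brake"

frames = ["frame0.jpg"]
print(planner(perception(frames)), end_to_end(frames))
```

In the end-to-end wiring there is no intermediate object list to get wrong, but there is also no intermediate result to inspect when diagnosing an accident — which is exactly why the integration question matters.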