Hopefully sir Musk meant both types
If it aces it
> Are we back to believing Musk?

When it fails at Chuck's unprotected left turn, as with most of its other failures, it at least does so more or less within the driver's reaction time. To get through Chuck's turn in traffic successfully, it might have to be much more responsive. As human drivers, IMO, we see our gap in traffic, guide the car quickly and smoothly across the first set of lanes, then either pause or stop in the median for the next opening, or just gun it to complete the turn. Either way, it's a set of decisive movements. It doesn't have to be a mad, dangerous dash, but we usually do have to commit to the moment.
It’s not going to ace it. He said it “should handle” it (aspirational language!), meaning it probably has some new capabilities to make it easier to make the turn. Not that it can make the turn reliably! That presumably will not happen for another year, if ever.
I’m guessing 50% success rate in modest traffic, appropriately defined so it counts successes as successes and failures as failures. (Rather than counting failures as successes, which I could see happening here!)
I think 10.13 will be an incremental improvement, as most releases are (some have been apparent regressions). Nothing special.
I hope they work on making it behave more naturally overall and actually fix some of the issues that have been around for a while (lane choice, turn signals, etc.).
> It’s not going to ace it. He said it “should handle” it (aspirational language!), meaning it probably has some new capabilities to make it easier to make the turn. Not that it can make the turn reliably! That presumably will not happen for another year, if ever.

My question is whether it works in simulation. They have all of Chuck's raw camera data from every time he pressed the report button. Does it work for every one of those cases now? If it doesn't, then what's the point of real-world testing?
> They have all of Chuck's raw camera data from every time he pressed the report button. Does it work for every one of those cases now?

Lol.
> If it doesn't then what's the point of real world testing?

I guess we can agree that this will be an illuminating result.
> They have all of Chuck's raw camera data from every time he pressed the report button. Does it work for every one of those cases now? If it doesn't then what's the point of real world testing?

This statement, while perfectly reasonable, seems to reflect the notion that the neural networks are an aggregation of separate, individual solutions for every trained scenario. That's not how it works. The neural networks are a single, general solution mathematically created from the entirety of the training data - Chuck's scenarios and several thousands or tens of thousands of others. It makes perfect sense to think that adding Chuck's testing data to the training set should make it better in those situations, but the mathematical reality is that, while it will probably be somewhat better, it could possibly become worse. That's just the nature of neural networks.
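The "single general solution" point can be illustrated with a toy model - plain least-squares fitting, nothing like Tesla's actual networks. All training data is fit by one shared set of weights, so adding a few new scenarios shifts the model's behavior on every other scenario too, for better or worse:

```python
# Toy illustration (NOT Tesla's stack): a learned model is one shared set
# of weights fit to ALL training data, not a lookup table of per-scenario
# solutions. Re-fitting with new scenarios moves predictions everywhere.
import numpy as np

rng = np.random.default_rng(0)

# "Fleet" training data: roughly y = 2x + noise
X = rng.uniform(-1, 1, size=(50, 1))
y = 2 * X[:, 0] + rng.normal(0, 0.1, size=50)

def fit(X, y):
    # One shared parameter vector, "trained" by least squares
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict(w, x):
    return w[0] * x + w[1]

w_before = fit(X, y)

# Add a handful of "Chuck's turn" examples that follow a different pattern
X_new = np.vstack([X, [[0.9], [0.95], [1.0]]])
y_new = np.concatenate([y, [0.5, 0.4, 0.3]])
w_after = fit(X_new, y_new)

# The prediction for an unrelated old scenario (x = -0.5) shifts, because
# the weights are shared across the whole training set.
print(predict(w_before, -0.5), predict(w_after, -0.5))
```

The same coupling is why retraining on new data can regress previously working cases: there is no per-scenario compartment to update in isolation.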
> My question is whether or not it works in simulation? They have all of Chuck's raw camera data from every time he pressed the report button. Does it work for every one of those cases now? If it doesn't then what's the point of real world testing?

I would add: it "should handle Chuck's complex left turn" now? Have we previously been told that it cannot? If, reading between the lines, Elon is stating that it is currently known to be unable to handle the turn, and has presumably never yet been capable of handling it, that would have been good information to know long ago.
This is why I'm confident that I'll win the bet. I think their simulation coverage will easily exceed 90% of real world left turns at that location.
> Lol.

According to Chuck, he has seen Tesla-owned test cars doing his UPL repeatedly, so one should expect some real improvement. The question is how much improvement, but more importantly, will that translate into significant UPL improvement for general FSD driving?
> According to Chuck he has seen Tesla owned testing cars doing his UPL repeatedly so one should expect some real improvement. The question how much improvement but more importantly will that translate into significant UPL turn improvement for general FSD driving.

Oh, interesting, that could hurt my chances.
> Oh, interesting, that could hurt my chances.

Here are posts where Chuck has referred to Tesla testing (2 different occasions).
Presumably this time it is actual Tesla cars (unlike last time, when the result was 0% success rate).
> It seems you believe that the neural networks are an aggregation of separate, individual solutions for every trained scenario. That's not how it works. The neural networks are a single, general solution mathematically created from the entirety of the training data - Chuck's scenarios and several thousands or 10s of thousand others. It makes perfect sense to think that adding Chuck's testing data into the training set should make it better in those situations, but the mathematical reality is, while it will probably be somewhat better, it could possibly become worse. That's just the nature of neural networks.

No, they don't even need to add Chuck's data to the training set. Obviously that would be of minimal benefit, since it's not going to be the same cars or lighting conditions. Maybe you could have it more reliably recognize the structure of the road, though. As you say, the path planning and control are not done with neural nets. I guess I'm assuming the perception is good enough for a 95%+ success rate, because if it's not, then what's the point of real-world testing? You can just run video through the perception stack, and a human can see the errors.
Now, if they are adding algorithmic code (referred to as "software 1.0" by our dear departed Andrej) to improve the decision making in left turn situations based on Chuck's scenarios, then, yes, it should handle Chuck's scenarios much better.
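As a generic illustration of what "software 1.0" decision logic for an unprotected left might look like, here is a textbook-style gap-acceptance rule. None of this is Tesla's actual code; the function names, the 4-second maneuver time, and the 2-second margin are all invented for the example:

```python
# Generic gap-acceptance heuristic (textbook-style sketch, not Tesla's
# code): commit to the unprotected left only if every approaching
# vehicle's estimated time-to-arrival exceeds the time the maneuver
# needs plus a safety margin.

def time_to_arrival(distance_m: float, speed_mps: float) -> float:
    """Seconds until an approaching vehicle reaches the conflict point."""
    if speed_mps <= 0:
        return float("inf")  # stopped or receding: never arrives
    return distance_m / speed_mps

def should_commit(approaching, maneuver_time_s: float = 4.0,
                  margin_s: float = 2.0) -> bool:
    """approaching: list of (distance_m, speed_mps) for cross traffic."""
    needed = maneuver_time_s + margin_s
    return all(time_to_arrival(d, v) > needed for d, v in approaching)

# 60 mph is about 26.8 m/s. A car 100 m out arrives in ~3.7 s -> wait.
print(should_commit([(100.0, 26.8)]))   # False
# The same car 250 m out arrives in ~9.3 s -> commit.
print(should_commit([(250.0, 26.8)]))   # True
```

The appeal of hand-written logic like this is that it is inspectable and tunable per scenario, exactly the property the neural-net discussion above says the learned components lack.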
> I would add that it "should handle Chuck's complex left turn" now, have we been previously told that it cannot? If, reading between the lines here, Elon is stating that currently it is known to be unable to handle the turn, and presumable has never been yet capable of handling the turn, that would have been good information to know long ago.

That's my interpretation. Chuck reports that Tesla employees were actually doing real-world testing of his left turn. So maybe their simulation tools aren't as good as I think, or maybe they need special vehicles to create a model to put into the simulation environment.
> No, they don't even need to add Chuck's data to the training set. Obviously that would be of minimal benefit since it's not going to be the same cars or lighting conditions. Maybe you could have it more reliably recognize the structure of the road though. As you say the path planning and control is not done with neural nets. I guess I'm assuming the perception is good enough for 95%+ success rate because if it's not then what's the point of real world testing? You can just run video through the perception stack and a human can see the errors.

Yeah, OK, so adding Chuck's scenarios to the testing/simulation data and verifying that the 10.13 changes (neural nets and algorithmic code) do better in these situations does make sense.
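The "run video through the perception stack" idea discussed above amounts to open-loop replay evaluation: score recorded clips against human labels without any real-world driving. A minimal sketch, where every name and the stand-in perception function are invented for illustration (Tesla's actual tooling is not public):

```python
# Hypothetical open-loop replay harness: run recorded clips through a
# perception function and score it against human-labeled ground truth.
# Nothing here reflects Tesla's real tooling; it only shows the concept.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    human_label: bool  # did a human judge the gap as safe to take?

def perception_says_gap_is_safe(clip: Clip) -> bool:
    # Stand-in for running a real perception stack on the clip's video;
    # faked with a trivial rule purely so the example runs.
    return clip.clip_id.endswith("a")

def replay_eval(clips):
    """Fraction of clips where perception agrees with the human label."""
    agree = sum(perception_says_gap_is_safe(c) == c.human_label for c in clips)
    return agree / len(clips)

clips = [
    Clip("upl_001a", True),
    Clip("upl_002b", False),
    Clip("upl_003a", False),  # disagreement: model says safe, human said no
]
print(replay_eval(clips))  # 2 of 3 clips agree -> ~0.667
```

Replay like this only validates perception, though; closed-loop behavior (when to creep, when to commit) still needs simulation or road testing, which may be why Chuck saw Tesla cars running the turn in person.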
> I want to know if we will get our red hand warnings reset. Both of mine are on the car.

Red hands != forced disengagement.
> No, they don't even need to add Chuck's data to the training set. Obviously that would be of minimal benefit since it's not going to be the same cars or lighting conditions. Maybe you could have it more reliably recognize the structure of the road though. As you say the path planning and control is not done with neural nets. I guess I'm assuming the perception is good enough for 95%+ success rate because if it's not then what's the point of real world testing? You can just run video through the perception stack and a human can see the errors.

Or maybe they were testing actual FSD UPL code changes. Fun to speculate.
> One fellow's unprotected left turn… I do get it, that's a doozy, but how about the right turn at the end of my road which FSDb has never been able to complete since October? (Read: never been able to complete.) Asking for a friend.

Start filming videos of that turn for every version, maybe they'll send some cars your way too.