Second, I didn’t say anything about approval.
No, but Elon did.
Elon Musk in 2019 said:
"I feel very confident in predicting autonomous robotaxis for Tesla next year," said Musk. "Not in all jurisdictions, because we don't have regulatory approval everywhere. But I'm confident we'll have regulatory approval somewhere next year. From our standpoint, if you fast-forward a year, maybe a year and three months…we'll have over a million robotaxis on the road."
He explicitly said that robotaxis would be approved in at least one jurisdiction in 2020.
That's clearly not happening.
(and 2021 ain't looking great right now either, given Dojo was still a full year out as of a couple months ago)
I'm sure this has been explained before, but there are so many long threads that I thought I'd just straight up ask for a clear explanation.
My understanding is that when you file a bug on the non-beta fsd Tesla, Tesla never reviews this information, and thus, the car's ability to navigate on Autopilot doesn't improve. It's never been clear to me why Tesla would incorporate a bug filing feature but not use that function to improve the system.
That feature exists so that if you open a service ticket with Tesla, there's a bookmark in the logs about the problem that the technician can easily reference.
Other than that use, the data just sits locally on the car.
Imagine the amount of human review needed if they were going to look at every bug report from every person in a fleet of over 1 million cars (and likely to almost double next year).
Some folks post about how they were doing a bug report dozens of times a day for incorrect speed limits for example (because they incorrectly thought someone reviewed every report)... multiply that by fleet size and yikes.
Now I'm seeing the FSD beta videos, and it appears people are filing bug reports and Tesla is putting out very quick updates, fixing very big issues in a matter of days.
The FSD beta testers have a special button on their screens (the extra white camera icon you can see slightly to the right of the top middle in videos) that captures, and actually DOES send to Tesla, a MUCH more detailed set of logs (including video content) than what the standard bug report feature captures.
As you note, the fleet is vastly smaller, so they can afford the manpower to review those reports, unlike if they had to do so fleet-wide.
I think the fact that the car doesn't learn locally has been explained by others now, so two other questions of yours seem to remain outstanding:
Sorry if this is a stupid question, but it just seems like Tesla was ignoring basic issues like going down curvy roads, and yet now they're able to take and implement feedback immediately. People keep mentioning the neural network, but I don't understand how that neural network works (I assume it's doing what I mentioned above - using driver engagement as a sign that its decision was faulty.)
Currently the only thing the system is using NNs for is detection/recognition.... basically it's the vision system.
Following is a very simplified explanation:
The NNs are fed input from the sensors and try to figure out what the car is seeing.... identifying that the thing over there is a sedan moving ahead of me at 38 miles per hour, that other thing is a pedestrian standing on the corner, those things up ahead are stop lights that are currently green, etc...
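As a rough illustration of that structured "what am I seeing" output, here's a minimal sketch. Every name and field here is hypothetical, purely for illustration, and not Tesla's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a single detection a vision NN might emit.
# Field names are illustrative only, not Tesla's actual schema.
@dataclass
class Detection:
    label: str                         # e.g. "sedan", "pedestrian", "stop_light"
    confidence: float                  # 0.0 to 1.0
    speed_mph: Optional[float] = None  # for moving objects
    state: Optional[str] = None        # e.g. "green" for a traffic light

# One frame's worth of scene understanding, per the examples above:
frame = [
    Detection("sedan", 0.97, speed_mph=38.0),
    Detection("pedestrian", 0.91),
    Detection("stop_light", 0.88, state="green"),
]

# The (non-NN) driving logic then consumes this structured view of the scene:
green_lights = [d for d in frame if d.label == "stop_light" and d.state == "green"]
```

The point is that the NN's job ends at producing this kind of labeled scene description; the actual driving decisions are made by conventional code downstream.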
So that gets to training... let's say Tesla has a system that can't identify stop signs (it can now, but once could not).
So Tesla sends out a campaign to the fleet "send me a picture from this specific front camera every time you stop at a GPS location that the map says has a stop sign" (lots, but not all, stop signs are in the map data).
This sends a flood of photos back to Tesla. They have humans manually go through them and label the stop sign in the data.
This labeled data is then used to "train" the NN that will need to recognize stop signs, so that it "learns" what stop signs look like from different angles, in different light, etc...
With each round of new fleet data and training, it gets better at reliably recognizing stop signs under a very wide variety of conditions and situations.
Eventually you get the feature where it'll stop at them on its own because it's good enough to (along with the map data) realize it's seeing them virtually all the time.
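The collect → label → train loop above can be sketched in a toy form. Here a simple nearest-centroid classifier stands in for the actual neural network, and each "photo" is reduced to two made-up features; everything is illustrative, but the loop is the same shape: gather fleet data, have humans label it, fit the model, repeat:

```python
import random

random.seed(0)

def labeled_fleet_batch(n=200):
    """Simulate a batch of fleet photos that humans have already labeled.
    Each photo is reduced to two made-up features (redness, octagon-ness);
    the label is 1 (stop sign) or 0 (not a stop sign)."""
    batch = []
    for _ in range(n):
        is_sign = random.random() < 0.5
        lo, hi = (0.6, 1.0) if is_sign else (0.0, 0.4)
        features = (random.uniform(lo, hi), random.uniform(lo, hi))
        batch.append((features, 1 if is_sign else 0))  # human-applied label
    return batch

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Training": learn the average look of each class from the labeled data.
batch = labeled_fleet_batch()
sign_centroid = centroid([f for f, label in batch if label == 1])
other_centroid = centroid([f for f, label in batch if label == 0])

def looks_like_stop_sign(features):
    """Classify a new frame by whichever class centroid it's closer to."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return dist2(features, sign_centroid) < dist2(features, other_centroid)
```

Each new round of fleet data grows the labeled set, and re-fitting on it is what "it gets better at recognizing stop signs" means in practice, just with deep networks and gradient descent instead of this toy classifier.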
Thanks, I just watched both of these. First off, it's amazing that there are people intelligent enough to code like this. It is beyond my mental capacity.
I will say that I'm still confused about AP's learning capabilities. In the videos, he mentions that the cars can predict scenarios based on data sets, but of course, its predictions can be inaccurate because there are an infinite number of nuanced situations. But then he talks about how a car can - for example - fail to recognize an occluded stop sign. And if it does, Tesla can ask the fleet to look for a number of these instances. Then they train the fleet to recognize occluded stop signs.
Right.... as in the above example, now that it recognizes clear stop signs, they could tell the fleet to send photos of, say, "any time your map data says there's a stop sign but the NN doesn't think there is one," and that will likely capture some occluded situations they can use to train it to recognize the situation.
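That kind of campaign trigger is essentially a map-vs-NN disagreement check. A hedged sketch, with names and thresholds invented for illustration (none of this is Tesla's actual software):

```python
def should_upload_frame(map_says_stop_sign: bool, nn_confidence: float,
                        threshold: float = 0.5) -> bool:
    """Flag a frame for upload when the map data and the vision NN
    disagree about whether a stop sign is present.
    All names/values here are illustrative, not Tesla's actual code."""
    nn_sees_sign = nn_confidence >= threshold
    return map_says_stop_sign and not nn_sees_sign

# An occluded sign: the map knows it's there, but the NN is only 20% sure.
flag_occluded = should_upload_frame(True, 0.2)   # frame gets sent back for labeling
flag_clear = should_upload_frame(True, 0.9)      # NN already sees it; nothing to send
```

The uploaded disagreement frames are exactly the hard cases (occlusions, odd angles, bad light) that the current model is weakest on, which is what makes them valuable training data.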
This implies - and he discusses at the end of the longer video - that when a driver disengaged Autopilot, Tesla knows the car didn't act as planned
Naah... maybe you just felt like taking over... or maybe you saw a woman with a stroller up ahead and didn't want to take any chances... or maybe there was a pothole which the current system will just drive right into and you wanted to avoid it... the car has no idea why you disengaged. The "disengagement report" it creates is tiny: basically just the GPS coordinates of where you were and what method was used (brake, AP switch, wheel pull, etc.).
Those tiny reports might be useful for fleet aggregation, like "Hey, the fleet shows that a huge % of users all disengage the same way at this same spot -- maybe we should send out a campaign to capture pictures there and see what's up" (or they might just be able to look at that spot in Google and it's obvious what the issue is).
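A sketch of that aggregation idea: bucket the tiny reports by rounded GPS coordinates and surface the hotspots. The `(lat, lon, method)` report format is a guess based on the description above, not Tesla's actual schema:

```python
from collections import Counter

def hotspot_buckets(reports, precision=3):
    """Count disengagements per grid cell by rounding GPS coordinates
    to `precision` decimal places (3 decimals is roughly a 100 m cell).
    Report format (lat, lon, method) is assumed for illustration."""
    buckets = Counter()
    for lat, lon, method in reports:
        buckets[(round(lat, precision), round(lon, precision))] += 1
    return buckets

reports = [
    (37.7749, -122.4191, "brake"),
    (37.7751, -122.4193, "wheel_pull"),  # same spot, different driver
    (37.7748, -122.4189, "ap_switch"),   # same spot again
    (40.7128, -74.0060, "brake"),        # one-off somewhere else
]

# Cells where 3+ drivers disengaged -> candidates for a photo campaign.
hot = [cell for cell, n in hotspot_buckets(reports).items() if n >= 3]
```

Even though each individual report says nothing about *why* the driver took over, a pile of them at the same coordinates is a strong hint that something at that spot needs a look.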
My confusion is that he mentions the AI can make fixes on the fly without an update.
It definitely cannot do that.
And then I've seen mentions of Dojo. What is Dojo and what does it have to do with the car learning to drive better?
Can anyone give a simple explanation?
Dojo is basically an AI/NN training supercomputer that will be able to train on massive amounts of data and, ideally, handle a significant amount of the labeling automatically (whereas, again, this is largely all done manually by humans right now).