Has anyone tried Navigate on Autopilot? I tried it this morning, let it take an exit, and it was scary as hell. There is no way they are close to FSD.
Sure, but the NN can also compare the action it would take against the driver's and trigger an upload if they differ. I was merely giving a simple view of why they do not need all data from all cameras from all time.
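For what it's worth, the trigger idea above fits in a few lines. This is purely illustrative, not Tesla's actual code; the `Action` type, thresholds, and names are all made up:

```python
# Hypothetical "shadow mode" trigger: upload a snapshot only when the
# network's planned action diverges from what the driver actually did.
# Everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    steering_deg: float
    accel_mps2: float

def diverges(planned: Action, actual: Action,
             steer_tol: float = 2.0, accel_tol: float = 0.5) -> bool:
    """True when the plan and the driver's action disagree beyond
    small tolerances (thresholds are made up)."""
    return (abs(planned.steering_deg - actual.steering_deg) > steer_tol
            or abs(planned.accel_mps2 - actual.accel_mps2) > accel_tol)

def maybe_upload(planned: Action, actual: Action, frame: bytes,
                 queue: list) -> None:
    # Only disagreement frames are queued, so most miles produce no traffic.
    if diverges(planned, actual):
        queue.append(frame)

queue: list = []
maybe_upload(Action(0.0, 0.1), Action(0.2, 0.1), b"frame-1", queue)  # agrees
maybe_upload(Action(0.0, 0.1), Action(8.0, 0.1), b"frame-2", queue)  # diverges
```

That is the whole point: the car drives millions of miles, but only the disagreements are worth sending home.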
That said, it's *possible* that HW3 is a completely different animal. It could be that they've pushed the current compute as far as it could go, and the new platform allows them to do amazing things.
Having the car self-drive to the customer's home would streamline deliveries but it would need to be super solid. It would be incredibly costly, not to mention super embarrassing, if the car crashed before arriving at the customer's home.
No, Tesla's approach is in no way innovative and they are not the only ones trying it. Nvidia is also trying it, as just one example. It is quite truly a rather old idea which most people who actually know what they're talking about already concluded was not the best path forward in the near term.
This has been refuted and proven false a thousand times. Why do you insist on repeatedly spreading it?
green on Twitter
Tesla collects 0.1% of the raw data from the miles their cars drive. But you only need raw data when you have a poor perception system, or when you are starting from scratch and don't have one at all. When you have a great perception system (EyeQ4), all you need is the HD map and the driveable path, and you can use those to train a neural network that learns to navigate any complex intersection or road using cues from the HD map, similar to what Waymo did with ChauffeurNet. Take a four-way intersection or a roundabout, for example: the HD map contains all the lanes, all the lane markings, which lane leads where, all the road edges and curbs, and all the traffic signs and traffic lights with their locations.
Feed that to a NN and the result is a network that does exactly what Elon says: it handles intersections, turns, curves, and unmarked or ambiguous roads. The thing is, you don't need 300k cars to do that, as proven by the fact that Mobileye's HPP is currently in production.
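To make that concrete, here is a toy sketch of the ChauffeurNet-style input being described: HD-map elements rasterized into top-down grid channels that a driving-policy network would consume. The grid size, channel names, and map content are all invented:

```python
# Toy rasterizer: turn HD-map elements (lanes, signs, etc.) into
# per-type occupancy grids. A policy network would stack these channels
# (plus dynamic objects) as its input. Scale and layout are made up.

GRID = 8  # 8x8 cells, each covering a few metres -- invented scale

def empty_channel():
    return [[0] * GRID for _ in range(GRID)]

def rasterize(elements):
    """elements: list of (channel_name, [(row, col), ...]) map features."""
    channels = {}
    for name, cells in elements:
        ch = channels.setdefault(name, empty_channel())
        for r, c in cells:
            ch[r][c] = 1
    return channels

# A tiny four-way intersection: two crossing lane centrelines and a stop sign.
hd_map = [
    ("lane_centerline", [(3, c) for c in range(GRID)]),  # east-west lane
    ("lane_centerline", [(r, 4) for r in range(GRID)]),  # north-south lane
    ("stop_sign",       [(2, 5)]),
]
channels = rasterize(hd_map)
```

The real systems render far richer inputs (speed limits, traffic-light state, predicted agents), but the principle is the same: the map does the heavy lifting, so the network sees structure instead of raw pixels.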
Jerry, Jerry, Jerry, "can" means something other than "does", and the discussion was specifically about ways to reduce data traffic.
BTW: that's baloney, since it can't handle any changes to the pre-mapped environment.
Where's the coding? Everything I listed above uses neural networks. The one company using a lot of coding is... guess what? Tesla! Please stop listening to the stupid things Elon says.

You are trying to code your way out of the difficulties, which is great as long as the situation never changes.
What if there is construction?
What if it's game day (or an evacuation) and all traffic flows in the same direction? This is exactly why Tesla wants raw data: so that it can train against what is, not what was, and be ready for what will be.
What if one of those cars is on a road with different-style markings, dealt with a cow in the road, or hit any of the other multitude of variations in the environment? Without that car's data, you would not even know to test for that case. Now figure out which 1 out of the 300,000 is important.
The "Green" hacker/member (I can't remember his exact username) has pretty conclusively shown that, at least so far, Tesla doesn't use supervised learning on any of the fleet of cars. They record certain events and send them back to the mothership, but none of the events involve what the driver was doing, per se.

To me, the most interesting thing Elon said in the whole interview is (at 14:25):
“And we’re really starting to get quite good at not even requiring human labelling. Basically the person, say, drives the intersection and is thereby training Autopilot what to do.”

This hints that Tesla is using some form of imitation learning (also known as apprenticeship learning, or learning from demonstration).
It’s been reported that Tesla is taking a supervised learning approach to imitation learning. This approach is sometimes called behavioural cloning.
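Behavioural cloning is just supervised learning on (state, expert action) pairs. Here's a toy sketch, with a synthetic "expert" steering rule standing in for human driving logs; everything is invented and toy-scale:

```python
# Minimal behavioural cloning: fit a policy to (state, expert action)
# pairs by plain supervised regression. The "expert" is a synthetic
# steering rule standing in for human demonstrations.
import random

random.seed(0)

def expert_steer(lane_offset):
    # Stand-in for the human demonstrator: steer back toward lane centre.
    return -2.0 * lane_offset

# Demonstration dataset: observed state -> expert action.
data = [(x, expert_steer(x))
        for x in [random.uniform(-1, 1) for _ in range(200)]]

# Policy: steer = w * lane_offset, trained by SGD on squared error.
w = 0.0
lr = 0.1
for _ in range(50):  # epochs
    for x, a in data:
        pred = w * x
        w -= lr * 2 * (pred - a) * x  # gradient of (pred - a)^2 w.r.t. w

# After training, the cloned policy approximates the demonstrator (w near -2.0).
```

The well-known catch with behavioural cloning is distribution shift: the policy only sees states the expert visited, so small errors compound once it drifts off that distribution. That is part of why ChauffeurNet added synthesized perturbations.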
Recently, DeepMind’s AlphaStar reached roughly median human-level competitiveness on StarCraft II using just supervised imitation learning. Waymo’s ChauffeurNet used supervised imitation learning on a small dataset and achieved results that some people in the autonomous driving and machine learning worlds found impressive.
Other forms of imitation learning include inverse reinforcement learning. Waymo’s head of research, Drago Anguelov, recently gave a talk where — as I understand it — he said Waymo currently uses both supervised imitation learning and inverse reinforcement learning. I believe he said that if you have more training samples, supervised imitation learning is preferable, and that Waymo uses IRL where they have fewer training samples. But you can watch the talk for yourself and see if I’m getting it wrong.
In the interview, Elon expressed the view that it’s a hopeless task to try to hand code if-then-else statements that encompass all driving (at 15:05):
“An heuristics approach to this will result in a local maximum of capability, so not a global maximum. I think you really have to apply a sophisticated neural net to achieve a global maximum... A series of if-then-else statements and lidar is not going to solve it. Forget it. Game over.”
He seems to believe in taking a machine learning approach to driving... But it’s not super clear when he’s talking about perception and when he’s talking about action (i.e. path planning and driving policy).
What’s interesting to me is that only Tesla has the hardware in place to take a machine learning approach to action. According to Drago Anguelov, Waymo can’t collect enough data. Most other companies (maybe all) have less data than Waymo. HW2 Teslas are driving about 350 million miles per month, compared to Waymo’s ~1 million miles per month. Plus the rate of mileage for HW2/HW3 Teslas will increase exponentially over 2019 and 2020.
If machine learning works fundamentally better than hand-coded if-then-else statements, then Tesla’s system might work fundamentally better once it is trained on enough data. I don’t know exactly why Elon is so confident that Full Self-Driving will be “feature complete” by the end of 2019, but this is one thing that could explain his confidence.
Irrespective of timelines, Tesla appears to be taking a fundamentally different approach than all other companies — because only Tesla can.
They are talking about Holistic path planning which has always existed in Mobileye's EyeQ3 and also in the EyeQ4 as "holistic lane centering."
Everyone that predicted any time soon is way off. But estimates over 2 years out are pointless; no one cares about an estimate that says 5+ years. I think that's why Elon always says 2-3 years: it keeps it fresh even if it's nonsense.
If he said 5-10 years no one would say anything about it.
Lots of people said we'd have L3 by now, but do we? Nope. Nothing here except L2+ and the plus is really being generous.
So now we say 2 years yet again, and you'll post something about Mobileye to show it's 2 years for real this time. It's credible because Mobileye is more grounded in their estimates. But who knows what will happen over those 2 years. Intel has bought and destroyed companies in less than 2 years. It gives Uber 2 more years to kill more people and ruin it for all of us.
I imagine it's going to be 2 years for Tesla as well (to get to real L3). Things do tend to converge on a point as the world is too competitive for one to break free. Like all the Tesla employees that jumped ship to start their own thing.
Just to set the record straight and go on record: if HW2.5 has corner radars, they will be able to achieve L4 (real L4, not basic) highway autonomy in 2021...
Basically, your very long post is saying that Tesla is using methods Mobileye has been using for a while now, and that Mobileye is way ahead of Tesla? Is that what you are saying? If so, and if the method works and is the right one, isn't it a good thing that Tesla is using it? You may well be right that Mobileye is ahead, but if Tesla is using the right methods, that's good news, and you should be happy for them. I don't care if Tesla is only now adopting methods Mobileye has used for a while, as long as those methods get them good progress.
Basically, Mobileye has millions of cars sending HD maps and driveable paths, which no one seems to take into account. They are literally the leaders in autonomous data, because 100% of their data gets collected, unlike the "less than 0.1%" Tesla collects. With just 1 million Mobileye cars sending data (BMW, VW, Nissan), that's 986,301,369 (~1 billion) autonomous miles of data uploaded per month.
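The arithmetic behind that figure appears to assume ~12,000 miles per car per year (a common average, not a Mobileye-published number) and a 30-day month over a 365-day year:

```python
# Back-of-the-envelope check of the ~1 billion miles/month figure above.
# The 12,000 miles/year per car is an assumed average, not a quoted stat.
cars = 1_000_000
miles_per_car_per_year = 12_000

miles_per_month = cars * miles_per_car_per_year * 30 / 365
print(int(miles_per_month))  # 986301369 -> roughly 1 billion miles/month
```

So the number checks out arithmetically; whether all million cars actually have connectivity and upload is the disputed part, as the replies below point out.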
That's not true though, is it.
Most of those "millions" of cars are running EyeQ3 offline to provide AEB.
I'm talking about EyeQ4 cars from BMW, VW and Nissan.
BMW alone sells 2 million cars a year.
The REM agreement covered "cars entering the market in 2018". Not existing product lines.
And of those cars, only a fraction have connectivity.
Interestingly, there is no mention of driving policy in any of the REM PR.
It's called driveable path.
They capture, but aren't triggered by, what the driver is doing.

Snapshots don’t record the driver input? Or even the movement of the car as measured by accelerometer, IMU, GPS, etc.?