What about all the testing Tesla supposedly does before shipping every bit of code we were assured of by some people?

I, for one, would be glad to know that the system was validated in shadow mode before being deployed live.
I'm sure they do test it as Elon has mentioned in the past... but like every software developer knows, the end users are the best beta testers. They can find problems you didn't even know existed even when running only in shadow mode.
Other tech companies, like Facebook, do the same thing with certain features.
So, Blader, you claimed LiDAR needed ZERO computation for object detection which, in the application of self-driving cars, is patently false.

Don't be naive, I have posted that same video and timestamp multiple times. (Heck, you probably took that from one of my previous posts... I'm watching you!)
But we are comparing camera with lidar here, so let's stay on topic people!
Lidar doesn't NEED to know whether the object in front of it is a pedestrian, a bicyclist, or a construction cone, because it knows that object's exact dimensions and position in space.
That doesn't mean you don't want to perform classification on the detected objects (raw data).
The point of this 5 page discussion is that:
1) A picture is a collection of meaningless numbers which, without cutting-edge deep learning, is useless to a self-driving car or any vision system for that matter. It therefore requires object recognition and classification models.
2) A lidar returns the xyz of the world around it; even without any classification model, you know there is an object of precise dimensions in front of you.
3) A radar returns data points of the world around it (although with extremely low/garbage resolution); even without any classification model, you know there is an object of a certain size in front of you.
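To make the contrast in the list above concrete, here is a minimal sketch (my own illustrative code, not any vendor's API, with arbitrary threshold values) of how raw lidar xyz points can flag "there is an object of a certain size in front of me" with pure geometry, no trained model at all. The equivalent decision from a camera image would first require a recognition/classification model, because pixels are just numbers:

```python
# Illustrative sketch: raw lidar returns (x, y, z) points, so a simple
# geometric test can flag an obstacle without any trained model.
# An image, by contrast, is a grid of pixel intensities that means
# nothing until a classifier has run over it.

def obstacle_ahead(points, lane_half_width=1.0, max_range=30.0, min_height=0.3):
    """Return True if any point lies in the corridor ahead of the car.

    points: iterable of (x, y, z) in meters; x forward, y left, z up.
    The thresholds are arbitrary illustrative values, not real tuning.
    """
    return any(
        0.0 < x < max_range            # in front of the car
        and abs(y) < lane_half_width   # within our lane
        and z > min_height             # above the road surface (skips ground returns)
        for x, y, z in points
    )

# A return with "something" about 12 m ahead, plus a point off to the side:
cloud = [(12.0, 0.2, 0.8), (12.1, -0.1, 1.1), (5.0, 4.0, 0.5)]
print(obstacle_ahead(cloud))  # True: we know *something* is there,
                              # even though we have no idea what it is
```

Note this only answers "is something there" — it says nothing about *what* is there, which is exactly the classification question argued below.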
Secondly, a NN needs to recognize objects in a picture and any object it can't recognize, the car CANNOT see.
This isn't the case for lidar which returns 3d coordinates.
The end result is that the camera is not the best sensor; lidar is.
It boils down to the speed of FSD: lidar-based FSD is possible only up to some relatively slow speed. For higher-speed FSD, real object recognition is mandatory, and ignoring color in that process (i.e. using lidar) makes the recognition much harder. It may happen that computer vision won't succeed for quite some time and the lidar-based approach will look better, but in the long run the vision approach will win.
Well if that chart's true... GO RADAR!

Why would this be an either-or scenario, though?
All technologies provide different benefits, for example radar can see through objects and lidar can see in the dark and provide very reliable distance information, for example - these are things where radar and lidar are superior to vision, no matter what.
It seems to me that this has become a bit of a Jobsian quest. Because Elon has been sour on Lidar for FSD, a lot of people are parroting this as an either-or scenario, when in reality it probably is not.
Are there current benefits for using radar and lidar that will eventually go away, including computational ones and vision being harder? Quite probably. That is probably why Tesla includes limited radar and ultrasonics on AP2 as well, because it makes early coding simpler.
But that doesn't mean these technologies do not have inherent benefits that help in the long run as well.
Neither Jobs nor Musk probably were/are quite as rigid in their own thinking at all, they were/are just masters of selling what they have available now.
I don't drive Facebook. This is life and death. Tesla shouldn't ask us to pay $100k+ and then treat us like guinea pigs with our lives on the line. It's completely immoral and disgusting that they lack full and fair disclosure of such critical safety equipment. They pulled that garbage with me on my Q4 2016 AP2 purchase. I specifically wanted AEB and I was assured, in writing, that it would be there.
It makes total sense that Tesla has to validate their software with new hardware, but 99.99% of Tesla HW2.5 owners would never know their car doesn't have the same features as AP2 cars, especially critical safety features like AEB.
the feature-set initially will be disabled, say, well, at least for the first few months.
Before activating the features enabled by the new hardware, we will further calibrate the system using millions of miles of real-world driving to ensure significant improvements to safety and convenience. While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control. As these features are robustly validated we will enable them over the air, together with a rapidly expanding set of entirely new features. As always, our over-the-air software updates will keep customers at the forefront of technology and continue to make every Tesla, including those equipped with first-generation Autopilot and earlier cars, more capable over time.
You're suggesting that 99.99% of Tesla owners with HW 2.5 cars don't read their email regarding the HW 2.5 cars.
From that message, it looked like they were disclosing it to people. I know in Q4 2016 it was made very clear on the AP2 phone call and the website that it wouldn't have certain safety features until a later date (they happened to be wrong about the exact date, but they still disclosed the lack of features).
Oct 2016 conference call:
Tesla blog Oct 2016:
They aren't treating people like guinea pigs because they are testing it in shadow mode. If they did the opposite and tested it live, then people would be like guinea pigs. If AEB triggered too many false positives it would endanger both the passengers and other drivers.
If Lidar was that important I'm sure Elon would've included it as part of the current hardware suite.
but we already know that... "three months maybe, six months definitely..."
So, Blader, you claimed LiDAR needed ZERO computation for object detection which, in the application of self-driving cars, is patently false.
Thus, your claim of zero computational power for LiDAR object classification is 100% wrong. Chris Urmson already told you that Google classifies all objects, yet you repeatedly made the false claim anyway.
Lidar needs 0 computation for any object detection task. zero, nada, none, zip.
Having found a "BLOB of something" in positional space is utterly meaningless unless you know what it is. Thus, every BLOB must be classified. And, in fact, Google already told you that was the case. And even demonstrated it.
Now you're quickly changing the subject to claim that LiDAR provides more useful data than the other two methods. Well of course it does. It's an active system with higher resolution.
But it doesn't remove the requirement to classify all the objects it actively scans. As should be obvious to any self-proclaimed genius.
Why make phony claims if "you're never wrong"?
Elon has staked his reputation and Tesla's on a vision/radar only approach.
Because that's probably all you need. Humans only have two eyes and yet we can drive perfectly fine. The car has 8 eyes that can see at the same time plus radar plus sensors.
He would've added lidar in the very beginning if it was really a need.
Blader, for you to claim that LiDAR requires no computation for object detection is so laughably naive and obtuse that you're obviously trolling.

No, I said lidar needed no computation for object detection. Object detection (also called recognition) and object classification are two vastly different things...
Blader, for you to claim that LiDAR requires no computation for object detection is so laughably naive and obtuse that you're obviously trolling.
What do you think all that processing power is doing in the trunk of every LiDAR test vehicle? There is an absolutely enormous amount of raw data being output by every horizontal scan every second and it must be assembled and processed.
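A back-of-envelope calculation makes the "enormous amount of raw data" point concrete. The numbers below are illustrative, roughly the class of a 64-beam spinning lidar rather than any exact spec sheet:

```python
# Back-of-envelope (illustrative numbers, not a datasheet):
beams = 64                    # vertical laser channels
rotation_hz = 10              # full 360-degree sweeps per second
points_per_beam_per_rev = 2000
bytes_per_point = 16          # x, y, z, intensity as 4-byte floats

points_per_sec = beams * rotation_hz * points_per_beam_per_rev
mb_per_sec = points_per_sec * bytes_per_point / 1e6
print(points_per_sec)  # 1280000 points every second
print(mb_per_sec)      # ~20.5 MB/s of raw returns to assemble and process
```

Even under these conservative assumptions, that is over a million points per second that must be assembled, clustered, and tracked before a planner can use them.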
If you hadn't taken your argument to such a clown-level I'd be mostly agreeing with your other points. LiDAR gives precise spatial positioning "by design" and has a number of other advantages.
Because that's probably all you need. Humans only have two eyes and yet we can drive perfectly fine. The car has 8 eyes that can see at the same time plus radar plus sensors.
He would've added lidar in the very beginning if it was really a need.
What exactly can be done with lidar that cannot be done with two cameras?