scottf200
Well-Known Member
June 15th 2020 one?

This is so wrong. Go watch Karpathy's most recent CVPR video.
This is so wrong. Go watch Karpathy's most recent CVPR video.

You are clueless. Get your head out of the Tesla bubble and learn something for once.
The traffic control feature uses mapping, lol. Talk about clueless.
I assume you don't even have a Tesla with the traffic control feature.
Again, how is it that you can't understand? How many times do I have to repeat myself?
During today’s Q2 earnings call, Elon mentioned several times how the new 4D version of FSD is a profound improvement over the current 2.5D stack. He is a damn good salesman and has me convinced.
It will be interesting to see what Tesla can achieve in the next 6 - 12 months.
Check out the video on the topic. I think it's really good information on Autopilot history and the move from 2.5D to 4D. What do you guys think?
Tesla Autopilot ReWrite 4D! & 3D labeling & Competition
https://youtu.be/kLukc3AtO-8
That's all it does.

I don't know who or what you're responding to, but I said that the traffic control feature does all the following: "perception, mapping, prediction and planning."
It seems that you disagreed and replied by saying that all the traffic control feature does is NN perception and if statements to stop, which is wrong since it does all of the above.
Perception: light color and light position
Mapping: knows if there's a traffic control coming up from SD maps, also localizes the car within the lanes to better understand if the car is in a turn lane or not
Prediction: predicts whether the light is relevant to your current lane (also, NNs are predictions in general)
Planning: when and where to stop / should stop or not based on red / yellow
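The four stages above can be sketched as a single decision function. This is a hypothetical illustration of how those stages could chain together for a traffic light, using made-up inputs and thresholds; none of these names or numbers come from Tesla's actual stack.

```python
# Hypothetical sketch of the four stages (perception -> mapping ->
# prediction -> planning) for a traffic-control stop decision.
# All parameter names and thresholds are invented for illustration.

def traffic_control_decision(light_color, map_says_signal_ahead,
                             in_turn_lane, speed_mps, dist_to_line_m):
    """Return 'stop' or 'proceed' for a detected traffic light."""
    # Perception: the camera NN has already produced light_color.
    if light_color not in ("red", "yellow", "green"):
        return "proceed"  # nothing actionable detected

    # Mapping: skip lights the SD map says are not on our route.
    if not map_says_signal_ahead:
        return "proceed"

    # Prediction: is this light relevant to our current lane?
    # (e.g. a turn-lane arrow may not apply if we are going straight)
    if in_turn_lane:
        return "proceed"

    # Planning: decide stop vs. go from color and stopping distance.
    if light_color == "red":
        return "stop"
    if light_color == "yellow":
        # Stop only if we can comfortably brake before the line (~3 m/s^2).
        braking_dist = speed_mps ** 2 / (2 * 3.0)
        return "stop" if braking_dist < dist_to_line_m else "proceed"
    return "proceed"
```

Even this toy version shows the point being argued: the logic needs all four kinds of information, not just the NN's light color plus an if-statement.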
There is no prediction or planning (decision-making) neural network in play at the moment.
Lol, right. It seems all you're doing is defining terms in your head and then disagreeing with anyone who defines them differently. I already identified those four characteristics of the traffic control feature. Perhaps your definition of "planning" is different from mine or another company's.
Sure. The guy who works in the autonomous driving industry is just making up his own definitions.
Three observations:
1) 4D is self-driving 101. Drawing 3D boxes around objects is a basic tool of perception. Literally everybody, Mobileye, Waymo, Cruise etc already have 4D camera vision. So 4D camera vision is not some incredible breakthrough. It's foundational to any camera vision because the car has to see like the real world. So while it is great news that Tesla is close to releasing the 4D rewrite, it just means that Tesla's vision can now more accurately locate objects in the real world. That's just the basics of perception and a long way from solved FSD.
2) The video implies that Mobileye is aiming to do camera-only FSD like Tesla. That is misleading. The video fails to mention that Mobileye is developing 2 independent FSD systems, one that is camera-only, yes, but another that is lidar and radar only. And Mobileye plans to combine the two systems together in their final FSD system that goes into consumer cars. So Mobileye is still planning to use lidar in the FSD that they put in consumer cars.
3) The video praises how good the Mobileye car is at maneuvering in complex situations and seems to give all the credit to the 4D vision. The video implies that this is evidence that Tesla will be able to do similar self-driving when they finish 4D. But 4D is only a prerequisite, it does not automatically guarantee that you can handle those situations. The 4D vision is only the perception part. Advanced planning and driving policy are the real reasons that the Mobileye car is able to self-drive like that. So while Tesla doing 4D is an important prerequisite for being able to duplicate that demo, it does not guarantee it. Tesla still needs to do the planning and driving policy for those driving scenarios.
1) 4D is self-driving 101. Drawing 3D boxes around objects is a basic tool of perception. Literally everybody, Mobileye, Waymo, Cruise etc already have 4D camera vision. So 4D camera vision is not some incredible breakthrough. It's foundational to any camera vision because the car has to see like the real world. So while it is great news that Tesla is close to releasing the 4D rewrite, it just means that Tesla's vision can now more accurately locate objects in the real world. That's just the basics of perception and a long way from solved FSD.
You just described 3D, which works by analysing a sequence of static pictures over time (maybe a slight oversimplification). One of the challenges in this approach is making sure "Object 1" in picture 2 is the same entity as "Object 1" from picture 1, even though Object 1 in picture 2 looks a bit different.
What they are calling "4D" will analyse a continuous video feed, so is a fundamentally different approach.
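The re-identification challenge described above (is "Object 1" in picture 2 the same entity as "Object 1" in picture 1?) is usually handled by associating boxes across frames. Here is a minimal, generic greedy sketch using intersection-over-union (IoU) overlap; it is a standard textbook approach, not Tesla's or anyone else's actual tracker.

```python
# Minimal sketch of frame-to-frame data association: match each box in
# the new frame to a box in the previous frame by overlap (IoU).
# Generic illustration only; thresholds and structure are invented.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev_boxes, new_boxes, threshold=0.3):
    """Greedily match new detections to previous ones.

    Returns {new_index: prev_index} for every match above threshold;
    unmatched new boxes (potential new objects) are simply absent.
    """
    matches, used = {}, set()
    for i, nb in enumerate(new_boxes):
        best, best_iou = None, threshold
        for j, pb in enumerate(prev_boxes):
            if j in used:
                continue
            score = iou(nb, pb)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches
```

The fragility of this per-frame matching when an object changes appearance or is briefly occluded is exactly why processing a continuous video feed, as the "4D" approach is described, is a fundamentally different strategy.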
Sure. The guy who works in the autonomous driving industry is just making up his own definitions.

I’d buy it.