AlanSubie4Life
Efficiency Obsessed Member
> Yes, Mobileye is admitting that it was not a "zero intervention" trip.

Good. Tesla still can take the crown, later this year, when FSD is done and in wide release. Remember, AI Day is coming.
> Good. Tesla still can take the crown, later this year, when FSD is done and in wide release. Remember, AI Day is coming.

True. I mean, FSD has already been at level 5 for 9 months now.
What did they consider occasional?
> Watch a timelapse of Mobileye mapping large parts of Europe and US in just 12 months. It looks like they've mapped virtually every road in EU and US now.

Yet some still believe maps aren't scalable. Apparently ~1.5 million cars contribute to the mapping. As more cars join the Mobileye network, the coverage gets even better. Thanks for sharing.
> I wish Tesla would do this. Tesla could build reliable maps and update them quickly with the large number of Teslas on roads. I think accurate HD maps would really help FSD beta. I've had issues with FSD Beta that I think would be fixed if it had better maps. For example, taking sharp turns too fast, not recognizing a 15 mph speed limit, not moving over into a turn-only lane when making a turn, or moving over too late.

It may well be that the enhanced map data is only needed for certain areas that are hard for FSD to perceive correctly without extra hints, with those areas determined dynamically from actual driving and fleet feedback to Tesla. Much of the time FSD can perceive and drive just fine without significant new map data. The added data can probably be sparse and space-efficient.
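A minimal sketch of what such sparse, per-segment map hints could look like; the field names, the geohash-style keys, and the dict layout are all assumptions for illustration, not anything Tesla has described:

```python
from dataclasses import dataclass

# Hypothetical sparse map "hint" keyed by road segment; only segments the
# fleet has flagged as hard to perceive get an entry, so storage stays small.
@dataclass
class SegmentHint:
    segment_id: str                        # e.g. a geohash or road-segment key
    speed_limit_mph: int | None = None     # override when vision misreads signs
    max_safe_curve_mph: int | None = None  # slow down before a sharp turn
    lane_change_hint: str | None = None    # e.g. "move right well before turn"

# The map is just a dict; unlisted segments fall back to pure perception.
hints: dict[str, SegmentHint] = {
    "9q8yy": SegmentHint("9q8yy", speed_limit_mph=15),
    "9q8yz": SegmentHint("9q8yz", max_safe_curve_mph=25),
}

def lookup(segment_id: str) -> SegmentHint | None:
    """Return the hint for a segment, or None when perception alone suffices."""
    return hints.get(segment_id)
```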
Also, I asked Mobileye how they store the AV maps: in the cloud, or offline in the car? CTO Shai Shalev-Shwartz himself replied:
> I wish Tesla would do this. Tesla could build reliable maps and update them quickly with the large number of Teslas on roads. I think accurate HD maps would really help FSD beta. I've had issues with FSD Beta that I think would be fixed if it had better maps. For example, taking sharp turns too fast, not recognizing a 15 mph speed limit, not moving over into a turn-only lane when making a turn, or moving over too late.

Something like that should be easily doable with Tesla's network of connected cars, assuming they hire the right expertise.
> Something like that should be easily doable with Tesla's network of connected cars, assuming they hire the right expertise.

Given that Tesla's aim is for cars to be self-driving without the need to map out every pebble in the road, I find it curious that they would even consider making their own maps. It would require a lot of effort that they could license from someone like Google, who specializes in it. I recall reading once that Google had 5,000 people working on map updates. Why duplicate that effort? I doubt they would ever save money on it.
I figured caches of at least the destination route and surrounding area would be onboard the vehicle. I would rather they cached at least the maps for the entire state.
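As a rough illustration of the idea, here is a minimal onboard tile cache that keeps the route corridor pinned and evicts other tiles least-recently-used first; the tile keying, capacity, and API are made-up assumptions, not anything Tesla or Mobileye has published:

```python
from collections import OrderedDict

# Hypothetical onboard map-tile cache: tiles along the active route are
# pinned; everything else is evicted least-recently-used when space runs out.
class TileCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tiles: OrderedDict[str, bytes] = OrderedDict()
        self.pinned: set[str] = set()

    def pin_route(self, tile_ids: list[str]) -> None:
        """Mark tiles covering the destination route so they are never evicted."""
        self.pinned = set(tile_ids)

    def put(self, tile_id: str, data: bytes) -> None:
        self.tiles[tile_id] = data
        self.tiles.move_to_end(tile_id)
        while len(self.tiles) > self.capacity:
            # Evict the least-recently-used unpinned tile.
            for victim in self.tiles:
                if victim not in self.pinned:
                    del self.tiles[victim]
                    break
            else:
                break  # everything left is pinned; allow temporary overflow

    def get(self, tile_id: str) -> bytes | None:
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)
            return self.tiles[tile_id]
        return None  # cache miss: fetch from the cloud when connectivity allows
```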
> It may well be that the enhanced map data is only needed for certain areas that are hard for FSD to perceive correctly without extra hints, with those areas determined dynamically from actual driving and fleet feedback to Tesla. Much of the time FSD can perceive and drive just fine without significant new map data. The added data can probably be sparse and space-efficient.

You could say, "Looks like the car figures out the map of this location on the fly without additional help, no need to store and remember that map."
> Something like that should be easily doable with Tesla's network of connected cars, assuming they hire the right expertise.

From the AI Day presentation, instead of making hard-coded maps based on GPS coordinates, Tesla is saving the features of the road as weights for the NN, so that when it sees any road that looks similar, it can predict the layout of the road. Theoretically that allows it to recognize and predict road structures on roads that were never mapped before. If there are enough weights, it can theoretically predict every single road in the world.
Humans drive better when they are familiar with the road ahead. Furthermore, it is better to solve problems offline than to solve them online. Offline has more compute, knowledge of the future, optimal weather conditions, and the ability to validate quality.
We can plan way in advance for curves ahead, or for occluded areas in an intersection. We could use the rear camera when there's a low sun in front, and still know the lanes ahead. But REM maps are much more than that.
With REM maps, we adjust driving style to the crowd behavior at each geographical region. This is a key aspect in generalizing our system to so many different places.
Our driving policy approach is unique. In a nutshell, we specify transparent assumptions on the behavior of other road users, and then analytically calculate the worst case. That is, we use math formulas instead of simulating many possible futures.
Tesla's perception system is based on one big "hydranet". It is a great solution. But, it is *one* great solution. There are many other great solutions. We believe in redundancy. Every piece of our system is solved by more than one approach.
We use e2e deep networks as well as decomposable methods, and even good old computer vision. Every single solution will suffer from diminishing returns at some point. Multiple redundant approaches can cover for each other.
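For a concrete sense of what "math formulas instead of simulating many possible futures" can look like, here is a sketch in the spirit of Mobileye's published RSS worst-case safe longitudinal distance; the parameter values are illustrative assumptions, not Mobileye's production numbers:

```python
# Worst-case safe following gap, RSS-style: assume the lead car brakes as
# hard as physically possible while the rear car first accelerates for its
# full response time, then brakes only at its guaranteed-minimum rate.
def rss_min_gap(v_rear: float, v_front: float,
                rho: float = 0.5,          # response time [s] (assumed)
                a_accel_max: float = 3.0,  # rear car max accel [m/s^2] (assumed)
                a_brake_min: float = 4.0,  # rear car guaranteed braking [m/s^2]
                a_brake_max: float = 8.0   # front car max braking [m/s^2]
                ) -> float:
    v_rear_worst = v_rear + rho * a_accel_max  # rear speed after response time
    gap = (v_rear * rho
           + 0.5 * a_accel_max * rho ** 2
           + v_rear_worst ** 2 / (2 * a_brake_min)
           - v_front ** 2 / (2 * a_brake_max))
    return max(gap, 0.0)  # negative means any gap is already safe

# Both cars at 25 m/s (~56 mph): one closed-form number, no simulation.
print(f"{rss_min_gap(25.0, 25.0):.1f} m")
```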
> From the AI Day presentation, instead of making hard-coded maps based on GPS coordinates, Tesla is saving the features of the road as weights for the NN, so that when it sees any road that looks similar, it can predict the layout of the road. Theoretically that allows it to recognize and predict road structures on roads that were never mapped before. If there are enough weights, it can theoretically predict every single road in the world.

You are slightly misunderstanding what they were talking about. I'm assuming you are referring to the part where they showed multiple cars contributing to what appears to be a map of an intersection. Those are not weights but auto-labeled data of the road, generated from multiple clips and varying viewpoints of the same intersection and projected into vector space. The labeled data is used to train their NN; ultimately a trained NN model is just weights used at runtime for inference, and this applies to just about every major player working in this field currently.
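To make the labels-vs-weights distinction concrete, a toy sketch (not Tesla's pipeline; the shapes and names are invented for illustration): auto-labeled clips are consumed offline during training, and only the learned weights ride along in the car.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Offline (training): auto-labeled clips exist only here. -------------
# Each "clip" is a feature vector; each label is a road-layout target.
clips = rng.normal(size=(1000, 16))        # hypothetical clip features
labels = clips @ rng.normal(size=(16, 4))  # hypothetical layout targets

# Fit a linear model by least squares; `weights` is all that training keeps.
weights, *_ = np.linalg.lstsq(clips, labels, rcond=None)

# --- Onboard (inference): only `weights` ships in the car. ---------------
def predict_layout(camera_features: np.ndarray) -> np.ndarray:
    """Predict a road layout from live features using the frozen weights."""
    return camera_features @ weights

print(predict_layout(rng.normal(size=16)).shape)  # (4,)
```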
> As for actual maps, Tesla is just buying data from a map service (Mapbox or TomTom). They dabbled in creating their own, but even though Tesla's fleet is growing, it probably can't match the coverage of the fleets of the map services.

I'm aware Tesla sources maps from 3rd-party vendors. I'm just saying it is something they could do themselves with the right expertise, and of course it would take time to reach the same level as Mobileye's REM map. There is something they said at AI Day that leads me to believe it is something they might be working on, because those vector-space auto-labeled road data are pretty much maps. How detailed the labels are is unknown.
> I would note, however, that the map being discussed is for hands-free L2, not for their L4 solution. So far the L4s (not just Mobileye, but other companies too) seem to still require some time actually driving at a given location with the exact end vehicles (or even better-equipped mapping vehicles), not just general data from regular cars with cameras.

Mobileye REM maps are used for both L2 and L4. That's one thing they've been very keen on using as a selling point. It makes sense to use lidar for mapping, as it produces a very accurate recreation of the world in 3D; that is why the majority use it as part of their sensor suite and dataset.
> This is incorrect. Mobileye builds their AV maps using cameras only, crowdsourcing data from their large L2 fleet. And the same maps are used for both L2 and L4. Mobileye's L4 is built upon the L2 system. The L4 system uses the same maps, the same vision stack and the same RSS as L2. It just has the extra computing power and the added radar-lidar stack, as you can see from the graphic below. Remember that Mobileye defines L4 as just a more reliable L2 that does not need human supervision.

Maybe I'm wording it poorly, but my point is more that the "SuperVision" L2 solution can function solely on the maps they are making right now, and they can plop a car with that basically anywhere and have it function fine.
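A very rough sketch of the layering the quoted post describes, with invented class names; this is just to illustrate "L4 = the L2 stack plus more compute and a radar-lidar subsystem", not Mobileye's actual software structure:

```python
# Toy composition: the hypothetical L4 stack reuses the L2 building blocks
# (REM maps, camera vision, RSS policy) and only adds redundant sensing.
class L2Stack:
    def __init__(self, rem_maps, camera_vision, rss_policy):
        self.rem_maps = rem_maps     # same crowdsourced maps
        self.vision = camera_vision  # same camera-only perception
        self.policy = rss_policy     # same RSS driving policy

class L4Stack(L2Stack):
    def __init__(self, rem_maps, camera_vision, rss_policy, radar_lidar):
        super().__init__(rem_maps, camera_vision, rss_policy)
        self.radar_lidar = radar_lidar  # the added redundant subsystem
```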
> You are slightly misunderstanding what they were talking about. I'm assuming you are referring to the part where they showed multiple cars contributing to what appears to be a map of an intersection. Those are not weights but auto-labeled data of the road, generated from multiple clips and varying viewpoints of the same intersection and projected into vector space. The labeled data is used to train their NN; ultimately a trained NN model is just weights used at runtime for inference, and this applies to just about every major player working in this field currently.

Nope, not talking about that part, talking specifically about this part (from an earlier post):
> I'm aware Tesla sources maps from 3rd-party vendors. I'm just saying it is something they could do themselves with the right expertise, and of course it would take time to reach the same level as Mobileye's REM map. There is something they said at AI Day that leads me to believe it is something they might be working on, because those vector-space auto-labeled road data are pretty much maps. How detailed the labels are is unknown.

The labeled data, as per the presentation, is only used for training and is gathered only from a small subset of video the fleet may see (basically only clips that triggers flag as interesting). I see nothing in the presentation that suggests they are going back to making their own maps (meaning capturing clips from all roads to do the auto-labeling). If that were what they were doing, then from the screenshot you posted, they shouldn't be sending a clip to the offline NN. Instead, there should be an online NN that does the auto-labeling and sends the finished data back to the mothership (not the clip itself). Otherwise the bandwidth required is too much.
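A back-of-envelope comparison of the two upload strategies; every number here is an assumption picked purely for illustration:

```python
# Assumed figures, for illustration only: a 30 s, 8-camera clip at a very
# modest 1 Mbit/s per camera, versus a compact vectorized road description.
clip_seconds = 30
cameras = 8
mbit_per_sec_per_cam = 1.0  # heavily compressed video (assumed)
clip_mb = clip_seconds * cameras * mbit_per_sec_per_cam / 8  # megabytes

vector_label_kb = 50  # assumed size of lane/edge polylines for the scene

print(f"raw clip upload:   {clip_mb:.0f} MB")
print(f"vectorized labels: {vector_label_kb} kB")
ratio = clip_mb * 1000 / vector_label_kb
print(f"clips are ~{ratio:.0f}x larger per capture")  # ~600x here
```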
> Mobileye REM maps are used for both L2 and L4. That's one thing they've been very keen on using as a selling point. It makes sense to use lidar for mapping, as it produces a very accurate recreation of the world in 3D; that is why the majority use it as part of their sensor suite and dataset.

See my other post on this subject. L2 and L4 both use REM maps (as well as dumb navigation maps), but that does not necessarily mean L4 can operate without additional data/training beyond that. Put another way, if there were no difference, the SuperVision demo that was done should have been able to be done as L4 with zero interventions, if the REM maps were all that was required.