
Brad Templeton's F rating of Tesla's FSD

I do not get the detailed maps argument. It doesn't take much detail to show lanes. In the case of the upcoming merge, the car can use 'low-detail' maps for that (planning) information.
The question is do you record the actual location of lane markers, and their texture and other things. The key thing is that if you have detailed information, it's really obvious if the road has changed. Waymo for example has a LIDAR grayscale image (others use the 3D data, perhaps Waymo does too now) of what the road looked like when mapped. In fact it uses that image to figure out where it is on the map. It compares what it sees with the model built by other cars that drove the road, and it knows where it is, and what differences there are from when it was mapped. (There are almost never any, of course, unless you are the extremely rare first car ever to discover a new construction site that wasn't permitted in advance.) But if there are, you know them, and can act appropriately -- for example, driving like a no-map car in that stretch. So it's never worse than the no-map car and almost always better.
 
... It compares what it sees with the model built by other cars that drove the road, and it knows where it is, and what differences there are from when it was mapped. (There are almost never any, of course, unless you are the extremely rare first car ever to discover a new construction site that wasn't permitted in advance.) But if there are, you know them, and can act appropriately -- for example, driving like a no-map car in that stretch. So it's never worse than the no-map car and almost always better.
Sometimes you are the first car to discover the road has changed

 
The question is do you record the actual location of lane markers, and their texture and other things. The key thing is that if you have detailed information, it's really obvious if the road has changed. Waymo for example has a LIDAR grayscale image (others use the 3D data, perhaps Waymo does too now) of what the road looked like when mapped. In fact it uses that image to figure out where it is on the map. It compares what it sees with the model built by other cars that drove the road, and it knows where it is, and what differences there are from when it was mapped. (There are almost never any, of course, unless you are the extremely rare first car ever to discover a new construction site that wasn't permitted in advance.) But if there are, you know them, and can act appropriately -- for example, driving like a no-map car in that stretch. So it's never worse than the no-map car and almost always better.

1) There is no such thing as a "LIDAR grayscale image." LIDAR sensors return an uncolored 3D point cloud. Any desired shading/colorization must be added as a post-processing step, using additional input sources and/or AI that uses multiple frames to figure out which points belong to one object versus multiple objects that happen to be next to each other, plus perhaps AI that recognizes certain shapes as objects (as Tesla does using cameras).
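(A rough illustration of that post-processing step, for the curious: project each lidar point through a calibrated camera and sample the pixel color. Everything here - the frames, the calibration matrices, the array layout - is a placeholder for illustration, not any particular vendor's pipeline.)

```python
import numpy as np

def colorize_points(points_xyz, image, K, T_cam_from_lidar):
    """Attach RGB from a camera image to lidar points (one common post-process).

    points_xyz: (N, 3) points in the lidar frame; K: 3x3 camera intrinsics;
    T_cam_from_lidar: 4x4 extrinsic transform. All placeholder conventions.
    """
    homog = np.c_[points_xyz, np.ones(len(points_xyz))]    # (N, 4) homogeneous
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]            # points in camera frame
    in_front = cam[:, 2] > 0                               # keep points ahead of camera
    uvw = (K @ cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)            # pixel coordinates
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return cam[in_front][ok], image[uv[ok, 1], uv[ok, 0]]  # points + sampled colors
```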

2) It is simply not scalable to store, on the vehicle, images of all roads from the vantage point of a vehicle driving them. Waymo might have done that back in the day for its very limited suburban Phoenix pilot, but it's not what they're doing today.

3) Therefore HD Maps generally do not contain images. The one exception of which I'm aware is Nvidia Maps (technology from acquiring DeepMap last August). But even there, Nvidia's HD Maps are different for different uses. They support L2 functionality via crowdsourcing (as does Mobileye), but for L3 support they bring humans in for map validation. This is currently only done on highways; doing so in cities is another scaling problem. And then for L4 support they have their own fleet of vehicles to produce suitable maps. The costs of these vehicles and the data they collect are too expensive/impractical to put into consumer crowdsourcing vehicles.

4) Waymo today is not using "that image to figure out where it is on the map." That appears to be an Nvidia Map thing only. This is what Waymo says they're doing today.

5) True "ground truth" maps are today built with both expensive Lidar and expensive & time-consuming human validation, as you can read about in the Nvidia link above. In the interview with Shashua, he's hopeful on eventually getting a good lidar sensor down to $1,000 in cost to the OEM. That's a LOT of money to put into normal consumer vehicles. Heck, just the bandwidth costs alone are prohibitive for OEMs, so it seems extremely unlikely that crowdsourcing accurate data can be an affordable and scalable solution - and that's without considering the time cost of the human validation that companies like Nvidia do for their self-driving off-highway maps.

6) Remember that everything requires data, processing, and processing time, all of which are precious in an autonomous vehicle. Suppose a vehicle is building a 3D environment as it drives, comparing it to some stored map, determining what differences there are (there will always be differences), determining which differences matter and which don't, and only then determining the actual environment in which to drive. That isn't necessarily "never worse" or "almost always better" than skipping the comparison and instead just figuring out what's there and handling it. It could even be that the more data there is in the map, the more processing and processing time the comparison needs - and then, if there is a meaningful difference, the car is behind on figuring out what's really there.
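(To make the trade-off in point 6 concrete, here is a toy compare-then-decide loop - every step below costs compute before the planner can act. All types and thresholds are invented; real stacks are far more involved.)

```python
from dataclasses import dataclass

@dataclass
class Lane:
    id: str
    center_xy: tuple  # grossly simplified lane geometry

def _dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def reconcile(map_lanes, perceived_lanes, tol=0.5):
    """Compare stored map lanes against live perception and pick a source."""
    perceived = {l.id: l for l in perceived_lanes}
    diffs = []
    for lane in map_lanes:
        live = perceived.get(lane.id)
        if live is None or _dist(lane.center_xy, live.center_xy) > tol:
            diffs.append(lane.id)
    # World no longer matches the map: fall back to live perception
    # ("drive like a no-map car"); otherwise trust the richer map data.
    return "perception_only" if diffs else "map_assisted"
```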


There's no doubt that Tesla has taken on a big challenge. The argument over which technologies are "crutches" and which are truly practical will go on for a while still. One can see that companies whose technology depends on expensive in-vehicle hardware will have deployment issues (which is why many are pursuing a taxi replacement solution, where the added up-front hardware cost is amortized over orders of magnitude more miles per vehicle). But it's also true that companies depending on human-validated maps have their own cost, scaling, and time issues. Right now, the argument against Tesla's approach is that it has to solve the 'first time on this road' problem at least as well as humans, and eventually needs to be better to avoid a world of Sunday Drivers. That said, human Sunday Drivers have just two eyes and limited processing power (which is time-shared) on board. Tesla vehicles have 4 times as many eyes and dedicated processing power.

Note that Tesla already makes use of some additional map data beyond what's needed for navigation. Cars with non-Beta FSD show Stop Signs on the screen before they're visible to the vehicle (or driver) today. I don't think anyone here knows how much map data is stored in Tesla vehicles today, and certainly not how much they might augment that in the future. Only Tesla knows what recognition issues they're having today - we can only infer so much from what the car displays and does. I personally do not think that a full HD map is necessary or even desirable for safe and practical autonomous driving. But, I also don't rule out that Tesla might add more data to their existing maps as their development continues.
 
1) There is no such thing as a "LIDAR grayscale image." LIDAR sensors return an uncolored 3D point cloud. Any desired shading/colorization must be added as a post-processing step, using additional input sources and/or AI that uses multiple frames to figure out which points belong to one object versus multiple objects that happen to be next to each other, plus perhaps AI that recognizes certain shapes as objects (as Tesla does using cameras).
They return intensity.
[animated image: lidar intensity returns]

2) It is simply not scalable to store, on the vehicle, images of all roads from the vantage point of a vehicle driving them. Waymo might have done that back in the day for its very limited suburban Phoenix pilot, but it's not what they're doing today.

3) Therefore HD Maps generally do not contain images. The one exception of which I'm aware is Nvidia Maps (technology from acquiring DeepMap last August). But even there, Nvidia's HD Maps are different for different uses. They support L2 functionality via crowdsourcing (as does Mobileye), but for L3 support they bring humans in for map validation. This is currently only done on highways; doing so in cities is another scaling problem. And then for L4 support they have their own fleet of vehicles to produce suitable maps. The costs of these vehicles and the data they collect are too expensive/impractical to put into consumer crowdsourcing vehicles.
First of all, not only is Mobileye's REM map fully automated, but their map is generated by consumer fleets. Also, a lot of the mapping has been taken over by state-of-the-art machine-learning auto-labeling models, which require an order of magnitude less human validation. It's amazing to me how Tesla fans champion AI to solve the mother of all AI problems, but that same AI can't possibly crack mapping. Talk about cognitive dissonance.
5) True "ground truth" maps are today built with both expensive Lidar and expensive & time-consuming human validation, as you can read about in the Nvidia link above.
First of all, Lidar IS NOT expensive anymore.
Secondly, these companies use the same car to map as they do to drive.
Thirdly, ML handles a lot of the map creation and validation at a lot of companies.
In the interview with Shashua, he's hopeful on eventually getting a good lidar sensor down to $1,000 in cost to the OEM. That's a LOT of money to put into normal consumer vehicles.
High-resolution Lidar today already costs less than $1k. The lidar that they use today costs $1k (Luminar); Volvo gets the same lidar for $500. He's talking about a different type of lidar. Also, there are dozens of cars today with high-resolution lidar.
6) Remember that everything requires data, processing, and processing time. All of which are precious in an autonomous vehicle. If a vehicle is building a 3D environment as it's driving to compare to some stored map and then determine what differences there are (there will always be differences) and then determine which differences matter and which don't matter and then determine what the actual environment is in which to drive, that isn't necessarily "never worse" or "almost always better" than not taking the data and processing time to compare first, but to instead figure out what's there and handle it. It could even be that the more data there is in the map the more processing and processing time are needed for the comparison, and then if there is a meaningful difference, the car is behind on figuring out what's really there.
You are spreading misinformation. If using HD maps were worse or provided no benefit, no one would still be using them.
There are companies today using them and delivering results with driverless L4 fleets (Waymo, Cruise, AutoX), while the one company that doesn't use them has nothing remotely close.
There's no doubt that Tesla has taken on a big challenge. The argument over which technologies are "crutches" and which are truly practical will go on for a while still. One can see that companies whose technology depends on expensive in-vehicle hardware will have deployment issues (which is why many are pursuing a taxi replacement solution, where the added up-front hardware cost is amortized over orders of magnitude more miles per vehicle).
What deployment issues? Mobileye's current robotaxi hardware costs around $15k. Their consumer-car system in 2024 (two years from now) will cost significantly less than $5k.
Exactly what deployment issue are you referring to?
But it's also true that companies depending on human-validated maps have their own cost, scaling, and time issues.
There is no scaling issue. Secondly, again, as I said: it's amazing to me how Tesla fans champion AI to solve the mother of all AI problems, but that same AI can't crack mapping. Talk about cognitive dissonance.
Right now, the argument against Tesla's approach is that it has to solve the 'first time on this road' problem at least as well as humans, and eventually needs to be better to avoid a world of Sunday Drivers. That said, human Sunday Drivers have just two eyes and limited processing power (which is time-shared) on board. Tesla vehicles have 4 times as many eyes and dedicated processing power.
You're joking, right? The human brain is a million times more powerful than Tesla's hardware, and the human eye is also way better.
Tesla vehicles are stuck with 8 low-resolution (1.2 MP) cameras with significant blind spots and no 360° camera cleaning.
Note that Tesla already makes use of some additional map data beyond what's needed for navigation. Cars with non-Beta FSD show Stop Signs on the screen before they're visible to the vehicle (or driver) today. I don't think anyone here knows how much map data is stored in Tesla vehicles today, and certainly not how much they might augment that in the future. Only Tesla knows what recognition issues they're having today - we can only infer so much from what the car displays and does. I personally do not think that a full HD map is necessary or even desirable for safe and practical autonomous driving. But, I also don't rule out that Tesla might add more data to their existing maps as their development continues.
This proves the point about 99% of Tesla fans. It's not HD maps, Lidar, or radar that you are against. You are simply against anything that isn't Tesla.
 
1) REM is merely compiling what the cars have said. The "distillation" as you put it is the critical element, and for that Mobileye's use of the on-board computer is itself severely limiting. Garbage-in, garbage-out: it doesn't matter how great Mobileye's cloud computing is if it's fed stuff that's missing elements or even wrong because the on-board computer wasn't up to snuff. In this regard, Tesla's sending of data to the cloud for post-processing is superior.

2) Shashua understands the cost of bandwidth and what OEMs are willing to pay very well. Vehicle OEMs are loath to spend money to provide data to a Tier 1. Even the $1/month number Shashua mentioned is a high cost for OEMs that try to save pennies by cutting down the number of screws used to attach a panel. If Mobileye really wanted more data, they could pay the OEMs for the bandwidth ME-equipped vehicles use. If the bandwidth is as cheap as you say and the additional data as useful as you say, then why hasn't Mobileye done this?

Mobileye is getting data from 10k Zeekr cars in China, soon to be 75k+ by the end of 2022.
Mobileye has more than 100x more data than Tesla. 16 million 1 minute clips vs 1 million 10 second clips. 200PB vs 1.5 PB.
They don't need more data.
 
Who is Brad Templeton? He covers robocar technology in media such as Forbes, and previously worked
on Google's car team. He chairs the program on computing and networking for Singularity University.
Singularity University is NOT a university. It is a consulting company.
 
They return intensity.

They don't return 2D grayscale images, as was claimed. Storing such images on vehicles to aid localization is not a scalable solution, as I described.

First of all, not only is Mobileye's REM map fully automated, but their map is generated by consumer fleets. Also, a lot of the mapping has been taken over by state-of-the-art machine-learning auto-labeling models, which require an order of magnitude less human validation. It's amazing to me how Tesla fans champion AI to solve the mother of all AI problems, but that same AI can't possibly crack mapping. Talk about cognitive dissonance.
I was reporting on how Nvidia itself says their maps are generated, which includes a variety of methods. The better maps they want for L4 are not generated automatically from solely crowdsourced information.

I have said little to nothing about the suitability of AI for map generation. Of course that's possible. Mobileye, however, is limited by the relatively low compute and storage OEMs let them install on vehicles and by the severe bandwidth limitations OEMs place on Mobileye's bandwidth. Should Tesla choose to generate HD maps, they already have tons of images and video in their cloud from which to process. Mobileye's images and videos are lost forever, as they're not sent to the cloud and not retained on board.

First of all, Lidar IS NOT expensive anymore... High-resolution Lidar today already costs less than $1k.
I was simply repeating what the CEO of Mobileye said to Brad Templeton in a video he posted. When Templeton challenged him on the $1k price being too high, CEO Shashua said the lower cost lidar units weren't good enough in quality, although he did later qualify that ME is planning on not needing multiple lidar units - only 1 forward facing lidar would suffice. I guess that means vision only for things like changing lanes on a highway.

To quote from Mobileye's own website: "Unfortunately, LiDAR is inherently a relatively expensive type of autonomous vehicle sensor, costing roughly ten times as much as radar. And we don’t anticipate this cost to come down significantly anytime soon."

You are spreading misinformation. If using HD maps were worse or provided no benefit, no one would still be using them.
First, stop creating strawmen of things I didn't say to attack me with. I have said that additional map data can be useful.

Second, the old "other people use it, so..." defense doesn't hold water. Different companies are using different technologies and techniques. The usefulness of maps will vary based on the software employed, and there are issues with adding data from maps into any system, such as resolving differences between what the maps describe and what the car is actually seeing. I keep trying to start a discussion around neural net confidence scores, but no one has taken that up. Either folks don't understand it or it doesn't support their arguments.
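(Since no one has bitten, here's the gist: a perception network emits per-class scores, a softmax turns them into pseudo-probabilities, and a planner can use those confidences to arbitrate map-vs-perception disagreements. Toy numbers and thresholds, invented purely for illustration.)

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical class scores for one detection: [car, truck, shadow]
conf = softmax(np.array([4.1, 1.2, 0.3]))
print(conf.round(2))  # roughly [0.93, 0.05, 0.02]

# One way to arbitrate: only let live perception override the stored map
# when the network is confident enough (threshold invented).
decision = "trust_perception" if conf.max() >= 0.8 else "trust_map"
```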

There are companies today, using it and providing results with driverless L4 fleets (Waymo, Cruise, AutoX) while the one company that don't doesn't even have anything remotely close.
I can't believe that in 2022 we still have to point out that solving the problem for small-area/expensive vehicle taxi fleets is quite different than solving the problem for almost all roads in countries with much more affordable consumer-owned vehicles. Some estimates are that Waymo adds $100k to the price to be able to add a vehicle to its fleet. Tesla is trying to do more with software (a scalable solution) than with hardware.

This proves the point about 99% of Tesla fans. It's not HD maps, Lidar, or radar that you are against. You are simply against anything that isn't Tesla.
Stop with the false strawmen already. Part of what I actually said was:

Note that Tesla already makes use of some additional map data beyond what's needed for navigation...I also don't rule out that Tesla might add more data to their existing maps as their development continues.
 
Mobileye has more than 100x more data than Tesla. 16 million 1 minute clips vs 1 million 10 second clips. 200PB vs 1.5 PB.
They don't need more data.
Mobileye retains old data longer than Tesla. Some of that data is 25 years old, according to Mobileye.

Tesla doesn't need to retain it all. Tesla simply tells the vehicles what it wants when it wants it. Mobileye doesn't have that opportunity, since almost no OEMs let ME push OTA updates to the OEM's vehicles on demand like Tesla does to its vehicles. Which means it's likely that ME has to retain everything it previously got, especially since CEO Shashua recently admitted they're not getting images or video anymore.
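(A sketch of what "tells the vehicles what it wants" could look like as an on-car filter. Entirely hypothetical - Tesla's actual trigger/campaign system is not public, so every name here is invented.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Campaign:
    """Hypothetical fleet data-collection request, pushed OTA."""
    name: str
    predicate: Callable[[dict], bool]  # evaluated on-car against each frame

def matching_campaigns(frame, campaigns):
    # Only clips matching an active campaign leave the car, so the fleet
    # owner never has to retain everything - it re-asks as needs change.
    return [c.name for c in campaigns if c.predicate(frame)]

stop_sign_misses = Campaign(
    name="stop_sign_low_confidence",
    predicate=lambda f: f.get("stop_sign_score", 1.0) < 0.5,
)
print(matching_campaigns({"stop_sign_score": 0.3}, [stop_sign_misses]))
```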

Note also that ME doesn't actually claim "16 million 1 minute clips." They actually say: "200 petabytes of driving footage, equivalent to 16 million 1-minute driving clips" (emphasis added). And they also say that data includes both "real-world and simulated" data.

Matter of fact, everything I've read on ME has them saying they don't collect images or video anymore. Such as: Five Things You Don’t Know About Mobileye Data Services (But Probably Should): "Although our system is camera-based, videos are not uploaded to the cloud."
 
How is the intensity of the reflected light not a grayscale image? I guess technically they're 3D grayscale images...

We've lost the context of the original statement:
Waymo for example has a LIDAR grayscale image (others use the 3D data, perhaps Waymo does too now) of what the road looked like when mapped. In fact it uses that image to figure out where it is on the map.

Which, by mentioning that others use 3D data, is saying that the "LIDAR grayscale image" is 2D. The word "image" itself is inconsistent with the "point cloud" phrase typically used to describe what lidar produces, which is 3D. And note that what is returned is grayscale only in that it contains no color information - there is not a reliable association of what human eyes see to the 8-bit intensity values returned by lidar systems, as such intensities are not solely based on the visible-spectrum reflectivity of the thing reflecting the signal back.

To further help with context, it's worth noting that @bradtem is a "Strategic Advisor" for DeepMap, which was bought by Nvidia. Nvidia says: "NVIDIA Map is the only HD solution that supports both a camera layer for localization, planning, and control, as well as a redundant radar localization layer..." That "camera layer" is indeed a collection of 2D images against which the vehicle compares what its cameras (not lidar) are seeing to help place it (localization) in the environment, as described here.

My point was mostly that such a solution isn't scalable beyond the limited-area robotaxi use case, as it requires significant on-board vehicle storage to hold images for every road segment in every area in which the vehicle might be driven. Whatever use Waymo previously had for lidar image/sequence storage for localization, they appear to have abandoned that approach years ago, as the links I provided show.
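(For the curious, image-layer localization at its crudest is just template matching: slide the live view over the stored layer and take the best-scoring offset. A brute-force sketch with invented parameters; production systems use far faster search and feature descriptors.)

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size grayscale patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def localize(live_patch, map_image, stride=4):
    """Best-matching offset of the live view within the stored map layer."""
    ph, pw = live_patch.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(0, map_image.shape[0] - ph, stride):
        for x in range(0, map_image.shape[1] - pw, stride):
            s = ncc(live_patch, map_image[y:y + ph, x:x + pw])
            if s > best:
                best, best_xy = s, (x, y)
    # A low best score is the "world changed since mapping" signal --
    # the cue to fall back to map-free driving on that stretch.
    return best_xy, best
```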
 

Nice. I am very familiar with these road conditions and driving style. I've always maintained that the Whole Mars Catalog "no disengagement" drives are way too easy. Come to the northeast. If FSD passes here, we're ready for mass deployment.

This is South Boston btw. At least the roads are somewhat grid-like here. Chinatown/Financial District, Government Center: forget about it. Fenway, Jamaica Plain, Cambridge, Somerville, and pretty much any town inside I-95 is a big fail for FSD. I don't even see how people who live here could qualify via Safety Score. Maybe go out at 3am, get on I-93 or Rt 2, and log some AP miles.
 
They don't return 2D grayscale images, as was claimed. Storing such images on vehicles to aid localization is not a scalable solution, as I described.
They return something better than 2D grayscale images: they have these images in 3D, giving them distance.
You don't want 2D images, you want 3D images. That's the entire point of VIDAR, which is trying to get to what Lidar is offering.
I was reporting on how Nvidia itself says their maps are generated, which includes a variety of methods. The better maps they want for L4 are not generated automatically from solely crowdsourced information.
Mobileye's maps are automatically generated.
Cruise/Waymo maps are semi-automatic.
Why can't you understand that ML tools have reduced the time to create maps, even when using human validators, from 26 weeks to 1-2 weeks?

I have said little to nothing about the suitability of AI for map generation. Of course that's possible. Mobileye, however, is limited by the relatively low compute and storage OEMs let them install on vehicles and by the severe bandwidth limitations OEMs place on Mobileye's bandwidth. Should Tesla choose to generate HD maps, they already have tons of images and video in their cloud from which to process. Mobileye's images and videos are lost forever, as they're not sent to the cloud and not retained on board.
Collecting images/videos is a bad way to build and maintain real-time maps; you need the actual NN output correlated with GPS data.
Mobileye is already using AI for map generation and validation. Mobileye's REM Map is fully automated. They have mapped all of Japan and the EU, and most of the US and China.

Why do you keep spreading misinformation about known facts?
They use an offline neural network to generate the map using the data sent from cars. They first align the data using a proprietary algorithm, then feed it into a deep neural network which outputs an accurate map model.

Lastly, Mobileye has enough image/video data coming from their Zeekr fleet.

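(A rough sketch of the kind of aggregation being described: cars upload compact, GPS-tagged NN detections rather than video, and an offline process groups and averages them into map landmarks. All structures here are invented - REM's actual format is proprietary.)

```python
import numpy as np
from collections import defaultdict

def aggregate_landmarks(reports, cell=1e-4):
    """Fuse GPS-tagged landmark detections from many drives into map points.

    reports: iterable of (lat, lon, landmark_type) tuples sent up by cars.
    cell: rough grid size in degrees used to group nearby reports.
    """
    buckets = defaultdict(list)
    for lat, lon, kind in reports:
        buckets[(round(lat / cell), round(lon / cell), kind)].append((lat, lon))
    # Averaging many noisy consumer-GPS fixes yields a far more precise
    # position than any single drive -- the core trick of crowdsourced maps.
    return {key: tuple(np.mean(fixes, axis=0)) for key, fixes in buckets.items()}

drives = [(37.79001, -122.40002, "stop_sign"),
          (37.79003, -122.39998, "stop_sign")]
print(aggregate_landmarks(drives))
```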

I was simply repeating what the CEO of Mobileye said to Brad Templeton in a video he posted. When Templeton challenged him on the $1k price being too high, CEO Shashua said the lower cost lidar units weren't good enough in quality, although he did later qualify that ME is planning on not needing multiple lidar units - only 1 forward facing lidar would suffice. I guess that means vision only for things like changing lanes on a highway.
He didn't say lower-cost lidar units weren't good enough, but that they can build a lower-spec lidar for less money. Heck, today they are using a lower-spec lidar (Luminar) compared to what they want to sell in 2025, and they are still able to achieve L4 autonomy with it. This is also the same/similar lidar others are using for L4. The point is that as time goes on, lidar gets better in quality and cheaper. The main reason Intel's lidar will cost more is that it's a 4D FMCW lidar, not a 3D lidar.

Velodyne HDL 64E - $75k

Key Features:

  • 64 lines
  • 50m (10% reflectivity), 120m (80% reflectivity) range
  • 360° Horizontal FOV
  • 26.9° Vertical FOV
  • 0.08° angular resolution (azimuth)
  • <2cm accuracy
  • ~0.4° Vertical Resolution
https://hypertech.co.il/wp-content/uploads/2015/12/HDL-64E-Data-Sheet.pdf

Luminar Iris - $500

Key Features:

  • 640 lines
  • 500m max range
  • 250m at <10% reflectivity
  • 120° Horizontal FOV
  • 30° Vertical FOV
  • 0.07° horizontal resolution
  • 1cm accuracy
  • 0.03° Vertical Resolution
  • Dust & Water Ingress, Vibration & Shock certified
Note that Mobileye's lidar is aiming for 1,000 lines.
Lastly, the cars will also have 6x 360° high-resolution 4D imaging radars, not just vision, for changing lanes and other tasks.



Second, the old "other people use it, so..." defense doesn't hold water. Different companies are using different technologies and techniques. The usefulness of maps will vary based on the software employed, and there are issues with adding data from maps into any system, such as resolving differences between what the maps describe and what the car is actually seeing. I keep trying to start a discussion around neural net confidence scores, but no one has taken that up. Either folks don't understand it or it doesn't support their arguments.
It's not just that others use it. EVERYONE uses it other than the one company, whom you swear by.
The same people who spent an entire day (Autonomy Day 2019) making fun of simulation are now, two years later, dependent on and heavily reliant on it, going as far as to say "we couldn't do it without simulation".

There are no issues other than the ones you are inventing. They were wrong about simulation; they will be wrong about this as well.
It's simply a matter of time before they acknowledge it. Then you will suddenly come out in favor of HD maps.
Same with geofencing.
I can't believe that in 2022 we still have to point out that solving the problem for small-area/expensive vehicle taxi fleets is quite different than solving the problem for almost all roads in countries with much more affordable consumer-owned vehicles. Some estimates are that Waymo adds $100k to the price to be able to add a vehicle to its fleet. Tesla is trying to do more with software (a scalable solution) than with hardware.
Because you are not just solving for a small area.
You are solving for the entire driving task in that area - the same driving task that exists in other areas as well, to which you can apply and scale your solution.
It is you people who somehow believe a system that can drive 100k miles between safety disengagements in SF will, when it goes to San Jose, California, all of a sudden start running over pedestrians and hitting things left and right, with reliability dropping to 1 mile between safety disengagements.

This is simply not true, as has been proven; Tesla fails in the simplest driving situations. You act like Phoenix is some cordoned-off box in the middle of nowhere, that it is not the "real world" or the "wild". Yet this is the same location that gets 19 million tourist visitors every year. So clearly the system is robust and general, or else those people/cars would be in danger. Waymo would be a menace to society, with all these millions of people coming to Phoenix from around the world that Waymo has to interact with. Waymo cars encounter people and vehicles they haven't seen before on a daily basis. Phoenix and California get 19 and 42 million visitors, respectively, from all over the country and the world each year. If Waymo's perception system were brittle, each of those people would be in danger of being killed/totaled.

Look at the GIF below (there are thousands of other simple situations like this that FSD Beta fails at, and it occurs multiple times in a SINGLE drive). If what you said were true, and I took this vehicle that FSD Beta was about to ram into (because it didn't detect it) to Phoenix and parked it, then a driverless Waymo should ram into it too. We know that humans don't shape-shift when they drive to other cities. This is actually very important. Your cars don't transform like Autobots. Again, very important. This allows Waymo's NN to generalize.

[animated GIF: FSD Beta failing to detect a parked vehicle]


Do you transform into an alien when you go from city to city? Does your car transform into a UFO and levitate? Do you walk backwards like in Tenet? You should go to Phoenix if your logic is correct - Waymo's perception and prediction systems will fail and should run you over and rear-end you. If your statement were true, then all the millions of tourists who fly/drive into Phoenix would be in danger of being run over or rear-ended, as Waymo's perception and prediction would be brittle, not general, and would instantly fail.

The same is the case in SF with Cruise and Waymo. Go to SF and stand in front of their AVs. They should run you over, as they don't have you in their dataset.
 
I'm a little baffled why anybody would insist you can't get a 2D grayscale image out of a LIDAR. Of course you can. We did it at Waymo back in 2009 and were not the first. LIDAR returns a 3-D point cloud including grayscale (infrared of course, not gray) intensity return for each point. It is very simple to take those points which are at ground level and project them down into the plane, which is what Waymo did, both in mapping, and in localizing (the car makes its own 2D projection of the road surface and surroundings and all you need to do is find the section of the map which looks like that which can be done quite quickly.) You also immediately learn if you have found a match, but the lanes have been redrawn, which is just the sort of thing you want to be aware of quickly and reliably. You can even do this with camera images though your intensity values are based on reflections of ambient light during the day, while with LIDAR they are uniform regardless of illumination.

Note that while the LIDAR resolution may be low that doesn't matter as you sweep over the road with all your beams as you drive, and get multiple returns from different scans to get a nice dense image. You will see this in any of Waymo's map projections, and those of other teams that use similar techniques.
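(A minimal sketch of that projection, assuming vehicle-centered points with per-return intensity; the array layout, cell size, and ground band are invented. Localization would then slide this image against the stored map, e.g. with the cross-correlation trick sketched earlier in the thread.)

```python
import numpy as np

def ground_intensity_image(points, cell=0.1, band=(-0.3, 0.3)):
    """Rasterize near-ground lidar returns into a 2D intensity image.

    points: (N, 4) array of [x, y, z, intensity], z = 0 at the road surface.
    Multiple sweeps accumulate per cell, densifying the image as you drive.
    """
    g = points[(points[:, 2] > band[0]) & (points[:, 2] < band[1])]
    ix = np.floor(g[:, 0] / cell).astype(int)
    iy = np.floor(g[:, 1] / cell).astype(int)
    ix -= ix.min()
    iy -= iy.min()
    img = np.zeros((ix.max() + 1, iy.max() + 1))
    cnt = np.zeros_like(img)
    np.add.at(img, (ix, iy), g[:, 3])  # sum intensities per cell
    np.add.at(cnt, (ix, iy), 1)        # count returns per cell
    return np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
```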
 
That right stalk isn’t going to last long before needing repair/replacement 😂
That is definitely not a "typical" drive on FSD. That guy is a liar with an agenda.

I've driven for 3 months on FSD and never had that kind of experience.

Whether FSD Beta is "useful" depends on a lot of things - and maybe it is not useful to that guy. That doesn't mean his experience is typical.

I've used AP on city streets since I bought EAP/FSD in 2018 (after a few months of mustering needed courage) and FSD Beta is definitely better than EAP on city streets. Much more useful and I've listed these in my earlier posts.
 
I have had FSD Beta since late December. I simply cannot trust it, even on basic roads. The jittery steering is horrible, it hesitates when it shouldn't, and it sometimes just misses objects in the middle of the road. I think the parts shortage accelerated Tesla's "Vision" plan, and it's not working. This is nowhere near a human driving experience. Even basic Autopilot using Vision has been bad on highways - sudden braking out of nowhere. All I really want is a calm experience on the highway, and Tesla has gotten even worse at this.
 
I have had FSD Beta since late December. I simply cannot trust it, even on basic roads. The jittery steering is horrible, it hesitates when it shouldn't, and it sometimes just misses objects in the middle of the road. I think the parts shortage accelerated Tesla's "Vision" plan, and it's not working. This is nowhere near a human driving experience. Even basic Autopilot using Vision has been bad on highways - sudden braking out of nowhere. All I really want is a calm experience on the highway, and Tesla has gotten even worse at this.
I don’t see how radar helps with “objects in the middle of the road” - it’s for tracking moving objects, which vision seems to have no problem tracking.

Of course it's not near the human driving experience - but that is just stating the obvious. It's an early-access beta; it needs 1,000x improvement for human level.