They don't return 2D grayscale images, as was claimed. Storing such images on vehicles to help identify where the vehicle is is not a scalable solution, as I described.
They return something better than 2D grayscale images: they have these images in 3D, giving them distance.
You don't want 2D images, you want 3D images. That's the entire point of VIDAR, which is trying to get to what lidar offers.
I was reporting on how Nvidia itself says their maps are generated, which include a variety of methods. The better maps they want for L4 are not automatically-generated from solely crowdsourced information.
Mobileye's maps are automatically generated.
Cruise/Waymo maps are semi-automatic.
Why can't you understand that ML tools have cut the time to create maps, even with human validators, from 26 weeks to 1-2 weeks?
I have said little to nothing about the suitability of AI for map generation. Of course that's possible. Mobileye, however, is limited by the relatively low compute and storage OEMs let them install on vehicles, and by the severe bandwidth limits OEMs impose on them. Should Tesla choose to generate HD maps, they already have tons of images and video in their cloud to process. Mobileye's images and videos are lost forever, as they're not sent to the cloud and not retained on board.
Collecting images/videos is a bad way to build and maintain real-time maps; you need the actual NN output correlated with GPS data.
Mobileye is already using AI for map generation and validation. Mobileye's REM map is fully automated. They have mapped all of Japan and the EU, and most of the US and China.
Why do you keep spreading misinformation about known facts?
They use an offline neural network to generate the map using the data sent from cars. They first align the data using a proprietary algorithm, then feed it into a deep neural network, which outputs an accurate map model.
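As a rough illustration of the aggregation step: many noisy, GPS-tagged detections go in, one clean map landmark comes out. The real alignment algorithm and deep network are proprietary, so this toy stand-in just snaps detections to a grid and averages each cluster:

```python
from collections import defaultdict
from statistics import mean

def aggregate_landmarks(observations, cell=0.0001):
    """Toy stand-in for the offline mapping model: snap noisy GPS-tagged
    detections to a coarse grid, then average each cluster into a single
    map landmark. (Illustrative only; the real pipeline aligns traces
    with a proprietary algorithm and fits them with a deep NN.)"""
    buckets = defaultdict(list)
    for lat, lon in observations:
        key = (round(lat / cell), round(lon / cell))
        buckets[key].append((lat, lon))
    return [(mean(p[0] for p in pts), mean(p[1] for p in pts))
            for pts in buckets.values()]

obs = [(37.790012, -122.400031), (37.790008, -122.400029),  # same sign, 3 cars
       (37.789991, -122.400042),
       (37.791503, -122.398717)]                            # a different sign
print(len(aggregate_landmarks(obs)))  # 2 landmarks survive
```

Three slightly different reports of the same sign collapse into one landmark, which is how repeated passes from a fleet beat the GPS noise of any single car.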
Lastly, Mobileye has plenty of image/video data coming from its Zeekr fleet.
I was simply repeating what the CEO of Mobileye said to Brad Templeton in a video he posted. When Templeton challenged him on the $1k price being too high, CEO Shashua said the lower-cost lidar units weren't good enough in quality, although he did later qualify that ME plans not to need multiple lidar units: only one forward-facing lidar would suffice. I guess that means vision only for things like changing lanes on a highway.
He didn't say lower-cost lidar units weren't good enough, but that they can build a lower-spec lidar for less money. Heck, today they are using a lower-spec lidar (Luminar) than the one they want to sell in 2025, and they are still able to achieve L4 autonomy with it. This is also the same or similar lidar others are using for L4. The point is that as time goes on, lidar gets better in quality and cheaper. The main reason Intel's lidar will cost more is that it's a 4D FMCW lidar, not a 3D lidar.
Velodyne HDL 64E - $75k
Key Features:
- 64 lines
- 50m (10% reflectivity), 120m (80% reflectivity) range
- 360° Horizontal FOV
- 26.9° Vertical FOV
- 0.08° angular resolution (azimuth)
- <2cm accuracy
- ~0.4° Vertical Resolution
https://hypertech.co.il/wp-content/uploads/2015/12/HDL-64E-Data-Sheet.pdf
Luminar Iris - $500
Key Features:
- 640 lines
- 500m max range
- 250m at <10% reflectivity
- 120° Horizontal FOV
- 30° Vertical FOV
- 0.07° horizontal resolution
- 1cm accuracy
- 0.03° Vertical Resolution
- Dust & water ingress, vibration & shock certified
Note that Mobileye's lidar is aiming for 1,000 lines.
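To put the angular-resolution numbers in perspective: the linear spacing between adjacent returns at range r is s = 2·r·tan(θ/2). A quick check at 200 m using the datasheet figures listed above:

```python
import math

def point_spacing_m(angular_res_deg: float, range_m: float) -> float:
    """Linear distance between adjacent lidar returns at a given range,
    derived from angular resolution: s = 2 * r * tan(theta / 2)."""
    return 2 * range_m * math.tan(math.radians(angular_res_deg) / 2)

# Vertical spacing at 200 m, using the vertical resolutions quoted above:
hdl64 = point_spacing_m(0.4, 200)    # Velodyne HDL-64E, ~0.4 deg vertical
iris = point_spacing_m(0.03, 200)    # Luminar Iris, 0.03 deg vertical
print(f"HDL-64E: {hdl64:.2f} m, Iris: {iris:.2f} m")
```

At 200 m the old $75k HDL-64E puts vertical scan lines roughly 1.4 m apart, enough to miss a pedestrian entirely between lines, while the Iris spaces them about 0.10 m apart. That gap is the quality improvement per dollar being argued here.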
Lastly, the cars will also have 6x radar (360° high-resolution 4D imaging radar), not just vision, for changing lanes and other tasks.
Second, the old "other people use it, so..." defense doesn't hold water. Different companies are using different technologies and techniques. The usefulness of maps will vary based on the software employed, and there are issues with adding map data into any system, such as resolving differences between what the maps describe and what the car is actually seeing. I keep trying to start a discussion around neural net confidence scores, but no one has taken that up. Either folks don't understand it or it doesn't support their arguments.
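For what it's worth, here is one toy way a map prior and a live detection confidence could be reconciled. This is purely illustrative, not any company's actual fusion logic, and the weights and threshold are made up:

```python
def fuse(map_says_present: bool, detector_conf: float,
         map_prior: float = 0.9, threshold: float = 0.5) -> bool:
    """Toy conflict resolution between an HD-map prior and live perception.
    Treat the map as a prior belief and the detector's confidence score as
    evidence, then combine them with a simple weighted vote. (Illustrative
    only; real systems would use proper probabilistic filtering over time.)"""
    prior = map_prior if map_says_present else 1 - map_prior
    belief = 0.5 * prior + 0.5 * detector_conf
    return belief >= threshold

# Map says a stop sign is here, detector barely sees it: keep it.
print(fuse(True, 0.3))   # True
# Map says nothing is there, but the detector is very sure: trust perception.
print(fuse(False, 0.95)) # True
```

The sketch shows the actual design question being dodged in this thread: when map and perception disagree, some explicit rule over confidence scores has to break the tie, and that rule is where map-vs-vision arguments should be happening.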
It's not "others use it." EVERYONE uses it other than the one person you swear by.
The same people who spent an entire day (Autonomy Day 2019) making fun of simulation are now, two years later, dependent on and heavily reliant on it, going as far as to say "we couldn't do it without simulation."
There are no issues other than the ones you are inventing. They were wrong about simulation; they will be wrong about this as well.
It's simply a matter of time before they acknowledge it, and then all of a sudden you'll come out for HD maps.
Same with geofencing.
I can't believe that in 2022 we still have to point out that solving the problem for small-area/expensive vehicle taxi fleets is quite different than solving the problem for almost all roads in countries with much more affordable consumer-owned vehicles. Some estimates are that Waymo adds $100k to the price to be able to add a vehicle to its fleet. Tesla is trying to do more with software (a scalable solution) than with hardware.
Because you are not just solving for a small area.
You are solving for the entire driving task in that area, the same driving task that exists in other areas as well, which you can apply and scale your solution to.
It is you people who somehow believe that a system that can drive 100k miles between safety disengagements in SF will, when it goes to San Jose, California, all of a sudden start running over pedestrians and hitting things left and right, with reliability dropping to 1 mile between safety disengagements.
This is simply not true, as has been proven; Tesla fails in the simplest driving situations. You act like Phoenix is some cordoned-off box in the middle of nowhere, that it is not the "real world" or the "wild." Yet this is the same location that gets 19 million tourist visitors every year, so clearly the system is robust and general, or else all those people and cars would be in danger. Waymo would be a menace to society, with all these millions of people coming to Phoenix from around the world that Waymo has to interact with. Waymo cars encounter people and vehicles they haven't seen before on a daily basis. Phoenix and California get 19 and 42 million visitors, respectively, from all over the country and the world each year. If Waymo's perception system were brittle, each of those people would be in danger of being killed or having their car totaled.
Look at the GIF below (there are thousands of other simple situations like it that FSD Beta fails at, and this occurs multiple times in a SINGLE drive). If what you said were true, and I took the vehicle that FSD Beta was about to ram into (because it didn't detect it) to Phoenix and parked it there, a driverless Waymo should ram into it too. We know that humans don't shape-shift when they drive to other cities. This is actually very important. Your cars don't transform like Autobots. Again, very important. This is what allows Waymo's NN to generalize.
Do you transform into an alien when you go from city to city? Does your car transform into a UFO and levitate? Do you walk backwards like in Tenet? If your logic were correct, you should go to Phoenix: Waymo's perception and prediction systems would fail and should run you over or rear-end you. If your statement were true, then all the millions of tourists who fly or drive into Phoenix would be in danger of being run over or rear-ended, because Waymo's perception and prediction would be brittle, not general, and would instantly fail.
The same is the case in SF with Cruise and Waymo. Go to SF and stand in front of one of their AVs; it should run you over, since you aren't in its dataset.