> That statement is still extremely vague and can mean a number of things:
> 1) There is no change in actual plans (Super Cruise remains Mobileye-based and Ultra Cruise uses Qualcomm) and the only change is the marketing name (literally renaming Ultra Cruise to Super Cruise)
> 2) Both Super Cruise and Ultra Cruise will use Mobileye and GM is abandoning the Qualcomm solution
> 3) Super Cruise will abandon Mobileye and eventually switch over to Qualcomm

Super Cruise 2.0 had already moved away from Mobileye the last time I checked.
> Super Cruise 2.0 had already moved away from Mobileye the last time I checked.

Any source for that? I looked briefly, and the only talk I found about moving to Qualcomm for ADAS was the announcement for Ultra Cruise.
> Anyway, you're right that it's extremely vague. I posted elsewhere that I believe it's 100% a PR move.

I agree with the points made; I guess we will see what is really happening behind the scenes from the results. As you say, the statement reads like typical PR smoke and mirrors.
It just doesn't make sense any way you look at it. If you say they did it to merge the brands, then why do almost all product portfolios, even in different consumer sectors, have a base and an advanced (or Pro) version of the same product?
Examples are iPhone/iPhone Pro, Autopilot/FSD, SuperVision Lite/SuperVision, and Meta Quest/Meta Quest Pro.
If it's branding, then it makes absolutely no sense, because you want a distinction in capability/price, unless GM wants to offer only one system that would be equivalent to FSD/SuperVision. In that case, this "NEW" Super Cruise should be announced and RELEASED this year.
But I just don't buy it. We know there are two teams as confirmed by the quotes.
The first team built the original Super Cruise 1.0, based on the Mobileye EyeQ3 (0.25 TOPS) with a single forward-facing camera and a single forward-facing radar. The same team then built Super Cruise 2.0, based on a Qualcomm chip (not sure about the TOPS; maybe ~2-10 TOPS, give or take) with a single forward-facing camera and surround radar.
Then the new, second team built Ultra Cruise, based on 300 TOPS of compute, 360° coverage with 4K cameras, surround 4D radars, forward and rear radar, and higher-precision GPS.
There is such a gap between the two systems that you can't merge them. It's impossible; the second system is built on tech way beyond the first, and the first system is inferior in every way possible.
It's like Tesla trying to merge AP code with FSD code. What would happen instead is that you cancel the first system completely, replace it with the second system, ditch the lidar, and have the team from the first system join the other team.
But that doesn't seem to be what they are saying in the article, or at least they are butchering the communication of it. Unless this NEW Super Cruise gets announced at their investor conference and released this year with all the pre-announced Ultra Cruise features, it's just PR and Ultra Cruise has been scrapped.
> Dolgov confirmed that the Waymo did understand and follow the hand gestures autonomously.

That video doesn't show any understanding of hand signals. It's coincidental, just like various Tesla videos on the same topic.

Edit: For clarity, my issue is with the word "proof," which has a very specific meaning. Saying "this video demonstrates Waymo responding to hand signals" is perfectly acceptable. Since the video data can be interpreted in other ways, it is not proof.
> That video doesn't show any understanding of hand signals. It's coincidental, just like various Tesla videos on the same topic.
>
> Edit: For clarity, my issue is with the word "proof," which has a very specific meaning. Saying "this video demonstrates Waymo responding to hand signals" is perfectly acceptable. Since the video data can be interpreted in other ways, it is not proof.

The post you responded to does not say it is proof; it simply states that Waymo obeys hand signals from police. It does not state it as absolute fact, but based on prior knowledge of how the system is designed to operate, you can conclude that Waymo is responding to the traffic-control personnel. They do the same for temporary handheld stop signs, cyclist hand signals, etc.
This improved video also clearly shows the Waymo vehicle beginning to creep forward while the officer in the wire-frame version has their back to the Waymo's lane.
Gestures
While the Waymo Driver can detect various gestures from raw camera data or lidar point clouds, like a cyclist or traffic controller’s hand signals, it is advantageous for the Waymo Driver to use key points to determine a person's orientation, gesture, and hand signals. Earlier and more accurate detection allows the Waymo Driver to better plan its move, creating a more natural driving experience.
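The keypoint idea in that excerpt can be illustrated with a toy sketch. This is my own illustration, not Waymo's code: assume a pose estimator has already produced named 2D keypoints in image coordinates (y increasing downward), and classify a raised-arm "stop" gesture by comparing wrist and shoulder heights.

```python
# Toy sketch of keypoint-based gesture classification (illustrative only;
# not Waymo's implementation). Keypoints are (x, y) image coordinates with
# y increasing downward, as produced by a typical pose estimator.

def is_stop_gesture(keypoints):
    """Return True if either wrist is raised above its shoulder."""
    for side in ("left", "right"):
        wrist = keypoints.get(f"{side}_wrist")
        shoulder = keypoints.get(f"{side}_shoulder")
        if wrist and shoulder and wrist[1] < shoulder[1]:  # smaller y = higher up
            return True
    return False

pose = {
    "left_shoulder": (100, 200), "left_wrist": (90, 140),    # arm raised
    "right_shoulder": (140, 200), "right_wrist": (150, 260), # arm lowered
}
print(is_stop_gesture(pose))  # True: the left wrist is above the shoulder
```

A real system would of course use many keypoints over time and a learned classifier; the point is just that key points reduce raw pixels or lidar returns to a compact pose that gestures can be read from.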
> The post you responded to does not say it is proof, it simply states Waymo obeys hand signals from police.
How to Guarantee the Safety of Autonomous Vehicles | Quanta Magazine
As computer-driven cars and planes become more common, the key to preventing accidents, researchers show, is to know what you don't know. (www.quantamagazine.org)
To provide a safety guarantee, Mitra’s team worked on ensuring the reliability of the vehicle’s perception system. They first assumed that it’s possible to guarantee safety when a perfect rendering of the outside world is available. They then determined how much error the perception system introduces into its re-creation of the vehicle’s surroundings.
The key to this strategy is to quantify the uncertainties involved, known as the error band — or the “known unknowns,” as Mitra put it. That calculation comes from what he and his team call a perception contract. In software engineering, a contract is a commitment that, for a given input to a computer program, the output will fall within a specified range. Figuring out this range isn’t easy. How accurate are the car’s sensors? How much fog, rain or solar glare can a drone tolerate? But if you can keep the vehicle within a specified range of uncertainty, and if the determination of that range is sufficiently accurate, Mitra’s team proved that you can ensure its safety.
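The "perception contract" idea can be made concrete with a toy example (my own sketch, not the paper's code): if perception reports an obstacle distance with a contracted error bound ε, the planner treats the obstacle as ε closer than reported, so a safety argument that holds for the worst case also covers the true state.

```python
# Toy illustration of a perception contract (my sketch, not the paper's code).
# The contract promises: |reported_distance - true_distance| <= eps.
# A planner that acts on the conservative (worst-case) distance stays safe
# whenever the contract holds.

def worst_case_distance(reported_distance, eps):
    """Closest the obstacle could truly be, given the contract bound eps."""
    return reported_distance - eps

def is_safe_to_proceed(reported_distance, eps, required_gap):
    return worst_case_distance(reported_distance, eps) >= required_gap

# Perception says 30 m, the contracted error band is +/-2 m,
# and the planner needs a 25 m gap.
print(is_safe_to_proceed(30.0, 2.0, 25.0))  # True  (28 m >= 25 m)
print(is_safe_to_proceed(26.0, 2.0, 25.0))  # False (24 m < 25 m)
```

The hard part, as the article notes, is establishing ε in the first place for real sensors in fog, rain, or glare.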
> Thanks. I saw that article. It is interesting. My question, though, is that it only seems to focus on perception, not planning. What if the perception is accurate but the AV's planner makes a mistake? It seems to me that your perception could be accurate and the AV could still be unsafe due to planner errors. I guess they are arguing that the planner can take into account the error bars of the perception system and maintain a vehicle path that is always safe. That seems to be the Mobileye view as well. They also focus on reducing perception errors and use RSS to ensure the vehicle's path is always safe, because it always maintains an adequate braking distance from other objects to avoid an at-fault collision.

I guess it assumes that the "controller" acts on the reported information without error?
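The RSS "adequate braking distance" mentioned above has a published closed-form version: the minimum longitudinal gap assumes the rear car accelerates for its response time ρ, then brakes at its minimum rate, while the front car brakes as hard as possible. A sketch, with parameter values that are my own illustrative assumptions rather than Mobileye's defaults:

```python
# Sketch of the RSS (Responsibility-Sensitive Safety) minimum safe
# longitudinal following distance. Parameter values are illustrative
# assumptions, not Mobileye's calibrated numbers.

def rss_min_gap(v_rear, v_front, rho=1.0, a_accel=3.0,
                a_brake_min=4.0, a_brake_max=8.0):
    """Minimum gap (m) so the rear car can always stop in time, even if
    the front car brakes at a_brake_max while the rear car accelerates
    at a_accel during its response time rho, then brakes at a_brake_min."""
    v_after_rho = v_rear + rho * a_accel
    d = (v_rear * rho
         + 0.5 * a_accel * rho**2
         + v_after_rho**2 / (2 * a_brake_min)
         - v_front**2 / (2 * a_brake_max))
    return max(d, 0.0)

# Both cars at 20 m/s (~72 km/h):
print(rss_min_gap(20.0, 20.0))  # 62.625 m with these assumed parameters
```

If the planner never lets the gap fall below this value, the ego vehicle cannot cause a rear-end collision under the stated worst-case assumptions.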
> Zoox, for example, can predict pedestrian movement, determine whether a pedestrian is distracted and looking at their phone, differentiate between a regular pedestrian and a construction worker, and track gestures. These are all things an autonomous vehicle needs to be able to do in order to ensure it understands the world around it and can navigate safely.

They also detected someone paying attention to their phone ... vs their surroundings! Well done.
> They also detected someone paying attention to their phone ... vs their surroundings! Well done.

Yup, it's pretty cool all the things these systems can detect and classify. And that's not really the hard part, relatively speaking; the hard part is using all that information to drive safely. It's too bad all this information is hidden behind a simple UI, because the passenger really doesn't need to know it, but it would be cool to see.
> Yup, it's pretty cool all the things these systems can detect and classify. And that's not really the hard part, relatively speaking; the hard part is using all that information to drive safely. It's too bad all this information is hidden behind a simple UI, because the passenger really doesn't need to know it, but it would be cool to see.

Interesting progress.
Pedestrian giving the stop gesture because they want to cross the road.
Two police officers giving the go gesture. Just like the Waymo video shared.
> Yup, it's pretty cool all of the things these systems can detect and classify.

Do they give the resolution of the original video? Is this something that could be done with HW3 or HW4 video?
> Do they give the resolution of the original video? Is this something that could be done with HW3 or HW4 video?

They don't share that information, but it looks to me like ~Full HD, i.e. ~2.1 MP cameras, which is ~2x the resolution of the HW3/4 cameras.
> They don't share that information but looks to me like ~Full HD or ~2.1 MP cameras which is ~2x the resolution of HW3/4 cameras.

FYI, HW4 sensors are 2896 × 1876 (5.4 MP) and are speculated to be the Sony IMX490.
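For reference, the megapixel figures being compared here are just width × height:

```python
# Quick arithmetic check of the megapixel figures in this exchange.
def megapixels(width, height):
    return width * height / 1e6

print(round(megapixels(2896, 1876), 1))  # 5.4 -- stated HW4 resolution
print(round(megapixels(1920, 1080), 1))  # 2.1 -- "Full HD"
```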
That said, it's not about the resolution of the cameras (though that helps in resolving details in the distance); it's about the capabilities built into the perception stack. Tesla FSD Beta didn't have those capabilities as of April 2023, and reading the release notes, there's been no evidence that the updates since then have added them. Can it be done? I think so.
> Interesting progress.

At the moment I have Arlo cameras and doorbells. They also register "Person", "Animal", "Package", "Vehicle", or sometimes just report "Movement". (I haven't seen "Package no longer at front door", which is a clever variant, but these days I get text or email updates associated with most deliveries, which should cover most situations.)
I recently noticed my Google doorbell has started giving me additional notifications of "Person with package at front door" and "Package no longer at front door".