
Autonomous Car Progress

With L3, the manufacturer takes responsibility for any mishaps during operation (these current L3 systems do not operate above 37 mph / 60 km/h on the highway):

Mercedes-Benz EQS Drive Pilot: has been on sale in Germany since this summer.

Honda Sensing Elite Traffic Jam Pilot: Only 100 leases in Japan since March last year.

GM does not take responsibility for accidents with either Super Cruise or Ultra Cruise, because both are classified as L2: the driver is responsible for the drive.
And that is exactly why owners should keep their hands on the wheel and pay attention at all times.

A hands-off Level 2 system is a silly concept.
 
GM cannot just call it L3. L3 implies that the system can do all driving tasks when engaged. Super Cruise cannot do all driving tasks. It just does lane keeping and adaptive cruise control. So it is clearly L2.

Being alert and eyes on the road is more effective than having your hands on the wheel. That's because it is possible to have your hands on the wheel and not be paying attention to the road. And if you are not paying attention, having your hands on the wheel won't matter. But if you are alert and paying attention to the road, it is very easy to take control if needed since your hands will be very close to the steering wheel.

Also, Super Cruise does not allow lane changes on non-divided highways, to prevent accidentally crossing into oncoming traffic. Plus, with HD maps, the car will stay in its lane and won't cross into oncoming traffic. So that is not a risk.
Of course there is a risk of going into oncoming traffic, because all ADAS have hiccups - that is my experience across five brands and different generations. GM's Super Cruise is not available in Europe, so that is one I have not tried.

AP2 and AP3 cars, the iX40 with EyeQ5, and the ID.4 with EyeQ4 will all sometimes drift wide, cut a corner, swerve, etc. Sudden disengagements happen when the system no longer wants to stay active for some reason, along with false-positive alerts and so on.

If GM has solved these issues with a higher-quality system that is hugely more reliable than the best of today, GM Super Cruise would qualify as a Level 3 system, where the driver does not need to take immediate control within certain ODDs. Of course, they may not want to take on that economic risk even if the system is L3 capable, OR they know the system is only L2 capable and will sometimes need immediate driver intervention.

And my key point is that hands in the lap increase the time until the driver can take control. Level 2 requires the instant ability to take control. Ergo Hands off & Level 2 is a silly concept, increasing risk with no real benefits.
 
If GM has solved these issues with a higher-quality system that is hugely more reliable than the best of today, GM Super Cruise would qualify as a Level 3 system, where the driver does not need to take immediate control within certain ODDs. Of course, they may not want to take on that economic risk
I don't think anyone will deploy Level 3 without lidar. Economic risk is a legislative/political issue, though. Companies would gladly take on a $5M-per-death type of risk, IMHO. But look at this $124.5M judgment against Audi:

Jurors found Audi and Volkswagen 55 percent responsible for the injuries after about a month-long trial. Gloria Cordova, who was driving the vehicle that rear-ended the A4, was found 25 percent responsible. Jesse Rivera Sr. was held 20 percent responsible because his son was not in a booster seat. But because Audi and Volkswagen were found more than 50 percent responsible, under Texas law they are liable for all of the actual damages awarded, Dunn said.

The woman who caused the wreck and the father who broke the law by not putting his son in a booster seat had virtually no liability, while the carmaker whose front seat met all federal safety requirements owes the entire $124.5M. And this isn't some freak runaway jury; it's business as usual in Shakedown, USA. (A runaway jury would award billions, like this $7B+ award against Charter cable company.)
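
To make the arithmetic in the quote concrete, here is a minimal sketch of how that more-than-50-percent rule plays out. The percentages and the $124.5M figure come from the quoted article; the rest is my own simplification and obviously not legal advice.

```python
# Simplified illustration of the rule described in the quote; not legal advice.
DAMAGES = 124_500_000  # actual damages awarded, in dollars

fault_shares = {
    "Audi/Volkswagen": 0.55,   # found 55 percent responsible
    "Gloria Cordova": 0.25,    # driver who rear-ended the A4
    "Jesse Rivera Sr.": 0.20,  # son was not in a booster seat
}

for party, share in fault_shares.items():
    # Per the quote: a defendant found more than 50 percent responsible
    # can be held liable for all of the actual damages.
    liable_for_all = share > 0.50
    exposure = DAMAGES if liable_for_all else share * DAMAGES
    note = " (liable for the full award)" if liable_for_all else ""
    print(f"{party}: {share:.0%} at fault -> ${exposure:,.0f}{note}")
```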

China will deploy life-saving autonomous technology as fast as possible while politicians and trial lawyers associations hold us hostage.
 
  • Like
Reactions: daktari
Ergo Hands off & Level 2 is a silly concept, increasing risk with no real benefits.
Given how many people go to the extent of adding things to the steering wheel to defeat the hands-detection system, I would say that is demonstrably false. As pointed out by others, hands on the wheel do not necessarily mean you are paying attention, and the flip side is true too.

If a reliable hands-off system can be made that detects human attention by other means, I think that has a lot of value to a decent number of people (even though you personally do not prefer it, obviously, given you prefer the 5-second nag system, which a lot of people absolutely hate). This is especially true compared to systems that rely on torque sensing, which can easily produce false negatives and nag unnecessarily (capacitive-based systems might be better in this regard).
 
  • Like
Reactions: Doggydogworld
Given how many people go to the extent of adding things to the steering wheel to defeat the hands-detection system, I would say that is demonstrably false. As pointed out by others, hands on the wheel do not necessarily mean you are paying attention, and the flip side is true too.

If a reliable hands-off system can be made that detects human attention by other means, I think that has a lot of value to a decent number of people (even though you personally do not prefer it, obviously, given you prefer the 5-second nag system, which a lot of people absolutely hate). This is especially true compared to systems that rely on torque sensing, which can easily produce false negatives and nag unnecessarily (capacitive-based systems might be better in this regard).
The wheel tug was implemented as a knee-jerk reaction to some widely publicized fatal AP failures where drivers were asleep, or not paying attention. It was a cheap fix that required no hardware mods and is really a poor substitute for head and eye tracking.
 
Actually hands-on L2 is kind of silly. If I have to hold my hands on the wheel I might as well steer. It doesn't require any extra effort.

To me this is entirely dependent on the situation.

With a high-confidence L2 system that has proper driver monitoring, hands-off freeway L2 driving works well. With less confidence or more traffic, requiring hands on isn't bad; it reduces the time it takes to react to something that might happen on the road.

For me the biggest annoyance with Tesla's implementation was the torque-sensor method, which wasn't detecting hands, but torque. We see a huge amount of variability between people/vehicles when it comes to how well the torque sensing works.
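
For readers unfamiliar with why torque sensing produces "hands-off" nags even with hands on the wheel: the sensor only sees steering torque above some threshold, so a light, steady grip looks the same as no hands at all. A toy sketch, with all threshold and timing values invented for illustration:

```python
import random

# Toy model of a torque-based "hands on wheel" check. The threshold and
# timing values are invented for illustration; they are not Tesla's numbers.
TORQUE_THRESHOLD_NM = 0.3   # minimum torque counted as "hands detected"
NAG_AFTER_S = 30            # seconds without detected torque before a nag

def nag_times(torque_samples_nm, dt_s=1.0):
    """Return the times (s) at which a nag would fire for a torque trace."""
    nags, quiet_s = [], 0.0
    for i, torque in enumerate(torque_samples_nm):
        if abs(torque) >= TORQUE_THRESHOLD_NM:
            quiet_s = 0.0              # torque seen: timer resets
        else:
            quiet_s += dt_s
            if quiet_s >= NAG_AFTER_S:
                nags.append(i * dt_s)
                quiet_s = 0.0          # assume the driver tugs the wheel
    return nags

# A light but steady grip that never crosses the threshold looks exactly
# like no hands at all, so the driver gets nagged anyway (false negative).
light_grip = [random.uniform(0.0, 0.2) for _ in range(120)]
print(nag_times(light_grip))
```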

I'm not a fan of L2 on city streets due to the march-of-9s problem. As the 9s pile on, it becomes increasingly difficult for humans to pay attention, regardless of the "hands" question.

It will be interesting to see what GM does with Ultra Cruise: what capabilities it will have, and what the "hands" requirement will look like.
 
Actually hands-on L2 is kind of silly. If I have to hold my hands on the wheel I might as well steer. It doesn't require any extra effort.
I don't agree with this. I feel the Level 2 steering assist is a great support on both highways and narrow roads, even if I need to keep my hands on the wheel - both for extra safety and for compliance.

The comparison of hands-free L2 with Autopilot plus a defeat device is a good one - they are in the same class of unsafe. Of course GM will claim that their surveillance of eyes looking forward is safer than not, and it probably is.

But the eyes watch over the traffic (and the ADAS speed control).
The hands watch over the ADAS steering activity.

Skip hands = less control.

IMO, hands-off needs L3 capabilities.
 
I don't agree with this. I feel the Level 2 steering assist is a great support on both highways and narrow roads, even if I need to keep my hands on the wheel - both for extra safety and for compliance.

The comparison of hands-free L2 with Autopilot plus a defeat device is a good one - they are in the same class of unsafe. Of course GM will claim that their surveillance of eyes looking forward is safer than not, and it probably is.

But the eyes watch over the traffic (and the ADAS speed control).
The hands watch over the ADAS steering activity.

Skip hands = less control.

IMO, hands-off needs L3 capabilities.
Maybe Tesla should require that the accelerator pedal be slightly depressed occasionally as well to ensure that you are ready to instantly respond to phantom braking.

Or, the car could ask random alertness questions like, "What color was the car that just went by?" or "What road did we just pass?" Miss two out of the last three questions and it's off to AP jail!
 
Nice article on Zoox's prediction stack:

From this data-rich image, the ML system produces a probability distribution of potential trajectories for each and every dynamic agent in the scene, from trucks right down to that pet dog milling around near the crosswalk.
These predictions consider not only the current trajectory of each agent, but also include factors such as how cars are expected to behave on given road layouts, what the traffic lights are doing, the workings of crosswalks, and so on.
These predictions are typically up to about 8 seconds into the future, but they are constantly recalculated every tenth of a second as new information is delivered from Perception.
These weighted predictions are delivered to the Planner aspect of the AI stack — the vehicle’s executive decision-maker — which uses those predictions to help it decide how the Zoox vehicle will operate safely.
While perfect prediction is, by its nature, impossible, Wang’s team is currently taking steps on several fronts to raise the vehicle’s prediction capabilities to the next level, firstly by leveraging a graph neural network (GNN) approach.
Work is now underway to integrate Prediction even more deeply with Planner, creating a feedback loop. Instead of simply receiving predictions and making a decision on how to proceed, the Planner can now interact with Prediction along these lines: “If I perform action X, or Y, or Z, how are the agents in my vicinity likely to adjust their own behavior in each case?” In this way, the Zoox robotaxi will become even more naturalistic and adept at negotiations with other vehicles, while also creating a smoother-flowing ride for its customers.
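
The quoted description maps fairly directly onto a "per-agent distribution over trajectories, re-queried by the planner" interface. Below is a highly simplified sketch of that idea; all class and method names are my own invention, not Zoox's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Highly simplified sketch of the Prediction -> Planner feedback loop
# described above. All names are invented; this is not Zoox's code.

Waypoint = Tuple[float, float]      # (x, y) position in metres

@dataclass
class Trajectory:
    waypoints: List[Waypoint]       # sampled roughly 8 s into the future
    probability: float              # weight within the per-agent distribution

class Prediction:
    """Stand-in for the prediction stack: per agent, a weighted set of
    trajectories, recomputed every 0.1 s as Perception delivers new data."""

    def predict_conditioned(self, agent_id: str,
                            ego_plan: List[Waypoint]) -> List[Trajectory]:
        # Feedback loop from the article: "if I perform action X, how are
        # the agents in my vicinity likely to adjust?" Dummy answer here.
        return [Trajectory(waypoints=[(0.0, 0.0)], probability=1.0)]

class Planner:
    def __init__(self, prediction: Prediction):
        self.prediction = prediction

    def choose(self, candidate_plans: List[List[Waypoint]],
               agent_ids: List[str]) -> List[Waypoint]:
        # Score each candidate ego plan against the conditioned predictions
        # of nearby agents and pick the lowest-risk one (risk metric is a stub).
        def risk(plan: List[Waypoint]) -> float:
            return sum(t.probability
                       for agent in agent_ids
                       for t in self.prediction.predict_conditioned(agent, plan))
        return min(candidate_plans, key=risk)

planner = Planner(Prediction())
print(planner.choose([[(0, 0), (0, 10)], [(0, 0), (2, 10)]], ["car_1", "dog_1"]))
```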

 
  • Like
Reactions: Bitdepth
Good read. Argo says their lidar can see up to 400 meters away and scans in 360 degrees multiple times per second. Their AI can also classify objects separately and understands the relationship between objects. For example, it can classify a bicycle and a rider separately. This is key because sometimes a rider is on their bicycle and sometimes walking next to their bicycle. Argo's AI knows the difference.
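
As an illustration of why classifying the bicycle and the rider separately matters, here is a tiny sketch of a data model that tracks the relationship between the two; the structure and names are mine, not Argo's.

```python
from dataclasses import dataclass

# Illustration only: a minimal data model for "bicycle and rider classified
# separately, with the relationship tracked". Not Argo's actual schema.

@dataclass
class TrackedObject:
    object_id: str
    object_class: str        # e.g. "bicycle", "pedestrian", "car"
    speed_mps: float

@dataclass
class BicycleRiderRelation:
    bicycle: TrackedObject
    rider: TrackedObject
    mounted: bool            # True: riding the bike; False: walking beside it

    def expected_behavior(self) -> str:
        # The same pair of objects implies very different motion models
        # depending on the relationship between them.
        if self.mounted:
            return "cyclist: expect road riding, higher speed"
        return "pedestrian pushing a bike: expect walking pace, sidewalk use"

bike = TrackedObject("obj_17", "bicycle", 1.4)
person = TrackedObject("obj_18", "pedestrian", 1.4)
print(BicycleRiderRelation(bike, person, mounted=False).expected_behavior())
```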

 
  • Like
Reactions: 2101Guy
After getting its permit for driverless operation, Baidu has launched commercial driverless robotaxis in two Chinese cities.

It should be noted that the geofenced areas are much smaller than Waymo's. Let's see how quickly Baidu can scale from here. But it is a start.

Baidu has rolled out commercial driverless taxi services in the Chinese cities of Wuhan and Chongqing, expanding the transport option beyond the country's capital Beijing. The launch comes this week with the government releasing China's first draft guidelines on the use of self-driving vehicles for public transport.
The driverless robotaxi service will run in government-designated areas in Wuhan and Chongqing, spanning 13 square kilometres and 30 square kilometres, respectively. Routes in each city will be covered by five Apollo 5th generation vehicles.
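
For a sense of scale, a quick back-of-the-envelope using only the figures quoted above:

```python
# Back-of-the-envelope on the quoted Baidu figures (illustrative only).
areas_km2 = {"Wuhan": 13.0, "Chongqing": 30.0}
vehicles_per_city = 5    # "five Apollo 5th generation vehicles" per city

for city, area in areas_km2.items():
    print(f"{city}: {area} km^2 / {vehicles_per_city} vehicles "
          f"= {area / vehicles_per_city:.1f} km^2 per vehicle")
```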

 
  • Informative
Reactions: Jeff N
Mobileye just did their own "cross country FSD" trip, similar to what Elon promised. Mobileye's vision-only system, called SuperVision, completed a nearly 2,000 km trip in Europe in conditions ranging from day to night, on highways, twisty country roads, and city streets, including mapless routes, with only "occasional and minimal human interventions":

Just weeks ago, as part of a demonstration for customers, we completed a multi-day, transcontinental road trip that put Mobileye SuperVision™ – our next-generation driver-assist system – to the test. In the span of four days, we covered nearly 2,000 kilometers, passing through six countries in southern and central Europe – eschewing the confines of a controlled environment to venture out on roads that our technology had only mapped (but our test vehicles had never driven on) before.
From our starting point in Barcelona, we traversed the Spanish and French Rivieras, then drove through Monaco, northern Italy, Austria, and much of Germany (where the journey concluded). All told, the trip encompassed nearly 40 hours of driving, including some 300 kilometers at night, in heat as high as 40 degrees Celsius, on a combination of packed city streets, twisting country roads, and high-speed interurban highways.
To transparently demonstrate REM’s adaptability and capability, we even let our guests choose waypoints along the route – so the route could be set or reset with minimal notice. Also, to show just how well the computer-vision system alone works, we performed a significant portion of the driving in “mapless mode” (without the benefit of the Mobileye Roadbook), relying strictly on the vehicle’s onboard cameras instead. And we’re proud to report that the system performed impressively throughout – requiring only occasional and minimal human intervention, even after dark on non-illuminated roads and on pavement with worn-away lane markings.
In the Italian city of Genoa, for example, our vehicle spent hours crawling through heavy urban traffic during a record heatwave, without any need for human intervention. On another night, the car drove itself out of Monaco, even after our team got lost with no cellular reception.


IMO, this is a nice demonstration of Mobileye's vision-only system!
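
As a rough sanity check on the quoted figures, a back-of-the-envelope calculation (mine, not Mobileye's):

```python
# Back-of-the-envelope on the trip figures quoted above.
total_km, driving_hours, night_km, days = 2000, 40, 300, 4

print(f"average moving speed: {total_km / driving_hours:.0f} km/h")
print(f"per day:              {total_km / days:.0f} km over {driving_hours / days:.0f} h")
print(f"driven at night:      {night_km / total_km:.0%} of the distance")
```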
 
  • Love
Reactions: 2101Guy
An eyebrow was raised at the quote: "requiring only occasional and minimal human intervention"

Yes, Mobileye is admitting that it was not a "zero intervention" trip. But SuperVision is a driver assist; it is required to have driver supervision. We also don't know whether the interventions were for safety issues during the trip or routine ones, like parking the car or moving it at a gas station. The point is that the interventions were minimal, so there were not many of them.

But in context, it sounds like the "occasional and minimal human interventions" might have been on the mapless routes. So when they drove on roads that were not mapped, some of which did not have lane markings, the vision-only system did require occasional interventions. Maybe the mapped routes did not require any interventions at all.
 
Mobileye just did their own "cross country FSD" trip, similar to what Elon promised. Mobileye's vision-only system, called SuperVision, completed a nearly 2,000 km trip in Europe in conditions ranging from day to night, on highways, twisty country roads, and city streets, including mapless routes, with only "occasional and minimal human interventions":

IMO, this is a nice demonstration of Mobileye's vision-only system!

But but @powertoold told me this was impossible! That it was game, set, match!
 
  • Like
Reactions: diplomat33