
Autonomous Car Progress

If manufacturers offer L4 vehicles for sale without controls, and you encounter conditions where the L4 will not operate, what do you do?

(Taking this to the autonomous car progress thread in order to avoid taking the Elon FSD Beta Tweets thread off-topic).

Short answer: you are basically screwed, since you have no way of taking control to get the L4 unstuck. So the manufacturer has to make sure the L4 will never get stuck inside its ODD if they want to sell a personal car with no controls. And if the manufacturer cannot guarantee that the L4 will never get stuck, then they need to include some way of controlling the car: a steering wheel and pedals, controls on a touch screen, or remote assistance.
 
@JulienW The other thing to keep in mind is that an L4 is not allowed to leave its ODD. So an L4 car with no controls will be restricted to operating only inside the ODD, since there is no way to control the vehicle outside it. The ODD will matter a lot.

If the ODD is a geofence, the L4 with no controls will be restricted to driving only inside that geofence. For ride-hailing, this might be fine. For personal cars, it could be a problem, since it limits where the owner is able to go. The geofence would need to be big enough to meet the owner's driving needs; consumers are unlikely to purchase an L4 car with no controls that can't take them where they need to go.

A non-geofenced L4 might have its ODD restricted some other way, like by weather or time of day. For example, maybe the L4 is not allowed to drive in bad weather or at night. That places limits on when you can use an L4 with no controls, which could be a problem for a personal car: owners would probably not like a car that can't be used when it rains or after dark.

So I think keeping controls on L4 personal cars makes a lot of sense, since the owner can take over if they want to go outside the L4 ODD. Removing controls makes sense for L5 personal cars, since you don't have to worry about any limitations. The problem is that achieving an L5 that never gets stuck is dramatically harder than L4. So even for L5, I think it makes sense to keep driving controls, just in case the owner needs or wants to take over.
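To make the point concrete, here is a minimal sketch of the kind of ODD gating I'm describing. All names and thresholds are hypothetical, purely to illustrate why a no-controls car has no fallback other than refusing the trip:

Code:
# Minimal sketch of L4 ODD gating: the car may only drive itself while
# every ODD condition holds. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Conditions:
    inside_geofence: bool
    is_daytime: bool
    rain_mm_per_hr: float

def odd_satisfied(c: Conditions) -> bool:
    # Example ODD: geofenced, daytime only, light rain at most.
    return c.inside_geofence and c.is_daytime and c.rain_mm_per_hr < 2.5

def request_trip(c: Conditions, has_manual_controls: bool) -> str:
    if odd_satisfied(c):
        return "drive autonomously"
    if has_manual_controls:
        return "hand over to human driver"  # owner can leave the ODD
    return "refuse trip / pull over"        # the no-controls car is stuck

# A no-controls L4 asked to drive at night simply refuses:
print(request_trip(Conditions(True, False, 0.0), has_manual_controls=False))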
 
DeepRoute.AI's driverless robotaxi handles some pretty busy city streets in Shenzhen, China really well. The video is narrated from the perspective of the robotaxi, explaining its thinking and actions, which helps in understanding how the self-driving system works.


And China has passed rules allowing more autonomous vehicles to be deployed:

 
The company TuSimple had a crash with one of their autonomous trucks back on April 6. Some details are now coming out:

On April 6, an autonomously driven truck fitted with technology by TuSimple Holdings Inc. suddenly veered left, cut across the I-10 highway in Tucson, Ariz., and slammed into a concrete barricade.

An internal company report says the accident was caused by the person in the cab not properly rebooting the system, which caused it to execute an outdated command:

An internal TuSimple report on the mishap, viewed by The Wall Street Journal, said the semi-tractor truck abruptly veered left because a person in the cab hadn’t properly rebooted the autonomous driving system before engaging it, causing it to execute an outdated command. The left-turn command was 2 1/2 minutes old—an eternity in autonomous driving—and should have been erased from the system but wasn’t, the internal account said.

 
The company TuSimple had a crash with one of their autonomous trucks back on April 6. Some details are now coming out:

An internal company report says the accident was caused by the person in the cab not properly rebooting the system, which caused it to execute an outdated command:

That's an absolutely atrocious design. Blaming the operator only deepens my suspicion of this dodgy company. This video shows the accident (crash at very start and also near the end).
 
The company TuSimple had a crash with one of their autonomous trucks back on April 6. Some details are now coming out:

An internal company report says the accident was caused by the person in the cab not properly rebooting the system, which caused it to execute an outdated command:
Sorry, but that's not human error. It's a defective design that allows the system to execute a stale command, regardless of how many times the computer has to be rebooted to get it working.

Very poor form to blame the schmuck in the truck.
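For what it's worth, guarding against exactly this failure mode is cheap: stamp every planner command with a time and refuse anything stale. A purely illustrative sketch (nothing to do with TuSimple's actual stack; the names and the age budget are made up):

Code:
# Illustrative stale-command guard: every planner command carries a
# timestamp, and the executor refuses anything older than a tight budget.
# A real system would also clear the command queue on every boot.
import time

MAX_COMMAND_AGE_S = 0.2  # plans go stale in fractions of a second at speed

class StaleCommandError(Exception):
    pass

def execute(command: dict) -> None:
    age = time.monotonic() - command["issued_at"]
    if age > MAX_COMMAND_AGE_S:
        raise StaleCommandError(f"command is {age:.1f}s old, discarding")
    # ... apply steering/throttle here ...

# The April 6 scenario: a left-turn command 150 seconds (2 1/2 minutes) old.
old_turn = {"type": "steer_left", "issued_at": time.monotonic() - 150.0}
try:
    execute(old_turn)
except StaleCommandError as e:
    print(e)  # discarded instead of veering across the highway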
 
If it requires a specific unusual reboot sequence from the safety operator, then it's TuComplicated.

 
That's an absolutely atrocious design. Blaming the operator only deepens my suspicion of this dodgy company.

There are multiple issues here that make testing TuSimple unsafe.

Just like the fatal Uber autonomous-vehicle pedestrian case: yes, it was a terrible design to disable automatic emergency braking, but if the safety driver had followed the protocol, she could have applied the brakes manually in time, and no one would have died.

Same for TuSimple: yes, requiring a reboot of the computer before re-engaging the AutoSteer function is a terrible design, but if the driver had followed the protocol, this mishap wouldn't have happened.

Another troublesome issue here: while it's very easy to override AutoSteer in a sedan, the video suggests that is not the case for a semi.

Thus, multiple issues must be addressed before it is safe for TuSimple to test on public roads:

1. Fail-safe re-engaging AutoSteer.

2. Drivers need to adhere to the protocol.

3. Overriding AutoSteer must be effortless.
 
I didn't realize at first that TuSimple has an important office here in Tucson (as well as in San Diego, Ft. Worth, China and Japan) and that the accident in question occurred on I-10 in Tucson.

Linked below is a local news article that explains the accident in more detail. It wasn't serious, but easily could have been. The article also describes TuSimple's safety record prior to this, and delves into other L4 and L2 accident reports on the NHTSA list, highlighting Waymo and Tesla respectively as having the highest tallies in these (as we know, non-normalized raw incident count) metrics. Overall, however, it strikes me as a reasonable and not highly sensationalized write-up, with balanced context on autonomous driving development, as well as more detail about TuSimple's financial (stock/IPO) and legal issues (a current ex-employee lawsuit).

Feds probe crash of TuSimple self-driving truck in Tucson

This is the website of the local newspaper (tucson.com); it normally lets you read two or three stories before nagging you to subscribe.
 
...It wasn't serious but easily could have been...

The practice isn't safe.

Although no one died, it's still very serious.

MCAS on the Boeing 737 MAX didn't cause a crash for quite some time, but its design flaw was so serious that it eventually led to two fatal crashes.

Both designs made the task of overriding the automation very difficult.

The first rule when designing an automation system is: humans should be able to override it at will, with no struggle to wrestle back control.
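As a sketch, that principle is just a priority rule in the control arbiter. Something like this (hypothetical names, illustrative torque threshold):

Code:
# Illustrative control arbitration: human input always wins, immediately
# and unconditionally. Threshold and names are hypothetical.
OVERRIDE_TORQUE_NM = 1.5  # a light tug on the wheel is enough

def arbitrate(human_torque_nm: float, human_cmd: float,
              automation_cmd: float) -> tuple[str, float]:
    """Return (who is in control, steering command to actuate)."""
    if abs(human_torque_nm) >= OVERRIDE_TORQUE_NM:
        # The automation never fights back and never re-engages on its own.
        return ("human", human_cmd)
    return ("automation", automation_cmd)

print(arbitrate(human_torque_nm=2.0, human_cmd=0.1, automation_cmd=-0.3))
# -> ('human', 0.1)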
 
The practice isn't safe.

Although no one died, it's still very serious.

MCAS on the Boeing 737 MAX didn't cause a crash for quite some time, but its design flaw was so serious that it eventually led to two fatal crashes.

Both designs made the task of overriding the automation very difficult.

The first rule when designing an automation system is: humans should be able to override it at will, with no struggle to wrestle back control.
MCAS was created to prevent stalls from too high an angle of attack on takeoff specifically by overriding the pilot. Making it easy to disable would defeat its purpose. It obviously suffered from very serious design defects, which cost hundreds of lives and billions of dollars.
 
MCAS was created to prevent stalls from too high an angle of attack on takeoff specifically by overriding the pilot. Making it easy to disable would defeat its purpose. It obviously suffered from severe design defects, which cost hundreds of lives and billions of dollars.
It's the same with steering: in manual driving, humans are known for their imperfect competency, which results in lost lives and dollars.

One could erroneously rationalize that making overrides easy would defeat the purpose of excluding humans from the automation.

The problem is that the technology has not achieved a zero-error record just yet. That is why we still pay human pilots and TuSimple drivers to use their judgment and override the automation at will.
 
It's the same with steering: in manual driving, humans are known for their imperfect competency, which results in lost lives and dollars.

One could erroneously rationalize that making overrides easy would defeat the purpose of excluding humans from the automation.

The problem is that the technology has not achieved a zero-error record just yet. That is why we still pay human pilots and TuSimple drivers to use their judgment and override the automation at will.
The purpose of MCAS is to take attitude control away from the pilot when s/he makes a critical error on takeoff. It does so by forcing the nose down to prevent a stall. If the system were easily overridden, the pilot who made the original error would likely do so, leading to a stall. Other flight automation, like the autopilot, is designed so that the pilot can easily override it. The fact that MCAS exists is an indication that simply warning the pilot of a potential stall condition was not deemed sufficient.

In (current) autonomous vehicles, the roles are reversed. The driver's purpose is to take control away from the automation when it makes a critical error. AEB might be the closest analogy to MCAS, though I believe that the driver can still easily defeat AEB using the accelerator.
 
The purpose of MCAS is to take attitude control away from the pilot when s/he makes a critical error on takeoff. It does so by forcing the nose down to prevent a stall. If the system were easily overridden, the pilot who made the original error would likely do so, leading to a stall. Other flight automation, like the autopilot, is designed so that the pilot can easily override it. The fact that MCAS exists is an indication that simply warning the pilot of a potential stall condition was not deemed sufficient.

In (current) autonomous vehicles, the roles are reversed. The driver's purpose is to take control away from the automation when it makes a critical error. AEB might be the closest analogy to MCAS, though I believe that the driver can still easily defeat AEB using the accelerator.

I do not doubt the good intention of an automation system like MCAS. However, when that good intention results in death, that intention must be changed.

Boeing's site describes the updates to MCAS. Among them:

"MCAS will only be activated once": Previously, once MCAS is activated, and the plane is nosing down toward the ground to crash, the pilot could override it by nosing it up, but the MCAS would be automatically reactivated again. The result was the struggle between the human's determination to defeat the machine and the machine's determination to "correct" human manual control. It's just like a roller coaster: Up and down, then again, up and down until the machine prevailed, at last, to go down and crash (blue line below):

[Chart: Lion Air Flight 610 altitude trace, showing the repeated nose-down/nose-up oscillation before the final descent]



"MCAS will never override the pilot’s ability to control the airplane using the control column alone." Notice that Boeing has changed its philosophy and now allows the pilot to override the MCAS much more easily without wrestling with the automation system for the manual control.


Some have argued that MCAS was never needed in the first place: its job is to make the new plane's handling feel the same as the old one's so that no additional training is required, but a trained pilot knows how to handle the difference if you give them the ability to control the plane.


It's the same with TuSimple: a trained truck driver can manually drive a truck. If automation is added, that system must obey a human's manual control at all times.
 
Argo.AI establishes a safety advisory council to help with AV safety:

The Argo Safety Advisory Council provides an external perspective with members sharing their knowledge and expertise to further enhance Argo’s safety approach, while advising on additional ways to build trust and public acceptance in autonomous vehicle technology.

The Council includes world-class experts and demonstrated leaders in transportation safety:
  • Christopher “Todd” Doss, Senior Managing Director, Cybersecurity at Ankura and former Assistant Director of the Federal Bureau of Investigation
  • Dr. Mitchell Garber, Senior Managing Consultant at ESi and former Medical Officer for the National Transportation Safety Board
  • David Kelly, Principal and CEO at Storm King Strategies and former Acting Administrator of the National Highway Traffic Safety Administration
  • Annette Sandberg, Principal and CEO at TransSafe Consulting and former Administrator of the Federal Motor Carrier Safety Administration
  • Robert Sumwalt, Executive Director of the Center for Aviation and Aerospace Safety at Embry-Riddle Aeronautical University and former Chairman of the National Transportation Safety Board.

 
Good article from DeepRoute.ai on their progress:

Some key points of interest:

The fully driverless vehicle drove just under 14 miles in one hour, navigating through significant traffic and narrow lanes safely and efficiently. The vehicle intelligently maneuvered around double-parked cars, counterflow e-scooters and pedestrians, negotiated with oncoming vehicles to calculate the right timing and trajectory to pass busy intersections, conducted multiple lane changes and unprotected left turns.
DeepRoute.ai has made significant improvements to achieve driverless capability, with both software and hardware meeting auto-grade standards. The safety mechanism was also upgraded to guarantee driverless safety on the road. In the case of long tail scenarios, the system will alert the remote monitoring center to intervene or take other safety measures.
The driverless vehicle is equipped with Driver 2.0, a production-ready L4 solution that includes five solid-state LiDARs, eight cameras and other sensors, and a computing platform integrated with its proprietary inference engine. The perception algorithm with sensor fusion can achieve precise object detection up to nearly 220 yards. The planning and control algorithm based on game theory can choose optimal routes and make decisions based on real-time situations when negotiating with oncoming vehicles and other road agents. With its deep learning approach, the inference engine optimizes compute resources, allowing the algorithm to run on its low-cost and power-efficient computing platform effectively and stably. As a result, Driver 2.0 can be priced at $3,000 for automakers in mass production and the algorithm can work with 2 to 5 solid-state LiDARs for automakers customization needs.
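To make the "planning based on game theory" claim concrete: the general idea is to score each (our action, their action) pair and pick the action with the best worst case. A toy example, entirely my own illustration and not DeepRoute's algorithm:

Code:
# Toy game-theoretic negotiation with an oncoming vehicle in a narrow lane.
# Payoffs are invented for illustration; this is not DeepRoute's algorithm.
PAYOFF = {
    # (our_action, their_action): utility to us (higher is better)
    ("go",    "go"):    -100,  # both commit to the gap: near-collision
    ("go",    "yield"):   10,  # we pass cleanly
    ("yield", "go"):       2,  # we wait a beat, they pass
    ("yield", "yield"):   -1,  # standoff, mild delay
}

ACTIONS = ("go", "yield")

def best_response() -> str:
    # Maximin: assume the other agent may pick the move worst for us,
    # then choose our action with the best worst case.
    def worst_case(a: str) -> float:
        return min(PAYOFF[(a, b)] for b in ACTIONS)
    return max(ACTIONS, key=worst_case)

print(best_response())  # -> "yield": the safe choice when intent is uncertain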