Almost ready with FSD Beta V9

Unless FSD can honk the horn, it will never happen until we have eliminated pedestrians and human-controlled vehicles from the roads.

And as far as the EXACT following-distance mumbo jumbo goes: the need for a distance buffer for the comfort of the occupants (like it has now) will always allow the 'fast and furious' to cut the car off. FSD will always give way, because it lives in a world with 'rules'.
 
That's the great part: instead of a rule written by someone who understands it having to be memorized and then understood by millions of potentially dumb randos, the rule is written by someone who understands it, programmed into a single set of code that is tested to confirm it follows the rule, and then copied identically to millions of self-driving cars in the fleet.

The more of them you get, the more consistent the driving gets: fewer accidents, less backed-up traffic, etc.
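To make that concrete, here's a minimal, purely illustrative sketch of "a rule written once, shipped identically to the whole fleet." The rule, the thresholds, and the function names are invented for this example; it's not anything Tesla actually runs:

```python
# Hypothetical illustration: one traffic rule, encoded once, run identically
# by every car in the fleet. The thresholds are invented for this sketch.

def min_following_gap_s(road_wet: bool) -> float:
    """Minimum time gap to the lead vehicle under the (hypothetical) rule."""
    return 3.0 if road_wet else 2.0  # seconds

def complies(gap_m: float, speed_mps: float, road_wet: bool) -> bool:
    """Every car runs this exact same check; no memorization, no interpretation."""
    if speed_mps <= 0:
        return True  # stopped: the gap rule doesn't apply
    return gap_m / speed_mps >= min_following_gap_s(road_wet)
```

No driver's-ed class needed; a fix to the rule ships to every car at once instead of being re-taught to millions of people.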

That's exactly it. The decisions people are making are based on years of driving experience: through construction, crazy weather conditions, under stress, with random animals jumping out, etc. You also have to take into account random signage, emergency workers' instructions, and so on.

Translating traffic rules is not going to be simple programming, nor is keeping up with those changes.
 
Indeed. In fact, one must question whether gearchruncher is just trolling...

I'm sorry that I have communicated so poorly that people think I might be trolling. I can assure you I am not. I am seriously interested in getting us to autonomous driving as a society, but I was trying to point out the challenges that lie between here and there, because unless you acknowledge and solve those, you'll never succeed. Apparently I failed miserably in trying to make that point. Like I said in the beginning, I do believe these issues are solvable, and in fact they are "easy" when 99% of cars are self-driving. The interesting question is how we get to even 1% of cars self-driving when there are many vested interests and laws against that. We can't get to 99% without figuring out the 1% first.

I work in a non-automotive autonomous vehicle space, and the questions I brought up are exactly the ones our industry is working through: ambiguous, poorly written laws that require interpretation to meet. We have the advantage of both a single federal set of laws and technical experts on the regulatory side we can work with to negotiate acceptable solutions *before* we release the product, and it still takes years to negotiate and prove you met the intent of what appear to be simple rules. There are piles of industry standards trying to navigate the rules and produce quantitative guidance, and they can still prompt reasonable questions about whether they are "sufficient". There are laws in my industry that mean that when humans operate the vehicles, they are likely in violation of some law a large percentage of the time, but those laws only get applied when something goes wrong. I see all the same issues coming with autonomous cars, tenfold.

You can say "It's easy to program a car to stay 147 feet behind the car in front, and that's "reasonable and prudent". Technically, that is easy. The issue is if the regulator will agree that 147 feet is reasonable and prudent. How did you come up with 147 feet? Why not 159? How many nines of reliability is that modeled off of? Who is liable when that turns our to be a wrong decision 1 in 1 billion miles? Will every local judge agree that was reasonable and prudent? All reasonable questions that take time to answer. You can get there, but with the great American experiment of 50 states with individual counties and cities, this is much harder than things regulated only at the federal level. I'm really interested to see how America solves it.

You can also say "once we have this solved, this will be so much safer!" - of course it will. But for the next 30 years, self driving and human driven cars are going to exist on the same roads and will interact with one another. Lots of work in the autonomous space will be wasted work eventually as it will have no use once all vehicles are autonomous. But we can't snap our fingers and just make human driven cars illegal, so we have to solve these issues to get there, and the earlier we realize they are going to be challenges, the faster the whole transition goes. For me, the highest level question that is interesting is "Are autonomous cars ever allowed to break the written law?" and if the answer is no, then the thought experiment is figuring out where that won't actually work in the real world, because autonomous cars are sometimes going to be working with different assumptions/rules/mental models than humans are.

Anyway, I get the point that I'm not adding to the conversation in a useful way, and that most discussions of challenges faced by autonomous cars are interpreted as coming from someone who hates autonomy in general and believes it will never work. I'll move to the sidelines and let the autonomous car pass ;)
 
I work in a non-automotive autonomous vehicle space

We have identified the problem.

Specifically-


We have the advantage of having both a single federal set of laws, as well as technical experts on the regulatory side that we can work with to negotiate acceptable solutions *before* we release the product, and it still takes years to negotiate/prove you met the intent of what appear to be simple rules.


That isn't how it works for automotive autonomous vehicles.

In most of the states where they are already legal, the maker simply has to say the car can obey all traffic rules, and it's legal to operate.

Nobody from the government needs to even be spoken to- before OR after release.

In a few other states the maker has to submit a certification to the state... saying the same thing. But again, that's it. The state doesn't "check" or anything; they take their word for it.

So the lessons from your own industry, where you have to get approvals in advance after years of negotiation and evidence, do not apply.




You can say "It's easy to program a car to stay 147 feet behind the car in front, and that's "reasonable and prudent". Technically, that is easy. The issue is if the regulator will agree that 147 feet is reasonable and prudent.

Again- there is no regulator

Other than a cop I suppose- who is the only regulator of that rule TODAY with human drivers.

And since he knows the EV will have cameras recording everything as counter-evidence, he's unlikely to try to write undeserved tickets to one, either.


How did you come up with 147 feet? Why not 159?

Because of math and physics. Already explained to you.

(Nor, of course, would the value be static; it would change with the speeds and conditions involved, based on the data the car has in the moment.)
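For the curious, here's a minimal sketch of that math, using a standard reaction-time-plus-braking-distance model. The reaction time and deceleration figures below are illustrative assumptions, not anyone's actual parameters:

```python
# Worst-case stopping distance if the lead vehicle stops instantly.
# The constants are illustrative assumptions, not any vendor's real tuning.

def following_distance_m(speed_mps: float,
                         reaction_time_s: float = 0.5,   # assumed system latency
                         decel_mps2: float = 7.0) -> float:  # assumed dry-asphalt braking
    reaction_dist = speed_mps * reaction_time_s
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_dist + braking_dist

# ~65 mph (29 m/s): about 14.5 m reaction + 60 m braking ≈ 75 m (~245 ft) worst case.
# Slower speeds, better grip, or credit for the lead car's own braking distance
# all shrink the gap, which is why there is no single static number like "147 feet".
print(round(following_distance_m(29.0), 1))  # -> 74.6
```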

Who is liable when that turns out to be the wrong decision 1 in 1 billion miles?

As in it causes an accident? Again this is already clear in law in the places such cars are allowed.

It's typically the person who activated the self-driving system for that drive. This is the 2nd or 3rd time I have specifically explained this fact to you.

Why do you keep pretending this is SUPER COMPLEX STUFF NOBODY HAS THOUGHT OF OR FIGURED OUT YEARS AGO ALREADY?
 
  • Funny
Reactions: GZDongles
At least you have the self-awareness to pick the right profile picture 😁
 
Interesting conversation. I had a prior EV before my 3 (a Volt) that had pure visual front crash braking. That "phantom braked" too. Interestingly, high-speed braking required radar on that vehicle. Sure, it's far less advanced than what Tesla has, but the idea that a data stream from cameras can't result in an error is sort of laughable.

Also, for pure vision, the cameras may not be in the right spot. As much as this pains me to say, Subaru's front cameras are a foot apart. It's ugly (but so is the entire car), but it appears to work, based on their front crash mitigation performance.

What makes humans special is our ability to change our stereoscopic camera angle to gather more information, and we do that all the time while driving. We gather a lot of information by moving our bodies and heads to reframe the angle in real time. We use visors and sunglasses to improve contrast, as well as moving our heads.

Bottom line: I didn't buy FSD because I figured that in the 7-8 years I'll own this car, it won't be Level 4. We haven't even discussed weather. My guess? They're going to push vision as far as it can go, kill a few people in L2, and then add sensors back into the mix. That might be the best way: find your holes and edge cases, then solve for them directly. Assuming the Tesla cameras are as good as or "net" better than human vision capabilities is completely unproven and, dare I say, unlikely.
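On the stereo-baseline point: a wider spacing between cameras buys proportionally more disparity, and therefore less depth error for the same pixel-matching noise. A minimal sketch of the geometry, with all numbers chosen purely for illustration:

```python
# Pinhole stereo model: depth Z = f * B / d
# (f = focal length in pixels, B = baseline in meters, d = disparity in pixels).
# All values below are illustrative assumptions.

def depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

f = 1400.0                     # assumed focal length, in pixels
for baseline in (0.30, 0.10):  # ~1 ft vs ~4 in between cameras
    d = f * baseline / 60.0                       # disparity of a car 60 m ahead
    err = depth_m(f, baseline, d - 0.25) - 60.0   # error from a 0.25 px mismatch
    print(f"baseline {baseline:.2f} m: disparity {d:.2f} px, "
          f"depth error ~{err:.1f} m")

# The 0.30 m baseline gives 3x the disparity of the 0.10 m one, so the same
# matching noise costs roughly a third of the depth error at long range.
```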
 
AI/sensor redundancy is necessary. It should be mandatory.

Can we first ban cars that smash into things if the driver dozes off?


BOTHELL, Wash. -- A drowsy driver was rudely awakened when he smashed his car into a freeway guardrail in Bothell Tuesday morning.

The driver was heading from SR 522 and taking the offramp to I-405 South just before 7 a.m. when he fell asleep at the wheel, according to Trooper Rick Johnson.

 
Another point: I also disagree with Musk's public assessment of the required performance. Many drivers go their entire lives without so much as a minor accident. The standard is not "better than average human performance." The standard will be "better than an unimpaired, undistracted, experienced, trained driver." Otherwise, who is going to trust their life with this? Dropping overall fatalities is nowhere near good enough. That's why Level 4-5 will take so long.
 
  • Like
Reactions: gearchruncher
Again- there is no regulator

Other than a cop I suppose- who is the only regulator of that rule TODAY with human drivers.
Cops are not regulators. The judges who interpret the laws are. A judge absolutely could hear evidence that a specific autonomous car design was breaking laws in their jurisdiction and prohibit the use of that vehicle until it is remedied, just like they can revoke an individual driver's license.

At the Federal level, there absolutely is a regulator: the NHTSA. You know, the group that issues recalls for safety issues, oversees the FMVSS, and negotiates with auto manufacturers over safety standards? They can also prohibit cars from being used that present a hazard to the public. The NC autonomous driving law even says this:

Unless an exception or exemption has been granted under applicable State or federal law, the vehicle: a. Is capable of being operated in compliance with Articles 3, 3A, 7, 11, and 13 of this Chapter; b. Complies with applicable federal law and regulations; and c. Has been certified in accordance with federal regulations in 49 C.F.R. Part 567 as being in compliance with applicable federal motor vehicle safety standards and bears the required certification label or labels.

NHTSA still isn't sure if the yoke steering wheel is legal. You think a car with no steering wheel or brake pedals accessible is clearly compliant with all FMVSS and would be unquestionably legal in NC right now?

As in it causes an accident? Again this is already clear in law in the places such cars are allowed.

It's typically the person who activated the self-driving system for that drive.

You live in NC and it's not what you described. The registered owner of the vehicle is liable for infractions, not the one that activated it.

(d) Registered Owner Responsible for Moving Violations. – The person in whose name the fully autonomous vehicle is registered is responsible for a violation of this Chapter that is considered a moving violation, if the violation involves a fully autonomous vehicle.

There is nothing in the NC law about who is civilly liable when a collision occurs. I guess we can assume civil liability follows criminal liability? It also says the vehicle must have liability insurance like any other car. I wonder which insurance company will cover the first owner that calls up and says "I'd like to insure my self-driving car, and you'll be liable for all the mistakes it makes, and we have no idea how it was programmed, and no, it meets no specific standard because there is no regulator."
 
  • Like
Reactions: Dan D.
From a safety perspective, I can only say that radar does indeed help. I have a 2017 MS with the FSD HW3 upgrade, and I have been on Autopilot in thunderstorms where road spray really did obliterate visibility, down to "I can only see the tail lights in front of me" driving. With just the 3 front cameras and the radar, the car soundly tracked the car in front and kept its distance safely, better than I could while distracted by the rain, thunder, lightning, and road spray. So in that case the safety margin was better on EAP than without it.

Now, even with the new build, if the cameras are obscured on Autopilot the car does warn you, but it gives up much earlier and tells you to take over. When we get fused stereo AI for all the camera pairs, yes, the radar might be redundant for most things, but the system will give up much more easily in optically challenged scenarios for any one camera. I would recommend they find a way to have a graceful fallback position that keeps the margin of safety up with the radar. Also, I thought they were experimenting with a finer-grained, higher-resolution radar as a front "vision aid"; that would be the best of both worlds.
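That kind of graceful fallback could look something like confidence-weighted fusion: degrade to whichever sensor still trusts its data instead of abandoning the drive outright. A minimal sketch; the confidence model, threshold, and numbers are invented for illustration:

```python
# Sketch of confidence-weighted camera/radar fusion with graceful degradation.
# All thresholds and confidence values are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RangeEstimate:
    distance_m: float
    confidence: float  # 0.0 (useless) .. 1.0 (fully trusted)

def fused_lead_distance(camera: Optional[RangeEstimate],
                        radar: Optional[RangeEstimate]) -> Optional[float]:
    """Blend estimates by confidence; fall back to the surviving sensor, else give up."""
    usable = [e for e in (camera, radar) if e and e.confidence > 0.1]
    if not usable:
        return None  # nothing trustworthy left: hand control back to the driver
    total = sum(e.confidence for e in usable)
    return sum(e.distance_m * e.confidence for e in usable) / total

# Heavy road spray: camera confidence collapses, radar carries the estimate.
print(fused_lead_distance(RangeEstimate(55.0, 0.15), RangeEstimate(48.0, 0.9)))  # ~49.0
```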
 
  • Like
Reactions: Dan D.
Cops are not regulators. The judges who interpret the laws are. A judge absolutely could hear evidence that a specific autonomous car design was breaking laws in their jurisdiction and prohibit the use of that vehicle until it is remedied, just like they can revoke an individual driver's license.


... on what basis do you imagine the judge would have this case before him to rule on ALL cars using a specific system instead of a single instance of a moving violation?

Because in most places I'm aware of, a judge in traffic court certainly can't issue the kind of widespread injunction you are imagining here.


That's apart from the fact there'd be no such evidence, because the car would be designed not to break the law.


At the Federal level, there absolutely is a regulator- the NHTSA

Except they don't regulate autonomous driving systems.

At all.

They leave that up to the states.

That doesn't mean they couldn't regulate them in the FUTURE- but today they don't.



You know, the group that issues recalls for safety issues, oversees the FMVSS, and negotiates with auto manufacturers over safety standards? They can also prohibit cars from being used that present a hazard to the public. The NC autonomous driving law even says this:

But again, as of right now they don't do any of that regarding autonomous vehicles

They leave it up to individual states.


NHTSA still isn't sure if the yoke steering wheel is legal.

This has literally nothing to do with autonomous driving.


You think a car with no steering wheel or brake pedals accessible is clearly compliant with all FMVSS and would be unquestionably legal in NC right now?

My Tesla, even when it can drive itself, will still have all those parts- so you again appear to be just throwing random nonsense at an imaginary wall hoping something sticks.


You live in NC and it's not what you described. The registered owner of the vehicle is liable for infractions, not the one that activated it.

I said typically.

That doesn't mean it's the same in every state.

NC does it differently. But it DOES IT. In fact, that's the other most common type of liability: owner rather than engager.


What's awesome, though, is it forced you to admit you already know the law on this exists in these states.


So your "OH MY HOW WILL ANYONE KNOW WHO WILL BE RESPONSIBLE" schtick is exposed as the concern trolling it always was.






It also says the vehicle must have liability insurance like any other car. I wonder which insurance company will cover the first owner that calls up and says "I'd like to insure my self-driving car, and you'll be liable for all the mistakes it makes, and we have no idea how it was programmed, and no, it meets no specific standard because there is no regulator."


Man, good thing Tesla has gotten into the insurance business huh?

Almost like they knew in advance this issue would be most easily solved by a company that understands self driving.
 
Someone who is drunk and/or sleepy?
All kidding aside, that's a good point, and Level 2 at its best. It helps you when you're not at your best. From an individual probability standpoint, it only improves the odds you make it home. That's an entirely different scenario than using it as a productivity aid or a way to sleep and travel. It needs to be better than an excellent driver. That's a high bar, and I'll bet that when it goes to L3 or L4, radar or additional sensors will be added back in to achieve it. Weather is quite an X factor. In Level 4 it needs to handle blinding thunderstorms, at night.
 
  • Like
Reactions: DanCar and Dan D.
Man, good thing Tesla has gotten into the insurance business huh?

Almost like they knew in advance this issue would be most easily solved by a company that understands self driving.

For sort-of-related context:
Here's a list of what Tesla keeps track of so far for insurance purposes. Obviously this isn't for self-driving, but you can see there's going to be some overlap if self-driving actually becomes a reality in the next several years, and it should be possible to gather enough data to adjust for their risk.

All seems like very useful information for their current purpose, in any case. (By the way, it sounds like they're not necessarily COLLECTING this data yet... they may be logging it, but it's not clear whether it's actually being taken from the vehicle for Tesla Insurance purposes.)

 
Someone who is drunk and/or sleepy?

All kidding aside, that's a good point, and Level 2 at its best. It helps you when you're not at your best. From an individual probability standpoint, it only improves the odds you make it home. That's an entirely different scenario than using it as a productivity aid or a way to sleep and travel. It needs to be better than an excellent driver. That's a high bar, and I'll bet that when it goes to L3 or L4, radar or additional sensors will be added back in to achieve it. Weather is quite an X factor. In Level 4 it needs to handle blinding thunderstorms, at night.

Drunk is Level 4/5. Sleepy is at least Level 3. Level 2 is intended for the "fully attentive".

I know what you're getting at though. A good Level 2 system will help you do better.
 
But again, as of right now they don't do any of that regarding autonomous vehicles
NHTSA document titled "Federal Automated Vehicles Policy": https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/federal_automated_vehicles_policy.pdf

NHTSA has broad enforcement authority under existing statutes and regulations to address existing and emerging automotive technologies. NHTSA has issued an Enforcement Guidance Bulletin relating to safety-related defects and emerging automotive technologies. This bulletin sets forth NHTSA’s current views on emerging automotive technologies—including its view that when vulnerabilities of such technology or equipment pose an unreasonable risk to safety, those vulnerabilities constitute a safety-related defect—and suggests guiding principles and best practices for motor vehicle and equipment manufacturers in this context. With regard to NHTSA’s enforcement authority over motor vehicles and equipment, it applies “notwithstanding the presence or absence of an FMVSS for any particular type of advanced technology.” NHTSA has the authority to “respond to a safety problem posed by new technologies in the same manner it has responded to safety problems posed by more established automotive technology and equipment.” This includes the Agency determining the existence of a defect that poses an unreasonable risk to motor vehicle safety and ordering the manufacturer to conduct a recall.

With regard to new motor vehicle technologies, including HAVs, NHTSA states in its bulletin that its “enforcement authority concerning safety-related defects in motor vehicles and equipment extends and applies equally to new and emerging automotive technologies.” Furthermore, “[w]here an autonomous vehicle or other emerging automotive technology causes crashes or injuries, or has a manifested safety-related failure or defect” that presents a safety concern, NHTSA will evaluate the HAV or technology through its investigative authority and, if necessary, “exercise its enforcement authority to the fullest extent.”

The Agency focused on the Federal Aviation Administration (FAA) because its challenges seem closest to those that NHTSA faces in dealing with HAVs. FAA uses an agency pre-market approval process to regulate the safety of complex, software-driven products like autopilot systems on commercial aircraft.
Weird. Why is NHTSA "facing challenges" in an area they don't even regulate?

This is interesting also: https://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf
NHTSA does not recommend that states authorize the operation of self-driving vehicles for purposes other than testing at this time. We believe there are a number of technological issues as well as human performance issues that must be addressed before self-driving vehicles can be made widely available. Self-driving vehicle technology is not yet at the stage of sophistication or demonstrated safety capability that it should be authorized for use by members of the public for general driving purposes. Should a state nevertheless decide to permit such non-testing operation of self-driving vehicles, at a minimum the state should require that a properly licensed driver (i.e., one licensed to drive self-driving vehicles) be seated in the driver’s seat and be available at all times in order to operate the vehicle in situations in which the automated technology is not able to safely control the vehicle. As innovation in this area continues and the maturity of self-driving technology increases, we will reconsider our present position on this issue.

NHTSA has a whole site dedicated to Automated vehicles here: Automated Vehicles for Safety
Do you really read that and think "NHTSA has nothing to do with autonomous vehicles and is leaving everything up to the states"? One of the key goals listed is "Reducing policy uncertainty"; what policy uncertainty is there if NHTSA isn't a regulator? You are right that today, before anyone is actually attempting to release autonomous vehicles for public use, it is mostly up to the states. But NHTSA has made it clear that they absolutely believe themselves to be the federal regulator in this area, that they will step in if they think public safety is at risk, and that they already have broad tools in the current FMVSS to evaluate autonomous vehicles.
 
  • Helpful
Reactions: Dan D.