
Almost ready with FSD Beta V9

NHTSA document titled "Federal Automated Vehicles Policy": https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/federal_automated_vehicles_policy.pdf






Weird. Why is NHTSA "facing challenges" in an area they don't even regulate?


What's weird is that you'd go through that many documents, none of which contain a single actual federal regulation specific to autonomous cars, and still be unclear about the facts already given to you.

They do not regulate autonomy at all right now.

They COULD in the future. In fact I mentioned that specifically too.


So it is, indeed, very weird how often you claim to not know something, even after posting links that tell it to you.


This is interesting also: https://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf


NHTSA has a whole site dedicated to Automated vehicles here: Automated Vehicles for Safety
Do you really read that and think "NHTSA has nothing to do with Autonomous vehicles and is leaving everything up to the states"?

Yes, because again, that is the actual state of the law right now.

In fact, that document is the NHTSA making recommendations to the states regarding state regulations.


Once again you appear to link to sites you either didn't read or didn't understand.


One of the key goals listed is "Reducing policy uncertainty" - what policy uncertainty is there if NHTSA isn't a regulator?

Well- you seem incredibly uncertain about the whole thing- despite having been told the current state of things multiple times.


I notice you also skipped over the last post, where you got caught admitting you already knew whom states hold responsible for driving systems, while continually being "concerned" about how anyone can know the answer to that very question.


It makes your earlier denials of, let's say, less-than-sincere debate increasingly hard to believe, given how you keep repeating the same stuff no matter how often it's explained to you.


You repeat that same behavior here- despite having been told half a dozen times these facts:

The NHTSA does not regulate self-driving cars at this time - fact.

All current regulation of them is done by the states - fact.

The NHTSA could choose to do so in the future - fact.


Nothing you just linked to disagrees with those facts in any way, but you act like there's still confusion or misunderstanding on this stuff, other than that which you are intentionally putting forward.
 
@Knightshade ...might I suggest you stop feeding the trolls. I, and others, do appreciate your responses but...feeding trolls is bad, m'kay.

Actually really appreciate them. Will be an interesting couple of years as this rolls along. Can't wait to read a good novel and stare at the trees as we roll along.
Oh, and perhaps one day our autonomous cars might take us over to Counter Culture and we'll meet while picking up a bag of the finest East African sun-dried peaberry found on the East Coast (Durham, NC roaster, my favorite in the USA).
 
Interesting, but to be clear, are you saying this is the case for "FSD Beta 8.x", i.e. the City Streets Beta shared by several testers on YouTube? It was my understanding that unlike released AP, NoA, etc., City Streets Beta is now heavily concentrated on NN machine-learning cycles, the so-called "Software 2.0" approach discussed widely online, to minimize and in principle eliminate most of the need for hand-coded AI rules.
Yes, currently FSD’s path planning and control logic is coded logic. The NN portion of the stack is primarily vision processing and post-processing (bird’s eye view for example).

Once the NNs have an accurate picture of the environment around the car, this info is passed to the path planning and control logic.

The long term plan per several presentations by Andrej Karpathy is to make more and more of the stack use AI, but this is still to be done down the road sometime (pun intended).
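To make that division of labor concrete, here is a minimal sketch in Python of a "NNs for perception, hand-written rules for planning" stack. Every name, type, and number in it is hypothetical (Tesla's code is not public); it only illustrates the architecture the post describes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BEVObject:
    """One object in the bird's-eye-view frame the NNs produce
    (hypothetical schema, for illustration only)."""
    x_m: float          # longitudinal distance ahead, meters
    y_m: float          # lateral offset, meters
    speed_mps: float    # object speed, m/s

def nn_perception(camera_frames) -> List[BEVObject]:
    """Learned half of the stack: neural nets turn raw camera frames
    into a bird's-eye-view object list. Stubbed out here."""
    raise NotImplementedError("stand-in for the vision NNs")

def planner_control(objects: List[BEVObject]) -> dict:
    """Hand-coded half: explicit rules consume the NN output and
    emit actuator commands."""
    ahead = [o for o in objects if abs(o.y_m) < 1.5 and o.x_m > 0]
    nearest = min(ahead, key=lambda o: o.x_m, default=None)
    if nearest and nearest.x_m < 30.0:
        return {"steer_rad": 0.0, "accel_mps2": -2.0}   # brake for lead object
    return {"steer_rad": 0.0, "accel_mps2": 0.5}        # otherwise proceed
```

The point of the post is just that only something like nn_perception is learned today; planner_control is still explicit code.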
 
OK thanks. I'll accept then that the unimpressive left-turn paths evident in the FSD 8.x videos were coded that way.

I retract my charge that Tesla engineer alpha-testers inadvertently taught awkward driving maneuvers to the computer - apparently they programmed them right in.
 
I think it’s just tricky to get right in a very general way. Just needs additional refinement. We’ve seen progress on this already in the 8.x releases, and I’m sure that will continue.
 
Elon update:


Interesting that Elon specifies that 99.999999% means the percentage of miles with no injury. If I did my math right, that comes to 1 injury per 100M miles of driving (see the check below).

I wish Elon would give us some hard data on what the probability of accident is now with city driving so we could better quantify the progress. It would seem that FSD Beta is a long ways from 1 injury per 100M miles.

I am curious about the last part. If Highway AP has already achieved above 99.999999% reliability, why doesn't Tesla remove driver supervision for highways?
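As a quick check of the arithmetic in the post above (using exact rationals so floating-point rounding can't muddy the result):

```python
from fractions import Fraction

reliability = Fraction("0.99999999")   # fraction of miles with no injury
injury_rate = 1 - reliability          # injuries per mile
print(1 / injury_rate)                 # 100000000: one injury per 100M miles
```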
 



For that matter it makes it sound like AP for the highway is "done" and NOT going to get the total re-write 4D/BEV stuff.

(because otherwise the fact it's that 'safe' now tells us nothing useful if it's going to be totally re-written)


Oh and the reason to not remove driver supervision on highways is obvious- it still hits stationary objects.

In THEORY ditching radar and going to vision only COULD fix that (at least when it's not foggy). But only if they actually do that for ALL code, not just city driving.
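For what it's worth, the stationary-object weakness has a commonly cited (and simplified) explanation: automotive radar distinguishes targets largely by Doppler, and a stopped car dead ahead closes at exactly ego speed, the same signature as signs, overpasses, and parked cars, so naive pipelines filter it out as clutter. Vision has no such ambiguity, which is the "COULD fix" above. A toy illustration, with invented thresholds and not anyone's actual pipeline:

```python
def looks_like_roadside_clutter(range_rate_mps: float,
                                ego_speed_mps: float,
                                tol_mps: float = 0.5) -> bool:
    """A radar return closing at ~ego speed is, by Doppler alone,
    indistinguishable from stationary roadside junk, so naive
    pipelines drop it. Thresholds are invented for illustration."""
    return abs(range_rate_mps + ego_speed_mps) < tol_mps

# A stopped vehicle straight ahead at 30 m/s ego speed closes at -30 m/s:
print(looks_like_roadside_clutter(-30.0, 30.0))   # True -> gets filtered
# A lead car doing 25 m/s closes at only -5 m/s:
print(looks_like_roadside_clutter(-5.0, 30.0))    # False -> tracked
```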
 
Sometimes I wonder if Elon is like one of those people you see in movies (and probably real life) who make an *incredible*, statistically unheard of run at the craps table, and then start getting more and more cocky, and then suddenly the dice stop rolling their way but they bet too big and throw away all their big wins.

If so, I wonder how far we are from seeing the losing bets getting scooped up by the croupier.
Yes, maybe, yet I worry that he is becoming more like one of my clients from LONG ago (a CEO whose name you surely know). He would come up with an idea and throw a big budget at it. About three weeks later, just as I was making good progress and had already spent a ton of money, he would come back and say something like, "I changed my mind. Scrap that project. We are going to do this." And yes, you guessed it: three weeks later, the same thing.
 
I am curious about the last part. If Highway AP has already achieved above 99.999999% reliability, why doesn't Tesla remove driver supervision for highways?
There is no way Highway AP is 1:100M miles without a human there to take over. But there is a human today- so it's part of the AP system and they can take credit for that.

We know that people already tend to not use AP in places where they know it won't work well, and that if you grab the wheel or brakes before the impact occurs, then the car wasn't "on autopilot" when it collided. So this is a massive distortion to the statistics of how reliable AP is when left alone. It might be 1:100M as currently deployed, but for all we know, humans are catching 9999:10000 failures of AP and it would be 1:10,000 miles if left alone (and even that feels high based on my experience).

Also, a 1:100M "injury rate" means that you can have minor crashes quite often, relying on the rest of the car to keep people safe. There's an injury severity scale as well, so he may mean only very severe injuries. Only about 1 in 100 highway accidents in the USA leads to severe or fatal injuries, and Teslas are supposedly even safer than average. So maybe human drivers are catching a failure as often as every 100 miles.

The number we actually need is "rate of collisions on a limited-access highway that would have occurred if the driver did not intervene". That would really tell us how much humans are involved in fixing AP mistakes, and would probably make it clear why Tesla still needs to make sure an active human is paying attention every few minutes. This is why, when you are actually trying to test for L4/L5, you just measure disconnects per mile: any time a human takes over, a crash probably would have occurred. To claim 1:100M at 95% confidence, you'd need roughly 300M miles of driving with ZERO disconnects (the statistical "rule of three"; see the sketch below), or larger data sets with appropriately low disconnect rates.

All of this will apply to City "FSD" as well- if it's an L2 system, they can say they got to 1:100M miles because the human taking over is actually part of the system design.
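On the confidence claim at the end of that post: with zero failures observed in n miles, the one-sided binomial bound solves (1 - p)^n <= 1 - C, which for small p reduces to the "rule of three" (n ≈ 3/p at 95% confidence). A few lines make the required mileage concrete:

```python
import math

def miles_needed(rate_bound: float, confidence: float) -> float:
    """Zero-failure miles needed to bound the per-mile failure rate
    below `rate_bound` at a one-sided `confidence` level:
    solve (1 - rate_bound)**n <= 1 - confidence for n."""
    return math.log(1 - confidence) / math.log(1 - rate_bound)

print(f"{miles_needed(1e-8, 0.95):,.0f} miles at 95%")  # ~300 million
print(f"{miles_needed(1e-8, 0.90):,.0f} miles at 90%")  # ~230 million
```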
 
I’m sure Elon Musk, Andrej Karpathy, and the rest of the Tesla self-driving team have considered the radar question in great detail. No doubt their pure vision solution includes some new and innovative technologies.

All cameras capture and manage light. From camera input, the direction and intensity of light can be measured and interpreted, and from those measurements distances can be inferred.

A few years back, a company called Lytro came up with a camera technology that captured an image at multiple depths and let users refocus the image after the fact. Google assimilated much of the Lytro technology and team. They subsequently developed a light fields technology which includes a set of advanced capture, stitching, and rendering algorithms resulting in a 3D virtual reality render.

Vision-based technology continues to advance rapidly. Concepts and variations of binocular parallax (made concrete in the sketch below), immersive light field video, layered mesh representation, composite depth visualization, AI reconstruction of objects, and more are among the technologies Tesla may utilize in their new pure vision system.

It’s fair to say they are exploring much more than my paltry understanding of such technologies covers. The release of FSD Beta 9.0 will be enlightening. I’m optimistic about its capabilities.
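Of the concepts in that list, binocular parallax is the easiest to make concrete: two cameras a known baseline apart turn per-pixel disparity into depth via Z = f * B / d. A toy example, with invented numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole-stereo relation Z = f * B / d: the more an
    object shifts between the two views, the closer it is."""
    if disparity_px <= 0:
        return float("inf")   # no measurable shift: effectively at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, cameras 0.30 m apart.
print(depth_from_disparity(1000.0, 0.30, 10.0))   # 30.0 m away
print(depth_from_disparity(1000.0, 0.30, 100.0))  # 3.0 m away
```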
 
Have you ever driven in fog so dense that you could barely see your feet? If so, what did you use to navigate?
Yes, about 50 years ago, roughly 20 miles south of London, in a rare smog. I would not recommend it. I only needed to go about two miles on a road that I knew like the back of my hand, and it was probably the most difficult drive I have ever done. I really could only see the outlines of my feet, although my eyeballs were a full six feet away (I am fairly tall). The fog lights did help me see a few more feet. Even so, I could not go more than a slow walking pace, or I would have driven into parked cars (of which there were several).
Radar would not have had a problem with this...