
FSD very far away due to regulations?

Speed sign reading only on HW3
Not sure why this would be limited to HW3. AP1 was able to do it.
Huge disappointment when Audi turns on L3 driving in Germany for the A8, even though it's limited to slow traffic speeds. People in California rarely achieve higher speeds during commute hours.
Is L3 live for the Audi A8? What situation can it handle that EAP can't do right now?
 
Full self driving means the car will drive me door to door without driver intervention. Maybe FSD no longer means full self driving?
I'm a full self driving "feature" skeptic. I don't think Tesla will release stoplight and stop sign detection any time soon. Let's say your car stops at a stop sign and a kid walking across the street falls down in front of the car. Does the driver have enough time to take over if the car doesn't see them? There are probably a million situations where the driver would not have the reaction time to take over the vehicle in a city environment.

EAP was rolled out a little at a time.

FSD can be rolled out a little at a time also ( even though it shares features with EAP ).

For goodness' sake... they pretty much have sign and light recognition working software-wise. Now, does it require HW3? I'm not sure.
I'm not sure what Tesla hardware is making this work (HW1, HW2 or HW3). I believe at one point I saw 50+ simultaneous objects being recognized while moving, as well as lane and road recognition (seemingly by only the front camera(s)).

 
There is no situation where a human would see a kid and the car wouldn't be able to see the kid.

Of course there is.

Since humans understand the kid is still there after it leaves their immediate view, and the car does not.

Right now Teslas don't use object permanence, but there's nothing stopping them from programming the car to do that.

I mean, there's nothing stopping them from programming the car to do full L5 self driving either, other than that both things are incredibly hard to do.

The car has all of the information that it needs.

I mean, not really. The car can, when it's in its field of view, ID "a pedestrian."

There's no indication it understands specific pedestrians, though. So once one leaves its field of vision, the car might well be programmed to think it's "still there" just out of view, but it would have no way of knowing whether the next similar-sized pedestrian that came into view was the same person moving again or someone else entering the field of view from a similar direction. That's just one tiny example of how incredibly complex a problem this is to solve for all cases.

Humans do all this naturally, generally before they can even walk/talk.
 
Not sure why this would be limited to HW3. AP1 was able to do it.

Is L3 live for the Audi A8? What situation can it handle that EAP can't do right now?

AP1 had it because it was something Mobileye developed and optimized over a great number of years. Mobileye produces some incredibly efficient and capable stuff. It's why Intel paid billions to acquire them.

The way to think of Tesla's solution is that it tends to be a bit more brute force. It's not as power-efficient as other solutions, but they better control their destiny by having a mostly custom solution.

There likely wasn't enough processing power left over within the HW2 computer to do it. We could probably have sign recognition, but not 360-degree vision detection at the same time.

It's a pretty major limitation in my opinion, and one of the reasons that EAP owners might be really frustrated once HW3 computers have sign reading capabilities.

Personally I hope they use the sign reading capabilities of HW3 to improve their maps so the maps themselves have the proper speed limits, and so forth. This would mean EAP cars benefited even if they didn't have sign reading capabilities themselves.
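
To make that concrete, here's a rough sketch of how fleet sign readings could feed a shared speed-limit map. Everything in it (the names, the vote threshold, the whole scheme) is made up for illustration; it's not Tesla's actual map pipeline, just the general idea.

```python
# Hypothetical sketch: cars with sign reading report (segment_id, observed limit),
# and the map publishes the most commonly reported value per road segment.
from collections import Counter, defaultdict

class SpeedLimitMap:
    def __init__(self, min_reports=5):
        self.min_reports = min_reports
        self.reports = defaultdict(Counter)  # segment_id -> Counter of reported limits
        self.limits = {}                     # the "published" map values

    def report(self, segment_id, limit_mph):
        self.reports[segment_id][limit_mph] += 1
        limit, count = self.reports[segment_id].most_common(1)[0]
        # Only publish once enough cars agree, so one misread sign
        # doesn't corrupt the map that non-HW3 cars would rely on.
        if count >= self.min_reports:
            self.limits[segment_id] = limit

fleet_map = SpeedLimitMap()
for _ in range(6):
    fleet_map.report("US-101-seg-042", 65)   # segment name is invented
fleet_map.report("US-101-seg-042", 25)       # a single bad reading gets outvoted
print(fleet_map.limits)                      # {'US-101-seg-042': 65}
```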

FSD will of course need sign reading for things like stop signs, etc. So maybe that's why they're keeping sign recognition for it, and not AP2. I can't find where I read that though, so I could easily be mistaken on that detail.

I don't believe L3 with the A8 is live yet in Germany, but it's expected this year, although lots of people won't consider it to be truly L3 due to its really low speed limitation.

Here is about all I know about it, and why it's not being released in the states.

Why the 2019 Audi A8 won't get Level 3 Traffic Jam Pilot in the US

When thinking about L2 and L3, I wouldn't focus so much on capabilities but on liability. Even if EAP were REALLY, REALLY good and could handle a situation just like an L3 car, it's still not L3 until the car itself takes responsibility for the driving to the point where it allows you to read a book. All you have to do is take over when prompted, and it's supposed to give you adequate time to take over.
 
The problem is that level 2 autonomy requires that the driver take over when the car makes a mistake. Having stop light and stop sign detection in a level 2 system would create situations where the average driver would not respond in time. Good luck convincing a jury that the driver should have hit the brakes in the fraction of a second that the car, accelerating from a stop, would take to hit someone in a crosswalk.
You can find plenty of videos of EAP not seeing trucks, gore points, etc. but in those cases there is enough time for the driver to take corrective action.
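
As a rough back-of-the-envelope check of that take-over-time point (every number here is an assumption, purely for illustration):

```python
from math import sqrt

accel = 2.5          # m/s^2, a gentle launch from a stop (assumed)
distance = 2.0       # m to a pedestrian in the crosswalk (assumed)
reaction_time = 1.5  # s, a commonly cited driver perception-reaction time

# d = 0.5 * a * t^2  ->  t = sqrt(2 * d / a)
time_to_contact = sqrt(2 * distance / accel)
print(f"time to contact: {time_to_contact:.2f} s")                       # ~1.26 s
print("driver reacts before contact?", reaction_time < time_to_contact)  # False
```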

There is already a law.

And the law is simple.

The person in the driver's seat is responsible for EVERYTHING!!!

Who would try to convince a jury of anything? They are responsible - in every state.
 
AP1 had it because it was something Mobileye developed and optimized over a great number of years. Mobileye produces some incredibly efficient and capable stuff. It's why Intel paid billions to acquire them.

The way to think of Tesla's solution is that it tends to be a bit more brute force. It's not as power-efficient as other solutions, but they better control their destiny by having a mostly custom solution.

There likely wasn't enough processing power left over within the HW2 computer to do it. We could probably have sign recognition, but not 360-degree vision detection at the same time.

It's a pretty major limitation in my opinion, and one of the reasons that EAP owners might be really frustrated once HW3 computers have sign reading capabilities.

Personally I hope they use the sign reading capabilities of HW3 to improve their maps so the maps themselves have the proper speed limits, and so forth. This would mean EAP cars benefited even if they didn't have sign reading capabilities themselves.

FSD will of course need sign reading for things like stop signs, etc. So maybe that's why they're keeping sign recognition for it, and not AP2. I can't find where I read that though, so I could easily be mistaken on that detail.

I don't believe L3 with the A8 is live yet in Germany, but it's expected this year, although lots of people won't consider it to be truly L3 due to its really low speed limitation.

Here is about all I know about it, and why it's not being released in the states.

Why the 2019 Audi A8 won't get Level 3 Traffic Jam Pilot in the US

When thinking about L2 and L3, I wouldn't focus so much on capabilities but on liability. Even if EAP were REALLY, REALLY good and could handle a situation just like an L3 car, it's still not L3 until the car itself takes responsibility for the driving to the point where it allows you to read a book. All you have to do is take over when prompted, and it's supposed to give you adequate time to take over.

EAP is still beta. Nothing is rolled out yet.
 
Since humans understand the kid is still there after it leaves their immediate view, and the car does not.

The fact that EAP doesn't support that is irrelevant, because it's not in the typical use case for EAP.

You'd handle that with an LSTM, and do you know who wrote a great article on LSTMs? ;)
Heck, you could even possibly do it with simple object counting.
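
As a toy illustration of the "simple object counting" idea (none of this is anything Tesla actually does, and the thresholds are invented), you could keep a short memory of pedestrians that recently left the camera's view and treat them as still present for a few seconds:

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    last_seen_frame: int
    last_position: tuple  # (x, y) in an ego-centric frame, metres

class NaivePermanenceTracker:
    """Remembers detections for a while after they leave the field of view."""

    def __init__(self, keep_frames=90):  # ~3 s at 30 fps (assumed)
        self.keep_frames = keep_frames
        self.tracks = {}
        self.next_id = 0

    def update(self, frame_idx, detections):
        """detections: list of (x, y) pedestrian positions visible this frame."""
        for pos in detections:
            match = self._nearest(pos)
            if match is not None and self._close(match.last_position, pos):
                match.last_seen_frame, match.last_position = frame_idx, pos
            else:
                self.tracks[self.next_id] = Track(self.next_id, frame_idx, pos)
                self.next_id += 1
        # Forget tracks we haven't seen in a while; anything younger is
        # assumed to still exist, even if it's currently out of view.
        self.tracks = {i: t for i, t in self.tracks.items()
                       if frame_idx - t.last_seen_frame <= self.keep_frames}

    def occluded(self, frame_idx):
        """Objects we believe are still there even though we can't see them."""
        return [t for t in self.tracks.values() if t.last_seen_frame < frame_idx]

    def _nearest(self, pos):
        # Greedy nearest-neighbour association: a toy version of exactly the
        # hard problem described above; it can't tell two similar pedestrians apart.
        return min(self.tracks.values(),
                   key=lambda t: (t.last_position[0] - pos[0]) ** 2
                                 + (t.last_position[1] - pos[1]) ** 2,
                   default=None)

    @staticmethod
    def _close(a, b, gate=2.0):  # metres, assumed gating distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= gate ** 2
```

An LSTM would learn that kind of persistence from data instead of hand-coded rules, but the hand-coded version shows the idea, and also why the association step is the hard part.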

Having stop light and stop sign detection in a level 2 system would create situations where the average driver would not respond in time.
I don't know, you'd have the same amount of time as you would approaching stopped traffic now.
 
The fact that EAP doesn't support that is irrelevant, because it's not in the typical use case for EAP.

This is a discussion about FSD though. Maybe I have lost the thread but the child was brought up as an example of how adding a capability which is not 100% accurate can actually make FSD more dangerous to use - because a user might assume it can see the child and not take over control of the vehicle in time, or at all.

Adding object permanence seems like it would result in pretty frustrating behavior with the current abilities of the system. Obviously in this case we’re talking about upgraded hardware. But think about that display of cars around you with current hardware...cars popping in and out, merging into one, etc. I think there is a reason they aren’t doing object counting...

To me it seems really, really difficult to recognize objects from camera images flawlessly, even with binocular vision. Clearly they are sort of doing it - but it is working about as well as I would expect right now. The human brain is so so good! I’m sure these neural nets are magical, but my guess is that they’re still going to need more processing power than HW3.
 
This is a discussion about FSD though.
Exactly... so using what the car currently does for EAP as proof of what it can or cannot do is fallacious.

To me it seems really, really difficult to recognize objects from camera images flawlessly, even with binocular vision.

It doesn't have to be flawless, and we know from things like ImageNet that machines can outperform humans at this task. In addition, due to the frame rate of the camera, you have many tries per second to get it right as the car continues to move.
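
To put a rough number on the "many tries per second" point: if a single frame's detection is right, say, 90% of the time and you take a majority vote over about half a second of frames, the combined error rate drops by orders of magnitude, assuming frame errors are independent (which is optimistic). A quick sketch, purely illustrative:

```python
from collections import deque
from math import comb

class TemporalVote:
    """Declare a detection only when most recent frames agree."""
    def __init__(self, window=15):  # ~0.5 s at 30 fps (assumed)
        self.history = deque(maxlen=window)

    def update(self, detected_this_frame):
        self.history.append(bool(detected_this_frame))
        return sum(self.history) > len(self.history) / 2

# Rough estimate under the (optimistic) independence assumption:
p_frame_wrong = 0.1
p_vote_wrong = sum(comb(15, k) * p_frame_wrong ** k * (1 - p_frame_wrong) ** (15 - k)
                   for k in range(8, 16))
print(f"per-frame error: {p_frame_wrong:.0%}, voted error: {p_vote_wrong:.1e}")
```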
 
Exactly... so using what the car currently does for EAP as proof of what it can or cannot do is fallacious.



It doesn't have to be flawless, and we know from things like ImageNet that machines can outperform humans at this task. In addition, due to the frame rate of the camera, you have many tries per second to get it right as the car continues to move.


EAP isn't perfect...(still beta)

FSD doesn't have to be either....
 
There is already a law.

And the law is simple.

The person in the driver's seat is responsible for EVERYTHING!!!

Who would try to convince a jury of anything? They are responsible - in every state.


Once again you are factually wrong. This varies by state.

In TN, for example, the law says "While the ADS is in control of the vehicle, the manufacturer will assume liability for incidents where the ADS is at fault."

So in that case, with an automated driving system in control, the person in the driver's seat is not responsible for everything.
 
The irony of this thread title, "FSD very far away due to regulations?", struck me.

Seems the reason there is no FSD is that an actual effective, safe, working FSD system does not exist. It is not just that actual working FSD systems have been banned; FSD has not been achieved yet.

I was in Detroit and they seem to have a downtown people mover system with no drivers. That is an "FSD" system, but in a very constrained scenario.

Thus, it may seem counterintuitive, but maybe more regulation is what it is going to take to get to FSD. The FSD problem is much harder than most assume. Constraining the problem with regulations for smarter, uniform and maintained roads can greatly simplify the problem, adding a degree of acceptable safety that otherwise will take much longer to realize.
 
I wonder how the HW2.5 -> HW3 upgrades will go for FSD option owners.

Do you think they will actually upgrade HW2.5 cars when HW3 has been released to production cars?

Or what seems more likely to me is that they wouldn't upgrade HW2.5 cars until FSD is indeed released... which would likely be a long way away unless they release FSD features like sign/light detection/auto-park/etc.

I also wonder if Tesla is actually going to be able to make the 3 legit FSD with the current sensor suite and HW3. With the Y looking to have additional sensors/cameras, it makes you think.

Anyone know what the take rate is on the FSD option?
 
The irony of this thread title, "FSD very far away due to regulations?", struck me.

Seems the reason there is no FSD is that an actual effective, safe, working FSD system does not exist. It is not just that actual working FSD systems have been banned; FSD has not been achieved yet.

I was in Detroit and they seem to have a downtown people mover system with no drivers. That is an "FSD" system, but in a very constrained scenario.

Thus, it may seem counterintuitive, but maybe more regulation is what it is going to take to get to FSD. The FSD problem is much harder than most assume. Constraining the problem with regulations for smarter, uniform and maintained roads can greatly simplify the problem, adding a degree of acceptable safety that otherwise will take much longer to realize.


FSD (the full meaning of the term) is level 5 self driving.

Constraining the system (which is what L3 and L4 are: they are restricted to specific operational domains) means it's not level 5.



I wonder how the HW2.5 -> HW3 upgrades will go for FSD option owners.

Do you think they will actually upgrade HW2.5 cars when HW3 has been released to production cars?

Or what seems more likely to me is that they wouldn't upgrade HW2.5 cars until FSD is indeed released... which would likely be a long way away unless they release FSD features like sign/light detection.


The first FSD specific features (not level 5 or anything close) will be released this year.

Otherwise it wouldn't make any sense for HW3 to even exist since that's literally the only thing it's used/needed for.

I'd expect current FSD owners to be upgraded as parts supply (and service time) allows second half of this year.
 
Of course there is.

Since humans understand the kid is still there after it leaves their immediate view, and the car does not.



I mean, there's nothing stopping them from programming the car to do full L5 self driving either, other than that both things are incredibly hard to do.



I mean, not really. The car can, when it's in its field of view, ID "a pedestrian."

There's no indication it understands specific pedestrians, though. So once one leaves its field of vision, the car might well be programmed to think it's "still there" just out of view, but it would have no way of knowing whether the next similar-sized pedestrian that came into view was the same person moving again or someone else entering the field of view from a similar direction. That's just one tiny example of how incredibly complex a problem this is to solve for all cases.

Humans do all this naturally, generally before they can even walk/talk.
No one is suggesting that the car has these capabilities right now. That's why I'm saying it's a software issue and not a limitation of what the car can see. The car is able to pick up way more inputs than a human can. The challenge is getting the car to interpret the information it has collected and make the correct decision. Of course it's complex; that's why people are still figuring it out. If it were easy, it would already be on the market. But it's by no means impossible.
 
Or what seems more likely to me is that they wouldn't upgrade HW2.5 cars until FSD is indeed released... which would likely be a long way away unless they release FSD features like sign/light detection/auto-park/etc.

It would be more consumer friendly to upgrade every car whose owner paid for FSD within a very short timeline of the first feature going online (say, within the normal software rollout timeline), so I would assume the hardware upgrades would have to start well before the software ships to have a chance at getting everyone done in a reasonable time.

They have been selling FSD since 2016, and folks were way more optimistic about it back then, so there may be quite a few cars out there. I have FSD on my 3, and I will be pretty unhappy if software features I paid for are out and I have to wait 6 months to get on the service center schedule for my hardware upgrade. That may be how it will go down in real life though.
 
There is already a law.

And the law is simple.

The person in the driver's seat is responsible for EVERYTHING!!!

Who would try to convince a jury of anything? They are responsible - in every state.
Perhaps this is the regulation that Tesla is hoping to get passed? Right now in California the manufacturer is responsible for accidents caused while a vehicle is in self driving mode.
After all, there have been several cross-country drives by self-driving cars, and Elon said Tesla could do it today with a custom build of the code. They could just let customers beta test it. Haha.
 
The first FSD specific features (not level 5 or anything close) will be released this year.

Otherwise it wouldn't make any sense for HW3 to even exist since that's literally the only thing it's used/needed for.

I'd expect current FSD owners to be upgraded as parts supply (and service time) allows second half of this year.
We will see. The suspense is killing me.
I suspect that HW2 and HW3 will be identical in software for the rest of the year. The reason to make their own hardware is to get more control over a critical part of the car. The cost of actually getting the hardware built is way lower than what Nvidia is charging them, so it also improves gross margins.
 