
How is FSD ever going to work with current camera layout?

I’m familiar with the convention on levels of autonomy. Only L5 can drive in all conditions; hence, an attentive driver can be required in L1 - L4. (I do not regard L0 as “autonomy”, as this provides ZERO autonomy.) Only L5 does not require an attentive driver, Ghost Rider.

Please explain how Waymo is operating a robotaxi service at L4 with no driver, attentive or otherwise. Or are you saying they are operating at L5?
 
In technical terms, the autonomy levels are neutral on politics and legality.

For example, the brake system is assigned to braking and is not responsible for steering. That assignment of responsibility is neither legal nor political. You don't need a politician to pass a law saying that, because they can't find a red-nosed reindeer, the brakes must now glow red in the dark to light the way for the car at night and take over the responsibility that used to belong to the lights.

Same with your table of AV classification from L0 to L5.

In L0: Human driver is responsible for manual driving.

In L5: Humans are not needed for driving, and the machine takes over that driving responsibility.

If a system does not meet the responsibility assignments, it does not belong in that class, such as L5.

Waymo can indeed take over the task of driving without human drivers but only in certain locations, so it's an L4.

No one needs to go to court to claim that Waymo is legally not L5. The determination is technical: can Waymo take over the responsibility of driving, without humans, everywhere in the world, with no geofencing?

The assignment of responsibility is the factor that makes a car function anywhere from L0 to L5, not legal, not political.
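To make that concrete, here is a toy Python sketch of my own (not an official SAE table, and the intermediate levels are simplified) showing how the responsibility assignment, not law, is what sorts a system into a level, and why a driverless but geofenced service like Waymo lands at L4:

Code:
# Toy sketch: levels expressed as "who holds which driving responsibility".
# Unofficial and simplified; it only illustrates the sorting logic described above.

def classify(system_drives: bool, human_is_fallback: bool, geofenced: bool) -> int:
    """Coarse classifier: a driverless system that only works inside a geofence tops out at L4."""
    if not system_drives:
        return 0                     # human is responsible for all manual driving
    if human_is_fallback:
        return 3                     # system drives, but a human must take over on request
    return 4 if geofenced else 5     # no human needed; only L5 has no location limits

print(classify(system_drives=True, human_is_fallback=False, geofenced=True))   # 4 (Waymo-style)
print(classify(system_drives=True, human_is_fallback=False, geofenced=False))  # 5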
Tam, your buddy DarkForest used the term “responsibility” in the context of who or what corporation would be legally and financially responsible, which has nothing to do with sheer capability. I see your point, but I think you are making this about semantics when it is not. You say “responsibility”; I say “capability”.

It seems evident that you are speaking from the perspective of successful vehicle operation, whether at the hands of a human or under the direction of AI. From that perspective, I can see why you are using the term “responsibility”. I am speaking about FSD (and more broadly about AEVs) from the perspective of autonomy: whether AI successfully operates a motor vehicle or not, not whether a human or AI happens to be operating it. From my perspective, it is about capability, not responsibility, for responsibility is a human concept, not a material concept applicable to AI. AI concerns itself with achieving or failing to achieve its objective, not the consequences thereof and, therefore, the resultant responsibility.
 
  • Like
Reactions: heapmaster
Please explain how Waymo is operating a robotaxi service at L4 with no driver, attentive or otherwise. Or are you saying they are operating at L5?
Because they can’t do it anywhere. It may be six of one, half a dozen of the other, but both the restriction to a geofence and the absence of a human driver relegate Waymo to L4. If Waymo moved outside of its geofence, then a human driver would be required.
 
  • Funny
Reactions: Daniel in SD
From my perspective, it is about capability, not responsibility, for responsibility is a human concept, not a material concept applicable to AI. AI concerns itself with achieving or failing to achieve its objective, not the consequences thereof and, therefore, the resultant responsibility.
Semantics aside—from my perspective, my logic thread goes like this:

  • L5 means I can sit back and relax while the car drives itself
  • I can only sit back and relax if the car is 100.000% capable of driving itself (or knowing with plenty of lead time that it can’t in a given situation)
  • I can only know that the car is 100.000% capable of driving itself if the manufacturer proves it by saying they will take financial responsibility for its actions
  • If the manufacturer does not take full financial responsibility for the car’s actions, then it proves that they don’t trust their product and neither should I
So yes, in a perfect world, it’s only about capability, not responsibility. I just don’t think you can have one without the other.

I prefer using the term responsibility because it implies capability, whereas capability does not imply responsibility (and in fact seems to imply otherwise).

To tie it back to the thread: I don’t think the current hardware is sufficient for Tesla to ever claim financial responsibility. Therefore I will never be able to sit back and relax in my M3.
 
Because they can’t do it anywhere. It may be six of one, half a dozen of the other, but both the restriction to a geofence and the absence of a human driver relegate Waymo to L4. If Waymo moved outside of its geofence, then a human driver would be required.

OK. It’s apparent that you are determined to use your own non-standard definitions for autonomy levels, so there is really no point in continuing this discussion within this thread. We’ll just be talking in circles without a common vocabulary.
 
  • Like
Reactions: DarkForest
...semantics...

In troubleshooting and diagnosing, the word responsibility might be more relevant than capability.

Mobileye test-drives with cameras alone because it knows that radar and lidar cannot be responsible if something goes wrong. The fault lies with the cameras alone; after all, the other sensors are not there to be a factor in sensor fusion.

They do the same when test-driving radar and lidar. When something goes wrong, they can't hold the cameras responsible. The fault now lies with radar and lidar, because there are no cameras onto which to shift the responsibility.

Once the bugs are worked out of those separate systems, that's when they combine cameras, radar, and lidar together.
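A toy Python sketch of that test-one-stack-at-a-time idea (hypothetical scene data and function name, not Mobileye's actual process): once you know which stacks were in play, a missed detection can be pinned on a specific subsystem.

Code:
# Illustrative only: attribute a missed detection to whichever stack failed to report it.
def attribute_miss(ground_truth, camera_hits, radar_lidar_hits):
    report = {}
    for obj in ground_truth:
        missed_by = []
        if obj not in camera_hits:
            missed_by.append("cameras")
        if obj not in radar_lidar_hits:
            missed_by.append("radar/lidar")
        report[obj] = missed_by or ["no one"]
    return report

# The stalled truck was seen by radar/lidar but not by the cameras,
# so responsibility for that miss lies with the camera stack alone.
print(attribute_miss({"stalled_truck", "pedestrian"},
                     camera_hits={"pedestrian"},
                     radar_lidar_hits={"stalled_truck", "pedestrian"}))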

Thus, when an accident happens, we need to find out if the machine was responsible or if a human passenger sabotaged the machine.

Thus, if a company claims it is L5 and its system works well in one town but not another, it is no longer L5 because of the location limitation; in L5, humans are not responsible for driving in any location.

It is very awkward to say, "In L5, humans are not 'capable' of driving in all locations."
 
Semantics aside—from my perspective, my logic thread goes like this:

  • L5 means I can sit back and relax while the car drives itself
  • I can only sit back and relax if the car is 100.000% capable of driving itself (or knowing with plenty of lead time that it can’t in a given situation)
  • I can only know that the car is 100.000% capable of driving itself if the manufacturer proves it by saying they will take financial responsibility for its actions
  • If the manufacturer does not take full financial responsibility for the car’s actions, then it proves that they don’t trust their product and neither should I
So yes, in a perfect world, it’s only about capability, not responsibility. I just don’t think you can have one without the other.

I prefer using the term responsibility because it implies capability, whereas capability does not imply responsibility (and in fact seems to imply otherwise).

To tie it back to the thread: I don’t think the current hardware is sufficient for Tesla to ever claim financial responsibility. Therefore I will never be able to sit back and relax in my M3.
I really appreciate your lengthy, well thought out response. Some great talking points to consider.

I'd like to first address your last comment about the insufficiency of the current hardware. That is interesting, as I am by no means an expert in this area. Why do you believe that the current hardware is insufficient? Do you disagree with Elon's assertion that a vision-only system is the way to go?

How do you feel about the recent removal of USS? (NOTE: I took delivery of my 2023 Model Y in NOV2022, and it does not have USS. I know a lot of TESLA owners/fans were upset about that, but I don't know what I'm missing, so I am not concerned about it.)

Do you think it was a mistake to strip away radar? What about this new radar that some are buzzing about? I heard it is supposed to produce a much clearer image, as it scans at a higher frequency (or something like that).

Do you believe that Elon is dead wrong about LiDAR? He has been pretty staunch about that, citing its high cost and how unnecessary he believes it to be.
 
I really appreciate your lengthy, well thought out response. Some great talking points to consider.

I'd like to first address your last comment about the insufficiency of the current hardware. That is interesting, as I am by no means an expert in this area. Why do you believe that the current hardware is insufficient? Do you disagree with Elon's assertion that a vision-only system is the way to go?

How do you feel about the recent removal of USS? (NOTE: I took delivery of my 2023 Model Y in NOV2022, and it does not have USS. I know a lot of TESLA owners/fans were upset about that, but I don't know what I'm missing, so I am not concerned about it.)

Do you think it was a mistake to strip away radar? What about this new radar that some are buzzing about? I heard it is supposed to produce a much clearer image, as it scans at a higher frequency (or something like that).

Do you believe that Elon is dead wrong about LiDAR? He has been pretty staunch about that, citing its high cost and how unnecessary he believes it to be.
Semantics aside—from my perspective, my logic thread goes like this:

  • L5 means I can sit back and relax while the car drives itself
  • I can only sit back and relax if the car is 100.000% capable of driving itself (or knowing with plenty of lead time that it can’t in a given situation)
  • I can only know that the car is 100.000% capable of driving itself if the manufacturer proves it by saying they will take financial responsibility for its actions
  • If the manufacturer does not take full financial responsibility for the car’s actions, then it proves that they don’t trust their product and neither should I
So yes, in a perfect world, it’s only about capability, not responsibility. I just don’t think you can have one without the other.

I prefer using the term responsibility because it implies capability, whereas capability does not imply responsibility (and in fact seems to imply otherwise).

To tie it back to the thread: I don’t think the current hardware is sufficient for Tesla to ever claim financial responsibility. Therefore I will never be able to sit back and relax in my M3.
I appreciate the additional clarification. Some great thoughts!

I don't see capability as belonging to a perfect world; rather, I see responsibility that way, so I guess we are inverse in our views there. When one considers how the literal rubber meets the road, there are just too many external factors for any company (even TESLA) to take financial responsibility for. I assume we are talking about car wrecks and related deaths, etc. For example, when one gets into a car wreck, insurance companies (sometimes) come out and determine the percentage at fault. Sometimes they determine that you were 51% or more at fault, and then you are financially responsible. Not all wrecks are that simple; some involve ten or more parties. Doling out blame and responsibility is a tough business. One thing seems clear: capability and responsibility are inextricably linked, if not symbiotic.

In today's terms, you cannot address the police after a wreck that occurred while Autopilot was engaged in your TESLA and think that you're going to blame it on TESLA. I suppose it begs the question: will TESLA drivers (or would they be merely passengers?) always be responsible for the operation of the TESLA, even when L5 autonomy is achieved? I know that is the point you've been driving at, but surely passengers in a driverless TESLA with no steering wheel and pedals could not realistically be held to account. Yet certainly someone must be held responsible, right??? Will we ever get to a world where the incidence of human fatalities resulting from automobile collisions is so LOW that no one is held liable??? That idea is not too far-fetched, as evidenced by my previous analogy about the U.S. Congress passing legislation that absolves the makers of the COVID-19 shots from any and all liability, so it's not as if there is no established precedent for corporate absolution.

Keeping things more lighthearted, I will say that this discussion reminds me of that scene from "Tommy Boy" where "Tommy Callahan, Jr." is trying to convince his stoic customer, one Colin Fox, to buy his product and not his competitor's, but the customer goes on about the guarantee not being printed "ON-THE-BOX". Tommy invokes, or tries to invoke, some of his late father's sales mojo but ends up misquoting his dad. "Tom Callahan, Sr.", in a business deal shortly before his death, said to a customer with whom he was negotiating at Sr.'s own wedding, "You can stick your head up a bull's ass if you wanna get a good look at a T-bone, but wouldn't you rather just take the butcher's word for it?" In a word: NO. You have made it clear that you need to see the "guarantee" printed "ON-THE-BOX" and do not wish to take the proverbial butcher's word for it. Fair enough.

I cannot deny that it would be a HUGE vote of confidence on TESLA's part to assume legal and financial responsibility, but it is one, I think, that will never happen. Moreover, it is one that is not necessary (except in your case, obviously); for most people, the statistical facts propagated by TESLA marketing its FSD would speak for themselves, whether TESLA could boast ZERO fatalities or even 80% fewer fatalities than with human-operated vehicles. People will eventually flock to FSD in droves. In the end, I think it will come down to what is good for business and how much society will tolerate, in terms of where this road leads.

I think the rhetorical question that looms large is when TESLA owners will be able to make money from their TESLA "robo-taxi". At this point, I wouldn't be surprised if that day comes well past 2030. It is hardly "right around the corner", despite Elon famously leading his customers to believe it is.
 
I really appreciate your lengthy, well thought out response. Some great talking points to consider.

I'd like to first address your last comment about the insufficiency of the current hardware. That is interesting, as I am by no means an expert in this area. Why do you believe that the current hardware is insufficient? Do you disagree with Elon's assertion that a vision-only system is the way to go?
I believe it’s possible for a vision-only system to work. I just have no idea how we’d achieve it.

I can buy into the whole “humans only have two eyes and a brain” analogy. However, humans also learn on the spot, and we have other senses to help us calibrate our vision as we learn about new object types. We can also see way further than the current cameras can, including piecing together information from incomplete data (e.g., seeing what traffic is doing far ahead without having to recognize individual vehicles).

Every single time I drive to work, I learn something new about the route, the kind of drivers in my area, how traffic is at a specific time of day, how to drive in new weather situations, how to drive through new sorts of obstacles, how to deal with poor drivers, etc. And all of that is specific to my area. The current technique of learning general info at infrequent intervals offsite is not nearly as effective.
How do you feel about the recent removal of USS? (NOTE: I took delivery of my 2023 Model Y in NOV2022, and it does not have USS. I know a lot of TESLA owners/fans were upset about that, but I don't know what I'm missing, so I am not concerned about it.)
I have a 2022 Model 3 with USS, and it’s amazing. I have mixed feelings about its removal. Since Musk drives a Tesla with software several versions ahead of what we have, I can believe he saw the system as capable of replacing USS. On my M3, I often see the USS distance match what Vision is saying for vehicles around me.

However, I have no idea how that’s going to work for other situations like parking in a home garage. Vision absolutely sucks in that situation, and it’s not surprising. It knows how big a vehicle is and can extrapolate its distance based on how many pixels it takes up on camera. But how does it do that with a wall of arbitrary width and length?
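The vehicle case works because of the pinhole-camera relation: if the real width of the object is known, its apparent width in pixels gives you its distance. A quick Python sketch with made-up numbers (an assumed focal length, not Tesla's actual camera parameters) shows why a blank wall of unknown size breaks that trick:

Code:
# Illustrative pinhole-camera math; the focal length here is an assumed value.
FOCAL_LENGTH_PX = 1400   # assumed focal length, in pixels

def distance_m(real_width_m, width_in_pixels):
    # distance = focal_length * real_width / apparent_width
    return FOCAL_LENGTH_PX * real_width_m / width_in_pixels

# A car of roughly known width (~1.9 m) that spans 200 px is about 13.3 m away:
print(round(distance_m(1.9, 200), 1))

# A featureless garage wall has no known real-world width, so the same equation
# is underdetermined; you need another cue (parallax, motion, USS, radar, ...).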
Do you think it was a mistake to strip away radar? What about this new radar that some are buzzing about? I heard it is supposed to produce a much clearer image, as it scans at a higher frequency (or something like that).
I don’t think radar is necessary, but I do miss the performance of my previous car that did have radar. I could set the cruise control speed up to 112 mph instead of just 85 mph, and I never once experienced phantom braking in 6 years of heavy usage.
Do you believe that Elon is dead wrong about LiDAR? He has been pretty staunch about that, citing its high cost and how unnecessary he believes it to be.
I’m not sure.
 
Please explain how Waymo is operating a robotaxi service at L4 with no driver, attentive or otherwise. Or are you saying they are operating at L5?
My issue with the convention on autonomy is that it does not treat all “six” levels of autonomy as equal. (Again, I object to the very existence of Level 0, as that is 0% autonomous, and it is a chart of autonomy.)

All levels of autonomy should be regarded in terms of the extent to which a human driver is required, in terms of driving ANYWHERE. Waymo is definitely an L4 solution, but that is precisely my point. L4 is merely 100% of a lesser percentage, meaning that Waymo may be fully autonomous at L4, but only within a certain geography. In order for L4 Waymo to drive anywhere, which is the ultimate measurement and the only one that matters, a human driver is required. Therefore, Waymo may be 100% effective at driving only 1% of space (or whatever small percentage you want to come up with), and a human driver is required to handle the inverse.

The assertion that a human driver is not required for L4 as well as L5 is FALSE. A human driver is not required for L4 only insofar as the vehicle is driving within a predetermined geography; a human driver is required to drive anywhere outside of that zone. The convention on autonomy is supposed to measure the degree to which a vehicle can drive itself, not merely in a vacuum but in the real world. In the real world, humans drive vehicles in all sorts of places under all sorts of conditions.
(SEE PHOTO)

Perhaps the future of autonomy and TaaS (transport-as-a-service) will be boasted in terms of its coverage, not at all unlike how cell phone companies today boast about their network speed and coverage, but that is another discussion altogether.
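As a toy bit of Python arithmetic for that "100% of a lesser %" point (the numbers are illustrative only):

Code:
# Fraction of all driving a system can do with no human at all:
# perfect driving inside a geofence still leaves a human needed everywhere else.
def anywhere_autonomy(reliability_in_odd, odd_coverage):
    return reliability_in_odd * odd_coverage

print(anywhere_autonomy(1.00, 0.01))   # 0.01 -> flawless inside 1% of the map (L4-style)
print(anywhere_autonomy(1.00, 1.00))   # 1.0  -> no geofence at all (the L5 case)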
 

Attachments

  • A3606954-BAAC-4033-9FFD-797C899A3F96.jpeg