
Is current sensor suite enough to reach FSD? (out of main)

Tesla won't have to wait for FSD to handle all weather conditions everywhere in the world before rolling out the Tesla Network and generating significant value.

With the current FSD computer they should be able to drive better than a human under most conditions, and that would enable the Tesla Network in enough markets to consume more cars than they can make.

Those other conditions are the nines that will keep coming; no FSD will reach 100% any time soon.

For example, in some less developed cities in China and India, you almost need to understand the local culture and language just to drive through the city; that probably needs AGI to happen first.

At that point, FSD would already be worth so much (or have become a subscription) that replacing the FSD computer every few years for new capabilities would be no issue at all.

I believe the current sensor suite is sufficient given enough compute, and existing cars won't need significant retrofitting, so no problem at all.
 
Not sure if the sensor suite is sufficient for FSD; Tesla seems to think it is, and I have no knowledge or experience to refute that.

That said, since the issue of phantom braking has not been solved for many of us, I think some part of the car - software, sensors, compute power, memory, whatever - is indeed insufficient for FSD. Now two years into owning an FSD vehicle, I no longer delude myself into believing I'll enjoy it, at least not in this car. I'd love to be proven wrong and have my EAP/FSD expense turn out to actually deliver.
 
The 4D upgrade should solve the phantom braking problem.
 
When people start discussing droplets blocking rear-view cameras, even if it's the weekend, you know the flock has strayed too far from the path. You may continue here: Is current sensor suite enough to reach FSD? (out of main) (where 40+ posts have found a new home).

Well, what is relevant to this thread, I believe, is whether Tesla will have to do another hardware retrofit - this one vastly more expensive - to vehicles in which FSD has been purchased.

Many jurisdictions will insist on a certain level of redundancy in any autonomous driving vehicle they approve. Tesla is aware of this (they stated on Autonomy Day that they have two identical FSD chips in each vehicle for precisely that reason, also see FSD Chip - Tesla - WikiChip), yet I wonder if the entire system is fully redundant.

That article I linked above says: "Additionally, half of the cameras sit on one power supply and the other half sit on the second power supply (note that the video input itself is received by both chips). The redundancy is designed to ensure that in the case of a component such as a camera stream or power supply or some other IC on the board going bad, the full system can continue to operate normally."

But can the car fully function autonomously if half the cameras fail (or are obscured)? I haven't compared the video outputs myself, but, for instance, are the cameras set up such that their fields of view (FOVs) overlap completely - so that half the cameras could fail due to a power supply failure and the remaining cameras would fill in completely, with the software pre-programmed / trained to handle that as well?
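To make the question concrete, here's a rough sketch of the kind of coverage check I mean (the FOV numbers are made up, not Tesla's actual camera specs): does the bank of cameras that stays powered still cover every azimuth angle on its own?

```python
# Rough sketch with hypothetical FOV numbers (not Tesla's actual specs):
# would one power-supply bank of cameras, on its own, still see 360 degrees
# of azimuth if the other bank dropped out?

def covered_intervals(cameras):
    """Return (start, end) azimuth intervals in degrees for the active cameras."""
    intervals = []
    for center, fov in cameras:
        start = (center - fov / 2) % 360
        end = (center + fov / 2) % 360
        if start <= end:
            intervals.append((start, end))
        else:  # interval wraps around 0 degrees
            intervals.append((start, 360))
            intervals.append((0, end))
    return sorted(intervals)

def full_360_coverage(cameras):
    """True if the union of camera FOVs covers every azimuth angle."""
    reach = 0
    for start, end in covered_intervals(cameras):
        if start > reach:  # a gap no remaining camera looks at
            return False
        reach = max(reach, end)
    return reach >= 360

# Hypothetical surviving bank: (azimuth center, horizontal FOV) in degrees
bank_a = [(0, 120), (90, 80), (180, 120), (270, 80)]
print(full_360_coverage(bank_a))  # True for these made-up numbers
```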
 

Continuous improvement. There will always be hardware upgrades, software improvements, new requirements, and new mandates. It will never be “perfect” at any particular point in time, but will be good enough for most use cases. Can the current hardware achieve “good enough?” Maybe - let’s wait and see how much better the new software is that Elon mentioned in the Q2 earnings call. Multiple redundancies important? Absolutely, but that can be achieved incrementally - I trust Tesla is committed to continuous improvement and will deliver the most they can within the capabilities of the current hardware. We have seen that from inception and we have seen hardware upgrades along the way when needed. No reason to expect anything different in the future.
 
There's some overlap between some cameras, but the camera fields of view do not overlap to nearly the extent that you could lose half the cameras and still have L5 driving (if you ever had it at all).

It's why I say that eventually the choice for the company will be spending money on FSD refunds or spending money on additional hardware upgrades.

Either way they'll need to spend some money.

It's also why (well, one of two major reasons why - the other being that it made it easier to recognize more of the future FSD revenue) they changed the wording on FSD purchases in March 2019.

Prior to then, customers were promised features that were, at minimum, level 4 autonomy.

After that, customers were only promised a specific, enumerated set of features, at no greater than level 2 autonomy.

Thus they only owe refunds to the pre-March-2019 buyers, and only if they find they can't manage it with the existing hardware and it's not worth spending the money to add hardware to those cars.

Given that the majority of Model 3 owners either already are, or soon will be, post-3/2019 buyers, that all Model Y owners will be, AND the fact that the pre-3/2019 folks are the ones who only paid $3-5k for FSD instead of $7-8k... that all means the cost to the company should be relatively small if they go the refund route for those folks.


Small enough that I don't think it would, in and of itself, have a substantial impact on the share price from a purely financial perspective... especially since they've still got some time before they'd have to "admit" they need to do it.

But it's more likely to have some significant impact on share price from a confidence perspective.


One thing to look for is what sensor package the PRODUCTION version of the Cybertruck comes with.

As I mentioned, they've already made at least one significant change to the Model Y sensor suite (a heater on the front radar)... if the Cybertruck has even more changes/improvements/redundancies, then it becomes even more likely you'll see what I describe above for the older 2.0/2.5 sensor suite cars.

On the other hand if they can announce the refunds to older buyers at the SAME TIME they announce "Hey, L4 is done and working right now on the CT sensor package" that'd mitigate things tremendously.


(Or, if you want the SUPER optimistic perspective: the revenue/profits from having it working on the Cybertruck sensor suite will be so massive that the company can easily afford the more-than-issuing-refunds expense of just retrofitting all the older S/3/X/Y cars in the fleet, and they will do so at no charge.)
 
For a given neural network, inference latency and memory usage are constant, i.e. the computational characteristics are invariant regardless of the inputs.

Assuming we agree on neural network inference computation characteristics, I interpret what you said to mean that a neural network capable of handling FSD is larger than what can fit inside AP3 memory. Why do you think so? And do you have a rough estimate of the minimum neural network size needed for FSD?
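To illustrate what I mean by fixed computational characteristics, here's a minimal sketch with a toy network (nothing like Tesla's actual architecture): the parameter count, and therefore the weight memory, is set entirely by the architecture, and a forward pass does the same amount of work for any input of the expected size.

```python
import torch
import torch.nn as nn

# Toy network, purely illustrative - not Tesla's architecture.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 60 * 80, 128),
    nn.Linear(128, 10),
)

# Weight memory is determined entirely by the architecture.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params:,} (~{n_params * 4 / 1e6:.1f} MB at fp32)")

# For any input of the fixed size the network expects, the forward pass
# performs the same operations; the content of the image does not change
# the compute or the activation memory required.
x = torch.randn(1, 3, 240, 320)  # hypothetical downscaled camera frame
y = model(x)
print(y.shape)
```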


Yes, I agree with what you wrote.

Although I do work in deep learning, I generally work with physiological time signals, so the size of the networks is much smaller than what I imagine Tesla would need, and I don't really have to focus on memory limits. So I don't have the direct experience to make a good estimate of what size of network it would take, but I think we can look at some historical priors to get a sense...

The null hypothesis here should be that the computational / memory capacity is not sufficient until proven otherwise. Companies such as Waymo (I've talked with their engineers) did not focus more heavily on camera-only vision precisely because of the vast computational load it would require - much, much more than LIDAR, obviously. Of course, they made those decisions before the growth of deep learning.

In neuroscience, it is well known how amazing and intricate the human vision system is. Not really the eyes, but the brain. The amount of processing being done is insane, and HW3 is a joke compared to it. I forget the numbers, but it's orders of magnitude more.

But of course maybe advanced neural networks can be optimized and pruned to achieve the desired accuracy with much less compute than humans. But where is that threshold reached?
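For what it's worth, here's a minimal sketch of the simplest version of that idea, magnitude pruning (illustrative only; real pipelines retrain after pruning and need sparse-aware kernels or hardware to actually save compute):

```python
import numpy as np

def magnitude_prune(weights, keep_fraction=0.25):
    """Keep only the largest-magnitude weights; zero out the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    threshold = np.sort(flat)[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

w = np.random.randn(256, 256)  # a stand-in weight matrix
w_pruned = magnitude_prune(w, keep_fraction=0.25)
print("nonzero before:", np.count_nonzero(w),
      "after:", np.count_nonzero(w_pruned))
```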

We know that on HW2.0 (I believe) Tesla wasn't even using the raw images for processing in the neural network. They were downsampling the images (by a factor of at least 2, maybe 4) before feeding them in. I knew at the time that it was a joke to think they were going to have FSD on that sort of hardware.
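As a rough illustration of what that downsampling buys (the frame size here is hypothetical, not the actual camera resolution): halving each dimension cuts the pixel count, and roughly the convolution compute, by 4x, and a 4x downsample cuts it by 16x - at the cost of the fine detail you need for distant or small objects.

```python
import numpy as np

# Stand-in camera frame with a made-up resolution.
frame = np.random.randint(0, 256, size=(960, 1280, 3), dtype=np.uint8)

down2 = frame[::2, ::2]  # naive 2x downsample by striding -> 480 x 640
down4 = frame[::4, ::4]  # naive 4x downsample -> 240 x 320

print(frame.shape, down2.shape, down4.shape)
# Pixel count (and conv FLOPs, roughly) drops 4x and 16x respectively.
```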

Well, now we know they at least don't have to do that. Read this post (literally the most "informative" post ever in the TMC Neural Networks thread).

But here it seems they are still feeding in only a few time snapshots at a time. I can tell you this will absolutely not do for sufficient FSD accuracy. I don't even know if humans could take just a few snapshots over 2 or 3 seconds and produce a robust enough object prediction.

No, to achieve good enough FSD, Tesla is going to have to feed in "video". Like, information over 3-6 seconds. At what sampling rate and resolution I have no idea, but that is going to be needed to improve static object detection to a sufficient level. I am confident of that.
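To put rough numbers on the difference (the frame rate, clip length, and resolution below are guesses for illustration, not anything Tesla has published):

```python
import numpy as np

fps = 12
clip_seconds = 3
frames = [np.zeros((240, 320, 3), dtype=np.uint8)
          for _ in range(fps * clip_seconds)]

clip = np.stack(frames, axis=0)  # shape: (36, 240, 320, 3)
print(clip.shape)

# A temporal model (3D convs, an RNN, a transformer over frames, etc.)
# consumes the whole clip, so memory and compute scale with the number of
# frames - one reason video input is far more demanding than snapshots.
```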

Now, the 4 dimensional labeling is presumably operating along those lines. This in itself is a deep learning task that must take some amount of video to reconstruct the 3-D space (VIDAR) before feeding that reconstruction into the other modules (perception, planning, etc...).
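A toy sketch of what accumulating structure over time might look like (the poses and points are made up, purely to illustrate carrying a reconstruction across frames instead of re-detecting everything each frame):

```python
import numpy as np

def to_world(points_cam, pose):
    """Transform Nx3 camera-frame points into the world frame.
    pose is a 4x4 homogeneous transform (rotation + translation)."""
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose @ homogeneous.T).T[:, :3]

world_map = []
for t in range(5):
    pose = np.eye(4)
    pose[0, 3] = 2.0 * t  # ego motion: 2 m forward per frame (hypothetical)
    # Two static points, seen at decreasing range as the car approaches them
    # (hypothetical per-frame "detections").
    points_cam = np.array([[10.0 - 2.0 * t, -1.5, 0.3],
                           [12.0 - 2.0 * t,  2.0, 0.5]])
    world_map.append(to_world(points_cam, pose))

world_map = np.vstack(world_map)
# Once ego motion is applied, every frame's detections land on the same two
# world-frame points, so the reconstruction persists across time.
print(np.unique(np.round(world_map, 3), axis=0))
```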

And I truly believe that Tesla's approach can work. The camera resolution is good enough, I think. The amount of data they can collect is assuredly good enough. But at what point can they make each component accurate enough, and how many images will they need to feed in? How good are their computer vision skills (they likely need to make the video processing as efficient as possible before feeding it into the neural nets)?

In some ways Mobileye has more domain knowledge, and if it were down to that alone I would trust them more to make their models the most memory-efficient. But Tesla's data advantage may lead them to the best approaches as well.

All of that to say, I am bullish on the approach in the long term. It's generalized and supported by a lot of data.

But there is NO basis to think HW 3.0 is enough to handle all of this. Definitely not Elon's words. If Karpathy says it confidently, I would be more optimistic.
 
But we all also know that requires major LIDAR sensing, so you understand my dilemma?

Perhaps if you explain why it needs major LIDAR sensing, and hence why Elon is wrong, we will understand.

Tesla Has Published A Patent 'Predicting Three-Dimensional Features For Autonomous Driving' : teslainvestorsclub

This patent is nothing new; we saw it demonstrated on Autonomy Day...

Currently we don't even know whether Autopilot is using the full suite of sensors and stitching them into a 3D view... the current sensors have some level of redundancy, as many objects will appear in multiple cameras...

So 4D is just images appearing in multiple cameras over time as a video feed, not a still image. Multiple cameras and perspective can help with distance estimation.
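As a back-of-envelope illustration of getting distance from two overlapping views (the focal length, baseline, and disparity below are made up, not Tesla's camera geometry):

```python
# Depth from two overlapping views via similar triangles:
# depth = focal_length * baseline / disparity.

focal_px = 1000.0    # focal length in pixels (assumed)
baseline_m = 0.20    # separation between the two camera centers (assumed)
disparity_px = 8.0   # pixel shift of the same object between the two views

depth_m = focal_px * baseline_m / disparity_px
print(f"estimated distance: {depth_m:.1f} m")  # 25.0 m for these numbers
```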

As far as I know, LIDAR tells you the distance to objects, not what the objects are, so the main advantage may be that LIDAR distinguishes between a shadow and a real object? Human eyes have no problem recognizing stopped trucks... we are rarely fooled by shadows...

Regardless of whether LIDAR is ultimately needed, FSD requires vision to be fully solved and a large dataset to train the NN.
It makes no sense to have LIDAR in every car; no one can afford that... so data gathering and identifying edge cases is all camera-based.

Elon being wrong about LIDAR is possible, but so far we have no real basis to conclude he is wrong until we establish the final limitations of the current solution. There is no question at all that he understands LIDAR and what it can do.
 
Having owned a 2016 Model S with FSD and now a 2020 Model X with FSD, here's my bet. Neither car will ever drive a mile with true Level 4 or 5 FSD.

Yes, this thread is comical, with people defending the idea that the hardware on the car is going to deliver FSD. I have base-level Autopilot, and seeing what it can do, I would never spend the money on FSD at this point. It gets confused going over the brick inlay in my neighborhood. It starts randomly swerving. It definitely helped on my recent 200-mile-each-way trips, to the point that I could drive them all myself without much trouble, but it's nowhere near full self-driving no matter how many gimmicks Tesla wants to add.
 
Backing up in a parking lot is done at 1 mph. Tesla's v2.0 hardware suite (Oct 2016+) has 12 ultrasonic sensors, with a range of dozens of feet, distributed around the car. An obscured rear camera view is not a safety issue when there is already another independent data stream usable for parking / backing.
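A trivial sketch of what that independent data stream looks like in practice (the sensor names, readings, and threshold are hypothetical):

```python
# Stop backing up if any rear-facing ultrasonic sensor reports an obstacle
# closer than a safety margin; no camera input involved.

rear_sensors_m = {
    "rear_left": 2.4,
    "rear_center_left": 1.9,
    "rear_center_right": 0.4,
    "rear_right": 2.1,
}

STOP_MARGIN_M = 0.5

def safe_to_back(readings, margin=STOP_MARGIN_M):
    return all(distance > margin for distance in readings.values())

print(safe_to_back(rear_sensors_m))  # False: something is 0.4 m behind the car
```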

The following was a sensor suite proposed in a 2013 IEEE.org article. Notice the updates Tesla made in Oct 2016? Can you think of why a rear-facing radar was deleted from this spec?

[Image: proposed sensor suite diagram from the 2013 IEEE article]


Elon has already stated that there is a v2 of the FSD computer in the product development pipeline. Its expanded capabilities will help tighten the 'training loop' between data collection / analysis and adapting the neural net to learn from experience.

None of this requires new sensors. But it is the way Tesla plans to improve FSD.
So what is it that provides Waymo and Cruise such good autonomy? They do have different hardware....
 
I don't know if the car needs more sensors per se, but I do know that it will need some way of auto-cleaning all the sensors for it to work in a Canadian winter.

For the forward-facing cameras on the windscreen... the car's wipers could be adapted / deployed to clean them... the car can do that when it senses the need...

Forward cameras plus some working right/left and rear cameras are required... they might not need all the cameras, just a reasonable subset... and software can correct some distorted images...
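For example, if the lens intrinsics are known from calibration, undistorting an image in software is a standard operation (the matrix and coefficients below are placeholders, not real calibration values):

```python
import numpy as np
import cv2

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])        # intrinsic matrix (assumed)
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # distortion coefficients (assumed)

frame = np.zeros((960, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame
undistorted = cv2.undistort(frame, K, dist)
print(undistorted.shape)
```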

My hunch: none of us can see how Tesla intends to solve this problem, but they have a plan...
 