HW4 sensor suite and hi-res camera updates?

Most of us who work with machine learning (AI) know that it's the edge cases that get you. I'm working on a project right now that took me one day to build, and we're on month three of trying to work out a single edge case that's preventing us from releasing the product.

Tesla got to 95% of FSD pretty quickly (a few years), but that last 5% is edge cases, and those could take a long, long, long, long time to work out, if they even can be.

This is the issue with machine learning in general, and it's why you'll never see AI "take over" the way so many people claim. Even ChatGPT is hilariously bad compared to a human at writing software, yet it's all over the news and YouTube as if it's going to put every software developer out of a job this year. That won't ever happen.

Tesla is banking on "vision" (cameras + machine learning) to solve every case; I just don't think that's possible. I think it's admirable to try.

I'd guarantee the reason FSD is $15,000 now is to limit the number of people using it, and hence limit liability exposure. Lawyers were involved in that price increase.

You would have to create and raise an AI that has all the sensory inputs of a human, put it through all the interactions a human has from birth up to learning how to drive, and then, at about age 25, it would probably be good enough. As humans, we take for granted how much we know.

Computers are slow when cross-referencing complex information; humans are fast. A thought exercise: walk into a kitchen you've never been in before, and within about 80 ms you'll recognize almost everything in the room at a glance and know that the stove is hot, the knife is sharp, the floor is wet and therefore slippery, etc. A machine learning model has to be trained on all of that, and that's not an insignificant task.

I'd love to be proven wrong by Tesla; they have mad skills working on these problems.
 
Do you see Tesla taking ownership of the DDT (dynamic driving task) and liability for accidents in vehicles that are only 2-3x safer than a human?
If Tesla wants to move the market with a wide deployment of robotaxis sooner, they could do so with the HW3 fleet even if it is not as safe as the presumably much smaller HW4 fleet in the near/mid-term. It'll be interesting to see how Tesla will take or split liability for robotaxi accidents, because if they do take responsibility, either as the manufacturer or as the insurance provider, it would probably be worthwhile to do "preventive care" by upgrading a robotaxi vehicle to HW4+ to reduce the risk of an accident.

A separate issue, not directly about safety, is the comfort / timeliness / annoyance-to-others aspects of a robotaxi. Presumably each of those can still be improved on HW3 beyond what we currently experience with FSD Beta, but they can probably be improved even more with HW4+. So if there is a mixed HW3/HW4+ robotaxi fleet, would consumers give preference to the one that's more comfortable? If that comes at a price premium, it could also help justify upgrading the robotaxi fleet to newer hardware.

This all still assumes regulators allow robotaxis that are "only" 2-3x safer than a human, or even 4-6x. Is it reasonable to allow "1x safe" average humans on the road while excluding safer vehicles? Potentially regulators and/or lawmakers decide safety isn't enough and there need to be other quality aspects too.
 
Most of us who work with machine learning (AI) know that it's the edge cases that get you. I'm working on a project right now that took me one day to build, and we're on month three of trying to work out a single edge case that's preventing us from releasing the product.

Tesla got to 95% of FSD pretty quickly (a few years), but that last 5% is edge cases, and those could take a long, long, long, long time to work out, if they even can be.

This is the issue with machine learning in general, and it's why you'll never see AI "take over" the way so many people claim. Even ChatGPT is hilariously bad compared to a human at writing software, yet it's all over the news and YouTube as if it's going to put every software developer out of a job this year. That won't ever happen.

Tesla is banking on "vision" (cameras + machine learning) to solve every case; I just don't think that's possible. I think it's admirable to try.

I'd guarantee the reason FSD is $15,000 now is to limit the number of people using it, and hence limit liability exposure. Lawyers were involved in that price increase.

You would have to create and raise an AI that has all the sensory inputs of a human, put it through all the interactions a human has from birth up to learning how to drive, and then, at about age 25, it would probably be good enough. As humans, we take for granted how much we know.

Computers are slow when cross-referencing complex information; humans are fast. A thought exercise: walk into a kitchen you've never been in before, and within about 80 ms you'll recognize almost everything in the room at a glance and know that the stove is hot, the knife is sharp, the floor is wet and therefore slippery, etc. A machine learning model has to be trained on all of that, and that's not an insignificant task.

I'd love to be proven wrong by Tesla; they have mad skills working on these problems.
I totally agree with this. I mean, if what OP says is really true, that "The models are never explicitly told about traffic-lights or stop signs, and yet they have learned to obey them", then Tesla should really be able to make FSD good; they have a lot more resources and talent.
 
"The models are never explicitly told about traffic-lights or stop signs, and yet they have learned to obey them"
It's funny how we anthropomorphize machine learning ... it's just 1's and 0's ...

But machine learning doesn't just learn on its own; it's basically "punished" into learning the right way, and eventually it gets so good at knowing what not to do that it seems almost human ... almost ... until it encounters a deer that got tangled in a green traffic light and is dragging it across a busy intersection ... then all bets are off :D Ridiculous example, but so are most edge cases ...

I'm not sure what the OP meant by that, but no, your car did not figure out what a traffic light is on its own. Tesla has (had?) many hundreds of people "labeling" objects for its machine learning models from the sample videos the fleet has been uploading for years.
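To make that concrete, here's a toy sketch (nothing like Tesla's actual pipeline, and all names and data here are made up) of what that "punishment" looks like in practice: humans provide the labels, the model guesses, and a loss function penalizes the wrong guesses until the weights stop producing them.

```python
# Toy sketch of supervised learning, not Tesla's real stack.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical human-labeled data: feature vectors for image crops,
# label 1 = "traffic light", 0 = "not a traffic light".
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # stand-in for human labels

w = np.zeros(8)   # model weights, initially knowing nothing
b = 0.0
lr = 0.1

for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # the model's current guesses
    # Cross-entropy loss: the "punishment" is large when guesses are wrong.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Gradient of the loss nudges the weights away from wrong answers.
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)

print(f"final loss: {loss:.3f}")  # shrinks as wrong answers get penalized away
```

The point of the toy example: nothing in there "understands" traffic lights; it just stops making the answers humans marked as wrong.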
 
If Tesla wants to move the market with a wide deployment of robotaxis sooner, then they could do so with the HW3 fleet even if it is not as safe as the presumably much smaller HW4 fleet in the near/mid-term. It'll be interesting to see how Tesla will take or split liability of robotaxi accidents as if they do take responsibility either as the manufacturer or insurance provider, it would probably be worthwhile to do "preventive care" in upgrading a robotaxi vehicle to HW4+ to reduce the risk of an accident.

A separate issue not directly with safety is comfort / timeliness / annoyance-to-others aspects of a robotaxi. Presumably each of those can still be improved with HW3 than what we currently experience with FSD Beta, but those probably can be improved even more with HW4+, so if there is a mix of HW3/HW4+ robotaxi fleet, would consumers give preference to the one that's more comfortable? If that comes at a price premium, that could also help justify robotaxi fleet getting upgraded to newer hardware.

This is all still assuming regulators allow robotaxis that are "only" 2-3x safer than a human or even 4-6x. Is it reasonable to allow "1x safe" average humans while excluding safer vehicles? Potentially regulators and/or lawmakers decide safety isn't enough and there needs to be other quality aspects too.
I'm digging the way you're thinking about the details here; I've spent some time thinking about the nitty-gritty as well, but I hadn't considered charging more for the higher hardware levels. It makes sense, and I'd expect higher rates for S/X robotaxis versus 3/Y.

The regulators would probably need crazy granularity in the data provided to allow generalized robotaxis that are 1-3x safer than humans, because it would need to be broken down by road type, with context around weather, etc. In the end, I'm not sure regulators will be in the way as much as corporations unwilling to take the risk of owning liability in the way necessary to enable broad, generalized robotaxi deployment, a huge increase in utilization, and all those benefits. Even just looking at what is happening right now with Autopilot accidents, where it's clearly a Level 2 ADAS, each case of damages / injuries / fatalities owned by a corporation is a risk of many millions in lawsuits.

Elon has talked about what would be required to do that, and about how getting to 2-3x safer than humans is easy and a low hurdle to jump; beyond that is where it becomes difficult.

Elon said:
Being better than a human is relatively straightforward, frankly – but how do you be 1000 per cent better, 10,000 per cent better? That’s much harder

All of this ties into the "vision-only" theory and the idea that we drive with eyeballs, so why can't a vehicle? Well, because a robotaxi needs to far exceed our capabilities, not match them. Vision-only is surely fine for a Level 2 ADAS, but nobody has a clue what a Level 4-5 robotaxi will look like.
 
It'll be interesting to see how Tesla will take or split liability for robotaxi accidents, because if they do take responsibility, either as the manufacturer or as the insurance provider, it would probably be worthwhile to do "preventive care" by upgrading a robotaxi vehicle to HW4+ to reduce the risk of an accident.
IMO, Tesla will never take responsibility for FSD on current-gen vehicles. They shouldn't; what's the point? Accidents, even if rare, cost just too much in liability to ever be worth the marginal revenue.
 
All of this ties into the "vision-only" theory and the idea that we drive with eyeballs, so why can't a vehicle?
Right, but our eyeballs are tied to our brains, which have been taught through trial and error for <insert your age here> years.

It's hard to overstate how much we know and how fast our brains can cross-reference information.

I think uploading a full adult human with a good driving record into the FSD computer would get there faster ... a silly example that can't be done, but hopefully it makes my point.
 
IMO, Tesla will never take responsibility for FSD on current-gen vehicles. They shouldn't; what's the point? Accidents, even if rare, cost just too much in liability to ever be worth the marginal revenue.
Tesla never said what cut the Tesla Network would take, but I wouldn't call the revenue "marginal."
 
I feel bad for those folks who paid $15k for HW3.
The $15,000 is just a license to enable FSD. HW3 is already in the cars, and your car already has FSD; it's just turned off. Your HW3 computer is typically behind your glove box and is used for Autopilot, etc. :D

A good comparison between HW3 and HW4 is something like the PlayStation 3 versus the PlayStation 4: both are advanced and very capable gaming computers, but Sony didn't just stop at the PlayStation 3, and in a similar fashion, Tesla is still improving its computers.

Edit: There is an hour-long presentation out there somewhere by Elon and one of the AI engineers at Tesla talking about HW3; it's quite an advanced piece of hardware and should work great for a long time, so don't feel slighted at the sight of HW4 on the horizon. There are also HW3.1 and HW3.5 revisions.
 
Accidents, even if rare, cost just too much in liability to ever be worth the marginal revenue
Reusing the numbers from Tesla's Vehicle Safety Report: "in the United States there was an automobile crash approximately every 652,000 miles." Assuming a robotaxi matches average humans and Tesla makes a profit of 10¢/mile, is $65.2k of average profit per crash worthwhile? What if the miles per crash were 5x (3.3M miles/crash) and the profit 10x ($1/mile)? Is $3.3M per crash worth the costs (e.g., insurance, maintenance, charging)?

I suppose a big factor would be how much insurance would cost; if Tesla Insurance covers robotaxis in the future and is sold to vehicle owners, its pricing could reflect how dangerous / expensive it is to run robotaxis on HW3 vs HW4+.
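To put rough numbers on that (a quick back-of-the-envelope sketch: the 652,000 miles/crash figure is from the safety report, while the profit-per-mile values are just my assumptions, not real Tesla economics):

```python
# Back-of-the-envelope: profit accumulated between crashes.
MILES_PER_CRASH_HUMAN = 652_000  # US average, per Tesla's Vehicle Safety Report

def profit_per_crash(miles_per_crash: float, profit_per_mile: float) -> float:
    """Average profit earned per crash under the assumed rates."""
    return miles_per_crash * profit_per_mile

# Baseline: human-level safety, 10 cents/mile profit.
print(f"${profit_per_crash(MILES_PER_CRASH_HUMAN, 0.10):,.0f}")      # $65,200

# Optimistic: 5x safer and $1/mile profit.
print(f"${profit_per_crash(5 * MILES_PER_CRASH_HUMAN, 1.00):,.0f}")  # $3,260,000
```

Whether $65k (or $3.3M) of buffer per crash covers the actual liability is exactly the open question.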
 
Well, the customers who bought the first iPhone found the plastic screen got all scratched up in their pockets. Anyone who uses bleeding-edge tech has to expect it to become obsolete in the near future.
The first iPhone sold to customers had a glass screen, because Steve Jobs realized the plastic screen was crap.
 
It's funny how we anthropomorphize machine learning ... it's just 1's and 0's ...

But machine learning doesn't just learn on its own; it's basically "punished" into learning the right way, and eventually it gets so good at knowing what not to do that it seems almost human ... almost ... until it encounters a deer that got tangled in a green traffic light and is dragging it across a busy intersection ... then all bets are off :D Ridiculous example, but so are most edge cases ...

I'm not sure what the OP meant by that, but no, your car did not figure out what a traffic light is on its own. Tesla has (had?) many hundreds of people "labeling" objects for its machine learning models from the sample videos the fleet has been uploading for years.
I agree, and yeah, those edge cases of stuff it's never encountered are maybe where rules come into play. They really need to get to reading signs too, like "no turn on red," etc. I can see why they don't want to use map data or pre-mapped data, since things can change (like construction), but they have a lot to solve compared to how humans can do it all within a split second. And by OP I meant OpenPilot, sorry.
 
The regulators would probably need crazy granularity in the data provided to allow generalized robotaxis that are 1-3x safer than humans, because it would need to be broken down by road type, with context around weather, etc.
There's also the general population's reaction to the types of accidents: even basic driver assistance that keeps distance from the lead vehicle should crash far less often than humans rear-ending each other. But will regulators listen to people outraged at the "silly" mistakes of a self-driving car, such as slowing down unnecessarily and getting hit by an inattentive/impatient human driver?

Maybe regulators looking at the detailed data will find that even if self-driving cars crash at the same rate as human drivers, the majority of those crashes are at low speeds, with potentially much lower fatality rates and fewer "at fault" situations. Such crashes would definitely be an inconvenience for a robotaxi passenger and might result in lower general adoption of the service, but that's probably more a business decision of how to handle it, especially if the competition is still mostly human taxi/Uber drivers charging maybe $2-4/mile.
 
....but that time will NEVER come, since all Tesla has to do is say they are still "working on" getting HW3 cars to FSD. Remember, there is no timeline set (that I know of), so as long as Tesla is actively working on and updating the cars, they can plausibly say it is coming.

Just because HW4 may reach FSD first doesn't mean HW3 can't or that Tesla will say it can't.

Hypothetical scenario:

2025 HW4 reaches L3
2027 HW4 reaches L4 and HW3 reaches L3

Once HW3 reaches L3, Tesla has fulfilled its promise of offering FSD for your car, even though HW4 would be MUCH better at L4. L3 still qualifies as FSD.
As soon as the FSD features I paid for are delivered in a non-beta capacity to HW4 vehicles and not to my car, I will request them from Tesla, or a free retrofit. If their response is not satisfactory, I will have my lawyer send them a letter, followed by legal action. It's really that simple.
 
Well, the customers who bought the first iPhone found the plastic screen got all scratched up in their pockets. Anyone who uses bleeding-edge tech has to expect it to become obsolete in the near future.
OT: No iPhone EVER had a plastic screen. Windows phones did have plastic screens, and Jobs was adamant that no iPhone would. He even made a deal with Corning, which had just started marketing Gorilla Glass (which at the time no one wanted to use), to use it in the iPhone.

Wikipedia said:
...Six weeks prior to the iPhone's release, the plastic screen was replaced with glass, after Jobs was upset that the screen of the prototype he was carrying in his pocket had been scratched by his keys....
 
Reusing the numbers from Tesla's Vehicle Safety Report: "in the United States there was an automobile crash approximately every 652,000 miles." Assuming a robotaxi matches average humans and Tesla makes a profit of 10¢/mile, is $65.2k of average profit per crash worthwhile? What if the miles per crash were 5x (3.3M miles/crash) and the profit 10x ($1/mile)? Is $3.3M per crash worth the costs (e.g., insurance, maintenance, charging)?

I suppose a big factor would be how much insurance would cost; if Tesla Insurance covers robotaxis in the future and is sold to vehicle owners, its pricing could reflect how dangerous / expensive it is to run robotaxis on HW3 vs HW4+.
I've done some calculations on this in my earlier posts, but mostly around L3 (not robotaxi): basically, is it worthwhile for Tesla to assume responsibility for L3?

If they do achieve robotaxi and can earn money for every mile, etc., that's a different economic model.
 
No, you paid for an electric vehicle, which works fine. You also paid for an add-on that doesn't work to your specifications. Take away the add-on and the car still works as an electric vehicle.
I did not. I specifically paid for a car with FSD, which is why I made sure to put it on my original order, not tack it on after delivery. I also paid for 20" wheels and 7 seats in my original order, and got them. If they had been backordered, that would be fine. But if they fail to eventually deliver in a reasonable time, or rather, deliver to others who bought after me and not to me, then I'd use legal action to enforce the order, return the car, or recover damages.

This isn't difficult.
 