That's the point I'm making.

"Autonomous" doesn't mean "Autonomous production" - you do have pre-beta autonomous, as in this case. As they say Rome wasn't built in one day.

Sorry - the 'auto' in autonomous doesn't mean automobile. It means self. If you want to try to make the argument that a feature that functions by itself yet makes continuous mistakes and is a danger to others unless it has a person to constantly monitor, intervene and correct is 'autonomous,' fine, but no one else will buy it. By that definition, my '95 Escort with cruise control is autonomous, too.

FSD is not ready for prime time, but that's the point of beta software. We all hope it will be soon, but for now it's not.
 
Because everybody lies to the Government
This hits the nail on its head. In very specific legal terms, some of Tesla's lawyers told CA DMV that the software was not autonomous. Why? Probably to get around onerous reporting requirements. The software is clearly intended to become autonomous (do you trust the CEO's publicly stated intention or some legalese in an obscure document?). By stating that it is not autonomous, it allows the bureaucrats to check their boxes and go on with their day.
 
This hits the nail on its head. In very specific legal terms, some of Tesla's lawyers told CA DMV that the software was not autonomous. Why? Probably to get around onerous reporting requirements.

Also because it literally lacks the abilities required for autonomy.

The software is clearly intended to become autonomous

No, it's not.

It can't be autonomous because, again, the actual software does not have the ability to do that.

Another, future version of the software will. This one doesn't.


(Do you trust the CEO's publicly stated intention

You mean the one who said his intention was an autonomous cross-country drive in 2017, and it's five years later and nothing remotely like that is yet possible?

His aspirational goals, and what the actual current software is designed to do and is capable of, are... not the same thing. By a fairly large margin, still.


Tesla absolutely intended to eventually release autonomous driving software.

Thus far, including FSD Beta, they factually have not done so.


100% of customer software released, beta included, is an advanced driver aid system. That's not autonomous. By definition.
 
According to you it does. After all, auto = "self", mobile = "something that can move"?!

As I said - this is a fruitless discussion. Anytime someone gets into a dogmatic position, like many do in this forum, they just get lost in "purity" of definitions.

It's kind of amazing to me... I own a car that drives itself (albeit imperfectly). This was sci-fi like two years ago. I'm confused why the discussion is getting wrapped around the axle on semantics.

Kinda reminds me of the Louis CK bit of "Everything is great and no one is happy".
 
It's nothing to do with purity or semantics; it's to do with words actually having meaning.

Otherwise I can call a rock autonomous. It does all its rock stuff without need of a human, right? Rocks totally beat Tesla to autonomy!

Or heck, I can take a car, put it in neutral, aim it downhill, and push. Look! It got to the bottom without a human! It's autonomous!



No, autonomous driving has an actual meaning. There are specific things a system must be capable of doing to be autonomous.

An ADAS system and an autonomous system have specific, clear, defined differences in capabilities.


Definitions are available in multiple places, not just the SAE ones, but also in state laws (and laws overseas as well).

And nothing Tesla currently has on any customer car meets that meaning, by any of those definitions.
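
If it helps to see it concretely, here's a rough sketch of the distinction in code. This is my own simplification of the SAE J3016 idea, not an official encoding, and the names are made up for illustration:

```python
# A rough, unofficial simplification of the SAE J3016 levels.
# Key split: at L0-L2 a human must supervise at all times (that's ADAS);
# at L3+ the system itself performs the complete driving task in its ODD.
from dataclasses import dataclass

@dataclass
class DrivingSystem:
    level: int               # claimed SAE level, 0-5
    performs_full_ddt: bool  # handles the entire dynamic driving task in its ODD
    has_fallback: bool       # can reach a safe state without a human (L4+)

def is_autonomous(s: DrivingSystem) -> bool:
    # "Autonomous" in the sense of SAE / state law: L3 and above,
    # actually performing the full driving task itself.
    return s.level >= 3 and s.performs_full_ddt

fsd_beta = DrivingSystem(level=2, performs_full_ddt=False, has_fallback=False)
print(is_autonomous(fsd_beta))  # False -> it's an ADAS, by definition
```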

That's not dogma, it's fact.

Even Tesla tells you this: during the purchase of FSD, in the owner's manual of the car, and in the detailed technical descriptions of FSD Beta provided to the CA DMV.


So the only dogma would be folks insisting they already have it when even Tesla themselves tells you they don't.


Tesla certainly intended to provide autonomous driving in the future. It's not here yet. What's here today, in the beta, is the most advanced ADAS system available, by quite a large margin. What's NOT here today is an autonomous one. At all.
 
@Knightshade, I'm curious to know. Do you think that Tesla's current approach will succeed in reaching autonomy? If so, when?


Depends on what you define as their "approach".

If you mean at a high level, using vision and AI, then yes, I do. Unlike Waymo and others, I don't think stuff like LIDAR and mm-level mapping is needed.


If you mean using the exact HW in current cars, no, I don't.

HW3 is already maxed out, and that's using both nodes to run a single instance of the software (for L4+ they'll need the entire instance to fit in a single node, so there's a second for redundancy), and it's still just L2 even at that. So they'll need at minimum significantly more compute power in the car.
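
Rough numbers, to illustrate (the ~72 TOPS per node figure is the commonly cited HW3 spec; the workload number is made up purely to show the constraint):

```python
# Back-of-envelope sketch of the redundancy problem described above.
NODE_TOPS = 72   # commonly cited compute per HW3 node (approximate)
NODES = 2

def runs_at_all(workload_tops: float) -> bool:
    # Today's reported situation: one instance spread across both nodes.
    return workload_tops <= NODE_TOPS * NODES

def runs_with_redundancy(workload_tops: float) -> bool:
    # For L4+, the whole stack must fit in ONE node so the second
    # node can sit there as a redundant backup.
    return workload_tops <= NODE_TOPS

workload = 110  # hypothetical: more than one node, less than two
print(runs_at_all(workload))           # True  -> runs, but no redundancy
print(runs_with_redundancy(workload))  # False -> no headroom for L4+
```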

Is HW4, which we know is coming (it was even mentioned at Autonomy Day), going to be enough? We don't know. Tesla doesn't either. They originally thought HW2.0 was enough. Then they thought 2.5 was. Then they thought 3.0 was. None were. Nobody can know until the solution is actually done.


And the cameras are relatively low resolution, too few, and with nowhere near enough redundancy or low-light performance to deliver anything remotely approaching an L5-capable vehicle.

What they've done with them is amazing, and I think they can probably get (though likely needing HW4 to do it) to an excellent L2 system that works on city streets... They might even get to an L3 system, if they can throw enough compute at it and the required takeover warning times are relatively short...

But there are physical limits AI isn't a "fix" for, and those will prevent anything like generalized L4+ use.

I think to deliver L4 or better city driving they're going to need at minimum two more forward/side-facing cameras, probably located around where the front turn lights are, so the car can see around corners without creeping most of the way into oncoming cross traffic the way it does today.

It'd probably be smart to add a similar pair at the rear, for rear cross traffic in situations when the car has no choice but to back out of someplace.

And I think all the cameras are going to need higher resolution (currently all that voxel/depth stuff is being done at 160 pixels of resolution, so again it's amazing how much they can do with so little, but there are limits nothing but better cameras can fix)... and also better low-light performance, because the current cameras aren't awesome there. Also better design or coatings to deal with bad weather; I still get NoA dropping down to regular AP in moderate rain, and "FSD degraded" messages in it as well.
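
To give a feel for why resolution matters here (the camera figures below are commonly cited approximations for the main forward camera, roughly 1280 pixels wide with a ~50° FOV, not official specs):

```python
import math

H_PIXELS = 1280   # commonly cited width of the current cameras (approximate)
FOV_DEG = 50.0    # rough FOV of the main forward camera
PX_PER_DEG = H_PIXELS / FOV_DEG  # ~25.6 px/deg

def pixels_spanned(width_m: float, distance_m: float) -> float:
    # Angular size of an object, converted to pixels on the sensor.
    angle_deg = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    return angle_deg * PX_PER_DEG

# How many pixels does a 1.8 m wide car cover?
for d in (50, 100, 200):
    print(f"{d} m: ~{pixels_spanned(1.8, d):.0f} px")
# ~53 px at 50 m, ~26 px at 100 m, ~13 px at 200 m.
# And if downstream nets run at reduced resolution, far fewer still.
```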



So their general approach? Yup. The exact one today with exactly today's in-car HW? Nope. At minimum they'll need more compute, and without camera upgrades as well they'd be limited to the narrow cases I mention (maybe L3 city, requiring driver takeover for things like when it can't see around corners, and maybe L4 highway with weather limits on the ODD).
 
I tend to think they are relying too heavily on ML. E.g., an L2 system doesn't need to figure out when to pass cars that are stopped and occluding progress; in that case, ask the driver for help. I think they've done a great job using ML to perceive the environment around the car, and assuming that's accurate, other approaches could be used to create a pretty good system. Easily an L2 system. I.e., is there a hammer/nail issue going on? I don't think our understanding of AI is going to lead to robots or a good understanding by computers of what's going on. But it seems like Tesla thinks that with a large enough training set it will.
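
Roughly what I mean, as a sketch; the names and thresholds here are hypothetical, just to show the shape of the idea:

```python
# ML handles perception; a simple rule layer decides when to punt to the
# driver rather than trying to "solve" an ambiguous scene itself.
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()
    WAIT = auto()
    ASK_DRIVER = auto()  # the L2 escape hatch: hand the decision back

def plan(lane_blocked: bool, blocker_is_parked: bool, confidence: float) -> Action:
    if not lane_blocked:
        return Action.PROCEED
    if confidence < 0.9:
        # Perception can't tell whether the stopped car is parked or
        # just queued. An L3+ system would have to resolve this itself;
        # an L2 system can simply ask the supervising human.
        return Action.ASK_DRIVER
    return Action.PROCEED if blocker_is_parked else Action.WAIT

print(plan(lane_blocked=True, blocker_is_parked=True, confidence=0.5))
# Action.ASK_DRIVER -> low confidence, so hand it to the driver
```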

How do you know HW3 is maxed out for performance and that all the large optimizations in the software have already been applied?

I agree that the camera specs and placement look like they should be improved. They should start on that now.
 
How do you know HW3 is maxed out for performance and that all the large optimizations in the software have already been applied?

Hackers with more direct access to the HW, like our own @verygreen, have been posting (though more so on Twitter in his case) since at least mid-2020 that Tesla had run out of compute on Node A and was spilling stuff over to B, and that they were increasingly reaching the limits of the two nodes combined (in part because the code was originally written NOT to need to split compute across the nodes, and the HW isn't really optimized to split compute that way either).

Some of his many mentions of it are here, for example:


Some folks are convinced there are gonna be some MAGIC optimizations later that somehow BOTH add a ton of core functionality the current code simply does not have (needed to achieve >L2), AND also fit ALL that code, old and new, back into a single node so it can run redundantly.

I find that... unrealistically optimistic.



I agree that the camera specs and placement look like they should be improved. They should start on that now.

Elon mentioned next-gen cameras coming on Cybertruck later this year. Of course, that was before Cybertruck slipped to next year.
 
Let's see: Cruise & Waymo started as "not an autonomous car" companies. Then, when a magical disengagement rate was hit, they immediately became autonomous car companies!!!!


...what?

"disengagement rate" has nothing whatsoever to do with if a vehicle is autonomous or not.



A vehicle with a very low disengagement rate, but whose system can't disengage safely when it does, still isn't autonomous.

A vehicle with a very high disengagement rate, but whose system CAN disengage safely when it does, might be. (There are other requirements, of course.)
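
Put another way, as a sketch (simplified, obviously; there's more to the definitions than this one test):

```python
# What matters for autonomy is whether the system can reach a minimal
# risk condition on its own (e.g., pull over safely) when it gives up,
# not how rarely it gives up.
def could_be_autonomous(disengagements_per_mile: float,
                        fails_safely_unaided: bool) -> bool:
    # The rate is deliberately ignored. A system that always fails
    # safely might qualify (other requirements aside); a system that
    # fails into the driver's lap never does.
    return fails_safely_unaided

print(could_be_autonomous(0.000001, False))  # False: low rate, unsafe failure
print(could_be_autonomous(0.1, True))        # True: high rate, safe failure
```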
 
HW3 is already maxed out, and that's using both nodes to run a single instance of the software, and it's still just L2 even at that. So they'll need at minimum significantly more compute power in the car.

Your overall assessment seems similar to mine as far as the hardware goes. The number one limit is compute. The number two limit is the low-light/poor-weather performance of the cameras. I'm not, however, convinced that the camera placement and count are wrong. An interesting experiment would be having an eight-monitor setup with all the camera feeds and seeing if it was possible to drive the car from that (not that anyone is going to do this!). I reckon it would be possible, although more cameras in the future could further improve performance.
A vehicle with a very low disengagement rate, but whose system can't disengage safely when it does, still isn't autonomous.

This does not seem to be the case to me. If a car can drive 10 billion miles between unsafe disengagements, I would consider it autonomous. If it can only drive 2 miles, probably not. So, at some point between 2 miles and 10 billion miles per unsafe disengagement, it becomes autonomous. What that number is, I'm not sure, but I do know that it exists.
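
As a sketch of what I mean, one way to pick that number would be to benchmark against human drivers. The ~500,000 miles per police-reported crash figure below is a rough, commonly cited US ballpark, used only as a hypothetical yardstick:

```python
# One hypothetical way to draw the line: require some multiple of the
# average human's miles-per-crash before calling the system autonomous.
HUMAN_MILES_PER_CRASH = 500_000   # rough US ballpark, for illustration only

def clears_the_bar(miles_per_unsafe_disengagement: float,
                   safety_margin: float = 2.0) -> bool:
    return miles_per_unsafe_disengagement >= HUMAN_MILES_PER_CRASH * safety_margin

print(clears_the_bar(2))               # False: 2 miles is nowhere close
print(clears_the_bar(10_000_000_000))  # True: 10 billion miles clears it easily
```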