Elon: FSD Beta tweets

It's funny how many people say "the software is improving rapidly". I beg to differ. It's been two years, and they are still at very low levels of miles/DE. As previous efforts have proven, development follows an S-curve; it gets very costly to improve the system to the level of a human. My guess is "autonomous never" at this pace, unfortunately.

There's something to be said for the traditional path of demoing and proving a design before making it manufacturable, smaller, cost-effective, and reliable. As it is, claims are made of huge advances in computers, NNs, training, etc. while never knowing what the design requires. So the design specs keep changing and there's no end date for project completion.
 
Elon says dumb summon V2 perhaps coming next month.
It goes without saying that we all want ASS (Actually Smart Summon) ASAP!! Probably next month

Elon hyping V11. Rollout increasing each week.
In its present state of doing the worst possible thing at the worst possible time, I wouldn't dare use any kind of summon, even if the car was sitting in an empty parking lot a mile square. I have visions of ego whipping the steering wheel 3 1/2 turns ccw, applying full throttle for 2.8 seconds, whipping the steering wheel 7 turns cw, applying regen + full braking for 2.1 seconds, then centering the steering wheel and going full throttle for 1.2 seconds, so it's now doing 43 mph, rolling over and over... Makes me think of Denny Hamlin crossing the finish line, sliding, upside down, backwards and on fire, but winning second place. "Here I am boss, just stick your groceries in the trunk."
 
For the most part, Smart Summon is poorly conceived. The current state of FSD is such that you certainly must have eyes on the car at all times. That, coupled with a small max range for summon, significantly reduces the usefulness, even if Smart Summon worked perfectly. It sounds good, but for the times when it would actually be useful - retrieving your car in poor weather - chances are that you would need to move around in the weather to keep an eye on the car as it slowly makes its way to you.

Someday, when the car is L4/5, Smart Summon will be great. But, I just don't see how it could be useful on an L2 vehicle.

I've tried it a couple of times in an empty parking lot and it worked fine for me. But who needs it in an empty lot? You would just park next to the door.
 
I can't think of the last time I bought another product that has so many advertised and reported-on features that just don't work well enough to use.

It's amazing how wishful thinking and blind trust in fantasy can work to market and review a product. Seriously, don't the media writers actually test something before just repeating what they read in the handout? A lot of these cool things just don't work well. Say that. And don't say an upcoming update will fix it; that means nothing. It's fixed when it's fixed.
 
I would disagree. While I frequently criticize FSD and will continue when needed, there has been significant improvement for me in the last year. A lot depends on what level of autonomy you're expecting, which I think explains varying responses to FSD. I now have many zero-disengagement drives.
A year ago it was the opposite. A zero-disengagement drive then was unusual and only happened on short drives.
Getting from "holy smokes it didn't try to kill anything" to "just works, boring" is at least ten years away at this pace. 30k miles per failure is a conservative target. Even that might not be enough for autonomy. Looking at the current state of AI and CV, where it's not even trusted in radiology yet (which is not time critical) should tell you something about CV and safety critical applications.

Read this - The End of Starsky Robotics: "The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team. "

The march of nines hasn't even begun.
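
To put rough numbers on that (all of the figures below are hypothetical, chosen only to show the scaling, not measured fleet data), here's a minimal sketch of how long the gap takes to close if miles-per-disengagement improves by a fixed factor every year:

```python
import math

# Hypothetical illustration only -- none of these are measured figures.
current_mpd = 30        # assumed miles per disengagement today
target_mpd = 30_000     # the "conservative target" mentioned above
yearly_gain = 2.0       # assumed improvement factor per year

# Solve current_mpd * yearly_gain**years = target_mpd for years.
years = math.log(target_mpd / current_mpd, yearly_gain)
print(f"Roughly {years:.0f} years of {yearly_gain}x/year improvement to go "
      f"from {current_mpd} to {target_mpd} miles per disengagement.")
```

Doubling every year for about a decade just to hit that conservative target, and every additional "nine" of reliability after that is another factor of 10.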
 
Last edited:
  • Disagree
Reactions: Silicon Desert
Getting from "holy smokes it didn't try to kill anything" to "just works, boring" is at least ten years away at this pace. 30k miles per failure is a conservative target. Even that might not be enough for autonomy. Looking at the current state of AI and CV, where it's not even trusted in radiology yet (which is not time critical) should tell you something about CV and safety critical applications.

Read this - The End of Starsky Robotics: "The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team. "

The march of nines hasn't even begun.

As my post said, "A lot depends on what level of autonomy you're expecting, which I think explains varying responses to FSD." Your expectation of 30k miles per failure is totally different than my perception of what is needed to make FSD a really valuable product. Don't fall into the trap of letting perfect get in the way of being a usable product. We just differ on what that is.
 
  • Like
Reactions: Silicon Desert
...Looking at the current state of AI and CV, where it's not even trusted in radiology yet (which is not time critical) should tell you something about CV and safety critical applications.
I think this is a fair point, though personally I'm not familiar enough with the state of the art across AI sub-disciplines to draw too many conclusions. Also, there are a number of dimensions that make it less than a direct comparison.

I remember a small comment, I believe it was from John Gibbs aka Dr. Know-it-all on YouTube, who attended AI Day 2. As the event was specifically intended to recruit top AI engineers (though a fair number of YouTube influencers like himself were invited as well), he was interested in the reaction of some of those potential employees who came to see the presentations.

As I recall, he related that some of these people were indeed super-impressed by the depth of the Tesla AI work. Specifically, at least one or two Stanford grad students in the medical image interpretation field seemed blown away by the level and sophistication of FSD AI engineering.

Again, I'm not claiming a huge conclusion from that tidbit of information, but it suggests that we may not be able to measure or predict FSD's near-term potential by looking at medical image analysis.

Looking at another AI sub-discipline, I've just become aware of the recent breakthroughs in cartoon/art synthesis with the Stable Diffusion AI. Perhaps it's just an interesting party trick, but when you first see it working it sure feels like some kind of milestone software event, and not something that we could have predicted would appear so abruptly from following prior examples of AI-driven graphics tools. The entire field still feels quite open to disruptive breakthroughs. The challenge for Tesla's AI team is whether they can stay agile enough to consider and implement disruptive improvements.

It's also human nature that we very quickly adapt to milestone developments and look for the next thing, evoking your example of the "just works, boring" response. I always think of the moon landings that seemingly went from a historic triumph of mankind to "Seen it already, what else is on?" in a matter of months. Likewise, mankind just got off the ground with powered flight in 1903; less than a century later, mass air travel was not just common but an unpleasantly low-end commodity - and no one even looks out the window anymore.

Predictions are risky but I think good FSD is much closer than 10 years away. I think that 1 year ago, the plentiful supply of skeptics on TMC would give you the derisive laughing response (an antisocial cop-out BTW) if you predicted the amount of progress to come in 2022. Now it has come, yet we know there's so much more needed.

Of course, one of the easiest ways to dismiss and denigrate the progress is to compare it to Elon's hopeful predictions, even allowing for some goalpost-moving on both sides. (Witness the helpful and original reminder just above, in case we'd forgotten.) I try always to remember that in the decades of my engineering career, significant development projects very rarely came in ahead of the set schedule, yet I very rarely came away with the impression that things would have gone better if we'd set the original milestone deadlines to what they actually turned out to be. A reasonable amount of unreasonableness is a key ingredient for engineering achievement. And people calling out from the sidelines "that's not going to happen", though often correct in the specific, don't contribute to the result.

Regarding the Tesla vs comma.ai comparison: it's interesting in the context of engineering team size vs. accomplishment when we're in the 80% phase; it becomes less of an asset as the product gets closer to being a major commercial offering that serves a huge customer base and requires ongoing support and fleshing out of use cases. I note that Tesla is heavily criticized around here (including by me to some degree) for having an overly and even dangerously lightweight sensor suite, and also for being only an L2 supervised system at this point, even though robotaxi autonomy is the eventual goal. In a fair comparison then, comma.ai hardware would have to be considered criminally deficient, and Hotz has (or had) pointedly distanced their goals from autonomous unsupervised operation.

Anyway, I gather that Elon's project management goals have less to do with a predetermined staffing level that fills holes in a chart, and more to do with a cultural imperative that each new engineering-development hire is there to make a significant forward contribution. Everyone expresses principles like this but few organizations achieve it as they mature. There may well be bloat in Tesla's now quite large organization, but evidently not so much on the AI team.
 
1. L5 autonomy is still science fiction, and requires a few real breakthroughs in science. There is no point in comparing image or text generation with this domain. LLMs or image generators such as SD have no understanding of the text (or pixels) they generate - it's all fancy statistics at this point.
2. Autonomy means without supervision, by definition. I no longer expect camera-only autonomy in a large ODD to happen, and neither should anyone else based on where the science is. The most likely answer is "not in our lifetime" for L5 and "never" for camera-only.
3. A supervised system is less safe than a human, which makes deploying it among VRUs a bad idea.
4. When Tesla actually tries to "solve autonomy" (as in shipping an SAE L3+ product) they will start in a small ODD (such as highway). FSD Beta is a fantastic L2 (but perhaps don't use it among VRUs). I doubt it will be L3 without more sensing, since the cost of sensors is rapidly falling.
 
Last edited:
I think most people's expectation is what they were told by the CEO: the ability to use your Tesla as a robotaxi, a.k.a. a Fully Self-Driving vehicle.
I think the percentage of owners who would actually use their car as a robotaxi is extremely low. Under 10%. I know very few owners who believed what Elon said even though they are Tesla proponents.
 
  • Like
Reactions: Silicon Desert
I'd love the option to safely and legally have my Tesla drive me home if I'd had a bit too much to drink. And if parking downtown is too expensive, send the car home to park in the driveway after it drops me off, then have it come back at the end of the evening to pick me up.
 
Last edited:
  • Like
Reactions: impastu

Agree. Also, it would be especially great for the elderly, who wouldn't have to give up their car. My mistake was not clearly defining Robotaxi in my example as using your car to make money.

What I want for now is the ability to let the car drive without having to monitor it constantly, so I could do other things in the car - text, video, etc. - but be available in a non-emergency situation to take over (say a 30-45 second handoff by FSD). Start with highways first, then move to city streets/roads. I believe this is realistic by 2024.
 
  • Like
Reactions: VanFriscia
As my post said, "A lot depends on what level of autonomy you're expecting, which I think explains varying responses to FSD." Your expectation of 30k miles per failure is totally different than my perception of what is needed to make FSD a really valuable product. Don't fall into the trap of letting perfect get in the way of being a usable product. We just differ on what that is.
FSD progress seems to come in fits and starts, two steps forward and one step back. It's definitely not as quick as I would like or hoped for, but on the flip side, when I think back to where it was a year ago, there's also been a lot of progress.

Without having inside knowledge I can't say for sure, but reading the reports, it appears that the FSD team has had to change the underlying architecture of FSD or its components several times. At the same time, when using FSD and reading reports from others, I've been struck by how complex many of the issues are; much more complex than they would appear at first blush. My takeaway from this is that Tesla has had to adjust its approach multiple times to try and deal with all the various situations and road geometries that they encounter across the country.

People like to compare Tesla’s FSD to Waymo and GM but from what I understand, the other systems are heavily geofenced. Without minimizing what Waymo and GM have done, solving FSD in a very specific geographic area is a very different and much easier problem than solving it across the entire country.
 
  • Like
Reactions: Silicon Desert
1. L5 autonomy is still science fiction, and requires a few real breakthroughs in science. There is no point in comparing image or text generation with this domain. LLMs or image generators such as SD have no understanding of the text (or pixels) they generate - it's all fancy statistics at this point.
A lot of things are science fiction, until they aren't. Examples: a computer winning at chess, a computer winning at the game of Go, a box in your home that can answer difficult questions just by asking with your voice. Example question: why is the sky blue?
2. Autonomy means without supervision, by definition. I no longer expect camera-only autonomy in a large ODD to happen, and neither should anyone else based on where the science is. The most likely answer is "not in our lifetime" for L5 and "never" for camera-only.
Most experts would disagree. Many see general A.I. as coming down the pipe within the next few decades. For example, a futurist who has gotten many predictions correct, Kurzweil, said that later this decade we will have general A.I. Imagine the most clunky general A.I. possible, requiring data-center compute resources. Kurzweil Claims That the Singularity Will Happen by 2045
Cameras are great when there is visibility. I think camera-only will work well enough in good visibility, and others think the same, including past statements by leading experts like Mobileye. I agree other sensors will be great when visibility isn't so good, such as at night, in rain, fog, etc.
3. A supervised system is less safe than a human, which makes deploying it among VRUs a bad idea.
Could be true, but I think it won't be. The monitoring of the monitor/supervisor will get increasingly stringent to make sure they are doing their job. I'm in favor of electric shock therapy. haha
4. When Tesla actually tries to "solve autonomy" (as in shipping an SAE L3+ product) they will start in a small ODD (such as highway). FSD Beta is a fantastic L2 (but perhaps don't use it among VRUs). I doubt it will be L3 without more sensing, since the cost of sensors is rapidly falling.
L3 on the freeway in stop-and-go traffic seems relatively easy. With increased compute resources, better cameras, and a very good map, it seems doable without any major breakthroughs. Doable means better than your average human, so it won't be perfect. Will HW4 do it or will it require HW5? I don't know.
 
Last edited:
  • Like
Reactions: Silicon Desert
I think that 1 year ago, the plentiful supply of skeptics on TMC would give you the derisive laughing response (an antisocial cop-out BTW) if you predicted the amount of progress to come in 2022. Now it has come, yet we know there's so much more needed.
Shots fired. Haha. As a very frequent user of the laughing emoji and major skeptic I feel like this is about me.
On the other hand,
I'm a skeptic and I'm way more optimistic than that. I predict it will be 100x better by the end of 2021.
Obviously I was very wrong. I thought Tesla would see the very rapid progress that other AV companies saw early in their development. I think their choice not to use maps is giving them a different trajectory (I still think they will ultimately use maps, mostly auto-generated). Anyway, I've scaled back my progress estimate; I think FSD will get 5x better in 2023.
The problem is that human Tesla drivers achieve 1 severe collision per 2 million miles. I remain very pessimistic that they can achieve that with the current hardware.
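
For a sense of scale (the starting figure below is a pure guess on my part; only the 2-million-mile human benchmark comes from the paragraph above), here is what sustained 5x-per-year improvement would mean:

```python
import math

# Back-of-the-envelope only; the starting figure is a guess, not data.
human_mpc = 2_000_000   # human benchmark cited above: miles per severe collision
assumed_now = 10_000    # hypothetical current miles per comparable failure
gain_per_year = 5       # the "5x better per year" estimate

years = math.log(human_mpc / assumed_now, gain_per_year)
print(f"About {years:.1f} years of sustained {gain_per_year}x/year improvement "
      f"to go from {assumed_now:,} to {human_mpc:,} miles per severe failure.")
```

The math closes even a couple of orders of magnitude surprisingly fast if 5x/year can actually be sustained; my pessimism is about whether the current hardware lets them sustain it.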
 
Most experts would disagree. Many see general A.I. as coming down the pipe within the next few decades. For example, a futurist who has gotten many predictions correct, Kurzweil, said that later this decade we will have general A.I. Imagine the most clunky general A.I. possible, requiring data-center compute resources. Kurzweil Claims That the Singularity Will Happen by 2045
Name one expert that has said camera-only autonomy will happen in the coming years. I'm thinking never, since the sensors are dropping in price much faster than I expect ML breakthroughs to happen, so no one would want to use only cameras, even if they could for some limited ODD.
Cameras are great when there is visibility. I think camera-only will work well enough in good visibility, and others think the same, including past statements by leading experts like Mobileye. I agree other sensors will be great when visibility isn't so good, such as at night, in rain, fog, etc.
There have been a lot of "experts" doing marketing. Mobileye hasn't claimed autonomy for camera-only for the last 3-4 years. Tesla has been walking back Elon's statements and lies for the last few years by subtly changing the language.

Measuring the distance physically rather than guessing from a monocular 2D image (Tesla doesn't even do forward stereo vision) gives you a lot of extra safety for a relatively low cost.
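
As a rough illustration of why range-from-images gets shaky with distance (this is the textbook two-camera disparity model with made-up numbers, not anything from Tesla's actual monocular pipeline), geometric depth error grows roughly with the square of the range:

```python
# Hypothetical stereo rig: 0.3 m baseline, 1000 px focal length, +/-0.5 px
# disparity noise. Depth Z = f*B/d, so the error scales as Z^2 / (f*B).
f_px = 1000.0      # focal length in pixels (assumed)
baseline = 0.3     # meters between the two cameras (assumed)
sigma_d = 0.5      # disparity uncertainty in pixels (assumed)

for depth in (10, 50, 100, 200):                          # meters
    disparity = f_px * baseline / depth                   # d = f*B/Z
    sigma_z = (depth ** 2) * sigma_d / (f_px * baseline)  # error ~ Z^2
    print(f"{depth:>4} m: disparity {disparity:5.2f} px, "
          f"range error about +/-{sigma_z:.1f} m")
```

A time-of-flight return at the same range carries a roughly constant, centimeter-to-decimeter error, which is the "measuring rather than guessing" point.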

L3 on the freeway in stop-and-go traffic seems relatively easy. With increased compute resources, better cameras, and a very good map, it seems doable without any major breakthroughs. Doable means better than your average human, so it won't be perfect. Will HW4 do it or will it require HW5? I don't know.
Freeway stop-and-go isn't a meaningful ODD imho, but I agree it may be doable in 3-5 years. The main question is why you would want to do it with cameras only. It makes no business sense if you are liable and can 10x MTBF for less than $500 p.u.

Lidar doesn't solve autonomy for sure, but it adds safety for the L2 system and for safety functions such as FCW and AEB. Tesla won't be removing the driver from the DDT anytime soon.

The world's most expensive L2 is what it is. I look forward to L3 solutions at meaningful speeds from other OEMs. Meanwhile I am keeping my Tesla with FSD until there is such a car I can buy.
 
Last edited:
  • Like
Reactions: Sharps97
Name one expert that has said camera-only autonomy will happen in the coming years.

James Douma, Andrej Karpathy, Kilian Weinberger to name 3.


There have been a lot of "experts" doing marketing. Mobileye hasn't claimed autonomy for camera-only for the last 3-4 years.

Except, of course they have. They claim it RIGHT NOW in fact.


An AV that can drive on cameras alone

They do state they intend to add a second radar/lidar system as a redundant backup - but are very clear, per the quote above, that it can be autonomous with cameras only.


BTW still waiting for a citation of your "supervised system is less safe than a human" claim.