Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Elon: Maybe update first week of March

This tweet does not sound like L5 is going to happen this year as Elon previously stated. From this tweet, it sounds like Tesla is in the middle of implementing the 4D rewrite and needs to do lots of other things too. Elon says that there is software that needs to be written and validated. It sounds like they still have a lot to do. It will take time. And there will probably be setbacks, which is normal in software development. "Maybe something next week" could turn out to be a month or several months.

So I am happy with the update. It's great to get details on what Tesla is doing. But I think we need to forget any predictions about when Tesla will achieve L5. Just let the team do their work.

How is this different than what Elon has been saying for the last several months? He's been talking about moving all their NNs to surround video for quite some time now, and during this time, he's been very confident of achieving a level 5 feature this year.

https://twitter.com/elonmusk/status/1294375249761275904?s=20

https://twitter.com/elonmusk/status/1353663687505178627?s=20

Elon in the earnings call 1/27/21:

Yes. So it really goes back to what I was saying a moment ago, which is, we need to transition over the neural nets in the car to video. And in order to do that, the whole stack has to be – the whole stack has to be changed to video. That means gathering video clips, then using – and this is actually surround video.

So you've got 8 cameras operating simultaneously with synchronized frame rates. So you've got 8 frame surround video – 8 camera surround video. And then you've got to label basically everything in that video snippet and then train against that and have those neural nets operate the car.

So – and this is coming from the past where we would label, the neural nets would be a single camera, single frame. So no video and not combining the cameras. And then we went from single frame, one frame at a time, one camera at a time, neural nets to surround camera -- neural nets would look at all -- all 8 cameras but only 1 frame at a time, and now to where we include the time dimension, and that's video.

So I really do see this as a question of getting work done. We're getting it done. And you can see the results in the rapidly improving FSD betas that are at least -- we're also going to be expanding the FSD beta itself to include more and more people. So from my standpoint, it looks like a very clear and obvious path towards a vehicle that will drive 100% safer than a person. Yes. I really don't see any obstacles here.

https://seekingalpha.com/article/44...k-on-q4-2020-results-earnings-call-transcript
 
That's exactly what I got from that part too (that he's talking about doing NNs on cropped portions instead of on all regions). However, it may not necessarily have to do solely (or even at all) with compute limitations. I imagine an NN that looks only at a cropped portion of the image would be quite different than one that looks at the whole camera view. For example, one that only looked for road lines on a crop of the ground would likely be quite different than one that looked for them on the whole image (including the sky).

I was wondering about this also, and was happy to read the takes from you and @Mardak

Here are some initial thoughts / assumptions I had that might be completely wrong.

I assumed this meant there are certain NNs that are trained for something specific and only process a certain region of interest. So there may be some NNs and/or other processing/intelligence that picks out ROIs, crops the image data at those ROIs, and runs it through these other, more specialized NNs.

Examples of this:
  • Some NN detects the presence of a traffic sign or light, crops the pixels in those bounding boxes, and passes them to a new NN that only processes those pixels for classification.
  • The vehicle is making an unprotected left and knows there is a certain lane it needs to watch for oncoming cars. Some intelligence from an early NN, other info, and possibly map data determines a cropped area to watch for cars in that lane; then a dedicated NN runs object detection/tracking at a higher frame rate on only that cropped region.
  • Some NN detects an ambiguous scene or object up ahead, like a construction zone or an object with ambiguous classification, then creates these ROIs, crops out those pixels, and passes them to a more specific NN that resolves some of that ambiguity.

These were just the thoughts that jumped into my head when I first read Musk's tweet saying: `using subnets on focal areas (vs equal compute on all uncropped pixels)`
But all of these could very likely be completely wrong
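For anyone who wants to picture the two-stage idea, here's a rough sketch in Python with NumPy. The detector and classifier here are toy stand-ins (nothing like what Tesla actually runs); the point is just the crop-then-specialize control flow:

```python
import numpy as np

def crop_roi(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Crop a region of interest (x0, y0, x1, y1) out of a camera frame."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]

def run_pipeline(frame, detector, classifier):
    """Two-stage sketch: a cheap detector proposes boxes over the full
    frame, then a dedicated subnet classifies only the cropped pixels."""
    results = []
    for box in detector(frame):
        crop = crop_roi(frame, box)
        results.append((box, classifier(crop)))
    return results

# Hypothetical stand-ins for the real networks:
def toy_detector(frame):
    # Pretend we found one traffic light in the top-left corner.
    return [(0, 0, 32, 32)]

def toy_classifier(crop):
    # "Classify" by mean brightness of the crop.
    return "green" if crop.mean() > 128 else "red"

frame = np.zeros((960, 1280), dtype=np.uint8)
frame[:32, :32] = 200  # bright patch where the "light" is
print(run_pipeline(frame, toy_detector, toy_classifier))
```

The expensive classifier only ever sees the 32x32 crop instead of the full 1280x960 frame, which is the whole appeal of the subnet-on-focal-areas idea.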
 
I agree with those examples. In fact, the traffic light example was what I thought of today looking at other threads that mentioned some lights are too dim for AP to identify which light was on. This seems like something a crop would help with, maybe even some HDR type processing to extract more info from the lights.
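A crude version of that crop-plus-HDR-ish idea is just a linear contrast stretch applied only to the cropped region. This is purely illustrative, not anything we know Tesla does:

```python
import numpy as np

def stretch_crop(frame, box, out_max=255):
    """Contrast-stretch only the cropped region so a dim traffic light
    uses the full dynamic range, leaving the rest of the frame alone."""
    x0, y0, x1, y1 = box
    crop = frame[y0:y1, x0:x1].astype(np.float32)
    lo, hi = crop.min(), crop.max()
    if hi == lo:
        return crop.astype(np.uint8)
    return ((crop - lo) / (hi - lo) * out_max).astype(np.uint8)

# A dim light: pixel values only span 10..30 in the original frame.
frame = np.full((960, 1280), 10, dtype=np.uint8)
frame[100:110, 100:110] = 30
enhanced = stretch_crop(frame, (90, 90, 120, 120))
print(enhanced.max())  # the dim lamp now hits full brightness
```

A downstream classifier looking at `enhanced` sees the lit lamp at full contrast, even though it was barely above the background in the raw frame.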
 

This approach would also make a lot of sense if it was paired with higher resolution cameras... like a few MP. most of the time downscaled and processed and whatever they process at now... but for certain things can take a crop from the high resolution image.. I wonder if Tesla will ever update the cameras to higher resolution.
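A minimal sketch of that idea: downscale the full frame for the main network, but keep a native-resolution crop for a subnet. The 4 MP sensor size here is hypothetical, just to illustrate the trade-off:

```python
import numpy as np

def downscale(frame, factor=2):
    """Cheap box downsample: average each factor-by-factor block."""
    h, w = frame.shape
    trimmed = frame[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical ~5 MP sensor: run the full-frame net on a downscaled
# copy, but hand subnets a crop at native resolution.
hi_res = np.random.randint(0, 256, (1920, 2560)).astype(np.float32)
full_view = downscale(hi_res, factor=2)   # 960 x 1280, main net input
native_crop = hi_res[800:900, 1200:1400]  # full detail for a subnet
print(full_view.shape, native_crop.shape)
```

The main network pays the compute cost of a 1280x960 input as before, while the subnet gets 2x the linear detail on just the region that matters.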

I forget what the current camera resolution is, can someone remind me?
 
All the cameras other than the rear view are 1280 x 960 Aptina sensors, unless there have been more recent changes.
Undocumented – TeslaTap
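Quick back-of-envelope on what those sensors imply for the surround-video approach. The 36 fps figure is an assumption (a frame rate often mentioned for these cameras), not a confirmed spec:

```python
# Pixel throughput for 8 synchronized 1280x960 cameras.
# The 36 fps frame rate is an assumption for illustration.
width, height, cameras, fps = 1280, 960, 8, 36
pixels_per_frame = width * height                    # 1,228,800 per camera
pixels_per_second = pixels_per_frame * cameras * fps
print(pixels_per_second)  # ~354 million pixels/s to process
```

At that rate it's easy to see why spending equal compute on every uncropped pixel is unattractive.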
 
How is this different than what Elon has been saying for the last several months? He's been talking about moving all their NNs to surround video for quite some time now, and during this time, he's been very confident of achieving a level 5 feature this year.

Well, back in August 2020, Elon said the 4D rewrite would see a limited release in 6-10 weeks. So I guess I am a little surprised that they have not already finished upgrading the NN to surround video by now.

If Tesla is just now upgrading the NN to surround video, then it sounds like Tesla still has a lot of work to do. It seems like a big leap going from "upgrading the NN to surround video" to actually releasing L5 to the whole fleet everywhere in the US with no driver supervision at all. I am not questioning the quality of the work, I am just a bit skeptical on the timeline. I am skeptical that we will see a wide release of L5 with no driver supervision this year.
 

I thought the general consensus is that they have switched some NNs to surround video, like the networks that do road layout, lanes, and trajectory... but as we know they have dozens of NNs... and are still working on porting all of them to the new arch...

Perhaps this latest tweet suggests that, with the next update, they will be finished porting all of the networks to the new arch.
 
Did he say L5 by the end of this year, or is that your interpretation?

From what I remember, Musk doesn't care about L3/L5, etc. When asked a question about levels, he assumes some things and answers.

He has been telling us since 2015 what he believes full autonomy or complete autonomy consists of and what he thinks Level 5 means.

Level 5 = "no geofence", "cross country summon".
Complete / Full autonomy = "1 million robotaxis with no one in them", "look outside your window", "safe to fall asleep and wake up at their destination", "human intervention will decrease safety"

Put two and two together, it's quite easy.

Telling comment was this "Maybe towards the end this year, I'd be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE!".

He said he would be "shocked" by an outcome that clearly didn't happen. And look at all those "I'm confident" statements.

Let's roll the tapes with early 2021 prediction updates:

December 2015: "We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years."

Elon Musk Says Tesla Vehicles Will Drive Themselves in Two Years

January 2016: "In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY"

Elon Musk on Twitter

June 2016: "I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year," Musk said.

Two years until self-driving cars are on the road – is Elon Musk right?

March 2017: "I think that [you will be able to fall asleep in a tesla] is about two years" -

Transcript of "The future we're building -- and boring"

March 2018: "I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person."

SXSW 2018

Nov 15, 2018: "Probably technically be able to [self deliver Teslas to customers doors] in about a year then its up to the regulators"

Elon Musk on Twitter

Jan 30 2019: "We need to be at 99.9999..% We need to be extremely reliable. When do we think it is safe for FSD, probably towards the end of this year then its up to the regulators when they will decide to approve that."

Tesla Q4 Earnings Call

Feb 19 2019: "We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

On the Road to Full Autonomy With Elon Musk — FYI Podcast

April 12th 2019: "I think it will require detecting hands on wheel for at least six months.... I think this was all really going to be swept, I mean, the system is improving so much, so fast, that this is going to be a moot point very soon. No, in fact, I think it will become very, very quickly, maybe and towards the end this year, but I say, I'd be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE! (in response to human supervision and adding driver monitoring system)"


April 22nd 2019: "We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year."

April 22nd 2019: "We expect to have the first operating robot taxi next year with no one in them! One million robot taxis!"
“I feel very confident predicting autonomous robotaxis for Tesla next year,”
"Level 5 autonomy with no geofence"

May 9th 2019: "We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too"

April 12th 2020: How long for the first robotaxi release/deployment? 2023?
"Functionality still looking good for this year. Regulatory approval is the big unknown."

https://twitter.com/elonmusk/status/1249210220200550405

April 29th 2020: "we could see robotaxis in operation with the network fleet next year, not in all markets but in some."

July 08, 2020: “I’m extremely confident that level five or essentially complete autonomy will happen, and I think, will happen very quickly, I think at Tesla, I feel like we are very close to level five autonomy. I think—I remain confident that we will have the basic functionality for level five autonomy complete this year, There are no fundamental challenges remaining. There are many small problems. And then there's the challenge of solving all those small problems and putting the whole system together.”


Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”
Axel Springer Award

Jan 1, 2021: "Tesla Full Self-Driving will work at a safety level well above that of the average driver this year, of that I am confident. Can’t speak for regulators though."

https://twitter.com/elonmusk/status/1345208391958888448

Jan 27, 2021: "at least 100% safer than a human driver"

 
Feb 19 2019: "We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

That's the one I was looking for… I just didn't go back far enough.

This F'ing guy, seriously.
 
I thought the general consensus is that they have switched some NNs to surround video, like the networks that do road layout, lanes, and trajectory... but as we know they have dozens of NNs... and are still working on porting all of them to the new arch...

Perhaps this latest tweet suggests that, with the next update, they will be finished porting all of the networks to the new arch.

Thanks. That makes sense.

My only point is that I don't think finishing porting all the NNs to surround video will mean that Tesla has arrived at L5. I think Tesla will still have a lot of work to do after they finish porting all the NNs to surround video to get to L5, at least an L5 that can be released wide to the public with no supervision.
 
That's the one I was looking for… I just didn't go back far enough.

This F'ing guy, seriously.

Yup, Elon has been unabashedly full of sh*t about FSD promises/progress for years now.
But with FSD Beta, Tesla has also proven they are way further along in solving this problem than any other production car company.

Both things can be true at once.
 
I imagine an NN that looks only at a cropped portion of the image would be quite different than one that looks at the whole camera view. For example, one that only looked for road lines on a crop of the ground would likely be quite different than one that looked for them on the whole image (including the sky).
A neural network should easily learn to ignore parts of the image, but that then is mostly wasted compute resources to multiply some small input value by another trained weight value that is close to 0 (repeat for ~70% of the view). I believe generally multiple neural networks aren't loaded to re-process the same inputs for performance reasons as adding control logic and swapping out weights can be costly, but potentially if the accuracy is much improved, the hit on performance could be worth it.

Overall it'll be interesting to see if the cropping focal areas are dynamic based on some previous network prediction. As others have suggested focusing on traffic lights, sign text or road lines, but the position of those can vary significantly with changing elevation or curves. However, the neural network already makes pretty good predictions about where the road is headed for the next several seconds, so instead of cropping to specific objects, it could dynamically pick what ~30% of the camera to focus. Even relying on the fisheye camera which Tesla says has 60m range means at 80mph, there's 1.7 seconds to travel that far with 60 frames to process, so it doesn't seem too unreasonable for a prediction from the first frame to dynamically focus attention of the later frames that also refocus as necessary.

This dynamic approach could avoid reprocessing the same inputs as only later frames are focused, and the same networks can be used at all times without swapping. It does complicate the unifying birds-eye-view network as now it needs to learn how to combine outputs from networks that had cropped inputs as "a vehicle in the center of view" doesn't necessarily mean it's directly in front of the camera anymore.
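Here's a toy sketch of that arithmetic and the dynamic-window idea. The 36 fps and ~30% figures are just the assumptions from the post above, not confirmed numbers, and `focal_window` is a hypothetical helper:

```python
import numpy as np

MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def seconds_to_cover(distance_m, speed_mph):
    """How long until the car reaches a point `distance_m` ahead."""
    return distance_m / (speed_mph * MPH_TO_MPS)

def focal_window(road_heading_px, frame_width=1280, frac=0.3):
    """Pick the ~30% horizontal slice of the frame centered on where a
    previous prediction says the road is headed (clamped to the frame)."""
    half = int(frame_width * frac / 2)
    center = int(np.clip(road_heading_px, half, frame_width - half))
    return center - half, center + half

t = seconds_to_cover(60, 80)  # 60 m fisheye range at 80 mph
print(round(t, 2))            # ~1.7 s, i.e. ~60 frames at 36 fps
print(focal_window(1200))     # prediction near the edge: window clamps
```

Each frame's prediction feeds the next frame's window, so the focus can slide as the road curves, without reprocessing pixels or swapping network weights.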
 
Yup, Elon has been unabashedly full of sh*t about FSD promises/progress for years now.
But with FSD Beta, Tesla has also proven they are way further along in solving this problem than any other production car company.

Both things can be true at once.

Yeah, I don't want to downplay their progress, they've done a lot over the years. Autopilot was terrifying to try and use in the mountains on the way to Tahoe 3 years ago, and now it's totally different and fantastic for the most part.

My gripes are almost exclusively targeted at Elon and others who might be involved with the decisions regarding how they've treated early FSD buyers.

Well, I also have gripes about some other stuff, but those aren't germane to this discussion.
 

Agreed. It's frustrating to see a company that is so unbelievably capable, yet so inept in their lack of transparency and honesty and in how they have treated early FSD buyers.
 
Thanks. That makes sense.

My only point is that I don't think finishing porting all the NNs to surround video will mean that Tesla has arrived at L5. I think Tesla will still have a lot of work to do after they finish porting all the NNs to surround video to get to L5, at least an L5 that can be released wide to the public with no supervision.


Hahah, absolutely agree
 
Agreed. It's frustrating to see a company that is so unbelievably capable, yet so inept in their lack of transparency and honesty and in how they have treated early FSD buyers.

Yeah, it's crazy because I was such a true believer when I got on board the train, too. It's not like I bought the car begrudgingly or something, I was all about the future potential and for some *totally bonkers* reason I had it in my head that Tesla would actually appreciate the customers who supported them before things were ready for prime time, and want to make sure they stayed customers.

I got my Model 3 over three years ago and within moments was sure it was the best car I'd ever owned (despite a couple of things needing to be fixed from day 1, but all quite minor), and I still feel that way (though it did take a little while for the "ooh shiny new toy" to wear off and for me to realize that I really didn't like the seats!). I took all my friends out for drives, I'd let anyone I know test drive it, talk to strangers who asked questions, etc.

I knew I'd want to sell it after 3 years or so to buy a later build with some improvements (I didn't REALISTICALLY think the Y would happen in time, so that was a pleasant surprise), and then I'd replace our ICE car with another one, and replace each on a 3 year cycle roughly. As recently as a year ago I more or less figured I'd buy a Y right about now in Q1 2021, and another in the back half of next year, because surely Tesla would see the light and let us move our early, not-remotely-even-close-to-fulfilled purchases of FSD to a new car, even if I was still a bit pissy about the whole discounted-FSD-after-purchase stunt they pulled, and the lie about early access invites, and the lie about priority software updates.

I even bought a Tesla solar setup late last year, even after hearing all the gripes about how many people had issues with the process (mine was VERY smooth, for the record!).

Then the most recent earnings call happened and it finally became crystal clear that they simply don't care about the early buyers because to them, we are all easily replaced.

Now I'm looking at the E-Tron Sportback and thinking "this is *ALMOST* what I want from a car, but they need to improve the range for me to be able to seriously consider it… but that's just a matter of time", so I'll just keep my Model 3 for a while to see how the EVs advance with other companies over the next 18-24 months. Maybe Tesla will come to their senses and I'll give them another chance. Maybe they won't. Either way, I no longer have the mindset of "my future vehicles will all be Teslas", and the only reason it happened was because of Tesla's own decisions.
 
Well, back in August 2020, Elon said the 4D rewrite would see a limited release in 6-10 weeks. So I guess I am a little surprised that they have not already finished upgrading the NN to surround video by now.

If Tesla is just now upgrading the NN to surround video, then it sounds like Tesla still has a lot of work to do. It seems like a big leap going from "upgrading the NN to surround video" to actually releasing L5 to the whole fleet everywhere in the US with no driver supervision at all. I am not questioning the quality of the work, I am just a bit skeptical on the timeline. I am skeptical that we will see a wide release of L5 with no driver supervision this year.

This seems unlikely to even happen in the next few years, let alone in 2021
 
Agreed. It's frustrating to see a company that is so unbelievably capable, yet so inept in their lack of transparency and honesty and in how they have treated early FSD buyers.

My biggest beef isn't the lack of transparency. It's the outright lies about what this will do to the value of your car, safety, etc.

If Elon said all the things he said, but turned the hyperbole back a little bit, I'd be 100% fine with what he's said. I even think if he were honest and said "look, we're working on this hard, and here is our timeframe but it's subject to change. We realize that we're asking you to shell out a lot of money now for something with an indefinite timeframe but consider it to be like a kickstarter program, where some features you have now and you're helping fund the rest" he'd be telling the truth and I believe sales of FSD wouldn't be all that much lower than they are now.

Instead you have all this BS of your car becoming an appreciating asset, and how a robotaxi is going to make you all this money, etc. Yeah, ok, no. So now people view him as a ponzi scheme architect and totally use it as an excuse to gloss over actual accomplishments.