Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

A friend just ordered a Model Y for his father and the delivery window was 1-12 weeks, suggesting they MIGHT try to sell him one in the push next week but are otherwise backordered through all of Q2.

🐂

That's the typical thing they say to Bay Area Model Y buyers. Tesla prioritizes Bay Area buyers at the end of the quarter because they're the easiest ones to deliver to, so if you don't take delivery at end of quarter, they can't guarantee you'd get the car at the beginning of the next.
 
This has been characterized as a typically weak quarter with a need to maximize deliveries in the final weeks. Is there an ongoing reason for this practice? When demand is not an issue, I expect end-of-quarter "pushes" to eventually become irrelevant. Why would end-of-quarter deliveries be more important than beginning-of-quarter ones, if production is consistent?
 
@Discoducky,
Thank you for these observations. It has been a very long time since I worked on these sorts of sensors and signal processing, back when NNs were in their infancy. I can see you have been involved more recently, so I would appreciate your views on two questions.
Q1. Given the volumes Tesla is manufacturing at, at what point does it become worth bringing these in-house?
Q2. If I understand correctly, you are describing use of a fairly basic automotive radar of several years ago (giving basic scans of vertical slices). At what point does it become sensible to start incorporating the additional sensor information from AESA-style radars (which can do more sophisticated scanning of a 'grid')?
Great questions:
A1: TL;DR - They've passed it, in my opinion. The tech isn't that hard, and it's not that hard to innovate on (for a company like Tesla).

Long winded - Back in the day I was a proponent of 4 corner radars, as I could perceive only so much as a human driver impaired by just my crappy vision capabilities. If I could have the benefit of radar at the corners of the vehicle I, as a human, would be a much better driver, as I would have data that allowed me to see around corners (effectively) as well as through steel. Anyway, radars are expensive and take up valuable package space in the chassis. So, as a thought experiment, let's think of what we'd do if any of the cons were zero. Let's say cost went to zero. You bet I'd put those in the car. Let's say size went to zero. Not so certain, but yeah, still really good. And finally, let's say coding went to zero and you could get the features for free. Amazing and a slam dunk. Back to the real world. If we brought this in-house, what would it take? A lot of engineers, but we have a lot of engineers, so why not? The main detractor is that there's a slew of patents in the way, so working with the industry leaders, like Bosch, which owns most of these 'blocking' patents, was the way to go. And since they were expensive ($200 to $300 a pop), we had to be conservative and just use a forward long-distance radar. This you can't live without, because without it, getting to 10X or 100X safer than a human is much, much, so freaking much harder that it is worth the cost and pain.

A2: TL;DR - More advanced radars haven't yet come to fruition, in my opinion. However, I've been out of the thick of it for a few years. Maybe others have opinions?

The path to a more robust radar that can be put into a car, one with more value than traditional simple automotive radar, is a long and horribly pitted road. There are the challenges from A1 above, to name a few, but getting more data that is better, cheaper, and more reliable in all automotive conditions is just not a high enough priority over solving vision.
 

Surround video

Tesla FSD Beta pushed to limits in real-world torture test

Time is used for trajectory predictions (of objects like cars and people in view).
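To make the idea concrete, here's a minimal sketch of how time history enables trajectory prediction at all. This is purely illustrative and not Tesla's method — their networks learn far richer motion models — but even the crudest constant-velocity extrapolation needs at least two timestamped observations:

```python
# Minimal constant-velocity trajectory prediction from timestamped
# positions. Illustrative only: real systems use learned motion models,
# but the point stands -- without a time axis, no prediction is possible.

def predict_position(track, dt):
    """Extrapolate the next (x, y) position of a tracked object.

    track: list of (t, x, y) observations, oldest first (hypothetical format).
    dt: seconds into the future to predict.
    """
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)  # velocity estimated from the last two frames
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)

# A car observed at 10 Hz moving steadily along x:
track = [(0.0, 0.0, 2.0), (0.1, 1.5, 2.0), (0.2, 3.0, 2.0)]
print(predict_position(track, 0.5))  # expect it roughly 7.5 m further along x
```

A single frame gives you positions; only a sequence of frames gives you velocities, which is why the 4D (3D + time) framing matters.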

“Trajectory prediction and 4D data continuity: why Tesla’s FSD beta is such a big deal!”

Dr. Know It All mentions that Elon and Karpathy said so during Autonomy Day.


Building a 3D surround view allows you to see what is stationary and what is moving (or could start moving, given its shape).
 
V9 will use a Birdseye view, a combined view of all cameras together, with a time component, so the system still knows what it saw in the previous Birdseye view. This makes it much more aware and better at judging situations.
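A toy way to picture that time component: keep a top-down occupancy grid and let the previous frame persist with decay. This is only a sketch under made-up assumptions — the real system fuses learned camera features and the "memory" is a recurrent network, not a decay factor — but it shows why carrying the previous view forward helps with briefly occluded objects:

```python
# Toy bird's-eye-view occupancy grid with a simple temporal memory.
# Illustrative only: grid size, decay factor, and the update rule are
# all invented for this sketch.

import numpy as np

GRID = 20          # 20x20 cells around the car (assumed)
DECAY = 0.5        # how strongly the previous frame persists (assumed)

def update_bev(prev_grid, detections):
    """Blend this frame's detections into the remembered grid.

    detections: list of (row, col) cells occupied this frame.
    """
    grid = prev_grid * DECAY           # fade, but don't forget, the past
    for r, c in detections:
        grid[r, c] = 1.0               # fresh evidence overrides the decay
    return grid

grid = np.zeros((GRID, GRID))
grid = update_bev(grid, [(5, 5)])      # car seen at cell (5, 5)
grid = update_bev(grid, [])            # briefly occluded: no detection this frame
print(grid[5, 5])                      # 0.5 -- it still "knows" a car was there
```

A system that throws the grid away each frame would report that cell as empty the moment the car is occluded; the temporal version degrades gracefully instead.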


There's not much use in releasing 8.3, as feedback on it won't apply to the new Birdseye approach.

That Elon hopes it will be April is not encouraging, as it means I'll have to hold my breath even longer before I can see videos of it in action.

I was under the assumption they have been using the Birdseye approach for all of FSD beta, no?

Just because there isn't a user-facing view of it doesn't mean that's not what's happening under the hood, so to speak.
 
As a software guy, the fact that they are still making architectural changes in FSD leads me to believe that they are further away from releasing this than just fixing a few bugs here and there from the beta feedback. Some assumptions they originally made were wrong for one reason or another, and the underlying code needed to be changed. It could mean that they figured out a clever new way to do something, but either way, these are not the kind of changes you make if a wide release is imminent.
They probably re-architect systematically and frequently. Jim Keller is a fan of this principle, so it's likely the FSD team does this too. It prevents wasting time on dead ends.
 
I was under the assumption they have been using the Birdseye approach for all of FSD beta, no?
These responses from Elon Musk seem to suggest the 4D architecture / rewrite / birds-eye-view are the same thing and were indeed released as part of the initial private FSD beta rollout in October, including 4D traffic light predictions. And from Karpathy's presentations about birds-eye-view, it seems like moving objects, road lines, and road edges would have been some of the earliest network predictions to take advantage of the new architecture.

[attached screenshot: 4d.png]


My guess is Elon Musk originally thought handling this subset of predictions with the new architecture would be good enough for wider release, but after evaluating the private beta progress, the team realized they would need to convert the rest of network predictions to also use the new architecture to improve safety before the wider release. Specifically, I believe they realized their reliance on (sometimes missing or incorrect) OSM-based map data such as number of lanes, e.g., for turns or going straight, resulted in some very unexpected behaviors, e.g., lane "change" in the middle of an intersection, so they pushed that problem to the neural network for 4D predictions including signs, road markings, and static objects.

Overall, it's not necessarily a new re-architecture but more of they already knew they wanted to convert all predictions to 4D, and the initial prioritization focused on converting a minimal set of predictions for a release that they hoped would be safe enough. Also, by converting more things to 4D, existing 4D predictions can improve too, e.g., detecting a one-way sign in a side camera could shift a wrong solid double yellow line prediction to correctly identify a dashed white line.
 
I am a bit puzzled that he would say he doesn't even need radar. We drive with multiple senses, so focusing on only vision and ignoring other senses seems like a step back? Imagine driving deaf. Maybe it's offset by more cameras, but diversity of senses seems like an evolutionary advantage, making you more resilient across a variety of environments and under partial disablement?

I think they are concentrating on vision for now, so as not to use radar as a crutch. When vision is solved, radar and audio will be added to provide sensory diversity and extra safety by detecting potential crashes earlier. To do this with radar, perhaps the current sensor is not particularly suitable. Not only do they need forward radar (which is good for highway driving), but also to the sides (for junctions, pedestrians and other road users being hidden behind cars).
 
Does anyone have thoughts as to why some beta testers have videos that look amazing while some drivers have videos like this for the same version?

Select parts of that video are making the rounds among Tesla detractors...
The windshield has a sticker with something like "new vehicle dealer..." Could it maybe be a demo car that somehow has FSD enabled, with a not-properly-briefed driver taking it for a spin?
 
On the other hand, when you are able to make architectural changes at this stage, can see and accept they're necessary, and implement in a matter of days/weeks, that's frickin' agile as hell, I love it!
Yes, the fact that Tesla can pull this off (and I expect them to) is actually amazing, at least in comparison to traditional automakers; in most organisations, development would grind to a halt given the need for such profound changes.
 
Great questions:
A1: TL;DR - They've passed it, in my opinion. The tech isn't that hard, and it's not that hard to innovate on (for a company like Tesla).

Long winded - Back in the day I was a proponent of 4 corner radars, as I could perceive only so much as a human driver impaired by just my crappy vision capabilities. If I could have the benefit of radar at the corners of the vehicle I, as a human, would be a much better driver, as I would have data that allowed me to see around corners (effectively) as well as through steel. Anyway, radars are expensive and take up valuable package space in the chassis. So, as a thought experiment, let's think of what we'd do if any of the cons were zero. Let's say cost went to zero. You bet I'd put those in the car. Let's say size went to zero. Not so certain, but yeah, still really good. And finally, let's say coding went to zero and you could get the features for free. Amazing and a slam dunk. Back to the real world. If we brought this in-house, what would it take? A lot of engineers, but we have a lot of engineers, so why not? The main detractor is that there's a slew of patents in the way, so working with the industry leaders, like Bosch, which owns most of these 'blocking' patents, was the way to go. And since they were expensive ($200 to $300 a pop), we had to be conservative and just use a forward long-distance radar. This you can't live without, because without it, getting to 10X or 100X safer than a human is much, much, so freaking much harder that it is worth the cost and pain.

A2: TL;DR - More advanced radars haven't yet come to fruition, in my opinion. However, I've been out of the thick of it for a few years. Maybe others have opinions?

The path to a more robust radar that can be put into a car, one with more value than traditional simple automotive radar, is a long and horribly pitted road. There are the challenges from A1 above, to name a few, but getting more data that is better, cheaper, and more reliable in all automotive conditions is just not a high enough priority over solving vision.
Thank you. It is really only the single forward-looking radar I am asking about.

I have to admit that I'm surprised that there are any blocking patents in radar & signal processing. I'd have thought that all the accessible technology patents had long since timed out, and if that is not the case then I fully understand that $300 each is a big bill and worth avoiding.

The traditional automotive radars viewed the world as a series of left-to-right slices and did not give height. I think this is one reason why we hear of "phantom braking" events associated with driving below bridges: the NN sees a sharp shadow delineation on the road, couples that with a blocking radar echo that actually originates from the bridge above, and quite understandably calculates that there is too great a probability of this being a stationary obstruction spanning the roadway. There are various lines of attack on that problem, but the most obvious one suffers from needing to build (extract) a database of bridges/overpasses so as to ignore braking calculations when under them - and that in turn suffers from a) poor scalability, and b) one day something really will be there. Another line of attack is to improve the ability of the cameras to see into highly contrasted shadow and for the NN to then pick out drivable roadway below and a bridge above quickly enough to ignore the radar return and not hit the brakes.
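To see why a height-blind radar forces this dilemma, here's a toy decision rule with made-up thresholds (nothing here is Tesla's actual logic). The return from a stopped car ahead and from an overpass look identical in range and closing speed, so a naive fusion rule must choose between phantom braking and missing real obstacles:

```python
# Toy fusion rule for a radar with no elevation information.
# Thresholds and signature are invented for illustration.

def naive_brake_decision(radar_range_m, closing_speed_mps, is_stationary):
    """Brake if a stationary return is inside our stopping envelope."""
    time_to_contact = radar_range_m / max(closing_speed_mps, 0.1)
    return is_stationary and time_to_contact < 3.0

# Stopped car 50 m ahead while we travel at 25 m/s -> correctly brakes:
print(naive_brake_decision(50, 25, True))   # True
# An overpass 50 m ahead produces the *same* return -> phantom brake:
print(naive_brake_decision(50, 25, True))   # True, but wrong this time
```

The two calls are identical because, to a height-blind radar, the two scenes are identical; only extra information (vision, a map, or elevation from the radar itself) can break the tie.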

A third approach is to get height out of the radar. Clearly for automotive applications we cannot do mechanical scanning, so instead we have to go to AESA radars, which also have other advantages we can exploit (track-while-scan, multiple beams, etc.). But even just using them in a simplistic single-beam mode we can get height out of them, which in turn ought to solve a lot of problems (phantom braking, improved probability of overhead gantry recognition, higher definition). And they seem to be becoming commercially available, with radar-on-a-chip solutions for automotive.
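With elevation available, the bridge ambiguity dissolves: a beam's elevation angle plus range resolves to a height, so overhead structures can simply be filtered out. A sketch with invented mounting height and clearance numbers:

```python
# Sketch of overhead-return rejection given an elevation-capable radar.
# Sensor height and clearance threshold are assumed values, not specs.

import math

SENSOR_HEIGHT_M = 0.5      # radar mounted low in the front fascia (assumed)
CLEARANCE_M = 4.5          # ignore returns resolving above this height (assumed)

def is_overhead(range_m, elevation_deg):
    """True if the return resolves to a point above vehicle clearance."""
    height = SENSOR_HEIGHT_M + range_m * math.sin(math.radians(elevation_deg))
    return height > CLEARANCE_M

print(is_overhead(50, 0.0))   # car ahead at road level -> False, keep tracking it
print(is_overhead(50, 6.0))   # bridge deck ~5.7 m up -> True, safe to ignore
```

This is exactly the information the older slice-scanning automotive radars could not provide, which is why the database-of-bridges workaround existed at all.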


And these sorts of things were the origins of AESA.

But if they can solve it with vision, then that's cheap and great!
 
I think the last time she made a report on Tesla was also during a weekend. The following Monday the stock price went up by about 19%. Mind you, Cathie Wood was not as well known back then. I also don't know if the jump was caused by the report; there could be other factors involved.
I wonder if Cathie Wood's 2x TSLA price update in the ARK analysis will have the same effect on the stock price as Warren Buffett announcing they bought Chevron.
 
A couple of thoughts on why:

First, you could argue that ARK doesn't have the analysts to understand the energy side, much like AJ has had such trouble wrapping his head around a non-traditional auto company. However, I don't buy that at all - the ARK crew is incredibly shrewd. Those cats aren't missing the energy equation and its enormous implications. So then the question becomes, why would they continue to leave it out?

1) They may not view it as important to the Tesla story until after 2025. It will continue to scale and mature during the next few years, but really take off after 20M autos/yr are reached. In other words, it will be the story of Tesla from 2025-2035 and beyond. And if you start including it in price targets now, they really start to seem detached from reality. There's no sense in ARK putting out a $10T market cap for Tesla right now. Just focus on the growth that's immediately in front of you, and that in and of itself is a lot. EDIT: To put a finer point on it, you can't turn the energy markets on their heads until you have enough batteries to do so, and you won't have enough batteries to do so until you can make all the cars you want to be making.

2) Ignoring TE provides their predictions with a hedge. Even if they've overshot what auto and FSD can/will do by 2025, it's possible TE will pick up the slack and propel the stock price to their targets nonetheless.

3) TE, though not secret or hidden, is THE information advantage that Tesla longs enjoy over everyone else right now, with auto and FSD plans in the open and mostly well-understood (belief is another question). ARK may wish to keep that information advantage for some time longer so they can accumulate more. With money continuing to flow into their funds, they have a lot more TSLA to buy.
ARK's rationale for not including Energy was that they wanted their forecasts to be directly comparable to those of other analysts (all of whom were ignoring energy).