Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

BI has come out with a few anti-Elon, pro-Eberhard hit pieces in the last few days. When they run out of material, they dig through their archives and recycle this crap.

Next, they are going to dig deeper and write a series on the life of the (deranged) whistleblower that was shattered by evil Musk.
Maybe they'll really get creative and dredge up that flight attendant Musk verbally sexually assaulted. And maybe a Pedo-guy interview.
 
Hmmm... feel like that was at one of the AI days...

(rummages through transcripts...)

This sounds like it... The "doubling" is what I was thinking of when I said "significant change", which of course doesn't preclude incremental changes to their existing GPU cluster...






Yeah, there may be no real dojo-factor in this release, the notes emphasizing the dataset/labeling improvement stuff just jumped out at me and got me wondering...
Doubling is a pretty big step. The nice thing about autolabeling is that it scales well across parallel systems: each clip/region is its own independent data set (though retraining the labeler itself probably requires merging onto a single system).
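To make the "scales well with parallel systems" point concrete, here's a minimal sketch. Everything in it (`Clip`, `label_clip`) is a hypothetical stand-in, not Tesla's actual pipeline; the point is just that independent clips fan out across workers with no shared state:

```python
# Sketch only: every name here (Clip, label_clip) is hypothetical.
# The point from the post: each clip is an independent unit of work,
# so autolabeling fans out across workers with no shared state.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: int
    frames: list  # stand-in for raw camera footage

def label_clip(clip: Clip) -> dict:
    """Run the (hypothetical) offline labeler on one clip."""
    return {"clip_id": clip.clip_id,
            "labels": [f"label-{f}" for f in clip.frames]}

def label_fleet(clips: list, workers: int = 4) -> list:
    # Clips are independent, so execution order doesn't matter; in a real
    # system the workers would be separate machines, not threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(label_clip, clips))
```

Retraining the labeler itself is the one step that breaks this pattern, since it has to see the merged results from all workers.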
 
FSD Beta v11.3 Release Notes

  • Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, which allow for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision making.
  • Added voice drive-notes. After an intervention, you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.
  • Expanded Automatic Emergency Braking (AEB) to handle vehicles that cross ego’s path. This includes cases where other vehicles run their red light or turn across ego’s path, stealing the right-of-way.
  • Replay of previous collisions of this type suggests that 49% of the events would be mitigated by the new behavior. This improvement is now active in both manual driving and autopilot operation.
  • Improved autopilot reaction time to red light runners and stop sign runners by 500ms, by increased reliance on object’s instantaneous kinematics along with trajectory estimates.
  • Added a long-range highway lanes network to enable earlier response to blocked lanes and high curvature.
  • Reduced goal pose prediction error for candidate trajectory neural network by 40% and reduced runtime by 3X. This was achieved by improving the dataset using heavier and more robust offline optimization, increasing the size of this improved dataset by 4X, and implementing a better architecture and feature space.
  • Improved occupancy network detections by oversampling on 180K challenging videos including rain reflections, road debris, and high curvature.
  • Improved recall for close-by cut-in cases by 20% by adding 40k autolabeled fleet clips of this scenario to the dataset. Also improved handling of cut-in cases by improved modeling of their motion into ego’s lane, leveraging the same for smoother lateral and longitudinal control for cut-in objects.
  • Added a “lane guidance” module and perceptual loss to the Road Edges and Lines network, improving the absolute recall of lines by 6% and the absolute recall of road edges by 7%.
  • Improved overall geometry and stability of lane predictions by updating the “lane guidance” module representation with information relevant to predicting crossing and oncoming lanes.
  • Improved handling through high speed and high curvature scenarios by offsetting towards inner lane lines.
  • Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions and more differentiation between the FSD driving profiles with respect to speed lane changes.
  • Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles.
  • Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%, primarily from migrating to more densely supervised autolabeled datasets.
  • Improved semantic detections for school buses by 12% and vehicles transitioning from stationary-to-driving by 15%. This was achieved by improving dataset label accuracy and increasing dataset size by 5%.
  • Improved decision making at crosswalks by leveraging neural network based ego trajectory estimation in place of approximated kinematic models.
  • Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
  • Unlocked longer fleet telemetry clips (by up to 26%) by balancing compressed IPC buffers and optimized write scheduling across twin SoCs.
Awesome!

But as the owner of a 2018 Model 3 with EAP, I wonder how much of this I will inherit?

More generally, somewhere within Tesla there must be a huge matrix 'spreadsheet' with vehicle hardware permutations on one axis and software features on the other, along with which version of each release has which features.

Sadly, I'm thinking Tesla is in the unfortunate situation of the legacy software providers compared to the Cloud circa the 2010s. A huge benefit of Cloud software, among others, is that there is only one (or at most a very few) active version, which minimizes testing and support. Microsoft Word 2010, by contrast, shipped in several releases, each running on a variety of versions and patch levels of Microsoft Windows, most of which Microsoft couldn't force users to upgrade. So there were literally hundreds (thousands?) of hardware + software combinations that, ideally, Microsoft had to test.

That's where Tesla is today, and their code has to accommodate it. E.g., "okay, we're approaching a red light, and this car has x cameras and y CPU and radar is (boolean) and the user has z features enabled, therefore I should do ___" (pseudo-code).

Maybe Salesforce is the better example, since they did decimate SAP and Oracle in the CRM space, whereas Microsoft Office has migrated to the Cloud pretty well (although I continue to believe Google does it better with the Cloud-native Workspace). I digress.
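That pseudo-code can be fleshed out into a toy sketch of the permutation problem. To be clear, every name below (`VehicleConfig`, `plan_for_red_light`, the behavior strings) is invented for illustration; this is not Tesla's actual code, just the shape of the branching a hardware × feature matrix forces on you:

```python
# Toy sketch of the "hardware x feature" matrix idea. All names are
# hypothetical illustration, not Tesla's actual code.
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleConfig:
    cameras: int          # "x cameras"
    compute: str          # "y CPU", e.g. "HW2.5", "HW3"
    has_radar: bool       # "radar is (boolean)"
    features: frozenset   # "z features enabled", e.g. {"EAP", "FSD"}

def plan_for_red_light(cfg: VehicleConfig) -> str:
    """'okay, we're approaching a red light, therefore I should do ___'"""
    if "FSD" in cfg.features and cfg.cameras >= 8:
        return "stop via vision-only traffic-light network"
    if cfg.has_radar:
        return "stop using radar-fused braking profile"
    return "alert driver; no automated stop available"

# Every cell of the matrix is one more config the code must handle:
legacy = VehicleConfig(cameras=8, compute="HW2.5", has_radar=True,
                       features=frozenset({"EAP"}))
# plan_for_red_light(legacy) -> "stop using radar-fused braking profile"
```

Multiply the branches here by every camera count, computer generation, sensor suite and feature flag in the fleet and you get the testing-surface explosion described above.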

The only silver lining is that Tesla can and does upgrade its software, and sometimes even its hardware, limiting permutations and functional gaps. It no longer has to plead with users to upgrade from Windows 98, which makes the support job easier. But as long as we're in 'the hardware business' and don't auto-upgrade everybody in old Teslas, we won't really have "one Autopilot to rule them all". Not the end of the world, just the facts.
 
Unfortunately, I'm still ICE-ing about, until my TSLA holding gets to a point that I can transition to BEV. Hoping this will happen before my CT reservation is filled. :)

From what I've read here and elsewhere, Tesla is great at quickly pivoting toward deployment of safeguards as the Beta-Data reveals the need.
At this point, why don't you just wait for the truck?
 
Which raises the question: “Is it safe for Tesla to beta test FSD on the highways?”

EAP already requires few interventions. FSD Beta won't introduce any new features on highways but will hopefully handle more situations without driver input and act more human-like (in the positive sense).

It doesn't matter whether it's highway or city streets: with any L2 system there's a certain risk that drivers get complacent, and one may argue that it increases as the perceived capabilities of the system improve. MTBF is one factor; assertiveness and "smoothness", i.e. low G-forces and jerk, can also lead to a wrong assessment of the existing failure modes.
On the other hand, a driver assistance system does exactly what the name implies: It relieves the driver of tiring tasks, reduces fatigue and stress and issues warnings or even initiates evasive actions.

Therefore, the only valid question is "Do the risk reducing factors outweigh the risk increasing aspects?"
As stated up thread, it's difficult to measure the benefits from the outside as avoided accidents rarely make headlines. Tesla measures accident rates with and without EAP / FSDB and their conclusion is that there's a net benefit.

Could Tesla do better with respect to gauging driver attention? Certainly, and it wouldn't even take bleeding-edge AI. Would it give a higher ROI (in terms of overall safety) if they diverted a couple of man-months from FSD development to better driver monitoring? Probably yes, at least short term. A reliable driver-monitoring system would furthermore not only shut down this favorite talking point of naysayers and FUDsters, it could also de-risk the driving strategy (e.g. reduce speed, avoid lane changes) when it detects a lack of supervision.

Edit to add:
Based on posts further up, it seems like the inside camera is already doing a decent job.
 
This Teslarati article focuses upon a known flaw in the human system. Essentially, "familiarity breeds contempt" is a very real issue. Here, IIHS comments upon the problem of this tendency of people to become complacent when using automated driving aids.

You really cannot depend upon many/most drivers to exercise the level of vigilance needed as the march of 9s strives to reach a functional full autonomy. There will be those instances of tragedy, and uncounted instances of near tragedy, as the data is gathered to accomplish this worthy goal.

On the flip-side, there will still be many, many lives saved which will not be counted, simply because you don't tend to notice when a system is working well.

This plays out in a thread on these forums regarding the absence of USS for parking assist. Some owners are enraged and claim they are literally unable to park their car now that they don't have park assist technology. As in, they can't even pull "in" to a parking space without hitting something.
 
I was wondering if anyone else has noted this pattern. It seems that when TSLA rises, SGML falls, and vice versa.
I saw an opportunity in that, particularly in the IRA account where there are no tax consequences.
I painfully liquidated my lifelong investment in TSLA (at $230) avoiding that not-so-sweet slide to the bottom and loaded up on SGML, which was climbing.
As SGML stalled, I moved into cash and in January re-entered TSLA at $123. The moral of the story is: Never get so emotionally attached to a stock that you cannot let it go when the charts and numbers scream to do so. There is no need to ride it to the bloody bottom.
~GETTING TOO EMOTIONALLY ATTACHED TO ANY STOCK CAN BE HAZARDOUS TO YOUR WEALTH~

I suspect our magic inverse relationship between TSLA and SGML might not be around much longer, given Tesla's potential takeover bid for SGML.

May they both prosper!

One error in your thinking is that holders hold because they are emotionally attached to the stock. Speaking only for myself, I hold because I want to minimize risk and maximize gains.

The fact that I don't particularly like to lose money makes this much easier.
 
This Teslarati article focuses upon a known flaw in the human system. Essentially, "familiarity breeds contempt" is a very real issue. Here, IIHS comments upon the problem of this tendency of people to become complacent when using automated driving aids.

You really cannot depend upon many/most drivers to exercise the level of vigilance needed as the march of 9s strives to reach a functional full autonomy. There will be those instances of tragedy, and uncounted instances of near tragedy, as the data is gathered to accomplish this worthy goal.

On the flip-side, there will still be many, many lives saved which will not be counted, simply because you don't tend to notice when a system is working well.

Personally, I think drivers being a bit “Complacent” is fine. It’s a trade. You are paying a bit less attention to driving, but you are also less tired because you can relax more most of the time. This means when you are needed you are a bit fresher.

What you do need to be is aware enough that when something out-of-the usual comes into view you can react and take over. When you see flashing lights, you should take over. If you don’t, a Tesla (usually) slows down noticeably which should alert you even more. If you are so unaware that neither flashing lights ahead, nor the car slowing down alert you, then you should not be behind the wheel.

There is a big gap between “complacent” and completely punched out. Almost every single AP incident I’ve heard of has involved drivers that are completely punched out.
 
Personally, I think drivers being a bit “Complacent” is fine. It’s a trade. You are paying a bit less attention to driving, but you are also less tired because you can relax more most of the time. This means when you are needed you are a bit fresher.

What you do need to be is aware enough that when something out-of-the usual comes into view you can react and take over. When you see flashing lights, you should take over. If you don’t, a Tesla (usually) slows down noticeably which should alert you even more. If you are so unaware that neither flashing lights ahead, nor the car slowing down alert you, then you should not be behind the wheel.

There is a big gap between “complacent” and completely punched out. Almost every single AP incident I’ve heard of has involved drivers that are completely punched out.

Yes, that first sentence of the second paragraph describes what I was going on about. Too many drivers take for granted the concept of personal responsibility rather than actively practice it. These drivers casually discount the risk because driving is something "they do all the time."

I consider this behavior as the autonomous vehicle parallel for "An idle mind is the devil's Play Station"

It is human nature to take the path of least resistance and a conscious effort is necessary to thwart this behavior. Otherwise, it leads to dozing off, or spending too much time fiddling with something that better grabs one's focus. Either way, not enough attention being paid to the road. Granted, there are plenty of us with enough self restraint to be highly vigilant in avoiding distractions, but there are too many cases of those who aren't, and they make the headlines.

In my experience with other road users there are a lot of them that could embrace a better understanding of physics, and apply this to improving their driving skills.
 
As ChatGPT said, true gambling is a completely random outcome.
Everyone speculating in options needs to understand that. "Random" does not apply when the probabilities are predefined. Random and stochastic are, after all, almost synonyms, but random refers to a variable and stochastic to a process. Neither is true of US derivatives products, options of any kind: they are managed according to the market makers' choices, thus no random variables, no stochastic process. Slot machine gambling, by contrast, has entirely random individual actions with a predefined probability of a given outcome favorable to the house. Basic statistics, ignored by marks everywhere.
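The slot-machine point is easy to demonstrate. In this sketch each pull is a genuinely random variable, but the payout table is predefined so the expected value favors the house; the specific numbers (10% win probability, 8x payout) are made up for illustration:

```python
# Each pull is random; the payout table is fixed so the house wins on average.
# Numbers (10% win, 8x payout) are made up for illustration.
import random

def pull(rng, win_prob: float = 0.10, payout: float = 8.0,
         stake: float = 1.0) -> float:
    """One pull: random outcome, predefined probabilities."""
    return payout - stake if rng.random() < win_prob else -stake

def house_edge(win_prob: float = 0.10, payout: float = 8.0) -> float:
    # Expected loss per unit staked: 1 - win_prob * payout (~0.2 here,
    # i.e. the house keeps about 20 cents of every dollar, long run).
    return 1.0 - win_prob * payout

rng = random.Random(42)
avg = sum(pull(rng) for _ in range(100_000)) / 100_000  # hovers near -0.2
```

No single pull is predictable, yet over enough pulls the average return converges on the predefined edge, which is exactly the "random actions, fixed favorable probabilities" distinction made above.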