Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Musk already lost, in that he had to take time out of his busy schedule to answer BS questions about how this decision was a bailout which hurt shareholders... hahahaha, as I look at my TSLA balance.
This may work out to be another “TechnoKing” joke that turns into an unpaid advertising bonanza.
 
Well, I would not call it "fair overall". Consider this part:

To explain the behavior, Mr. Musk told the court he didn’t respect Mr. Baron because the lawyer had once worked at a law firm whose partners became engulfed in an ethics scandal. “I think you are a bad human being,” Mr. Musk said to Mr. Baron.

I mean, that's one way to describe it if you want to minimize his comments as some cheap stab at the lawyer's past. In reality, though, if you dig up the stories Elon was referring to, as I am sure someone like the WSJ should do as a reputable news organization, you will find that:
  • The first firm Baron worked for was Milberg Weiss: its two name partners were sentenced to jail after a court found that, across their 150 cases, they had earned $216 million by paying $11 million in bribes to witnesses and experts. They had also laundered $44 million and were found guilty on three counts of mail fraud.
  • Then our good friend went to work for another firm, Robbins Geller, where partners were later jailed, and as recently as this May that firm was thrown off a case after a court found it had failed to disclose that its class-action clients were actually shorting the company they were suing.
So yeah, that is the work history of our upstanding citizen, Mr. Randy Baron here - I am sure this is all just a coincidence, a bad choice of law firms in a difficult job market, and he is a decent guy. So it is perfectly fair to refer to this as "he once worked for a firm where there were some shady people".

A cut and paste.
Randy Baron is also the lead plaintiff litigator in a similar lawsuit - this one against Larry Ellison over Oracle's purchase of NetSuite in 2016.
I am not sure of the status of that lawsuit.

 

Wonderful person. A moral compass that is sorely in need of calibration.
 
Good day for us long holders today... hope this continues tomorrow... I'll leave y'all with this CNBS headline:

[attached screenshot of CNBC headline]

TL;DR

[attached screenshot]


:rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes: :rolleyes:
 
I was expecting mainstream media to create a list of FSD assists and quirks like this, but it's out so soon? We're still over a week from earnings.
The good news is that many more will hear that FSD Beta 9 is being tested, the movies have started again, and one just made it down Lombard Street at night.

Those Waymo fails are abundant and boring. But it's those Tesla videos that blow my mind by how quickly the future got pulled in. I mean... how old were you when you could first drive a car? And now how old is FSD? Does anyone see a trend here?

We have to step back and think about how the driver knows when to intervene.

The overwhelming majority of the time there is a visual cue that something isn't right, or the driver loses confidence that FSD knows what it is doing...

The vast majority of those visual errors seem to be failures to recognise, or correctly categorise, something the car sees.

These fall into two categories:
  1. More training needed against a known label with a predefined course of action.
  2. A new label needed, and perhaps additional driving instructions.
There is no obvious indication Tesla is running out of headroom in terms of training with more data, and no obvious indication they are running out of capacity to add more labels.
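The two categories above can be sketched as a toy triage rule. This is purely illustrative - the label set and function names are invented for the sketch, not anything from Tesla's actual pipeline:

```python
# Purely illustrative triage of a perception failure into the two
# categories above. All names here are invented for the sketch.
KNOWN_LABELS = {"car", "pedestrian", "traffic_cone", "lane_line"}

def triage(object_seen: str) -> str:
    """Decide which bucket a mis-handled object falls into."""
    if object_seen in KNOWN_LABELS:
        # Category 1: the class already exists, so the fix is more
        # labelled training examples with the predefined course of action.
        return "more training data"
    # Category 2: the object is outside the label set entirely, so a
    # new label (and perhaps new driving instructions) is needed.
    return "new label"

print(triage("traffic_cone"))        # → more training data
print(triage("overturned_trailer"))  # → new label
```

In this toy framing, "running out of headroom" would mean category 1 cases stop improving with more data, and "running out of capacity" would mean new labels can no longer be added - and as noted above, there's no obvious sign of either.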

In terms of the part of FSD that can be solved with vision alone, there is plenty of scope for further improvement - some of it rapid, some taking months rather than weeks.

The march of 9s never reaches 100%; any driving situation that requires sensors or information beyond what vision can provide has an element of risk.
Vector maps, auditory cues and situational memory in the training data can help.
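The arithmetic behind the "march of 9s" is worth spelling out - each additional 9 of per-mile reliability multiplies the expected distance between interventions by ten, which is why the march never actually reaches 100%. A back-of-envelope sketch (the mileage framing is my own illustration, not an official metric):

```python
# Back-of-envelope "march of 9s": each extra 9 of per-mile reliability
# multiplies the expected distance between interventions by ten.
for nines in range(1, 6):
    reliability = 1 - 10 ** -nines          # 0.9, 0.99, 0.999, ...
    miles_between_failures = 10 ** nines    # = 1 / (1 - reliability)
    print(f"{nines} nines ({reliability:.5f}) -> "
          f"~one intervention per {miles_between_failures:,} miles")
```

The point of the sketch: going from 99.9% to 99.99% is exactly as big a multiplier as going from 90% to 99%, so late-stage progress looks slow even when each step is equally hard-won.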

But when I look at the videos, most of the time the driver intervened in response to a visual cue, and that sort of problem is an NN training problem.
 
I am not a computer programmer. I know nothing about AI. But it seems to me that FSD can't be solved by writing lines of code for every possible scenario. You need a computer that can learn to drive a car and think on its own, so it can react to a completely new situation like a human does. Hopefully better though, since the computer will never be drunk, tired, or distracted. In my limited understanding, that is the point of DOJO.
 
Hopefully better though, since the computer will never be drunk, tired, or distracted - Unless it gets hacked :eek:
 
My understanding is that this is correct.

Where we think of the car as recognising static objects and responding, it more accurately recognises situations and predicts the paths other vehicles are likely to follow based on various cues. Part of that learning is a memory of how a good human driver responded in that situation.

Exactly how that is done is beyond my knowledge. They have all the inputs from the driver, and the video of the driving sequence. So they know the path the car followed, the steering inputs from the driver, and when the car accelerated or decelerated. They know the predicted paths of other vehicles, and how accurately those predictions translated into actual paths...
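The "memory of how a good human driver responded" idea is essentially learning from recorded behaviour rather than hand-written rules. A minimal sketch of that, assuming nothing about Tesla's real stack - here "perception" is a single invented number (say, lane curvature) and the "policy" is a one-parameter map fitted by least squares to the human's recorded steering:

```python
# Toy imitation-learning sketch: fit a control response to recorded
# human driving. All variable names and data here are invented.
def fit_policy(perception, human_steering):
    """Least-squares slope through the origin: steer = k * perception."""
    num = sum(p * s for p, s in zip(perception, human_steering))
    den = sum(p * p for p in perception)
    return num / den

# Recorded clips: what the car saw, and what the human driver did.
perception = [0.0, 0.5, 1.0, 2.0]
human_steering = [0.0, 1.0, 2.0, 4.0]  # this driver steers at 2x curvature

k = fit_policy(perception, human_steering)
print(k)        # → 2.0, the learned gain
print(k * 1.5)  # → 3.0, the policy's proposed steering for a new situation
```

The real system presumably learns a far richer mapping with a neural network, but the training signal is the same shape: perception in, recorded human response as the target.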
 