Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

The way you deal with irrelevancies is to ignore them. Can we please stop giving them air?
I am not saying we need to keep talking about this topic on this thread, but I am happy that Alexandra has decided to continue her pursuit of an upgrade based on her Twitter poll.

Looks like Fact Checking also voted yes, with good reason.


Tesla Facts @truth_tesla · 4h Replying to @ajtourville and @TeslaBoomerMama ▪️The reply of Moody's to @TeslaBoomerMama is very much proof that credit rating agencies are feeling the PR pressure. ▪️Elon's "Moody's is irrelevant" tweet increased the pressure. ▪️Public pressure always extracts a PR price & it's the Tesla community's job to increase the price.


Edit: >> Looks like Elon's tweet is working to increase the pressure . . .

On Sept 3, The Street wrote:
. . . For Musk, who has become the most powerful and influential CEO in the world with nearly 105 million followers on Twitter at last check, there is no institution or company that is untouchable. The rating agency Moody's has just learned this the hard way.

In an incendiary tweet, the billionaire claims that Moody's is no longer relevant. . .


and The Street goes on to summarize the arguments Twitter users presented for an AAA rating.
 
We need to be clear about what Dojo is and what it is not.

As Gandhi allegedly said about Western Civilization: That would be a good idea.

Many folks won't believe in fully autonomous driving until they see it, and that's fine with me. But let's be clear that Elon is hellbent on achieving it very soon. Yes, his timeline has been wrong many times before, but history (of neural nets and Elon's achievements) makes me optimistic. We'll know more about "what Dojo is and what it is not" in 26 days.

 
I just watched it run an occluded stop sign in one of Chuck's videos. It recognized the sign too late and didn't slam on the brakes, so Chuck had to brake himself. That's something I thought was fully solved.

Unfortunately, that particular behavior will never be properly solved with Tesla's current architecture. A human deals with it by getting it wrong the first time and then remembering there is a stop sign there on subsequent visits. Tesla FSD has no memory like that; it approaches every intersection as if it were seeing it for the first time.
 
Elon said FSD beta is on pace to be in general release by the end of this year. That means anyone in N. America who has purchased FSD (or leases it monthly) can have access to the current beta w/o any safety scores, or other pre-qualification.

Yes, that is what Elon said. But it isn't guaranteed that is what he will do. Going by past behavior, you'd have to assume he will blow that prediction as well. Just trying to caution anyone who might be betting options on this scenario….
 
Unfortunately, that particular behavior will never be properly solved with Tesla's current architecture. A human deals with it by getting it wrong the first time and then remembering there is a stop sign there on subsequent visits. Tesla FSD has no memory like that; it approaches every intersection as if it were seeing it for the first time.
Tesla definitely has a way to solve it with their architecture: crowdsource the data and add it to the map data, kind of like what people do with speed traps on Waze. It's just something Tesla needs to figure out how to implement. It would solve so many issues, but I imagine things like this will be added once they nail down the whole perception side.
 
Many folks won't believe in fully autonomous driving until they see it, and that's fine with me. But let's be clear that Elon is hellbent on achieving it very soon. Yes, his timeline has been wrong many times before, but history (of neural nets and Elon's achievements) makes me optimistic. We'll know more about "what Dojo is and what it is not" in 26 days.

The question isn't simply a "yes or no" for believing in fully autonomous driving -- at least not for me.
It comes down to timeline.

Do I believe Elon has a good chance to achieve it (without a specific deadline attached to it)? Yes, I do!
He has thrown the best engineering talent at it and spares no expense to pursue it, so if any team can do it, his team will; and I do believe the problem is solvable.

On the other hand, I am skeptical about the "very soon" part. Sure, they are making great progress. I even believe that they will release some version of FSD to the wide public this year. But that version will still be a level 2 ADAS, not fully autonomous driving.
How many more iterations (software and hardware) will it take to make it true "FSD" (remember what the acronym stands for)? I have no clue, but my gut feeling is that it will take many more than Elon currently projects. There will be many more gotchas, just as there were many in the past 6 years of Autopilot development; don't forget his 2016 projection of "driving from coast to coast on autopilot" within a year.
 
Tesla definitely has a way to solve it with their architecture: crowdsource the data and add it to the map data, kind of like what people do with speed traps on Waze. It's just something Tesla needs to figure out how to implement. It would solve so many issues, but I imagine things like this will be added once they nail down the whole perception side.
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.

Actually, Tesla does not need crowdsourcing for this: the car eventually recognized the stop sign, just too late. So the car did know about it, eventually. My point was that Tesla should modify their architecture to include memory of this type.
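The kind of memory described above could be sketched roughly like this. Everything here is illustrative, not Tesla's actual architecture: once perception confirms a sign (even late), its location is stored, and on later approaches the planner can expect it before it becomes visible.

```python
import math

class SignMemory:
    """Hypothetical per-location memory of previously recognized stop signs."""

    def __init__(self, match_radius_m: float = 15.0):
        self.match_radius_m = match_radius_m
        self.known_signs = []  # (lat, lon) of signs seen on earlier drives

    def remember(self, lat: float, lon: float) -> None:
        """Store a sign location after perception confirms it."""
        if not self.expects_sign(lat, lon):
            self.known_signs.append((lat, lon))

    def expects_sign(self, lat: float, lon: float) -> bool:
        """On a later approach, check memory before the sign is visible."""
        return any(
            self._dist_m(lat, lon, s_lat, s_lon) < self.match_radius_m
            for s_lat, s_lon in self.known_signs
        )

    @staticmethod
    def _dist_m(lat1, lon1, lat2, lon2) -> float:
        # Small-distance equirectangular approximation, in metres.
        k = 111_320.0  # metres per degree of latitude
        dx = (lon2 - lon1) * k * math.cos(math.radians(lat1))
        dy = (lat2 - lat1) * k
        return math.hypot(dx, dy)
```

First pass: the sign is recognized late, `remember()` is called. Second pass: `expects_sign()` returns True well before the occluded sign comes into view, so the car can slow preemptively.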
 
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.

Already said this once in this thread, but it's worth repeating. This particular incident was not a failure of perception. If anything, it's a validation of how well the perception network is doing and how much work driving policy needs. Here's a screenshot of the point at which the heavily occluded stop sign was recognized by FSD and added to the visualization:

[Screenshot: FSD visualization at the moment the occluded stop sign appears]


By my estimation, it had almost 2 seconds, or 66 feet to stop after recognizing the sign. I think this particular failure was caused by a braking latency bug introduced in 10.69.1, and rectified in 10.69.1.1.
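The "2 seconds / 66 feet" estimate above can be sanity-checked with back-of-the-envelope physics. The deceleration figure is my assumption (roughly 3 m/s² is a firm but comfortable stop); the speed is implied by the poster's numbers.

```python
# Was a stop physically possible after the sign was recognized?
FT_PER_M = 3.281

def stop_feasible(speed_mps: float, dist_m: float, decel_mps2: float = 3.0) -> bool:
    """True if constant deceleration stops the car within dist_m (d = v^2 / 2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2) <= dist_m

# 66 ft covered in ~2 s implies roughly 33 ft/s, i.e. ~10 m/s (~22 mph).
speed = 66 / FT_PER_M / 2.0   # ~10.1 m/s
available = 66 / FT_PER_M     # ~20.1 m
print(stop_feasible(speed, available))  # True: a ~3 m/s² stop needs only ~16.9 m
```

So at that speed a normal stop fit comfortably in the available distance, which supports the view that this was a policy/latency failure rather than a perception one.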
 
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.
Data from disengagements is sent back to Tesla, so the driver simply needs to do what they're supposed to be doing and pay attention. If they fail to disengage at the appropriate moment, testers can tap the camera icon and send a clip plus associated data back to Tesla. Who knows how these reports are prioritized, but surely they are in some way.
 
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.

Actually, Tesla does not need crowdsourcing for this: the car eventually recognized the stop sign, just too late. So the car did know about it, eventually. My point was that Tesla should modify their architecture to include memory of this type.
Elon said they are already working on localized crowd sourcing.
Electrek - Potholes
 
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.

Actually, Tesla does not need crowdsourcing for this: the car eventually recognized the stop sign, just too late. So the car did know about it, eventually. My point was that Tesla should modify their architecture to include memory of this type.

Might it be solved by code which presumes there may be hidden traffic-control signage whenever the sight line is obstructed while approaching an intersection? The AI could scan for things to build confirmation, like the base of a pole that might hold a stop sign, or the back of a stop sign across the intersection for the oncoming lane and/or crossing lanes. (Absence of a crossing-lane stop sign would weigh toward there being a hidden stop sign on the approach.)

Speed would be adjusted down allowing for a hard stop prior to entering the intersection until presence/absence of traffic control is confirmed.

Isn't that how a (particularly clever) meat sack would do it?

Edit: Then, the AI would generate a report of the occluded sign to the managing entity (city, county, state). Something most drivers wouldn't do.
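The cue-weighing idea above could be sketched as a simple confidence score that throttles approach speed. The cue names, weights, and speed limits here are all invented for illustration; a real system would learn them.

```python
# Combine weak visual cues into a probability that a hidden stop sign exists,
# then pick an approach speed that still allows a hard stop.
CUE_WEIGHTS = {
    "sightline_obstructed": 0.30,  # base suspicion: the approach view is blocked
    "pole_base_visible": 0.25,     # something that could be holding a sign
    "opposing_sign_back": 0.30,    # back of a stop sign facing the oncoming lane
    "cross_lanes_unsigned": 0.15,  # cross traffic has no stop sign of its own
}

def hidden_sign_confidence(cues: set) -> float:
    """Sum the weights of the observed cues, capped at 1.0."""
    return min(1.0, sum(w for c, w in CUE_WEIGHTS.items() if c in cues))

def approach_speed_mps(confidence: float, cruise_mps: float = 13.0,
                       cautious_mps: float = 4.0) -> float:
    """Blend from cruise speed toward a cautious crawl as confidence rises."""
    return cruise_mps - confidence * (cruise_mps - cautious_mps)
```

With an obstructed sight line plus the back of an opposing sign, confidence lands at 0.6 and the sketch would slow from 13 m/s to about 7.6 m/s, which is the "allow for a hard stop until confirmed" behavior described above.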
 
So you want people to recognize that FSD missed a stop sign, somehow tell Tesla about it, and then have Tesla manually add it to a new stop-sign database? Yes, that would work, but it would also be kind of a manual band-aid.

Actually, Tesla does not need crowdsourcing for this: the car eventually recognized the stop sign, just too late. So the car did know about it, eventually. My point was that Tesla should modify their architecture to include memory of this type.
No, automate it. Have a few options to pick from instead of just the camera button. One of the options could be "add stop sign to map data". If more than 5 unique people hit that button, it gets automatically added in the next update. Basically, have the fleet fix all the map data. Another one for speed-limit mismatches would go a long way as well.
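The voting scheme proposed here is easy to sketch. The 5-vote threshold comes from the post; the grid-cell dedup and everything else is a made-up illustration.

```python
from collections import defaultdict

class MapCorrections:
    """Accept a map correction once N distinct reporters flag the same spot."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.votes = defaultdict(set)  # (kind, cell) -> set of reporter ids

    def report(self, kind: str, cell: tuple, reporter_id: str) -> bool:
        """Record one report; return True once the correction is accepted."""
        self.votes[(kind, cell)].add(reporter_id)  # set dedups repeat reporters
        return len(self.votes[(kind, cell)]) >= self.threshold

corrections = MapCorrections()
cell = (377749, -1224194)  # quantized lat/lon grid cell
for driver in ["a", "b", "c", "d"]:
    assert not corrections.report("add_stop_sign", cell, driver)
assert corrections.report("add_stop_sign", cell, "e")  # 5th unique vote accepts
```

Keying votes by a quantized cell rather than exact coordinates lets slightly different GPS fixes from different cars count toward the same sign, and the set automatically ignores one driver mashing the button.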
 
Already said this once in this thread, but it's worth repeating. This particular incident was not a failure of perception. If anything, it's a validation of how well the perception network is doing and how much work driving policy needs. Here's a screenshot of the point at which the heavily occluded stop sign was recognized by FSD and added to the visualization:

[Screenshot: FSD visualization at the moment the occluded stop sign appears]

By my estimation, it had almost 2 seconds, or 66 feet to stop after recognizing the sign. I think this particular failure was caused by a braking latency bug introduced in 10.69.1, and rectified in 10.69.1.1.
Yep, potentially. In the past, Tesla has talked about how they handle occluded objects and respect object permanence in the real-time scene, so this one seems to just be a bug in the current release.

Also, it could be demonstrating either that Tesla isn't using stop signs in their coarse map data, or that this data isn't weighted highly enough to account for this scenario. Either way, this is great training data for partially/fully occluded, permanent, static, high-value signage that should be included in the coarse map data being fed/weighted alongside the real-time camera data. Said another way: the human knows, from memory, that there is a stop sign at this intersection and expects it to be there. Even though it is occluded, the human knows roughly where it should be relative to the dimensions of the intersection and only needs to see a tiny bit of it to recognize that it is still there.
 
No, automate it. Have a few options to pick from instead of just the camera button. One of the options could be "add stop sign to map data". If more than 5 unique people hit that button, it gets automatically added in the next update. Basically, have the fleet fix all the map data. Another one for speed-limit mismatches would go a long way as well.
It is already more automated than that: Ashok's patent takes real-time camera data sets from the fleet and turns them into ground truth with humans in the loop. This is massive automation, and I believe the first step toward using a percentage of this data is in 10.69, as this was the first build to use some non-real-time coarse map data to either augment or be fused with real-time camera data for path planning/controls.