I see this on the horizon:

[Image: bladr1.jpg]

or this (whichever you prefer):

[Image: bladr2.jpg]
 
I'm just wondering if they are even working on new features for EAP yet, or still just trying to stabilize what's already out there. I haven't seen any noticeable updates to Summon (although I don't use it); I would think that if they were working on other EAP features, this would be one they would start to focus on.
 
+100. This is exactly what it is.

If Tesla said they had only 5 disengagements, the shorts would say Tesla is doing no testing because they are behind.

If Tesla said they had 548,759 disengagements, the shorts would say Tesla is far behind the others due to the large number of disengagements.

Well, they also have to report how many miles were driven, so the metric everyone uses is disengagements per mile. So I don't fully agree with your post.

But I personally do think they are testing somewhere; otherwise they could just stop the AP program, buy it externally, maybe stand in front of the Mobileye HQ with a boombox, and focus on the UI...

IMO the reason why they don't test in CA, or report in CA, is that the results aren't good enough for reporting. Losing to GM in self-driving would have an effect on the stock, and I do think a big article about how Tesla is worse at self-driving than GM would also have an effect on FSD orders, which are $3k of free cash for Tesla.

So I do agree that the reason for not reporting has a lot to do with Tesla's finances/stock. It isn't really the opinion of the shorts they are concerned with, though the shorts might be the happiest if Tesla reported.

Or to put it another way: if they were among the best in AVs, they would test in CA and report there.
 
It’s a strategy. Tesla wants to keep everything close to the vest. Why test in a state with such stringent reporting that it might tip their hand? No news is more exciting than ANY alternative.

Being ahead of Waymo on disengagements would do two things:
1) Motivate Waymo (and others) to accelerate in order to keep up
2) Give a false sense of optimism to all of us

Being behind Waymo and others:
1) Undermines sales in a huge way
2) Opens Tesla up to criticism

There is no motivation to be this transparent. If I were an executive at Tesla, I would handle this exactly the way they have and not test in California so I wouldn't have to report anything.

Shadow mode is the ultimate in testing and confidentiality. We’re all testing FSD with every mile we drive.
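
Tesla has never published how shadow mode works, so treat this as a sketch of the concept as usually described, with every name and threshold invented: the planner runs passively on each frame, its output is compared against what the human driver actually did, and only the disagreements get logged for upload.

Code:
from dataclasses import dataclass

@dataclass
class Action:
    steering_deg: float   # steering angle
    accel_mps2: float     # acceleration

@dataclass
class Frame:
    timestamp: float
    sensors: dict          # camera/radar/ultrasonic readings for this tick
    driver_action: Action  # what the human actually did

def disagrees(planned: Action, actual: Action,
              steer_tol: float = 5.0, accel_tol: float = 1.0) -> bool:
    # Flag frames where the passive planner and the human diverge.
    return (abs(planned.steering_deg - actual.steering_deg) > steer_tol
            or abs(planned.accel_mps2 - actual.accel_mps2) > accel_tol)

def shadow_loop(frames, planner):
    # The planner's output is computed but never actuated; that's the "shadow".
    log = []
    for frame in frames:
        planned = planner(frame.sensors)
        if disagrees(planned, frame.driver_action):
            log.append((frame.timestamp, planned, frame.driver_action))
    return log  # only the interesting disagreements need to leave the car

The appeal is obvious: every customer mile becomes a test mile, and none of it counts as autonomous-mode driving, because the system never touches the controls.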
 
Well, they also have to report how many miles were driven, so the metric everyone uses is disengagements per mile. So I don't fully agree with your post.

Even that can be gamed. Trailing a lead car at night on an empty freeway gets you lots of miles without disengagements (especially if it's a convoy of FSD cars), whereas getting a 5-stack interchange right will get you tons of disengagements with little distance. The reports do state the type of road and disengagement, but who is going to report that level of detail?
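
To put numbers on that (entirely made up):

Code:
# Made-up numbers: the aggregate disengagements-per-mile figure hides
# where the failures actually happen.
easy_miles, easy_diseng = 10_000, 2   # night convoy on an empty freeway
hard_miles, hard_diseng = 50, 40      # repeated runs at a 5-stack interchange

print((easy_diseng + hard_diseng) / (easy_miles + hard_miles))  # ~0.004/mile, looks great
print(easy_diseng / easy_miles)   # 0.0002/mile on the easy roads
print(hard_diseng / hard_miles)   # 0.8/mile at the interchange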


Shadow mode is a big fantasy.

Any data to back that up? To create the FSD video that was released, there was only one month of reported on-road testing, performed by 4 vehicles. Either they went from no testing/training/software to a working subset in 5 days, or they have a way to train the software that does not require reporting.
 
Any data to back that up? To create the FSD video that was released, there was only one month of reported on-road testing, performed by 4 vehicles. Either they went from no testing/training/software to a working subset in 5 days, or they have a way to train the software that does not require reporting.

Right back at you: where's your proof of "shadow" mode? And "Elon tweeted it" doesn't count, because he also said "three months maybe, six months definitely," so clearly tweets aren't based in fact.

[Image: evidence.png]
 
Any data to back that up?
Let me turn this around - do you have any info that "shadow" mode exists?

@verygreen had about 8 months with a rooted APE, and the closest thing to a "shadow" mode function he observed was the mothership taking snapshots. That's it.

There is also a post from PM YOUR NIPS PAPER, who went into more detail and said that "shadow" mode consists of Tesla sending a bunch of images to 3rd-party vendors for analysis.

That's all we got. You?
 
Right back at you: where's your proof of "shadow" mode? And "Elon tweeted it" doesn't count, because he also said "three months maybe, six months definitely," so clearly tweets aren't based in fact.

[Image: evidence.png]
@MasterT

Re-read my post. They created the FSD video using 4 cars in 5 days of reportable driving. Either they can create that much progress in only 20 vehicle-days, in which case there should be less concern/doubt about getting full FSD working, or they have a shadow / offline / nag-exemption system which allows training in a non-reportable manner. I'm open to other possibilities.

I've done vehicle sensor testing/tuning based on real-world conditions. If you have the raw data, you don't need to test in the real world until the end, for verification. The same approach is used for airbag control: they record the acceleration data for multiple types of events, tune the control system, then test a few cases at the end (must not deploy / must deploy).
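
To make that workflow concrete, here is a toy replay harness in the airbag style (the deploy rule and all numbers are invented): record the traces once, then sweep controller tunings offline against the must-deploy / must-not-deploy cases, with no road time needed until final verification.

Code:
def deploy_rule(trace, threshold_g):
    # Candidate controller: deploy if any acceleration sample exceeds the threshold.
    return any(abs(a) > threshold_g for a in trace)

def replay(recorded_events, threshold_g):
    # recorded_events: list of (accel_trace, must_deploy) pairs recorded in the real world.
    return all(deploy_rule(trace, threshold_g) == must_deploy
               for trace, must_deploy in recorded_events)

# Recorded once, reused for every tuning pass:
events = [
    ([0.5, 0.8, 1.2], False),    # pothole: must NOT deploy
    ([2.0, 45.0, 60.0], True),   # hard impact: must deploy
    ([0.3, 30.0, 0.4], False),   # curb-strike spike: must NOT deploy
]

for threshold in (10, 25, 40):
    print(threshold, "g:", "pass" if replay(events, threshold) else "fail")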

There is no reason for verygreen's car to have development software on it. I'm not talking about the general population of Teslas (other than the ability to collect interesting anomalies); I'm talking about engineering vehicles / people under NDA gathering verification data. The base data sets would have required much more logging space and speed than a standard car has (an instrumented car with raw logging). They could (yes, assumption here) have daily drivers that upload the trips every day while parked. This code would NOT be released to the general population (high confidence in this statement).
Even the Model 3 test fleet seen around the country could have been (yes, assumption) serving dual duty, collecting raw driving data.

There is also a post from PM YOUR NIPS PAPER, who went into more detail and said that "shadow" mode consists of Tesla sending a bunch of images to 3rd-party vendors for analysis.
Image categorization is needed to train the network. It's a very common thing to outsource; see the image classification competitions.
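
For anyone who hasn't seen that pipeline: frames go out to the vendor, a label manifest comes back, and the pairs become training data. Roughly, with the file format and every name here invented:

Code:
import csv

def load_vendor_labels(manifest_path):
    # Hypothetical vendor manifest with a header row "frame,label" and rows like:
    #   frame_000123.jpg,traffic_light_red
    with open(manifest_path, newline="") as f:
        return {row["frame"]: row["label"] for row in csv.DictReader(f)}

labels = load_vendor_labels("vendor_batch_042.csv")  # invented filename
training_pairs = sorted(labels.items())  # feeds whatever training step comes next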
 
I'm just wondering if they are even working on new features for EAP yet, or still just trying to stabilize what's already out there. I haven't seen any noticeable updates to Summon (although I don't use it); I would think that if they were working on other EAP features, this would be one they would start to focus on.
I personally would think Summon improvements would require moving from ultrasonics to video-based operation.

Video recognition is going to be required for FSD... so I think Summon improvements would come after FSD.
 
I'm happy to hear counterarguments.
You presume the video is truthfully showing a real self-driving car.
Remember the two joggers the car stopped for? How did it decide to go past them? Was the car remote-controlled, or just following a hard-coded GPS route? A hard-coded map could slow the car to safe speeds to react to lights and signs, but our cars today can't even stop behind a stopped car or follow a tight turn in a safe way.
Did we see the back seat? Maybe a technician was sitting there with an Xbox controller. How many times did they train the software on that exact route? 1,000? A million?
Far-fetched, but did we even see the roofline and front bumper, to confirm they did not use any other sensors?
I took the bait and feel so stupid now with my non-working FSD and poorly working AP2... Trust is gone.
 
I'm happy to hear counterarguments. It may well be that the case I'm making is not what is happening, but please give me reasons why it cannot be what is happening.

Ok, let's say I believe you. Let's say there is this magical, mythical unicorn code base (and not @verygreen's APE, oh how I miss those reports). But let's go with it: this mythical code base exists!! IT WORKS!!! It does EVERYTHING!! Billions of miles of driving and it's knocking it out of the park!!

Ok, now with that assumption, let's go back to our fearless leader Elon, who's on record saying "I believe in releasing software as soon as I can prove it's safer than a human driver," a la AP1.

So, with those two statements, why on earth is my car still diving for ditches on well-marked major highways? Why does my car swerve so far right on left curves that I have to disengage because I'm riding the lane line? Why is my car not leveraging the "unicorn" codebase and NNs? Heck, not even the codebase; just the mythical NNs would be just fine. Why does my car still want to ram into the back of a stopped car at a light if it was previously untracked?

So, you're telling me Tesla has already solved ALL of this, and they just don't want us to stop playing Russian roulette, because it's fun to watch the disengagements, I guess?
 
You presume the video is truthfully showing a real self-driving car.
Remember the two joggers the car stopped for? How did it decide to go past them? Was the car remote-controlled, or just following a hard-coded GPS route? A hard-coded map could slow the car to safe speeds to react to lights and signs, but our cars today can't even stop behind a stopped car or follow a tight turn in a safe way.
Did we see the back seat? Maybe a technician was sitting there with an Xbox controller. How many times did they train the software on that exact route? 1,000? A million?
Far-fetched, but did we even see the roofline and front bumper, to confirm they did not use any other sensors?
I took the bait and feel so stupid now with my non-working FSD and poorly working AP2... Trust is gone.

Those are not data-based arguments; they are what-ifs.
You are also linking EAP and FSD, which (in my opinion) have nothing to do with each other.
Events were only reported on 5 days, with 150ish events total. Draw your own conclusions about how many test runs that represents.
Personal opinion: I think they hit a few dead ends in FSD development, which required basically a restart with a new model.