corduroy
Active Member
Setting my first buy order for 100 at $545.25.
No offence, but I sincerely hope you miss out on this dip.
All I know is when it turns around, the return to $901 will pump the account crazy fast.
Even if it takes a whole month, I'm patient.
This forum is a live ticker tape, lol. Still waiting for $567... and then I'll quit buying!!!
30 $TSLA @ $570
I thought I heard Douma say that they are at the self-supervised stage but not at the unsupervised stage. Interesting nevertheless. I'll re-watch.
This really shows how many members of this forum are completely ignoring the competition. It reminds me of the GME apes at WSB.
Did we trip a circuit breaker yet? First one should have been at ~578 right?
That was also my thinking. But then I watched a video (AI Driver, I think) where the car attempted a right turn 3 times and improved from a fail to more and more success, with no edits. A difficult maneuver, and I swear it learned it to perfection within 10 minutes, anticipating the timing.
To my knowledge the current (non-beta) system does not "learn" on-car in any way (see Greentheonly's deep dive into how much more limited "shadow mode" is than many think).
Changes in behavior or programming only happen when Tesla makes changes to the master code and pushes it out to the fleet.
AFAIK that's ALSO true of the beta program (a few folks have thought otherwise because the car acted differently sometimes, but there were external explanations for that; the same happens with the current SW when people try under different conditions and get different results).
The biggest "improvement" from a less-manual-work-needed perspective in the 4D video approach AFAIK is drastically reducing the amount of manual human labeling: you don't have to label each thing in each frame if the system can figure out that the thing you manually labeled in frame 1 is the SAME thing it sees in frame 2, just moved slightly in time, and likewise for future frames. Even better, if you tie all the video streams together, it understands an object is the same object even when it changes cameras.
Labeling the training video is what they use to, well, train the NNs... (and what Dojo is intended for- to allow it to train against massively more video, massively faster).
This should also get rid of that annoying thing where surrounding vehicles jump around on the screen since the car would understand the truck it saw in one camera is the same truck it now sees in a different-location camera.
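To make the idea in the last few posts concrete, here's a minimal sketch of label propagation via object tracking. All names and thresholds are illustrative assumptions, not Tesla's actual pipeline: the point is just that if a tracker can tell a box in frame N is the same object as a human-labeled box in frame 1, the label carries forward automatically instead of being redrawn per frame.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def propagate_labels(labeled_frame, next_frames, threshold=0.5):
    """Carry each (box, label) forward frame-by-frame via greedy IoU matching."""
    tracks = list(labeled_frame)  # [(box, label), ...] from one human pass
    out = []
    for detections in next_frames:  # unlabeled detected boxes per frame
        new_tracks = []
        for box, label in tracks:
            best = max(detections, key=lambda d: iou(box, d), default=None)
            if best is not None and iou(box, best) >= threshold:
                # Same object, slightly moved: the human label carries over.
                new_tracks.append((best, label))
        tracks = new_tracks
        out.append(tracks)
    return out

# One human-labeled frame, then two raw frames where the object drifted slightly:
frame1 = [((10, 10, 50, 50), "truck")]
raw = [[(12, 11, 52, 51)], [(15, 12, 55, 52)]]
print(propagate_labels(frame1, raw))
```

The same association logic, run across camera streams instead of across time, is what would let the system decide the truck leaving one camera is the truck entering the next one.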
Edited for incorrect info.
621.44 close, so 559.296. Breaker, breaker. 10-4?
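For what it's worth, the arithmetic in this exchange can be checked from the 621.44 prior close. My guess (an assumption, not confirmed by the posts) is that two different rules got mixed up: the market-wide S&P 500 Level 1 halt at -7%, and the ±10% LULD price band that applies to an individual Tier 1 stock like TSLA.

```python
prior_close = 621.44

level1_halt = prior_close * (1 - 0.07)  # market-wide Level 1 halt (-7%)
luld_lower = prior_close * (1 - 0.10)   # single-stock LULD lower band (-10%)

print(round(level1_halt, 2))  # 577.94, the "~578" mentioned above
print(round(luld_lower, 3))   # 559.296, the figure in this post
```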