
Elon Says Level 5 by the end of the year....

I’m not sure that the camera had any better view than the driver would have had, other than having no massive A-pillars to look round. Is that why the driver missed seeing the oncoming car?

Remember there is also the fisheye in the windscreen. So for AP (at least after the rewrite), the 'driver' can be in two places at one time.

First view is the view we currently see on dashcam footage.

 
Links below to two stories published yesterday, updating his previous comments on FSD. Whether any of this will apply to our European roads is another matter, although he does mention a worldwide timescale in the second link:

August 14, 2020
Elon Musk says Tesla’s Autopilot/Full Self-Driving (FSD) rewrite is going to result in “quantum leap” improvements, and the rewrite is going to be pushed to the fleet in the coming “six to 10 weeks.”

As we recently reported, Tesla is going through “a significant foundational rewrite in the Tesla Autopilot.” As part of the rewrite, CEO Elon Musk says that the “neural net is absorbing more and more of the problem.”

It will also include a more in-depth labeling system.

Now, Musk has commented again on the Autopilot/FSD rewrite.

The CEO claims that it will result in a “quantum leap”:

“The FSD improvement will come as a quantum leap, because it’s a fundamental architectural rewrite, not an incremental tweak. I drive the bleeding edge alpha build in my car personally. Almost at zero interventions between home & work. Limited public release in 6 to 10 weeks.”
/continues...


Elon Musk: Tesla Full Self-Driving is going to have 'quantum leap' w/ new rewrite, coming in '6 to 10 weeks' - Electrek

Major Tesla FSD improvements coming Sept/Oct, roundabout and pothole support confirmed - techAU
 
Hilarious. Then again, maybe he doesn't understand the word "weeks".
Take into consideration that in the scientific community, "quantum" means very small.
Definition of QUANTUM
webster said:
: any of the very small increments or parcels into which many forms of energy are subdivided
: any of the small subdivisions of a quantized physical magnitude (such as magnetic moment)
 
Just like the boy who cried wolf: we no longer get excited about Elon's promises. For "weeks", read months, then add time to sort out the things the new code breaks... and then years for regulators to allow us to use it in the UK... by which time all the other motor manufacturers will have caught up.
 
I'm going with benefit of the doubt on this one. From memory, the rewrite changes from essentially a 2D model to a 3D model. That is fairly fundamental, and they have been on it a while now. I do think it has the potential to make vast improvements. Have you ever driven on AP over a humpback bridge on a bend? It looks very different in 2D and 3D.

How much of that we'll get this side of Christmas, we'll see. There is still a very long tail of little problems to solve regardless: phantom braking for bridges, cars parked on the side of the road, etc.
 
Just like the boy who cried wolf: we no longer get excited about Elon's promises. For "weeks", read months, then add time to sort out the things the new code breaks... and then years for regulators to allow us to use it in the UK... by which time all the other motor manufacturers will have caught up.

Only in this case, the boy will probably die of old age before the wolf actually comes.
 
3D is a big step forward, once it's delivered, but I think they need to add interframe time-based info to…
Another interesting article. Tesla is throwing huge resources at this problem.

Tesla Autopilot Innovation Comes From Team Of ~300 Jedi Engineers — Interview With Elon Musk

Key quote in there:
“We also have over 500 highly skilled labelers,” Elon added. “This is a hard job that really does require skill and training, especially with 4D (3D plus time series) labeling.”
Emphasis mine. I had a half-written reply on this thread a day or so ago saying that until it's confirmed they are doing time-based labelling, FSD will really struggle. This says they are, so the pieces of the puzzle are probably there.
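Purely to illustrate what "3D plus time series" labelling could mean in data terms - this structure and every name in it are my own invention, nothing to do with Tesla's actual format:

Code:
from dataclasses import dataclass, field

@dataclass
class Box3D:
    t: float      # timestamp, seconds
    x: float      # box centre, metres
    y: float
    z: float
    yaw: float    # heading, radians

@dataclass
class Track4D:
    """One labelled object: a 3D box sampled over time, not per-frame boxes."""
    track_id: int
    label: str                                # e.g. "car", "cyclist"
    boxes: list = field(default_factory=list)

    def velocity(self):
        """Finite-difference velocity from the last two samples."""
        a, b = self.boxes[-2], self.boxes[-1]
        dt = b.t - a.t
        return (b.x - a.x) / dt, (b.y - a.y) / dt

The point being that once the label is a track rather than a pile of single frames, a velocity falls straight out of it.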

From what is being promised, I'm hoping for the same night-and-day difference we saw with Deep Rain on the windscreen wipers - it has been done before. That went from barely functional to acceptable (for most people; it still has its quirks) in a single update. Maybe FSD will too. But I'll give them 20 weeks, not 10...
 
Another interesting article. Tesla is throwing huge resources at this problem.

Tesla Autopilot Innovation Comes From Team Of ~300 Jedi Engineers — Interview With Elon Musk


“Autopilot’s advanced driver-assist and self-driving features are deceivingly smooth and simple on the user end”

This guy has clearly never experienced violent phantom braking at 70mph that almost has the car behind shunting up your rear end.

Tesla may get there eventually but it will take years. Elon’s timescales are about as reliable as a 1970s Binatone radio alarm clock.
 
“Autopilot’s advanced driver-assist and self-driving features are deceivingly smooth and simple on the user end”

This guy has clearly never experienced violent phantom braking at 70mph that almost has the car behind shunting up your rear end.

Tesla may get there eventually but it will take years. Elon’s timescales are about as reliable as a 1970s Binatone radio alarm clock.

Looking at the data as 4D should solve most problems. The previous 2D approach relies on really noisy data to make decisions, and best-guess actions. It kinda works, but has failure modes. With true 3D+time, you can ID something, plot its movement and derive a vector, then make a decision based on that vector. What we have just now makes a decision 30 times a second based on data that is, in that moment, static.
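To make that concrete with completely made-up numbers - a frame-by-frame check versus a tracked vector:

Code:
# Made-up numbers: (timestamp s, lateral distance from us in metres)
positions = [(0.0, 3.4), (0.1, 3.3), (0.2, 3.1)]

# Frame-by-frame: judged on the latest snapshot alone, 3.1 m looks safe.
per_frame_threat = positions[-1][1] < 1.0          # False

# Tracked: fit a lateral velocity and look one second ahead.
(t0, y0), (t1, y1) = positions[-2], positions[-1]
v_lat = (y1 - y0) / (t1 - t0)                      # -2.0 m/s, closing on us
predicted = y1 + v_lat * 1.0                       # ~1.1 m in a second's time
vector_threat = predicted < 1.5                    # True: react early

Same three observations, but only the second version sees what's coming.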

You can see this if you go back and read some of the stuff from the v2 rewrite update about how they improved cut-in performance - it's all about wheel position in other cars and where they are in a lane. Absolutely no understanding, or attempt at understanding, of where the car might be aiming for; it was wheel over lane line = cut-in. In 4D, you can detect a likely cut-in as someone starts moving in their lane (even if you don't act immediately), before they get to the line; equally, if their vector starts swinging back to straight ahead rather than into your lane, you can detect that their crossing the line was probably just a wobble. OK, you have to know what to do with all this info, but that's what Dojo and all the historically labelled data are for.
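Something like this - thresholds and names all invented by me, just to show the shape of it:

Code:
def classify(lat_vels):
    """Lateral-velocity history of a neighbouring car, m/s.
    Negative = drifting towards our lane."""
    recent = lat_vels[-3:]
    if all(v < -0.3 for v in recent):
        return "cut-in likely"      # sustained drift into our lane
    if recent[0] < -0.3 and recent[-1] > -0.1:
        return "wobble"             # vector swung back to straight ahead
    return "lane keeping"

print(classify([-0.5, -0.6, -0.7]))  # cut-in likely, before any line crossing
print(classify([-0.5, -0.2, 0.0]))   # wobble - no need to react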

Equally, it can stop the 'am I in the middle, am I in the middle, am I in the middle' behaviour of the lane centering that gets you the hard jerk into position if you engage AP while not centred. By looking at 'where do I want to be in four car lengths', you can drift to the right place, use the lane space to skirt dodgy drivers or big trucks, or navigate parked cars. Again, it opens the possibilities; it doesn't necessarily mean they will be delivered.
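A toy version of the "drift to the right place" idea, all numbers arbitrary:

Code:
def target_offset(current, desired, lookahead_m, step_m):
    """Next-step lateral offset: close a fraction of the error
    proportional to how far we travel this step."""
    frac = min(step_m / lookahead_m, 1.0)
    return current + (desired - current) * frac

offset = 0.8                      # engaged AP 0.8 m off-centre
for _ in range(6):                # 3 m steps against an 18 m lookahead
    offset = target_offset(offset, 0.0, lookahead_m=18.0, step_m=3.0)
    print(round(offset, 2))       # eases in: 0.67, 0.56, 0.46, ...

Versus the current behaviour, which is effectively lookahead_m = step_m: the whole error corrected at once, hence the jerk.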

Really, hopefully this can address the 'drives like a learner' problem that is the fundamental issue with everything AP does.

Kind of with you on the timeline though. I'm sure he is driving it at 98% acceptable. Will it make it to the 99.99% it needs to even hit beta? I'm not as pessimistic as you, but I will not be surprised at further and continuous delays. But equally, my AP works well enough for me, so I'm an interested observer rather than reliant on it, and happy to watch from the sidelines.
 
Dojo will definitely help. If you've downloaded loads of fringe cases from the fleet, banked and processed them, you can just plug them all into your new model to train/simulate/test the NNs and the rules. Think fast-forward on an old VCR, but x5000. The challenge they have is taking all the old 2D footage and rebuilding it in 4D. I don't think that will be as simple as 'stitching' it together; they might need to re-mark-up a lot of footage in 4D (or bin the old markup and let Dojo mark it up).
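Roughly the workflow I mean, in sketch form - the clip format and models are entirely invented, just illustrating the replay-and-diff idea:

Code:
from dataclasses import dataclass

@dataclass
class Clip:
    frames: list     # banked sensor data (stand-in)
    expected: str    # human-verified outcome, e.g. "brake", "steer"

def regression_run(clips, old_model, new_model):
    """Replay every banked clip through both models and diff the outcomes."""
    regressions, fixes = [], []
    for clip in clips:
        old_ok = old_model(clip) == clip.expected
        new_ok = new_model(clip) == clip.expected
        if old_ok and not new_ok:
            regressions.append(clip)   # new build broke a known-good case
        elif new_ok and not old_ok:
            fixes.append(clip)         # new build fixed a known failure
    return regressions, fixes

That is the VCR-on-fast-forward bit: no cars on roads needed, just the banked footage run through each candidate build.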

Dojo sounds like either a lot of computing power bought in from AWS, or Elon with his own botnet on the job, or both...

Speculating here, but my guess is that as of a few versions back, when they opened access to more of the FSD architecture (turned on visualisations), we're probably automatically uploading computer-generated 4D markup to be checked/edited by a human in some kind of 3D/4D vector overlay environment. It probably looks like the unrendered skeleton of Grand Theft Auto, the output of which looks like @greentheonly's vids.
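In code terms I'm imagining something like this - all invented, obviously, just the shape of an auto-label-then-review pipeline:

Code:
def triage(auto_labels, threshold=0.9):
    """Auto-generated labels below a confidence threshold go to a human."""
    accepted, review_queue = [], []
    for label, confidence in auto_labels:
        (accepted if confidence >= threshold else review_queue).append(label)
    return accepted, review_queue

accepted, queue = triage([("car", 0.97), ("cyclist", 0.62)])
# accepted == ["car"], queue == ["cyclist"] -> a human fixes up the cyclist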