Musk Touts ‘Quantum Leap’ in Full Self-Driving Performance



A “quantum leap” improvement is coming to Tesla’s Autopilot software in six to 10 weeks, Chief Executive Elon Musk said in a tweet.

Musk called the new software a “fundamental architectural rewrite, not an incremental tweak.”






Musk said his personal car is running a “bleeding edge alpha build” of the software, which he also mentioned during Tesla’s Q2 earnings. 

“So it’s almost getting to the point where I can go from my house to work with no interventions, despite going through construction and widely varying situations,” Musk said on the earnings call. “So this is why I am very confident about full self-driving functionality being complete by the end of this year, is because I’m literally driving it.”

Tesla’s Full Self-Driving software has been slow to roll out against the company’s promises. Musk previously said a Tesla would drive from Los Angeles to New York using the Full Self-Driving feature by the end of 2019. The company didn’t meet that goal. So it will be interesting to see the state of Autopilot at the end of 2020.

 
Regardless, it seems Mobileye has abandoned vision based FSD, as they are depending on HD maps, and they're also developing a lidar based FSD system as well. I don't regard use of HD maps as vision-based. All of this discussion usually just involves us chasing our own tails.

I would not go that far. Remember that Mobileye is planning to combine a camera-only FSD stack with a lidar/radar FSD stack. To do this would require vision based FSD. We also have the demo video that proves that Mobileye already has vision based FSD. So I don't think we can say that Mobileye is giving up on vision based FSD. They are just planning to add to vision based FSD.
 
I don't expect it to happen quickly. It will take some years before we get to Level 3 or 4, even in limited situations. What I think is the bigger challenge: will people fall asleep when the chance of Tesla misreading a traffic light is only 1/1000 -- 99.9% accurate? I don't think I can trust myself. There will be accidents. There will be deaths. Will Tesla pay millions in lawsuits? Will some governments force Tesla to improve driver monitoring? It sounds doable, but there will be some big bumps in the road.
Maybe I've missed it somewhere, but where in the requirements for the different levels of autonomous vehicles does it say they must not crash? Of course they will, just as human-driven taxis crash. They will crash at a much lower rate, but they will crash, just as the space shuttle crashed.
 
Maybe I've missed it somewhere, but where in the requirements for the different levels of autonomous vehicles does it say they must not crash? Of course they will, just as human-driven taxis crash. They will crash at a much lower rate, but they will crash, just as the space shuttle crashed.
How often do you think humans crash?
Who said it has to never crash? I swear I've never seen anyone argue that on this forum.
 
Would love to see the honest reaction to the release and their original comments.

When I saw Elon's tweets of excitement regarding the FSD beta he was testing, I expected the re-write to provide improved handling of intersections and better object detection and avoidance at first. But what I saw with the beta release was a rather mature product that was able to tackle fairly complex driving situations out of the box. I was impressed and still am.
However, unlike others on the forum, I was always looking forward to a solid L2 or L3 product. If L4 were possible, fine, but it was never a realistic expectation of mine. I even had doubts about L3 as time progressed, but now I'm almost certain some form of L3 is coming. And as I mentioned already, I would not rule out the possibility of geofenced (GF) L4 at some point.

All I'm waiting for now is a reduction in the disengagement rate and, hopefully, the camera-based monitoring that was mentioned a little while ago.
 
@Knightshade I would be curious to know your honest reaction, now that FSD beta has been out to the "chosen few" and videos are providing unfiltered data on the performance of the system.

Do you think this is a "quantum leap" like Elon claimed?


In the sense that quantum means the minimum amount? Absolutely!

So far though it's less of a total re-write than I anticipated... and green's analysis of the actual code involved appears to support that: it's mostly just "turning on" stuff that's been in the code since last year (2019.40.50 specifically) and just wasn't visible or enabled for end users, or in some cases simply taking stuff that was already there and integrating it.

Some relevant remarks from him:

Green said:
2020.40.8.10 is just the stuff that first appeared in 2019.40.50 with "fsd preview", it just added BEV shim NN layer under hydranet NNs (that got direct input from cameras) It still works on a single frame at a time of camera input at the hydranet layer. hydranet itself first came into prod firmwares in September 2018. AK did presentation in February about this hydranet+BEV nets and what we see in that presentation today 100% matches what's in the firmware code.

the EAP added conventional c++ code module named city_streets that also appeared in dev firmwares around 2019.40 but so far was absent in prod firmwares - the responsibility of this module is to drive on city streets, track priority of other cars around you and so on.

So NN-wise 2020.40.8.10 differences are very minimal from 2020.40.8



He further references AK's February presentation, and why it's not quite the "fundamental rewrite" promised, in these two posts:


[Andrej] RE: rewrite rollout “I expect Elon will cover it during the call shortly :) Definitely very excited to have so many months of hard work finally land in production builds!” : teslamotors

Green said:
its' same stuff that was presented in Feb (with iterative fixes/enhancements since then). Hydranet-BEV sandwich. See all the jitterness of stop lines and such. traffic lights still jump around a bit and so on

2020.40.x and Autopilot Rewrite


Green said:
Because all this stuff appeared way before Elon mentioned the "fundamental rewrite" in January 2020 and there was supposed to be a March/April hackathon to actually drive the rewrite and also a brief appearance of a new NN architecture called "plaidnet" in 2020.8-2020.12 (after which it disappeared after I tweeted about it - a good sign they did not want it to be known) - I am assuming THIS was the fundamental rewrite Elon spoke about and this is what I mean by "the fundamental rewrite". 2020.40.8.10 does not have any of the plaidnet stuff.

the old hydranet heads fed into intermediate NNs to then tie them together


So essentially, instead of a fundamental ground-up rewrite as originally claimed, one that would process all 8 cameras as a single view... this is the old 8-individual-camera hydranet NN heads fed into intermediate NNs that then tie them together.

Thus you still get the jitteriness he mentioned above.
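
To make the architecture being described easier to picture, here is a toy sketch in PyTorch-style Python. To be clear, the class names, layer sizes, and shapes below are my own assumptions for illustration, not Tesla's actual code; the point is only the structure: per-camera backbones ("hydranet" stand-ins) whose outputs get tied together by an intermediate network, still one frame at a time.

Code:
import torch
import torch.nn as nn

class PerCameraBackbone(nn.Module):
    """Stand-in for one per-camera 'hydranet' backbone (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, frame):            # frame: (B, 3, H, W)
        return self.features(frame)      # one feature map per camera

class BEVTieIn(nn.Module):
    """Stand-in for the intermediate NN that ties the 8 per-camera feature
    maps together into a single bird's-eye-view style output."""
    def __init__(self, num_cameras=8):
        super().__init__()
        self.fuse = nn.Conv2d(32 * num_cameras, 64, kernel_size=1)
        self.head = nn.Conv2d(64, 1, kernel_size=1)   # e.g. a drivable-space logit

    def forward(self, per_camera_feats):              # list of 8 feature maps
        stacked = torch.cat(per_camera_feats, dim=1)
        return self.head(torch.relu(self.fuse(stacked)))

# One forward pass = one instant in time. Nothing here carries state between
# frames, which is the "single frame at a time" limitation and one plausible
# source of the jitter mentioned above.
cams = [torch.randn(1, 3, 96, 128) for _ in range(8)]
backbone, bev = PerCameraBackbone(), BEVTieIn()
print(bev([backbone(c) for c in cams]).shape)   # torch.Size([1, 1, 24, 32])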




So from a technical perspective it's not as good, new, or "rewritten" as he (or I) had been hoping for.


Now in terms of how it performs?

Nothing I've seen makes me think Robotaxis will be here anytime soon. They are maybe at the first 9 in 99.999%. By that I mean they're maybe at 90%, so they're not even at the second 9 (the one nearest to the left of the decimal point) yet. And each subsequent 9 is harder than the one before it.
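
To put numbers on the "each subsequent 9 is harder" point, here is a trivial back-of-the-envelope loop; the per-attempt framing is my own simplification, purely for illustration. Every extra 9 means removing 90% of the failures that remain.

Code:
# Back-of-the-envelope only; "attempts" is a deliberately vague unit.
for label, rate in [("90%", 0.90), ("99%", 0.99), ("99.9%", 0.999),
                    ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    failures = (1 - rate) * 100_000
    print(f"{label:>8} success rate -> ~{failures:,.0f} failures per 100,000 attempts")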

On the bright side, nothing I've seen makes me doubt what I've long held: that Tesla could deliver L3 highway driving Real Soon Now if they wished to. City driving, or L4, still has a long, long way to go.
 
Thanks for the reply.
In the sense that quantum means the minimum amount? Absolutely!
:rolleyes: "quantum leap" has been defined for you on the very first page of this thread.

Some of this "So far though it's less of a total re-write than I anticipated" and "this is not the rewrite I expected" BS is getting out of hand.
In June 2020 - Karpathy gave a presentation on how they are solving FSD and the challenge of scaling at Tesla.

[attached image: slide from Karpathy's presentation showing HydraNet]

the old hydranet heads fed into intermediate NNs to then tie them together
Notice that is HydraNet, the one that existed prior to PlaidNet (which, coincidentally, no longer exists in the firmwares). But HydraNet lives on!
Karpathy is presenting HydraNet as the current solution being used at Tesla for FSD, not a past solution.

When Karpathy shows this slide, he says the following about the 1000 distinct predictions coming out of the 48 NNs.
"none of these predictions can ever regress and all of them must improve over time and this takes 70,000 GPU hours the Train of the neural nets"

Now, you posted this next image on another thread... it is useful

[attached image: diagram of the training loop, including "Shadow Mode"]

The only thing that I can come up with for "PlaidNet" was a new set of NNs that failed the test laid out by Karpathy: "none of these predictions can ever regress and all of them must improve over time, and this takes 70,000 GPU hours to train the neural nets"

That is my guess as to why we do not see PlaidNet any longer.

But back to the image you posted.
Notice how they have "Shadow Mode" and then the arrow loops back around: data is recorded and the training process is restarted.
I think PlaidNet was put out to the fleet in Shadow Mode to see how it would perform. Maybe PlaidNet's performance on the actual fleet is what killed it off; Karpathy knows for sure.

In June Karpathy was discussing this rewrite at Tesla as the upcoming big milestone for deliverables.
And he only presents HydraNet and, later, how those HydraNet outputs feed the BEV net. I am going to go with the most straightforward and direct answer of them all: these are all part of the FSD solution. When they started them, they thought they could go the normal route of training these; this is where the "rewrite" came in -- how to properly label/train the NNs, especially the BEV net.

[attached image: diagram of HydraNet, BEV Net, and the Temporal Module]


HydraNet and BEV Net are the backbone of the 4D perception infrastructure, and the fundamental rewrite that Musk brought up in January on the Third Row podcast was about how certain pieces of the stack would be trained. (see video transcript below)

The Temporal Module in that diagram above is the 4th dimension, Time. (This is so your FSD-enabled Tesla does not have amnesia and can track objects across time.)
In case people are not sure BEV net is for Birds Eye View - this is the NN that takes the processed output of each camera feed (processed through HydraNet) and stitches it all together into 3D.
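
As a purely illustrative example of why that temporal piece matters, here is a toy smoother that carries object state from frame to frame; the smoothing scheme and numbers are my own assumptions, not how Tesla's Temporal Module actually works.

Code:
# Toy illustration: per-frame detections jitter, but carrying state across
# frames smooths positions and lets objects persist between frames.
class ToyTemporalSmoother:
    def __init__(self, alpha=0.3):
        self.alpha = alpha        # weight given to the newest observation
        self.state = {}           # object_id -> smoothed (x, y) position

    def update(self, detections):           # detections: {object_id: (x, y)}
        for obj_id, (x, y) in detections.items():
            if obj_id in self.state:
                px, py = self.state[obj_id]
                self.state[obj_id] = (px + self.alpha * (x - px),
                                      py + self.alpha * (y - py))
            else:
                self.state[obj_id] = (x, y)
        return self.state

smoother = ToyTemporalSmoother()
print(smoother.update({"car_1": (10.0, 2.0)}))   # first sighting
print(smoother.update({"car_1": (10.6, 1.8)}))   # jittery input, smoother output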

This is a transcript (copied from the 3rd Row Tesla podcast with Musk, starting at 2:20:04).
The word "rewrite" comes up only ONCE in the entire transcript.
Code:
140:04 significant foundational rewrite in the
140:07 telepath system that's almost complete
140:10 really yeah and what it what part of the
140:13 system like perception like planning or
140:16 just like it's it's instead of having
140:21 planning perception image recognition
140:24 will be separate they're they're being
140:27 combined so yeah I don't even understand
140:35 what if actually like the you're the
140:40 sort of neural net is absorbing more
140:43 more of the problem right beyond simply
140:46 the is this is if you see if an image is
140:50 this a car or not oh no it's it's kind
140:54 of what where does it where you do from
140:56 that

140:58 3d labeling is the next big thing where
141:02 the car can go through a scene with
141:05 eight cameras and and and kind of paint
141:08 a a would paint a path and then you can
141:12 label the path in 3d this is probably
141:16 two or three order of magnitude
141:17 improvement in labeling efficiency and
141:19 labeling accuracy you know you have to
141:23 do two throwers be proven in labeling
141:24 efficiency and significant improvement
141:27 in labeling accuracy as opposed to
141:30 having to label individual frames from
141:31 eight cameras at 36 frames a second you
141:36 just drive through the scene rebuild
141:40 that scene as a 3d thing with it's like
141:46 there might be a thousand frames that
141:47 were used to create that scene and then
141:49 you can label it all at once is that
141:53 related to the dojo thing you mentioned
141:54 a thought on Amida no doges for learning
141:57 for training the neural net that's like
142:00 when you're trying to build the neural
142:01 net that you ship into the car dojo
142:03 speeds that up by Hardware accelerating
Disclaimer: YouTube auto-generated transcripts do not delineate who is speaking, so the transcript has people talking over each other in some spots; it is just a time-stamped helper so that you do not have to sift through 3 hours of video to find the relevant info.
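
For what it's worth, the "two or three orders of magnitude" labeling claim in that transcript is easy to sanity-check with rough arithmetic. The 10-second clip length below is my own assumption; the 8 cameras and 36 frames per second come straight from the transcript.

Code:
# Rough arithmetic only -- clip length is assumed, the rest is from the transcript.
cameras, fps, clip_seconds = 8, 36, 10

frames_labeled_individually = cameras * fps * clip_seconds   # 2,880 labeling passes
scenes_labeled_in_3d = 1                                     # label the reconstructed scene once

print(f"per-frame labeling: {frames_labeled_individually} passes")
print(f"3D scene labeling:  {scenes_labeled_in_3d} pass")
print(f"ratio: ~{frames_labeled_individually}x, i.e. roughly the "
      "'two or three orders of magnitude' mentioned above")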

Again, Karpathy's CVPR presentation video is 6 months after Musk "broke" the news of a rewrite.
All the pieces seem to jell well with both what Karpathy said and what Musk said.

But on here, "it is not what I expected"! Maybe it's time to check your expectations with actual facts of what has been presented so far from multiple sources.

PlaidNet is dead - it failed the test, that is why it was dumped. Long live PlaidNet!

Can we please stick to actual sources that work on the stuff and maybe attempt to reconcile 3rd party sources against them?



Oh, one last little gem from the same video (re another "expert" opinion)
Note how Karpathy specifically delineates "No Lidar" and "No HD maps" ....
So, the argument that by "NO HD Maps" Karpathy means that it is not lidar generated maps is silly.
[attached image: Karpathy slide noting "No Lidar" and "No HD maps"]



Lastly, this video is immensely relevant when discussing FSD and Tesla.
Please take 28 minutes and watch it:
 
Thanks for the reply.

:rolleyes: "quantum leap" has been defined for you on the very first page of this thread.

I prefer to use the actual correct definition of quantum rather than one a story editor made up :)


Some of this "So far though it's less of a total re-write than I anticipated" and "this is not the rewrite I expected" BS is getting out of hand.

Mainly because it's not BS, and is based on a look at the actual code in question and how it's basically the same as it was in 2019, just with things now shown to the user that only folks with admin access saw before.

Again, Green has shown actual vids from old firmware showing the same kind of stuff as the "re-write" does.



In June 2020 - Karpathy gave a presentation on how they are solving FSD and the challenge of scaling at Tesla.

[attachment: slide from Karpathy's presentation showing HydraNet]

Notice that is HydraNet, the one that existed prior to PlaidNet (which, coincidentally, no longer exists in the firmwares). But HydraNet lives on!
Karpathy is presenting HydraNet as the current solution being used at Tesla for FSD, not a past solution.

But it's both!

Hydranet has been there since before any talk of a "fundamental rewrite" and it's still there.

They just stuck ANOTHER NN after it to tie together each head in the BEV (and again the BEV stuff has ALSO been there since at least late 2019 in the existing code)


When Karpathy shows this slide, he says the following about the 1000 distinct predictions coming out of the 48 NNs.
"none of these predictions can ever regress and all of them must improve over time and this takes 70,000 GPU hours the Train of the neural nets"

Now, you posted this next image on another thread... it is useful

[attachment: diagram of the training loop, including "Shadow Mode"]
The only thing that I can come up with for "PlaidNet" was a new set of NNs that failed the test laid out by Karpathy: "none of these predictions can ever regress and all of them must improve over time, and this takes 70,000 GPU hours to train the neural nets"

That is my guess as to why we do not see PlaidNet any longer.

Sounds like they went down a genuine re-write path with plaidnet and found out it didn't work...so they decided to do a patch job on the existing system.


But back to the image you posted.
Notice how they have "Shadow Mode" and then the arrow loops back around: data is recorded and the training process is restarted.
I think PlaidNet was put out to the fleet in Shadow Mode to see how it would perform. Maybe PlaidNet's performance on the actual fleet is what killed it off; Karpathy knows for sure.

That's not how Shadow mode works- and I know you've been shown the explanation of how it ACTUALLY works many times.

That diagram shows shadow mode as collecting data to be used for training.

Still images or video of specific objects or events that Tesla specifically sends out a campaign looking for- these are sent back to Tesla for manual, human labeling, and then, once labeled, they are used to train NNs.

That's it.

There's not a secret additional entire FSD system running silently and checking how good it is.

It's just a passive data collection system.
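
For anyone unfamiliar with what a "campaign" style of passive collection looks like, here is a minimal sketch. The trigger condition, field names, and upload limit are hypothetical, invented purely to illustrate the idea of matching frames being saved for later human labeling rather than driving the car.

Code:
# Hypothetical campaign-style collector -- trigger and field names invented.
def stop_sign_with_no_map_entry(frame_metadata):
    """Example trigger: the detector saw a stop sign where the map has none."""
    return (frame_metadata.get("detected_stop_sign", False)
            and not frame_metadata.get("map_has_stop_sign", False))

def shadow_mode_collector(frames, trigger, max_uploads=100):
    """Collect (but never act on) frames matching the active campaign trigger;
    the matches would be sent back for manual labeling and then training."""
    return [f for f in frames if trigger(f)][:max_uploads]

frames = [{"id": 1, "detected_stop_sign": True,  "map_has_stop_sign": False},
          {"id": 2, "detected_stop_sign": True,  "map_has_stop_sign": True}]
print(shadow_mode_collector(frames, stop_sign_with_no_map_entry))   # only frame 1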


In June Karpathy was discussing this rewrite at Tesla as the upcoming big milestone for deliverables.
And he only presents HydraNet and, later, how those HydraNet outputs feed the BEV net. I am going to go with the most straightforward and direct answer of them all: these are all part of the FSD solution. When they started them, they thought they could go the normal route of training these; this is where the "rewrite" came in -- how to properly label/train the NNs, especially the BEV net.

The rewrite was supposed to be the FSD stuff running on the car.

Labeling isn't done on the car- it's done manually by humans (until dojo anyway).

Training NNs isn't done on the car either- it's done by more powerful systems back at Tesla. Then the updated NNs are pushed out via firmware.

That's how it has worked for years now.

Musk wrote "It's a fundamental architectural rewrite, not an incremental tweak."

But it isn't.

https://twitter.com/greentheonly/status/1318337282873708547

Green said:
this is NOT the rewrite, this code was there for a long time and the presentation this picture is from is from February. the top squares that tie into BEV nets are good old Hydranets that were premiered in September 2018 in customer firmwares





HydraNet and BEV Net are the backbone of the 4D perception infrastructure, and the fundamental rewrite that Musk brought up in January on the Third Row podcast was about how certain pieces of the stack would be trained. (see video transcript below)

Except, they aren't the "fundamental rewrite"

Since again Hydranet has been in use since 2018 for each camera.

And STILL IS in the new FSD beta.

With the 2019 BEV NNs processing the results coming from the hydranets.

Nothing was "fundamentally rewritten" here.


In case people are not sure BEV net is for Birds Eye View - this is the NN that

Has been in the code since 2019.40.50 so isn't "new" it's just now visible to end users.



processed output of each camera feed (processed through HydraNet)

Which has been in the code since 2018.

Also not new.


Weirdly enough many people expected the FUNDAMENTAL REWRITE to be NEW stuff.

Not largely 2018 and 2019 code finally just turned on.


The word "rewrite" comes up only ONCE in the entire transcript.
140:04 significant foundational rewrite in the
140:07 telepath system that's almost complete
140:10 really yeah and what it what part of the
140:13 system like perception like planning or
140:16 just like it's it's instead of having
140:21 planning perception image recognition
140:24 will be separate they're they're being
140:27 combined so yeah I don't even understand
140:35 what if actually like the you're the
140:40 sort of neural net is absorbing more
140:43 more of the problem right beyond simply
140:46 the is this is if you see if an image is
140:50 this a car or not oh no it's it's kind
140:54 of what where does it where you do from
140:56 that

But this current system does not move more of the work to the NNs.

The NNs are still only used for perception

All the driving code is still conventional C++ as it always has been.


Having NNs actually do planning and decision making WOULD be a fundamental change- but this doesn't do that.
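
To illustrate the distinction being argued here, this is a minimal sketch of the split described above: a learned perception stage hands a structured scene to hand-written planning rules. The scene fields and rules are invented; and while the real planner is reportedly conventional C++, Python is used here just to keep all the examples in one language.

Code:
# Illustrative only: NN perception feeding conventional, hand-coded planning.
from dataclasses import dataclass

@dataclass
class Scene:
    lead_vehicle_distance_m: float
    traffic_light_state: str           # "red" | "yellow" | "green" | "none"

def perceive(sensor_frame) -> Scene:
    """Stand-in for the learned perception stack (the NN part)."""
    return Scene(lead_vehicle_distance_m=42.0, traffic_light_state="red")

def plan(scene: Scene) -> str:
    """Stand-in for hand-coded planning/decision logic (the non-NN part)."""
    if scene.traffic_light_state == "red":
        return "brake"
    if scene.lead_vehicle_distance_m < 10.0:
        return "slow"
    return "cruise"

print(plan(perceive(sensor_frame=None)))   # "brake"

Moving planning itself into a learned model, rather than rules like the ones in plan(), is the kind of change that would count as "absorbing more of the problem" into the NNs.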



But on here, "it is not what I expected"! Maybe it's time to check your expectations with actual facts of what has been presented so far from multiple sources.

What we know for a fact is the hydranet->BEV nets were present before any rewrite talk.

So the fact that you think that's the re-write suggests you're the one whose expectations are not in line with reality.



Can we please stick to actual sources that work on the stuff and maybe attempt to reconcile 3rd party sources against them?

You mean like the guy who actually, unlike you, has direct access to the code and has for years and says the same things I'm telling you? That this isn't a fundamental rewrite at all? That it's mostly stuff that's been there since 2018 and 2019 and is now just consumer-facing?

Yes- that'd be great, instead of you misunderstanding what AK has been saying in some desperate attempt to divert from the fact Elon overhyped how "fundamental" and "rewritten" the current code actually is.

Now- it's entirely possible he was ORIGINALLY talking up plaidnet as such a fundamental change... but as you note- that appears to have failed and been removed so they went back to what they already knew worked and tried to improve it.





Oh, one last little gem from the same video (re another "expert" opinion)
Note how Karpathy specifically delineates "No Lidar" and "No HD maps" ....
So, the argument that by "NO HD Maps" Karpathy means that it is not lidar generated maps is silly.

I never made any such argument, but you have fun with your strawman...
 
@Knightshade wow, is there an opposite of fanboi? Dang!

I get it, you believe "Green sees all", but what Karpathy says and presents is false, did I get that right?

Also, in February 2020, Karpathy presented at another conference with an almost identical slide deck and info.
A little timeline:
  • 1/2020 - Musk says a "fundamental rewrite in Autopilot system that is almost complete" at 2hr 20 min in the 3rd Row Podcast
  • 2/2020 - Karpathy presents "AI for Full-Self Driving at Tesla" and only talks about HydraNet and BEV Net
  • 6/2020 - Karpathy presents "Workshop on Scalability in Autonomous Driving" again only talks about HydraNet and BEV Net.
But "a brief appearance of a new NN architecture called "plaidnet" in 2020.8-2020.12" in the fleet firmware and we are supposed to throw out consistent 6 months of direct info from the company? (it only lived 4 weeks in production cars -- 4 weeks!)

Am I understanding you correctly?

If I am an Elon fanboi, I guess that would make you a @verygreen fanboi.
 
@Knightshade wow, is there an opposite of fanboi? Dang!

I get it, you believe "Green sees all", but what Karpathy says and presents is false, did I get that right?

Not even slightly, no.

Which is pretty on brand for you :)


Also, in February 2020, Karpathy presented at another conference with an almost identical slide deck and info

Yes- I have specifically referenced that presentation multiple times.

So it's weird you're now acting like YOU just discovered it and are telling me about it.



A little timeline:
  • 1/2020 - Musk says a "fundamental rewrite in Autopilot system that is almost complete" at 2hr 20 min in the 3rd Row Podcast
  • 2/2020 - Karpathy presents "AI for Full-Self Driving at Tesla" and only talks about HydraNet and BEV Net
  • 6/2020 - Karpathy presents "Workshop on Scalability in Autonomous Driving" again only talks about HydraNet and BEV Net.
Does Karpathy describe these as a "fundamental rewrite" at any point?

Does he claim Hydranet is not stuff they've had in the code since 2018?

Does he claim BEV is not stuff they've had in the code since late 2019?

I ask because I think the answer is no to all 3.

Which means Karpathy says you are wrong and I am right.


But "a brief appearance of a new NN architecture called "plaidnet" in 2020.8-2020.12" in the fleet firmware and we are supposed to throw out consistent 6 months of direct info from the company? (it only lived 4 weeks in production cars -- 4 weeks!)

Am I understanding you correctly?

Again- not even slightly.

At least you are consistent.
 
Does Karpathy describe these as a "fundamental rewrite" at any point?

Does he claim Hydranet is not stuff they've had in the code since 2018?

Does he claim BEV is not stuff they've had in the code since late 2019?

I ask because I think the answer is no to all 3.

Which means Karpathy says you are wrong and I am right
A "foundational rewrite of the autopilot system that's almost complete" and you expected it to start when? December 2019?

If it was almost complete when Musk said those words in 1/2020, how long do you think Karpathy and team were actually working on it?

I mean, I get it, Karpathy is a brilliant guy, but I am pretty sure even for him this is not a weekend project.

HydraNets have existed for a long time; the Temporal and BEV Net were later additions, and this fits the functionality actually described by both Karpathy and Musk.


So it's weird you're now acting like YOU just discovered it and are telling me about it.
I am not sure why you would jump to a conclusion of me "discovering" it now. I used the LATEST presentation in my initial detailed reply (since it would represent Karpathy's most recent public statements on the FSD system)

I added the 2/2020 presentation to show that Karpathy in no way contradicted Musk or himself this entire time.
The functionality Karpathy showed and what Musk talked about align.

Can you let me know what Green says about that?! Please don't forget the videos from Green's presentations as well. :)
 
A "foundational rewrite of the autopilot system that's almost complete" and you expected it to start when? December 2019?

If it was almost complete when Musk said those words in 1/2020, how long do you think Karpathy and team were actually working on it?

I mean, I get it, Karpathy is a brilliant guy, but I am pretty sure even for him this is not a weekend project.

HydraNets have existed for a long time; the Temporal and BEV Net were later additions, and this fits the functionality actually described by both Karpathy and Musk.


Hydranets come from 2018.

BEV from 2019.

If they're promising a foundational re-write will be "coming" in 2020, and then it turns out it's just code that has already been in use from 2018 and 2019, that's not really a foundational rewrite coming out.

I'm really not sure how you can make any argument otherwise.




I added the 2/2020 presentation to show that Karpathy in no way contradicted Musk or himself this entire time.
The functionality Karpathy showed and what Musk talked about align.

They really don't.

For example nothing Karpathy discusses moves anything "new" to the NNs on the car- they're still only used for perception. Just like they always have been.

While the Musk quotes from 3rd row claim they're moving lots of things other than perception into the NNs.

Which it turns out this "rewrite" doesn't actually do.

Which again makes it sound like Karpathy has been telling you the same thing I have- that the "beta FSD" is 1-2 year old code now with more features exposed to the public.... and Musk was instead hyping an attempted different path they since abandoned but he doesn't want to say so.
 
Does Karpathy describe these as a "fundamental rewrite" at any point?
Software 2.0 is Karpathy's view of Machine Learning and future of coding -- Software 2.0

In the video from February he says (again YouTube transcript with the normal caveats)
19:27 time since I joined about two and a half
19:29 years ago the neural net
19:30 have expanded in how much of software
19:32 one panel and they've taken over and so
19:34 the trend is upwards
19:35 so in particular in the case of these
19:38 road edges as I described who are now
19:41 working towards is that we don't
19:42 actually want to go through this
19:42 explicit occupancy tracker software
19:44 bumping a code we'd like to engulf it
19:46 into neural net code and we see that
19:48 typically that works really well so a
19:50 lot of our networks have been moving
19:51 towards these bird's-eye view Network
19:52 predictions where you take all these
19:55 cameras and you feed them through
19:56 backbones and then you have neural net
19:59 fusion layer that stitches up the
20:02 feature maps across the different views
Note: I wish the quotes would not collapse so that the highlighted portions would be visible -- that is why I split it into 2 quotes

That is all consistent with what Musk said and what Karpathy said repeatedly.
My prediction: HydraNets will get smaller / more focused as more stuff is absorbed by BEV Net. But HydraNets will still process each image so that it can be stitched by the "NN fusion layer".
 
Not even slightly, no.
If I am not understanding you correctly, can you please enlighten us all with your experience:

How long does a "foundational rewrite of the Autopilot system that is almost complete" in 1/2020 take to accomplish?
What did Karpathy do for the 2.5 years at Tesla?

He is the architect of HydraNets as well as the BEV Net (which appeared in car firmwares at 2019.40.x -- just about 1 year ago), the Fusion Layer, and the Temporal Layer.
How long would that take?

Does green have something to pass along on this as well? ;) Would love the all-seeing green to chime in!
 
The fact that Karpathy said he was excited it would be going into production builds before Elon announced the beta should be enough. And Elon said the beta was only a few days behind the build on his personal car which he’s claimed had the latest and greatest.
 
There's a lot of this dumbing down in this forum.

Just because Tesla uses vision, doesn't mean they copied Mobileye. No technology is absolutely novel. Every technology is inspired by existing implementations or ideas.

Regardless, it seems Mobileye has abandoned vision based FSD, as they are depending on HD maps, and they're also developing a lidar based FSD system as well. I don't regard use of HD maps as vision-based. All of this discussion usually just involves us chasing our own tails.

Literally the entire camera setup and angles, including the trifocal camera with 3 different FOVs, was Mobileye's invention. Mobileye already had dozens of automakers testing out the setup and chip in 2014, one of them being Volvo.

Autonomous drive technology – trifocal camera


Secondly, all the variety of neural networks currently running on Tesla's stack are things Amnon presented back in 2016.

 