Welcome to Tesla Motors Club

New Elon Musk AI Podcast: Neuralink, AI, Autopilot, and the Pale Blue Dot

Haven't listened yet. I'm interested in hearing why Elon's prediction from the last interview hasn't borne out.
"I think it will require detecting hands on wheel for at least six months or something like that from here. Really it's a question of, from a regulatory standpoint, how much safer than a person does Autopilot need to be for it to be okay to not monitor the car."
I guess he did say "at least"!
 

The FSD part starts at 27:18 in the video. I don't think he really addresses that issue. He basically goes over the three parts of autonomy he mentioned on the recent earnings call: low-speed autonomy (i.e. Smart Summon in parking lots), intermediate-speed autonomy (traffic lights and stop lines), and high-speed autonomy (NOA on the highway). He says NOA can do lane changes and highway interchanges, does not require a lot of interventions, and is safer than human driving in stop-and-go traffic. Elon says stopping at lines at intersections is relatively easy because you can use GPS and vision to know where to stop, but that complex traffic lights and extremely curvy roads are the two main problems Tesla still needs to solve. He says perception is harder than control because you need to see the environment perfectly, but once you have that, controlling the car is "relatively easier".

I feel like Elon is really glossing over a lot of the "features" needed for autonomy, since he only mentions auto lane changes and highway interchanges for NOA, and traffic lights and stop lines for intermediate autonomy (presumably a reference to "automatic city driving", I am guessing). What about reading traffic signs, responding to debris on the road, pulling over for emergency vehicles, handling construction zones, etc.? Perhaps Elon only mentions traffic lights and curvy roads as remaining challenges because the other stuff is basically done already on the AP3 NN and just not released yet? That would be good news if true. If not, Elon is very naive if he thinks that full autonomy just requires navigating parking lots, auto lane changes and interchanges on the highway, and traffic lights, stopping at intersections, and making turns on city streets. I am hoping his list is not exhaustive or literal, but rather just stream of consciousness about features they are currently working on.
 
I found it amusing how far apart Lex and Elon are on full self driving. Lex: it is a really hard problem, we don't even know how hard it is yet. Elon: it is easy, we already do it on the freeway, in stop and go traffic it already is better than a human pilot.
 

Yeah, I noticed that. It is probably worth noting that Elon drives the latest software in his car so his comments are probably based on more advanced FSD software than what we have in our cars. That might explain why he seems to talk about autonomy in ways that seem way better than what we have.
 
Elon has been making unrealistic driverless statements since 2015. This is just Elon being Elon.
How do you know they don't already have a human level artificial intelligence in the lab and they just need to convince it to drive the car and not go on a homicidal rampage after sitting in Bay Area traffic for two hours?
 
I found it amusing how far apart Lex and Elon are on full self driving. Lex: it is a really hard problem, we don't even know how hard it is yet. Elon: it is easy, we already do it on the freeway, in stop and go traffic it already is better than a human pilot.

Yeah, I too find it pretty astonishing how far apart a truth teller is from a liar.
 
I don't believe Elon is lying. A common trait of successful entrepreneurs is to have unrealistic expectations and to demand those unrealistic expectations of the people who work for them. He actually believes it and is not concerned with the nuances; that is for the people who work for him to figure out.
 

Years ago I would have accepted the excuse that the EAP timetable was the result of Elon being naive about the difficulty of an ADAS system and having unrealistic expectations of the engineers who worked for him.

But, I don't think that excuse works anymore.

He should have learned through that whole ordeal that there are things you can push to accelerate a design. I'm firmly of the belief that hardware engineers tend to excel when pushed, if given enough resources, especially when you're talking about well-understood HW architectures. That is why the HW3 project was so successful.

But the same isn't true with SW, and especially not AI-related stuff.

The only way Elon isn't 100% full of BS is if the FSD software stack is monumentally better than what's in the car.

Part of me is really hopeful it is. When I look at Karpathy's talk about smart summon it seems a lot smarter than the smart summon in my HW 2.5 car. It's as if there is released code doing one thing, and unreleased code doing a different thing.

But another part of me is completely cynical, as both NoA and Smart Summon suck in ways that should have been painfully obvious. As an example, there are hundreds if not thousands of EAP/FSD owners updating OpenStreetMap in a desperate attempt to make Smart Summon suck a little less.

In my experience it cut the suck by about half.
 
He called perception and bringing objects into a vector space hard, which others like Mobileye call easy.
Holy crap, talk about being behind.

The very fact that Tesla can't ace a pedestrian detection test in the middle of the day, with pedestrians crossing in front of a car going straight, is pretty telling.

But what I don't understand is why so many other manufacturers also failed AAA's much harder test, the one done at night.

Now, I don't know where each manufacturer got their ADAS system from, but if Mobileye calls perception easy, then why do so many ADAS systems suck?
 
I find it surprising that Tesla's cameras still run in low-resolution mode, 1280x960. When they go to high resolution, that may make pedestrian detection, traffic lights, and other things easier, but it is going to increase the load on the machine learning systems (training and inference) by roughly four times.
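The "four times" figure follows from simple pixel arithmetic, assuming the high-resolution mode doubles each axis (2560x1920 is my assumption here, not a confirmed Tesla spec):

```python
# Sketch of the compute-scaling claim: per-pixel vision workloads grow
# with pixel count, so doubling each dimension quadruples the load.

low_res = (1280, 960)    # resolution reported in the post
high_res = (2560, 1920)  # hypothetical 2x-per-axis high-res mode (assumption)

low_pixels = low_res[0] * low_res[1]     # 1,228,800 pixels per frame
high_pixels = high_res[0] * high_res[1]  # 4,915,200 pixels per frame

print(f"load multiplier: {high_pixels / low_pixels:.0f}x")  # load multiplier: 4x
```

The same logic means any fractional bump in resolution scales the load quadratically, which is why the jump matters so much for both training and in-car inference.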
 

The AP2 computer probably can't handle high resolution, considering that NOA is already pushing AP2 to about 80%. I imagine Tesla's goal is high resolution, and that is where Dojo and the AP3 (or better) computer come in.
 
Yeah, I noticed that. It is probably worth noting that Elon drives the latest software in his car so his comments are probably based on more advanced FSD software than what we have in our cars. That might explain why he seems to talk about autonomy in ways that seem way better than what we have.
What is the hardest part of FSD?
In the last AI interview: parking lots.
In this AI interview: traffic lights.

Musk just talks about the next hill that they are climbing.

Maybe that's the way he deals with very difficult problems: one difficult problem at a time. When building out the Model S, if he had been thinking about where to place GF4 (BTW, in Berlin, as announced today), he wouldn't have done well with the S.

There is an assumption by people listening to him (or reading what he said) that what he calls the difficult problem now implies the rest is easy, or even that he thinks the rest is easy. I think it is better to interpret his comments as the things Tesla is facing now and trying to solve. In a way, it gives a very transparent window into their progress.

You can see that even at SpaceX, or in "production hell" turning into "delivery hell" and then "service hell". We can either ask why he can't plan ahead and address something before it becomes "hell", or we can say he is recognizing what the most important problem is and getting it solved.

P.S.: When talking about Neuralink he gave a fairly complete overall picture, including the difficulties involved. I think that is because that is the stage at which they are operating now, i.e. figuring out the various kinds of things they need to get done (and hire people for).
 