
How will Tesla demo FSD?

@diplomat33 @CarlK Regarding your earlier exchange... two what-ifs:

1) What if Tesla is intentionally secretive?

2) What if Elon Musk is not always honest?

The perspective certainly changes a bit depending on how you answer those questions.

Absolutely. If you answer that Tesla is being intentionally secretive then you will probably conclude that Tesla is trying to hide their lack of progress. But if you answer no, Tesla is not being secretive on purpose, you will probably think of a completely benign explanation for the secrecy.

I would argue that the April 22 event is actually proof that Tesla is not being intentionally secretive. After all, if Tesla were trying to hide their lack of FSD progress, why voluntarily do a big public event where they have to show that progress? Remember, "Autonomy Investor Day" was Tesla's idea. Furthermore, if Tesla were being intentionally secretive, why let Musk boast about FSD, which only draws attention and scrutiny? Again, if Tesla were being intentionally secretive in order to hide their lack of progress, it would be better to keep a low profile and not talk about FSD. So yes, Tesla is secretive in their FSD work, but not intentionally so, IMO.
 
I would argue that the April 22 event is actually proof that Tesla is not being intentionally secretive. After all, if Tesla were trying to hide their lack of FSD progress, why voluntarily do a big public event where they have to show that progress? Remember, "Autonomy Investor Day" was Tesla's idea.

I don't see that as proof of this. Tesla could be secretive and lying through their teeth up to that event and then go back to doing it right afterwards (not saying they are doing this, just outlining the concept), while spinning as much as they can at the event. Or maybe they have "faked it until they make it" and now they are ready to come out of secretive/lies, more lies mode. :) Don't take this too seriously, just outlining the idea that this doesn't really prove much about the past... or necessarily the future.

Tesla could also have some completely unrelated motivations like wishing to publicize some FSD progress for stock market purposes.
 
I have also learned something from the Lex Fridman video linked above. It confirmed what we thought: the camera is more challenging to program, but it makes up for that by being able to provide a lot more machine learning experience. I can't think of a way to avoid redoing the learning when moving from a Lidar-centric system to a camera-centric system, because a more complex NN is needed for the camera system. You have to learn how to walk without crutches.
Everyone is using cameras though. Who is using a Lidar-centric system? Everyone, including Tesla, is using a combination of sensors.
I'm not too worried about it. If there are disengagement data, those people will still find something else to discredit Tesla. I trust Elon, who, although often optimistic, is always honest in what he says. Like I said, if he wanted to game the CA reporting system he's more than smart enough to do it. There is too much noise out there, and most people just don't have the ability to pick out the real stuff amid it. Not saying I wasn't confident before, but that video from Lex Fridman, someone who is as knowledgeable and objective on this subject as anyone, did give me quite a warm and fuzzy feeling. Almost want to shout I told you so.
"so if you look at a suite that for example Tesla is using which is ultrasonic radar and camera and you compare it to just lidar and see how these paths compare that actually the suite of camera radar and ultrasonic are comparable to lidar so that those are the two comparisons that we have you have the costly non machine-learning way of lidar and you have the cheap but needs a lot of data and is not
explainable reliable in the near term vision based approach and those are the two competing approaches. Now of course Waymo will talk about they're trying to use both but ultimately the question is who catches, who is the failsafe in the semi autonomous way when there's a camera based method the human is the failsafe when you say oh crap I don't know what to do the human catches it. In the fully autonomous mode, so what Waymo's working on, and others, the failsafe is lidar, failsafe is maps, that you can't rely on the human but you know this roads so well that if the camera is freaked out if there's any of the sensors freaked out that you're able to you have such good maps you have such good accurate sensors that the fundamental problem of obstacle avoidance which is what safety is about can be solved, the question is what kind of experience that creates."
It sounds like he's skeptical that a camera-only approach can be safe enough for fully autonomous vehicles in the near term. Autopilot is still running into fixed objects even after 1 billion miles with the current sensor suite. Maybe HW3 will fix it.

Trying to game California's autonomous testing rules didn't work out so well for Uber. For Uber it was videos of their cars running red lights that caused the DMV and attorney general to bring the hammer down. We'll see what happens if Tesla tries a similar testing strategy.

P.S. Youtube transcribed it! Neural nets are awesome :D
 
Indeed what is happening here is that Tesla probably has a worse than average computer vision system but a vastly higher than average desire to build advanced driver’s aids and the OTA means to iterate on those. That is a genuine benefit for Tesla and its customers.

It's really hard to say how well Tesla stacks up against the competition in terms of computer vision.

The entire ADAS industry is a bit of a mess right now, with promises not being met by the automotive companies that incorporate these systems.

Like BMW claimed the X1 could do pedestrian detection, but it completely failed when the IIHS tested it.

Nissan is facing an NHTSA investigation into false AEB braking in their system on the Rogue. We're not talking slight false braking events like Tesla's AP2 system, but full on AEB braking.
Nissan Rogue Braking Issue Prompts NHTSA Investigation

As consumers we don't really know where the failures within the ADAS are coming from. Are they failures of the vision system or some other part of the ADAS system?

Both you and Blader went out of your way to make sure I was educated that MobileEye wasn't the complete ADAS vendor in most cases.

But, customers like myself don't really care.

We just want a system that has lots of capabilities, and we don't want a bunch of finger pointing.

Sure it's disappointing that Tesla's current vision system with HW2/HW2.5 doesn't recognize a whole lot of stuff. But, I have confidence that the vision system itself will actually be something Tesla will make great strides on with HW3.

Within 6-12 months I think any difference in the vision system will be irrelevant. My biggest concern is whether Tesla can make great strides on the driving logic front, and with maps.

I have a Model 3 right now for the reason you said (Tesla is more aggressive with rolling it out), and because I wanted a system where a single company had control over the entire ADAS system. So anything it did or didn't do would be completely the fault of that company. Assuming it's not some defect with the radar itself or the cameras.

That's the exciting thing about Tesla. They're the only automotive maker that owns their own ADAS system. At least that I know of.

The frustrating part is that they don't partner up to get to the market with features faster.
 
@Daniel in SD He said camera+radar+ultrasonic sensors cover everything Lidar covers, in addition to what the camera does best. However, Elon said recently that Tesla is reducing the reliance on radar and thinking one day they may be able to do without it. He did not elaborate, though.

@S4WRXTTCS Another thought after reading your post about the ADAS situation. Not having OTA capability really puts the rest of the industry in a serious bind. They don't have the luxury of changing anything after the car is delivered into the customer's hands. Sure, they could do a recall-type service, but that really makes things very cumbersome.
 
Everyone is using cameras though. Who is using a Lidar-centric system? Everyone, including Tesla, is using a combination of sensors.

"so if you look at a suite that for example Tesla is using which is ultrasonic radar and camera and you compare it to just lidar and see how these paths compare that actually the suite of camera radar and ultrasonic are comparable to lidar so that those are the two comparisons that we have you have the costly non machine-learning way of lidar and you have the cheap but needs a lot of data and is not
explainable reliable in the near term vision based approach and those are the two competing approaches. Now of course Waymo will talk about they're trying to use both but ultimately the question is who catches, who is the failsafe in the semi autonomous way when there's a camera based method the human is the failsafe when you say oh crap I don't know what to do the human catches it. In the fully autonomous mode, so what Waymo's working on, and others, the failsafe is lidar, failsafe is maps, that you can't rely on the human but you know this roads so well that if the camera is freaked out if there's any of the sensors freaked out that you're able to you have such good maps you have such good accurate sensors that the fundamental problem of obstacle avoidance which is what safety is about can be solved, the question is what kind of experience that creates."
It sounds like he's skeptical that a camera-only approach can be safe enough for fully autonomous vehicles in the near term. Autopilot is still running into fixed objects even after 1 billion miles with the current sensor suite. Maybe HW3 will fix it.

Trying to game California's autonomous testing rules didn't work out so well for Uber. For Uber it was videos of their cars running red lights that caused the DMV and attorney general to bring the hammer down. We'll see what happens if Tesla tries a similar testing strategy.

P.S. Youtube transcribed it! Neural nets are awesome :D
Tesla has at least one patent I've read on improving GPS accuracy with vision-based mapping, which is different from but similar to what LIDAR mapping is trying to accomplish. It's really clever, although it still relies on the primary sensors.
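I haven't seen enough detail to reproduce the patent's actual method, but the general idea of tightening a noisy GPS fix with camera-recognized landmarks can be sketched roughly like this. Everything below (landmark names, coordinates, weights) is invented for illustration, so treat it as a toy, not Tesla's implementation:

```python
# Toy sketch: refine a noisy GPS fix using landmarks recognized by the cameras.
# All coordinates, offsets and weights are invented for illustration; this is
# not the patent's actual method.

# Surveyed landmark positions (e.g. from a prior mapping pass), meters on a local grid.
LANDMARK_MAP = {
    "stop_sign_17": (105.0, 220.0),
    "lamp_post_3": (98.0, 260.0),
}

def refine_position(gps_fix, observations, gps_weight=0.2):
    """Blend a noisy GPS fix with positions implied by recognized landmarks.

    observations: list of (landmark_id, offset_x, offset_y), where the offsets are
    the landmark's position relative to the car as estimated from camera frames.
    """
    estimates = [gps_fix]
    weights = [gps_weight]
    for landmark_id, dx, dy in observations:
        lx, ly = LANDMARK_MAP[landmark_id]
        # If the landmark sits at (lx, ly) and appears (dx, dy) away from us,
        # then we must be at (lx - dx, ly - dy).
        estimates.append((lx - dx, ly - dy))
        weights.append(1.0)

    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return (x, y)

# Noisy GPS says (100, 218); a recognized stop sign pulls the estimate toward (101, 217).
print(refine_position((100.0, 218.0), [("stop_sign_17", 4.0, 3.0)]))
```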
 
@Daniel in SD He said camera+radar+ultrasonic sensors cover everything Lidar covers, in addition to what the camera does best. However, Elon said recently that Tesla is reducing the reliance on radar and thinking one day they may be able to do without it. He did not elaborate, though.
Except when it doesn't. Right now the sensor suite and software on HW2.5 is not a good enough vision system for fully autonomous driving. Maybe that will change with HW3.
Lex Fridman is obviously most interested in computer vision, but it seems like path planning is even more difficult than the vision system. We have some visibility into how much progress Tesla has made in computer vision but almost none into how much progress they've made on path planning.
 
Except when it doesn't. Right now the sensor suite and software on HW2.5 is not a good enough vision system for fully autonomous driving. Maybe that will change with HW3.
Lex Fridman is obviously most interested in computer vision, but it seems like path planning is even more difficult than the vision system. We have some visibility into how much progress Tesla has made in computer vision but almost none into how much progress they've made on path planning.

It's not vision per se. Meaning the challenge is not that the sensors can not see things. The challenge at this point is that the machine can not always identify what the sensors see. To make it work, the machine needs to identify everything as either definitely a hazard or definitely not a hazard. There can not be anything in between. As Andrej Karpathy said, it's mostly about resolving the remaining edge cases. It's mostly a machine learning problem, not a vision problem, at this stage.
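As a loose illustration of that "no in-between" point (thresholds and names below are purely hypothetical, nothing from Tesla's actual stack): a detection whose confidence lands between the "definitely a hazard" and "definitely clear" thresholds is exactly the kind of edge case that still has to be trained away.

```python
# Hypothetical confidence thresholds -- illustrative only.
HAZARD_THRESHOLD = 0.95   # above this: definitely a hazard
CLEAR_THRESHOLD = 0.05    # below this: definitely not a hazard

def classify_detection(hazard_probability: float) -> str:
    """Turn a model's hazard probability into a driving decision."""
    if hazard_probability >= HAZARD_THRESHOLD:
        return "hazard"      # brake / avoid
    if hazard_probability <= CLEAR_THRESHOLD:
        return "clear"       # proceed
    # Anything in between is the edge case that fleet data is supposed to shrink.
    return "uncertain"

print(classify_detection(0.97))  # hazard
print(classify_detection(0.50))  # uncertain -> needs more training or a human fallback
```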
 
That is exactly how I felt for years about Mobileye, and the reason for numerous debates with Mobileye fans/trolls on this forum. Something is very wrong with that company when all you have to show are powerpoints and videos. And then you have to keep on inventing new stories when things did not pan out.

I'd like to go on record and say that I believed MobilEye was a scam back in 1999/2000 when they came out. They made huge sweeping promises of all the things their research led them to believe they could do. It took them five years to develop their first product, and it was a total joke compared to what they promised feature-wise. They had all kinds of partnerships, they had deals lined up, and they started taking serious investment money, but that first generation of non-demo hardware went nowhere. It took them a total of nine years to actually release any of the EyeQ lineup!

Somehow, even without the EyeQ platform being in any vehicles, they were winning all kinds of awards and receiving all kinds of grants and investment money. The first two releases of the platform were absolute flops, and it wasn't until the EyeQ3 that the system could start to handle actual simplistic driver assistance scenarios. People talk about how the AP1 cars drive much better than AP2+, but they don't realize that's because the line follower logic and ADAS was basically all that the EyeQ3 platform was doing. This was such an issue early on in AP1 deployment, that Tesla went directly to Bosch (the RADAR module manufacturer) to ask if they could deploy custom firmware loads on the radar units that would send raw radar data off to another device while still integrating with the EyeQ3 unit. That move pissed MobilEye off, and the two stopped working with each other.

It wasn't until after Intel bought them that the EyeQ4 platform actually got any traction, because investment money dried up. And all the companies that touted their use of EyeQ4 and the imminent implementation of Level 3+ driving have quietly released updates to their timelines and plans. Suddenly none of their flagship platforms are doing anything that the EyeQ3 didn't already do. You'd imagine that if MobilEye was so good at all of this, then EyeQ4 would have been a slam dunk. But it's not. Because the entire intent was to find some whale of an investor to come buy them. MobilEye realized early on that the only way to sell autonomous driving was to never actually deliver it. The art of the grift.

The problem is I don't care about how advanced your demo videos are. I only care what my car does. And my Tesla can do much much more than any of those Waymo cars that I cannot buy. End of story.

I agree with this 1000%.

No, the Waymo cars can do more than our Tesla car. The problem is that we cannot buy the Waymo car so the fact that they can do more is useless to us.

Flat out, no they can't. Waymo's cars can't do anything an AP2+ Tesla can't do. You've been duped by overzealous press coverage. I forget who it was, but there was a journalist who wrote a real report and snuck some video footage out of Waymo's last press day. His reporting and the posted video appear to show that Waymo is great at following clearly marked lanes and dodging obstacles (thanks to lidar, no doubt), but that if the vehicle needed to make a 90-degree turn onto a road that wasn't at an intersection, it was effectively unable to do so. As of right now, we have YouTube video from Tesla owners showing their cars stopping at intersections and properly following clearly marked roads as well as highways.

All this tells us is that Waymo's marketing department is doing their job.
 
It's not vision per se. Meaning the challenge is not that the sensors can not see things. The challenge at this point is that the machine can not always identify what the sensors see. To make it work, the machine needs to identify everything as either definitely a hazard or definitely not a hazard. There can not be anything in between. As Andrej Karpathy said, it's mostly about resolving the remaining edge cases. It's mostly a machine learning problem, not a vision problem, at this stage.
Obviously the cameras can see things, that's how people keep posting videos of all the things the vision system can't see yet :rolleyes:
MobilEye was a scam
Waymo's cars can't do anything an AP2+ Tesla can't do
So basically you're saying that all FSD demos are a lie. It certainly seems possible. This is why Tesla needs to do more than show a demo or do test rides. They've had hundreds of employees using HW3 for months so that should be about a million miles. In order for Elon to claim that they'll be hands free in six months there must be some hard data that they can present.

What is your feeling about Cruise? Their demos are amazing!
 
Noooope. Nope. Nope. AI doesn't do inference. It's literally something that hasn't been done/discovered yet.

Generally speaking, Machine Learning is a branch of AI.

Deep Learning is a component of Machine Learning.

Within Deep Learning you have deep neural networks that typically require massive datasets to train. Once you're done training the model you can then optimize it for inference using whatever hardware you have. I don't have a good way to describe inference, but it's pretty much running your data through the model to see the results.
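To make the training-vs-inference split concrete, here is a minimal PyTorch sketch. The network, sizes and data are all made up for illustration; it just shows weights being adjusted during training and then new data being pushed through the frozen model at inference time.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a vision model (sizes are made up for illustration).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: show the model labeled examples and adjust its weights.
for _ in range(100):
    inputs = torch.randn(16, 64)           # fake sensor features
    labels = torch.randint(0, 2, (16,))    # fake labels (hazard / not hazard)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# Inference: freeze the weights and just run new data through the trained model.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64)).argmax(dim=1)
    print(prediction.item())
```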

There is contention on what exactly we should be considering AI.

But, I believe it's pretty well agreed upon that ML is definitely a branch of AI. Now we can split up AI between Applied AI, and Generalized AI.

Where generalized AI hasn't really been done or discovered yet.

I don't believe one needs generalized AI to achieve Self-driving cars. That issue can be solved through regulations and infrastructure improvements.

The reason China will get to self-driving cars first (on a large scale) is they have the ability to do this.

The people that knock the achievements of Waymo and Cruise are typically people who want generalized self-driving solutions that will handle all road types. Where they don't like the idea of geofencing and whitelisting various roads.

They do have a point in that requiring whitelisting of roads, and the necessary improvements to roads for self-driving cars, is going to take forever.

There has to be a better way so Tesla is seen as that better way.

Where it's just so absurdly crazy that it might just work.

Not that it will work, but it will simply lower the barrier to entry, and people will simply accept that deaths will happen with self-driving cars.

The Tesla solution to FSD is simply accepting the risk that comes with growing a L2 system to L4.
 
So basically you're saying that all FSD demos are a lie. It certainly seems possible. This is why Tesla needs to do more than show a demo or do test rides.

I don't know if I'd call them lies, but I'd call them effective marketing. But I do agree whoever wants to show the next step of actual progress needs more than press demos with cameras banned, and an edited video of a car driving around. I think the idea of live streaming is pretty powerful, and obviously deploying the next step of autonomy to a fleet of cars is irrefutable.

They've had hundreds of employees using HW3 for months so that should be about a million miles. In order for Elon to claim that they'll be hands free in six months there must be some hard data that they can present.

Yeah, the HW3 upgrade is interesting to me, but I do wonder how much validation it would take to apply their existing model to full resolution data.

What is your feeling about Cruise? Their demos are amazing!

Is Super Cruise considered a publicly available version of Cruise? Or are they considered entirely different? If they're considered part of the same thing, then I'm fairly unimpressed with Super Cruise. Geo-fenced lane keep assist is neat, but it doesn't exactly knock my socks off. But, based on autonomy filings in CA, I do know that people love crashing into Cruise's vehicles while they're stopped at intersections.
 
@S4WRXTTCS Go check out the Lex Fridman chat with Greg Brockman. They both agree with me: AI does not do intuition. It doesn't exist. Once it does, we're potentially in serious danger.

I don't have any disagreement that AI does not do intuition.

The only disagreement I have with you is about what constitutes AI.

The AI that does exist today is applied AI.

Generalized AI would be one that had intuition. That is what doesn't exist.
 
Is Super Cruise considered a publicly available version of Cruise? Or are they considered entirely different? If they're considered part of the same thing, then I'm fairly unimpressed with Super Cruise. Geo-fenced lane keep assist is neat, but it doesn't exactly knock my socks off. But, based on autonomy filings in CA, I do know that people love crashing into Cruise's vehicles while they're stopped at intersections.
I don't think so. Super Cruise is EyeQ3 plus super-accurate maps. Cruise is a startup working on full autonomy that GM bought.
If Tesla is going to make a (possibly) fake demo they should try to top this one: