Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Full Self-Driving Capability

No one seems to mention the video by Tesla:

It looks to me (and many others) that FSD is already there, just waiting for approval.

My concern is: was the car in the video actually autonomous, or was someone sitting in the back seat controlling it by wire or radio?
 
That video shows how limited their system is. It's just reacting to what the cameras see and using GPS for the route.

Where is the 3D model of the surrounding area? It doesn't exist; they don't have one. All they do is recognise objects in the car's path; they aren't mapping their positions in space.

Google has LiDAR for that purpose. Tesla will have to either do that mapping without direct range information (or derive it from stereoscopic vision), or just rely on following the road with specific detection for things like pedestrians.
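For what "range from stereoscopic vision" means in practice, here is a minimal sketch of the standard pinhole stereo relation: with two rectified cameras a known baseline apart, the pixel disparity of a matched point gives its depth. This is illustrative only; Tesla's actual pipeline is not public, and the numbers below are made up.

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its pixel disparity between two rectified cameras.

    Standard relation: depth = focal_length * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity (or a bad match)")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 12 cm baseline, 8 px disparity.
print(stereo_depth_m(1000.0, 0.12, 8.0))  # 15.0 metres
```

Note how depth resolution degrades with distance: at 8 px disparity the point is 15 m away, but a 1 px matching error there shifts the estimate by about 2 m, which is one reason LiDAR's direct ranging is attractive.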
 
I was told yesterday that FSD will not be ready for at least a year, and that it still has to be approved by the feds.

Not if the AV START Act goes into law. In fact, Tesla will only have to self-report FSD on the first 240,000 units. Congress would allow OEMs to manufacture up to 80,000 units per year for three years without traditional approval or review. The intent of the bill is to avoid slowing innovation.

Peters, Thune Introduce Bipartisan Legislation to Advance Development of Self-Driving Vehicles | U.S. Senator Gary Peters of Michigan

S.1885 - 115th Congress (2017-2018): AV START Act
 
That video shows how limited their system is. It's just reacting to what the cameras see and using GPS for the route.

Where is the 3D model of the surrounding area? It doesn't exist; they don't have one. All they do is recognise objects in the car's path; they aren't mapping their positions in space.

Google has LiDAR for that purpose. Tesla will have to either do that mapping without direct range information (or derive it from stereoscopic vision), or just rely on following the road with specific detection for things like pedestrians.
If I were to guess, that video is based on an almost end-to-end neural network with very limited vision capabilities. That video is in no way comparable to a releasable FSD. FSD is possible with vision only, but it's much harder than using LiDAR.
 
I am a software geek, have built and sold companies, and my latest company is in AI.

I don't have first-hand experience with Google's self-driving car. But based on publicly available information and my understanding of software, even their FSD is at least 3 years away.

And Tesla is definitely behind Google. When Tesla is still struggling to get Blind Spot Monitoring working, has phantom braking, and basic things like Homelink break with software builds, I would be surprised if they release FSD in the next 5 years.

Google has a lot of talent, expertise, and tools for rapid software development. The best engineers aspire to work at Google. For an example of their software expertise, look at Google Chrome. They built so much testing automation that within a few years they were ahead of all competitors (Firefox, IE, Safari), some of which had been around for decades. I know the Google Chrome team, and I hadn't seen so many geniuses in one room before that.

They are far ahead of anyone else in AI. An example is Google Pixel Buds, which translate from other languages in real time. Even a few years ago this would have been considered sci-fi.

In summary, if I had to bet my money: FSD is at least 3 years away, and Google will be the first company to bring it to market.
 
A couple thoughts on the current status of FSD. Until recently, I had been planning on paying for FSD when my Model 3 arrives in Q3. As that date is approaching, and after observing some recent events, I'm now thinking that I won't be pulling the trigger on the FSD option just yet.

There has been discussion about 2 independent software stacks: one for EAP, and another for FSD. I've never been convinced of that. I think that with all the effort in EAP, it would be a shame to throw that out and start fresh with FSD. It would make much more sense to take the best of EAP and add enhancements and features while progressing towards FSD. People generally think that individual mature features will start to creep into EAP, and at some point, new features will be introduced for those who paid for FSD. Not initially Level 4/5, mind you, but just something to differentiate the FSD software stack. Baby steps, leading to bigger things.

But I have to say... one feature that would significantly "enhance" EAP, and would be absolutely essential for FSD, would be the ability to avoid slamming into things like stationary fire trucks. If FSD were really making great strides, why wouldn't they break off this little gem and add it to EAP, to save Tesla's public image, and maybe a life? There has been discussion about how radar isn't good at detecting stationary objects. To me, it doesn't really matter: they have to figure this out. I think that when and if they do, they'll roll it out... to everyone.
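On why radar struggles with stationary objects: a commonly cited explanation is that a return closing at exactly the car's own speed has roughly zero ground speed, which is what fixed roadside clutter (signs, bridges, manhole covers) also looks like, so such returns are often filtered out to avoid constant false braking. A toy sketch of that filtering logic, with hypothetical thresholds and no claim that this is Tesla's actual implementation:

```python
def is_likely_clutter(ego_speed_mps: float, closing_speed_mps: float,
                      tolerance_mps: float = 0.5) -> bool:
    """A radar return closing at ~ego speed has ~zero ground speed, i.e. it is
    stationary, and a naive filter treats it as roadside clutter."""
    ground_speed = ego_speed_mps - closing_speed_mps
    return abs(ground_speed) < tolerance_mps

# A stopped fire truck ahead closes at exactly our own speed (25 m/s)...
print(is_likely_clutter(25.0, 25.0))  # True  -> discarded as "clutter"
# ...while a car driving 20 m/s ahead of us closes at only 5 m/s.
print(is_likely_clutter(25.0, 5.0))   # False -> tracked as a moving target
```

The uncomfortable consequence, as the post notes, is that the filter throws away exactly the returns that matter when the stationary object is in your lane.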

The other element I've been pondering is the FSD pricing motivation. Those like myself, who eventually want FSD, will find it attractive to buy in at a lower price when configuring, instead of paying more to add FSD post-delivery. Lately I'm starting to consider that the early FSD adopters will be the tip of the spear, so to speak. Someone has to go first. Will it be me? After achieving Level 4/5, would I be willing to risk a life (mine, a jay-walker, a deer) employing the technology? I'm starting to think that letting someone else take those first confidence-building steps might be worth something to me. Like perhaps more than $1k (the FSD early-adopter savings).

I'm disappointed that this post is coming off as a downer. I'm a big fan, and I can't wait to get my car and use EAP responsibly. But some things come into better focus as one gets closer. Curious y'all's thoughts.
 
We purchased EAP/FSD with our S 100D last year, and have included it with our Model X order (hopefully we'll get it in June).

EAP is likely based on the software used with AP1 and the AP1 sensors, with the new Tesla Vision software replacing what Tesla lost with Mobileye. By limiting EAP to only use 4 of the 8 cameras, EAP will always have limitations. It is working reasonably well today on limited access highways, though it still is challenged in handling some unusual conditions (like with the recent accidents). Based on getting TACC (cruise control) and the current EAP capabilities, it's a low risk to include EAP today - and it saves you at least $1000 in future activation after they get it completely working.

FSD is clearly riskier. However, even before Tesla gets regulatory approval to operate FSD without driver monitoring, Tesla intends to roll out software using all 8 cameras to cars with FSD activated. Even though it will be operating in "driver assist" mode, I expect AP to operate better, monitoring objects 360° around the car, and to work more safely and reliably in more conditions than EAP does with only 4 cameras.

While we don't expect to see FSD running without driver monitoring for a few years, we do expect to see benefits for having FSD in the next year. And because we plan to keep both of our FSD/Tesla cars for at least 100K miles - we're willing to accept the risk as a trade-off for locking in the FSD activation price. If we intended to keep our cars for a shorter time, we probably would NOT have included FSD in our configurations.

Plus, there is a possibility Tesla could increase the FSD activation price in the future for cars delivered without FSD activated. While the website states late activation would be $4,000 - there's no guarantee Tesla won't increase that price, should they determine a hardware upgrade (new processor?) is needed to get FSD validated and approved for unattended use.

As for the possibility Tesla might have different software for FSD and the current EAP, as someone who's been responsible for major software projects, I'm comfortable with that. It's not unusual for software systems to undergo major changes after a "version 1" system has been released, learning from what did and didn't work well in the initial implementation and using that to make a significantly better "version 2".

AP1 was intended to work in relatively simple situations - primarily doing lane monitoring with a few additional features. FSD has to operate in ALL conditions, which is much more complicated - so it wouldn't be surprising to see Tesla use a different software architecture for FSD than AP/EAP.
 
It really depends on whether they need to upgrade the hardware for FSD. They are so far from it at the moment (not using all the cameras, or even comparing sequential images for rudimentary stereo vision) that it's difficult to say, but my guess is they will have to.

Given that, they aren't going to retrofit expensive upgrades just to make EAP better.
 
It really depends on whether they need to upgrade the hardware for FSD. They are so far from it at the moment (not using all the cameras, or even comparing sequential images for rudimentary stereo vision) that it's difficult to say, but my guess is they will have to.

Given that, they aren't going to retrofit expensive upgrades just to make EAP better.
But they are selling FSD capability based on the current hardware, and the man in charge has repeatedly stated that it is enough.
If they don't retrofit, they're getting sued. Hell, even if they do retrofit, they're getting sued.
 
Assuming Tesla has done a reasonable job verifying the sensors are capable, the most likely area needing upgrades is the AP processor.

Humans rely on the equivalent of two cameras spaced slightly apart, which can rotate about 180 degrees, with limited visibility on the sides and rear.

Assuming the AP cameras capture similar information, it should be a matter of having enough software and processing power to extract from the cameras and radar the object recognition humans can do. And with the additional sensors, a 360° view, and enough processing power, AP should be able to do a better job than humans do visually.

A processor upgrade seems very likely - which should be a plug-replaceable component.

If Tesla has guessed wrong on the sensors and determines they need more radar, LiDAR, or more/better cameras, Tesla will face lawsuits, not only over the price of the FSD option but over the entire car, because Tesla (Musk) has claimed all cars built since late 2016 are capable of Full Self-Driving...
 
Humans rely on the equivalent of two cameras spaced slightly apart, which can rotate about 180 degrees, with limited visibility on the sides and rear.

Assuming the AP cameras capture similar information, it should be a matter of having enough software and processing power to extract from the cameras and radar the object recognition humans can do. And with the additional sensors, a 360° view, and enough processing power, AP should be able to do a better job than humans do visually.

I think the problem will be making the kind of decisions humans are capable of in the real world. Even avoiding simple potholes in the road will be a major challenge for FSD! I don't see FSD coming any time soon and EAP is a million miles away from FSD capability.
 
I guess the real state is revealed by the fact that it does slam into static objects, and by the fact that even trivial things that would greatly improve AP usability, such as basic traffic-sign detection, are missing. It looks as if the whole thing is at the very starting point.

In other words, the hardest things are the exceptions, and this one doesn't even handle the most basic rules yet.
 
I think the problem will be making the kind of decisions humans are capable of in the real world. Even avoiding simple potholes in the road will be a major challenge for FSD! I don't see FSD coming any time soon and EAP is a million miles away from FSD capability.
I don't think decisions are the hard part once you have all the data about your environment. Perhaps the most complex decisions are where to drive in complex intersections that may have bad signs and road markings. Most driving decisions require no AI at all.

The rest is a matter of priorities for where to drive given the data you have, e.g. avoiding death > avoiding damage to the car/rims/property (e.g. a pothole) > keeping the car on a legal path > keeping the car on a correct path > keeping the car on a comfortable path.

What is a major challenge is vision: classifying the drivable areas into the metadata needed to make good decisions. E.g., how do you tell a windshield bug from a pothole in the road?
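The priority ordering described above can be sketched as a lexicographic comparison: score each candidate path on (safety, damage, legality, correctness, comfort) and let tuple comparison pick the winner, no learned model required. The candidate paths and penalty values here are hypothetical, chosen purely to illustrate the ordering.

```python
# Each candidate path gets a penalty tuple; lower is better in every slot:
# (harms_person, hits_pothole, illegal, wrong_route, discomfort)
CANDIDATES = {
    "swerve_left":  (0, 0, 1, 0, 0.8),  # avoids the pothole but crosses a solid line
    "stay_in_lane": (0, 1, 0, 0, 0.1),  # smooth and legal, but runs over the pothole
    "brake_hard":   (0, 0, 0, 0, 0.9),  # safe and legal, just jarring
}

def best_path(candidates: dict) -> str:
    # min() compares tuples element by element, so avoiding harm to people
    # always dominates avoiding potholes, which dominates legality, and so on,
    # exactly the priority chain described in the post.
    return min(candidates, key=candidates.get)

print(best_path(CANDIDATES))  # brake_hard
```

The point of the sketch is the post's argument in miniature: once the perception layer has filled in the penalty tuple correctly, choosing among paths is trivial; producing that tuple from camera pixels is the hard part.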
 
Imagine a roundabout with some roadworks. FSD has GPS data for the roundabout, but has to identify the roadworks and navigate them with vision and radar alone.

Many small objects (cones), flashing lights, diversions, narrow lanes, people wandering around... Maybe there's a guy holding a stop/go sign, and the car has to drive on the wrong side of the road, which was carrying oncoming traffic 10 seconds ago...

FSD is really, really hard. Google is running it as a limited-area taxi service, so they can avoid situations like the ones I described for now. Tesla has to make it work everywhere on day one.
 
I am a software geek, have built and sold companies, and my latest company is in AI.

I don't have first-hand experience with Google's self-driving car. But based on publicly available information and my understanding of software, even their FSD is at least 3 years away.

And Tesla is definitely behind Google. When Tesla is still struggling to get Blind Spot Monitoring working, has phantom braking, and basic things like Homelink break with software builds, I would be surprised if they release FSD in the next 5 years.

Google has a lot of talent, expertise, and tools for rapid software development. The best engineers aspire to work at Google. For an example of their software expertise, look at Google Chrome. They built so much testing automation that within a few years they were ahead of all competitors (Firefox, IE, Safari), some of which had been around for decades. I know the Google Chrome team, and I hadn't seen so many geniuses in one room before that.

They are far ahead of anyone else in AI. An example is Google Pixel Buds, which translate from other languages in real time. Even a few years ago this would have been considered sci-fi.

In summary, if I had to bet my money: FSD is at least 3 years away, and Google will be the first company to bring it to market.

I'm sure I'm the dumbest guy in the Chrome dev team room, but I think you missed a key factor driving adoption:

The giant nagging banner on Google.com shown to any browser that is not Chrome.

I'm sure the folks at Waymo are talented, but Waymo is a wholly-owned subsidiary. They are not the Chrome team, so I'm not sure how the examples relate. However, I'm sure that if Tesla jacked up each car's price by $40K to add a LiDAR suite AND made it look like Ecto-1, Tesla would be further along on FSD.

Even though Tesla is late to AP1 parity, MBLY is a $15 billion market-cap company, valued at roughly 1/4 of Tesla, doing one trick only.
 

I am impressed with Chrome's pace of development and how fast it became the best browser. Firefox is finally starting to catch up!

Agreed that Waymo is a subsidiary, but they must have benefitted from the experience and infrastructure already available at Google.

Anyways, whoever brings self driving to market first, we all win!


 
I think the problem will be making the kind of decisions humans are capable of in the real world. Even avoiding simple potholes in the road will be a major challenge for FSD! I don't see FSD coming any time soon and EAP is a million miles away from FSD capability.

Once Tesla's software is able to detect objects in front of and around the car, the biggest challenge will remain: interpreting that data to operate the car more safely than a human driver.

Getting a car to stay inside a lane or even change lanes is relatively simple - something student drivers pick up quickly when they are learning how to drive.

The hard part is handling all of the unusual cases that come up during driving - complicated by the lack of standardization in the design of roads and signs, along with interactions with humans such as police, pedestrians, ... [Still concerned about how FSD will detect non-visual indicators, such as train horns or emergency-vehicle sirens, for objects that are approaching but not yet visible...]

I wouldn't be surprised to see Tesla continue upgrading the AP processor to get more and more performance, to support the amount of software that will be required to react in real time and operate at least as safely as human drivers.
 