
Elon: "Feature complete for full self driving this year"

I think feature complete will mean that it does city navigation, but you have to supervise it.
I mean all the features are there for it to theoretically drive by itself, but it will ghost brake and hesitate a lot, and piss off a lot of people around you in the coming months.
You basically have to hover your hands over the wheel. Sign reading might not even be there.

Then in 2020 it will be slightly better.

This is judging from the progress of NoA; it has been almost a year with NoA.
 
Not going to happen in 2020. Too much turnover in SD engineering. I understand Elon's motivation for changing SD staff... "Work harder or else, dammit". But Tesla needs to get to six nines of reliability for L5, and the reality of software departmental turnover is this: it takes at least six months for a new team to get up to speed on development processes and the quirks of a proprietary, top-secret code base. We're not building CRUD apps for the 1,000th time. And that six months doesn't include overcoming the real issues required to achieve the L5 needed for robotaxi'ing.
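To put "six nines" in perspective, here's some back-of-the-envelope arithmetic (the per-mile framing is my own assumption; pick whatever unit you like):

```python
# Illustrative arithmetic only: what "six nines" of reliability implies,
# assuming each mile driven is an independent trial (my assumption).
reliability = 0.999999                 # six nines
failure_rate = 1 - reliability         # ~1e-06 failures per mile
miles_per_failure = 1 / failure_rate
print(f"{miles_per_failure:,.0f} miles per failure")   # 1,000,000

# Even at six nines, a fleet driving a billion miles a year
# would still see on the order of a thousand failures.
fleet_miles = 1_000_000_000
print(f"~{fleet_miles * failure_rate:,.0f} failures/year at fleet scale")
```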

Elon is amazing, and I can totally relate to him; I am notorious for being overly optimistic about software development resources and timelines. But, reality: software takes twice as long and costs three times as much as expected...

2022, maybe 2021 with geofencing in select markets...
 
I don't think lidar is the limiting factor here. Lidar is more of a technology crutch that others are using where money is no object. (A good one, mind you, just not feasible in consumer vehicles $$).

Regardless of the type of sensors, AI is data in, data out. The cars have "good enough" sensors for driving in 'good', consistent conditions. As an example, I'd point to other SD orgs' choice of Phoenix or Vegas.

Humans navigate very well with stereoscopic imagery. Cars can too. ±2 mm laser accuracy is not necessary. The real issue is the interpretation of sensor data in a consistent and sensible way. Lidar is easy; any video game collision detection algo can handle that. With image-based data, well, that is where the software comes into play. Elon is right, it can be done; it's just much harder, and ultimately not as accurate or fault-tolerant (bright light, weather, optical illusions, etc.).
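To make the camera-vs-lidar point concrete: lidar hands you depth directly, while stereo cameras have to recover it from disparity, and that gets harder with distance. A minimal sketch of the textbook pinhole-stereo relation (the focal length and baseline below are made-up numbers, not Tesla's actual optics):

```python
# Textbook stereo depth: depth = focal_length * baseline / disparity.
# The camera parameters are illustrative assumptions, not Tesla's.

def stereo_depth(disparity_px: float,
                 focal_length_px: float = 1200.0,   # assumed focal length (pixels)
                 baseline_m: float = 0.3) -> float:  # assumed camera separation (m)
    """Recover depth in meters from pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# The same 1-pixel disparity error costs more at range:
# depth error grows roughly with depth squared.
for d in (60.0, 12.0, 6.0):
    print(f"disparity {d:5.1f} px -> depth {stereo_depth(d):6.1f} m")
```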

By geofencing, Tesla can focus on a limited set of driving conditions, e.g., traffic laws, native driver behaviors, and weather conditions. Not to mention the legality of SD in the prevailing jurisdiction.
 
By geofencing, Tesla can focus on a limited set of driving conditions, e.g., traffic laws, native driver behaviors, and weather conditions. Not to mention the legality of SD in the prevailing jurisdiction.

Okay, I agree with your premise, but is there a better term than geofencing? Disallowing FSD during poor weather is not Level 5, but it's also not geofencing.
 
Where do I sign up? :) If they're so inept that they can't generate a NN for AP 2.5 anymore, then I don't have much hope for anything remotely resembling FSD.
Not if HW2.5 is not powerful enough. If it is, then they probably already have the same NN.

Yes, they can eliminate some layers, etc., but then that means training again to optimize for 2.5 separately. Not sure they want to do that now...
 
Too much turnover in SD engineering.
It sounded like some managers got axed/left and some others promoted. Very difficult to assess the impact for outsiders like us.

By geofencing, Tesla can focus on a limited set of driving conditions, e.g., traffic laws, native driver behaviors, and weather conditions. Not to mention the legality of SD in the prevailing jurisdiction.
I expect Tesla to be constrained in the beginning - but not in the way he uses geofencing (i.e., a small geographical area like a city). It is likely to be available in the US - most of the cities - but only when it's not snowing (and with many other restrictions, like traffic redirections, emergency vehicles, double parking, etc.). It will be a plain vanilla city NOA with very few features.

More importantly, City NOA will have the same hands-on, stay-vigilant, ready-to-take-over-at-any-time limitation as freeway NOA. Hopefully they get this sometime this year/early next year.

Then they will slowly add "edge cases" and get to better reliability. The actual L5/robotaxi timeline, IMO, is unpredictable - but it's not anytime soon.
 
It sounded like some managers got axed/left and some others promoted. Very difficult to assess the impact for outsiders like us. ...
The managers who told Elon it can't be done were either fired, demoted, or lost some responsibility, and Elon promoted those who weren't as negative. Not sure where I read this on the vast internet. So it seems the impact is that there is a glimmer of hope of getting the job done in the time frame Elon wants it done. There wouldn't be hope if Elon had left the naysayers in place.
 
More importantly, City NOA will have the same hands-on, stay-vigilant, ready-to-take-over-at-any-time limitation as freeway NOA. Hopefully they get this sometime this year/early next year.

I agree this is how it will probably be rolled out, but man, that is going to be hard. There are so many quick-changing scenarios in city driving that trying to pay attention AND monitor that the car is acting appropriately is going to be a challenge.

Right now at least, if you use it in the city you know it won't stop for signals or stop signs or make complicated turns, so you can disengage early for those. Once it is capable of all that, it may be challenging to keep an eye out that the unprotected left is actually safe when the car decides to go for it.
 
I agree this is how it will probably be rolled out, but man, that is going to be hard. There are so many quick-changing scenarios in city driving that trying to pay attention AND monitor that the car is acting appropriately is going to be a challenge.

Right now at least, if you use it in the city you know it won't stop for signals or stop signs or make complicated turns, so you can disengage early for those. Once it is capable of all that, it may be challenging to keep an eye out that the unprotected left is actually safe when the car decides to go for it.
Funny thing is, I always feel the freeway is much more dangerous because of the speeds involved and thus the very little time to react.

Let's take a stop sign or red traffic light, for example. Just like I now make sure the car starts slowing down when it is coming up behind a stopped car, I'll monitor to make sure it is slowing down as expected at a red traffic light or stop sign. When I started using AP, I used to be very nervous on curving roads. Now I've figured out how far ahead it sees the curves and when I should expect it to start turning, etc. It will take a bit of getting used to. Left turns will be similar to lane changes; you also have to monitor.

I have found that if I concentrate like I would for manual driving, I can figure out fairly easily when the car deviates from what I'd normally do (since I actually hold the wheel). So it becomes easier to correct, if needed (which is quite rare).
 
Not if HW2.5 is not powerful enough. If it is, then they probably already have the same NN.

Yes, they can eliminate some layers, etc., but then that means training again to optimize for 2.5 separately. Not sure they want to do that now...

They used about 80% of HW2. Maybe sign reading will take > 20%. Not sure.
So does that mean that people with pre-HW3 cars who never bought FSD will only get map-based speed limits forever?
 
Funny thing is, I always feel the freeway is much more dangerous because of the speeds involved and thus the very little time to react.

Let's take a stop sign or red traffic light, for example. Just like I now make sure the car starts slowing down when it is coming up behind a stopped car, I'll monitor to make sure it is slowing down as expected at a red traffic light or stop sign. When I started using AP, I used to be very nervous on curving roads. Now I've figured out how far ahead it sees the curves and when I should expect it to start turning, etc. It will take a bit of getting used to. Left turns will be similar to lane changes; you also have to monitor.

Tesla said in the presentation that its cars can learn from how roads go and how roadside features change to predict which way they curve, without having to actually see past the curve. Sounds like it is more sophisticated than we gave it credit for.

I have found that if I concentrate like I would for manual driving, I can figure out fairly easily when the car deviates from what I'd normally do (since I actually hold the wheel). So it becomes easier to correct, if needed (which is quite rare).

We have this thing called steering feel, or feedback, which is very useful for good drivers. Not sure if self-driving cars utilize that. It wouldn't be too hard to implement, I would think.
 
Funny thing is, I always feel the freeway is much more dangerous because of the speeds involved and thus the very little time to react.

Freeway driving is faster, but it is vastly simpler. There are no bicyclists, stoplights, turns, etc.

... stop sign, red traffic light, ...

Personally, I do not want to be among the first to test this out, or left turns, protected or unprotected. This will require accurately reading the lights in all conditions, including sunlight glare just after sunrise and just before sunset, as well as stop signs partially obscured by foliage, and it will require being aware of oncoming cars and cars approaching from the side, which the present software is unaware of.

I think these are more difficult problems than some others do. I think they will take longer to solve, and even after they've been thoroughly tested by Tesla, I'll wait for user reviews here on TMC before I try them out myself.

Given Elon's track record, I think we will get these features, but they'll come around the end of 2022, rather than 2020, and will still be Level 2 for a few years after. EAP is still Level 2 after all this time.
 
The managers who told Elon it can't be done were either fired, demoted, or lost some responsibility, and Elon promoted those who weren't as negative. Not sure where I read this on the vast internet. So it seems the impact is that there is a glimmer of hope of getting the job done in the time frame Elon wants it done. There wouldn't be hope if Elon had left the naysayers in place.
Yes-men always get stuff done better and faster. :rolleyes:
 
They used about 80% of HW2. Maybe sign reading will take > 20%. Not sure.
So does that mean that people with pre-HW3 cars who never bought FSD will only get map-based speed limits forever?
I don't know whether you saw Karpathy's presentation. The whole architecture of the HW3 NN (NN 9?) is different from that of HW2. So it's not just replacing the NN for one task. If the entire architecture fits within HW2, then backporting would be easy enough. Otherwise they'll have to cut down on the layers and optimize the NN for HW2 separately. Optimization is quite time-consuming, so I wouldn't expect them to be optimizing on HW2 until after FC. BTW, we don't even know whether we already have the HW3 NN in production on HW2. We have heard about how they can cross-compile the code (just procedural, or NN too?) to run on HW2 and HW3.
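For anyone who hasn't watched it: Karpathy described a shared-backbone network with many task-specific heads. A generic sketch of that style (illustrative PyTorch with made-up layer sizes, not Tesla's actual architecture) shows why you can't just swap the NN for one task - shrinking the backbone to fit a smaller chip changes the features every head sees:

```python
# Generic shared-backbone / multi-head sketch. Illustrative only;
# layer sizes and task heads are made up, not Tesla's network.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One backbone consumes camera frames and feeds every task head.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate lightweight heads per task (lanes, objects, signs, ...).
        self.lane_head = nn.Linear(64, 10)
        self.object_head = nn.Linear(64, 20)
        self.sign_head = nn.Linear(64, 5)

    def forward(self, frames):
        features = self.backbone(frames)  # shared computation, run once
        return {
            "lanes": self.lane_head(features),
            "objects": self.object_head(features),
            "signs": self.sign_head(features),
        }

net = MultiTaskNet()
out = net(torch.randn(1, 3, 128, 256))   # dummy camera frame
print({name: t.shape for name, t in out.items()})
# Cutting backbone layers to fit HW2 changes the shared features,
# so all heads would need retraining/re-optimizing together.
```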
 
Personally, I do not want to be among the first to test this out, or left turns, protected or unprotected. This will require accurately reading the lights in all conditions, including sunlight glare just after sunrise and just before sunset, as well as stop signs partially obscured by foliage, and it will require being aware of oncoming cars and cars approaching from the side, which the present software is unaware of.
Of course, you start with zero trust ;)
 
People who have an idea of how to go about getting things done faster <> yes-men. It's not like they can just say yes and not deliver.
Actually, my experience was that in estimating projects, the people who said they could get it done the fastest often wound up missing their estimates by a lot. However, that was a year or two later, so the company really had no choice but to continue down the same route.

At the same time, as the development proceeded towards a huge schedule miss, the reasons why the original estimates were so far off got treated more as a shared company problem than as a failure of the individual original estimators. By the time the massive slips occurred, everyone in management had a pretty clear understanding of what the issues were and understood why the original estimates couldn't be met. Plus, if senior management blamed the project management, that would be admitting that they'd made a major mistake in the original decision. In the end, it was generally all good.

In the cases where the companies went out of business or were sold because of these failures, the managers were often able to land senior positions in other companies based on their multi-year experience in leading a large project.
 
Personally, I do not want to be among the first to test this out, or left turns, protected or unprotected. This will require accurately reading the lights in all conditions, including sunlight glare just after sunrise and just before sunset, as well as stop signs partially obscured by foliage, and it will require being aware of oncoming cars and cars approaching from the side, which the present software is unaware of.

I disagree. All of these issues will be less dangerous for a car to negotiate than for a human driver. The car has the advantage of accessing precise GPS data and a cross-referenced dataset of known intersections, and it can then mix that with a billion-mile dataset of prior driver behavior - something a human driver cannot do...
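To illustrate what I mean (a toy sketch - the coordinates, fields, and data below are entirely hypothetical, just to show "GPS fix plus a known-intersection dataset"):

```python
# Toy illustration of "GPS fix + cross-referenced intersection dataset".
# All data and fields are hypothetical.
import math

INTERSECTIONS = [
    # (lat, lon, metadata the fleet could have learned in advance)
    (37.3948, -122.1503, {"signal": True, "left_turn_protected": True}),
    (37.3990, -122.1440, {"signal": False, "stop_signs": 4}),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def upcoming_intersection(lat, lon, radius_m=75.0):
    """Return metadata for the nearest known intersection within radius_m."""
    best = min(INTERSECTIONS, key=lambda i: haversine_m(lat, lon, i[0], i[1]))
    return best[2] if haversine_m(lat, lon, best[0], best[1]) <= radius_m else None

# The car can "know" about the corner before its cameras resolve it.
print(upcoming_intersection(37.3947, -122.1501))
```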

I propose that people overestimate their own capabilities, and that their fear of losing control of a system "untested by them" outweighs its real capabilities.

That said, I had a discussion with a colleague over lunch about this very thing. And I personally don't believe Elon will roll this out until they hit six 9's of reliability. The cost of being wrong is just too high. Not just for Tesla, but for autonomous cars in general... Cue Good Morning America, "Killer Teslas" - done, over...

Will FSD be perfect and never fail? No. Impossible. Will people die with vehicles in control? Yes. It just needs to be safer than the median driver on the road to be a success... But even then, people are still scared to fly, and we all know it's the safest form of travel!
 
I don't know whether you saw Karpathy's presentation. The whole architecture of the HW3 NN (NN 9?) is different from that of HW2. So it's not just replacing the NN for one task. If the entire architecture fits within HW2, then backporting would be easy enough. Otherwise they'll have to cut down on the layers and optimize the NN for HW2 separately. Optimization is quite time-consuming, so I wouldn't expect them to be optimizing on HW2 until after FC. BTW, we don't even know whether we already have the HW3 NN in production on HW2. We have heard about how they can cross-compile the code (just procedural, or NN too?) to run on HW2 and HW3.

I saw it. I also know that it's not a single huge NN. It's several NNs running in separate threads / tasks. So I had a small hope it could be compiled for HW2+ and consume < 20% with sign reading.
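For anyone picturing it, a toy sketch of "several NNs in separate threads / tasks" (the tasks and stand-in models are made up, purely illustrative):

```python
# Toy sketch of several per-task NNs processing each camera frame.
# The task functions are stand-ins for real networks.
from concurrent.futures import ThreadPoolExecutor

def lane_net(frame):   return {"lanes": "..."}
def object_net(frame): return {"objects": "..."}
def sign_net(frame):   return {"signs": "..."}   # the new task costs extra compute

TASKS = [lane_net, object_net, sign_net]

def process_frame(frame):
    # Each task runs independently; adding sign reading adds its own
    # compute budget on top of whatever the other tasks already use.
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        results = pool.map(lambda net: net(frame), TASKS)
    merged = {}
    for result in results:
        merged.update(result)
    return merged

print(process_frame("dummy camera frame"))
```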