Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Seeing the world in autopilot, part deux

And he said they already had it running in cars as well at the last quarterly.

Those are engineering samples of the silicon. The path from that to something shippable can be a year or more. If you look at Intel CPU steppings, they go through at least two silicon revisions, if not more, before shipping to customers, and that's a fairly long pipeline: it takes months to get back the last thing you fabbed so you can find silicon bugs, fix them, and try again.
 
I believe jimmy_d is speculating that path planning in Autopilot v9 uses a neural network. Mobileye is using reinforcement learning for path planning, I think.



It seems like the best neural network architectures are all open source. Meaning the competitive advantage is not the architecture, but the training, and therefore the data.



Why not?



That's Tesla's plan!

All Tesla Cars Being Produced Now Have Full Self-Driving Hardware

I realize jimmy_d speculated that... I actually speculated the same thing, if you see my response to him.

"best neural network architectures are all open source"


lol!!! There are many great open source networks, yes...

but the competitive advantage IS

  • the network architecture, yes
  • the training techniques
  • and the data. Yes... but it is the prepped data and the techniques used to prep that data (organize, label, structure, filter, etc.), NOT simply having vast amounts of raw data coming from the fleet... this is such a big misconception.

"That's Tesla's plan"
I realize Tesla has said that is their plan....
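The point about prepped data versus raw fleet data can be sketched in a few lines. This is a hypothetical illustration only; the function name, the frame fields, and the filtering rules are all invented here and have nothing to do with Tesla's actual pipeline:

```python
# Hypothetical sketch: why curated data beats raw volume.
# All names and fields are invented for illustration.

def prep_dataset(raw_frames):
    """Filter, dedupe, and label-check raw fleet frames before training."""
    seen = set()
    prepped = []
    for frame in raw_frames:
        # Drop unlabeled frames: raw footage without labels is unusable
        if frame.get("label") is None:
            continue
        # Drop near-duplicates (e.g. endless frames of the same empty highway)
        key = (frame["scene"], frame["label"])
        if key in seen:
            continue
        seen.add(key)
        prepped.append(frame)
    return prepped

raw = [
    {"scene": "highway", "label": "lane_keep"},
    {"scene": "highway", "label": "lane_keep"},   # duplicate, adds nothing
    {"scene": "merge", "label": None},            # unlabeled, unusable
    {"scene": "construction", "label": "lane_change"},
]
print(len(raw), "raw ->", len(prep_dataset(raw)), "prepped")
```

The sketch shows the asymmetry the post is describing: four raw frames shrink to two useful training examples, so the fleet's raw mileage overstates the usable dataset.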



I agree with most if not all of that. There is a lot more Tesla can and should do for safety. I do not believe one system is better than the other. They have different objectives.

I believe Tesla does deserve a nod for finally delivering these convenience features, even if they are 2 years late. More importantly, Tesla should add in-car monitoring and other features to turn these convenience features into actual safety features.

I agree Tesla deserves a nod... and as for these convenience features, I understand why it took so long for them to be launched.

Nobody yet has such a system in production...

Moreover, Tesla is developing such a system, just like Mobileye.

This is true.

Nope. Reportedly they yanked it out. Though I guess people would still try by allowing the car to do it after they signal their allowance anyway (the Autopilot game of chicken?)

Wait what was yanked out?? auto lane changing was yanked out of the update?
 
Wait what was yanked out?? auto lane changing was yanked out of the update?
It looks like the completely self-initiated lane changing is gone gone in the latest v9 builds. It was never truly there; it seemed like everyone was hacking some sort of feature flag that made it show up.

Now the car will propose that it wants to make a lane change and I think that pops up a little banner saying to tug the Autopilot stalk or something to actually make it happen. (The equivalent of the Yes setting for require confirmation)
 
Shadow mode as described by Elon does not exist and never existed, and neither does fleet learning. Just another lie by Elon.

That's hilarious because you're a software engineer as I understand it.

As a software engineer you have to be used to sales/marketing making inaccurate technical statements, where they simplify things or exaggerate what's really being done.

I'm a huge advocate of fleet learning because of the importance of that data. They're certainly doing data collection, and there is a lot of proof of that.

We also know that there are things running in the car all the time. Obviously it doesn't take a genius to say "hey, let's fire a trigger when the driver's behavior doesn't match the data". Like if the car thinks it should stop, but the driver drives right through what it thought was there.
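That kind of trigger doesn't take much code to illustrate. This is a hypothetical sketch, not anything from Tesla's software; the function name, the scalar action encoding, and the threshold are all invented:

```python
# Hypothetical "shadow mode" trigger: compare what the planner *would*
# have done against what the driver actually did, and flag the moment
# for upload when they disagree sharply. Names/thresholds are invented.

def shadow_trigger(planner_action, driver_action, threshold=0.5):
    """Return True when planner and driver disagree enough to log."""
    # Actions encoded as simple scalars, e.g. braking effort in [0, 1]
    return abs(planner_action - driver_action) > threshold

# Planner wanted a hard stop (0.9) but the driver kept going (0.0):
# the mismatch would be flagged and the sensor snapshot queued for upload.
print(shadow_trigger(0.9, 0.0))   # large disagreement -> flag
print(shadow_trigger(0.2, 0.1))   # minor difference -> ignore
```

The real decision would obviously be over rich sensor and planner state rather than one scalar, but the shape of the idea is this: a cheap in-car comparison that decides which rare moments are worth uploading.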

Elon is a dreamer, and has a tendency to sprinkle "magic dust" onto things that are conceptually possible but hard to implement, or that have no clear path to them yet. Every Tesla owner at some point will experience that "But, didn't Elon say??" moment.

Like I'm having that moment now with the unassisted auto-lane change thing not being included in actual release of V9.0. I'm not sure why I'm even replying to this as the purpose of coming to my computer was to double check the things I read/watched to see what had me convinced that unassisted auto-lane changes would be there. Maybe I'm replying because your depressed/cynical attitude is better aligned for the moment than excitement.

Yesterday everything was lining up for that defining moment when cars on the road could do something they couldn't do yesterday. That moment in time that would dramatically change how it was to be behind the wheel.

But, it wasn't really that moment. Instead Elon led us to the unveiling only to pull out something entirely different than what we heard him say.

My first reaction was "that makes more sense as that feature relies on new things that haven't been there".

Sure, there are a ton of improvements, so it's important not to get too bummed out. The bummed part is the feeling that we're never going to break past this moment. It's always going to be 1-2 years away for everyone.

I don't think anyone really knows why it didn't make it.

Was it regulatory concerns?
Was it simply not ready for prime time?
Have they decided not to do it even though it's on the EAP feature list?

Hopefully it's the middle one, and it just needs a little bit more time. It's hard not to be cynical about unassisted lane changes simply because asking drivers to be responsible for something the car decided to do itself is a bit weird, since it requires knowing not just what's in front, but what's on the side and rear.
 
it does not really do debris on the road (or potholes) and nobody else does either I guess.

the car is pretty happy to drop the speed to make the exit in the face of traffic, somewhat more readily than I would, so a plain gridlock situation should not be too bad. The huge speed difference when the car needs to go from a slow lane to a fast lane - I have no idea how that would work; it does not accelerate too well from a standstill in Autopilot mode and it probably does not see too far back either (to be verified). I am not super eager to try this last mode in my own car, due to the whole repair parts situation too! ;)
I might try some of the milder scenarios in the coming days (like I said, I don't really want to get into any accidents for waaay too many reasons).

But this is sort of shifting the goal posts already. Just a week ago, what we have now was a bit of a pipe dream all by itself, and if you had told me last week that we'd have pretty much fully autonomous drive-by-nav on pretty much any highway in the US and beyond (granted, in mostly simple conditions with no debris/tires and such) - I'd have laughed in your face.
Thank you @jnuyens, now I can make an actual response to this.

it does not really do debris on the road (or potholes) and nobody else does either I guess.

Other cars can do this, for example Waymo. The thing about Tesla is that AP is the limit of their self-driving system, the upper echelon. There's no hidden futuristic EAP/FSD somewhere. Unlike other companies, whose actual R&D software (Cruise's, Waymo's) is far more advanced and in no way reflects their ADAS systems.

the car is pretty happy to drop the speed to make the exit in the face of traffic, somewhat more readily than I would, so a plain gridlock situation should not be too bad.

That's actually a bad thing. As outlined by @BigD0g, if it can't execute in a timely manner, you can cause an accident.

The huge speed difference when the car needs to go from a slow lane to a fast lane - I have no idea how that would work; it does not accelerate too well from a standstill in Autopilot mode and it probably does not see too far back either (to be verified). I am not super eager to try this last mode in my own car, due to the whole repair parts situation too! ;)
I might try some of the milder scenarios in the coming days (like I said, I don't really want to get into any accidents for waaay too many reasons).

It's funny that you say this, but we already see from early reviews that it's filled with problems. This is with just a couple of people having access to it. What happens when you have 100,000+?

For example, attempting to change lanes into the emergency shoulder lane and closed lanes, requiring immediate takeover by the driver.

@BigD0g also talks about how it "gets confused at times"

like going all the way to the left lane on a three-lane highway to pass someone, only to realize the exit is coming up in less than 1 mi and it's gotta get all the way to the right, and instead of accelerating and trying to break through to the right, it starts slowing and letting cars pass, waiting for an opening to get to the right, and it slows more and more drastically as the exit gets closer. So imagine going 77 down a highway in the left lane, then slowing to 50 to get to the right; folks are not very happy with you...

It's struggling to handle the basic stuff, and yet you think it will easily handle the advanced stuff I mentioned? lol okay.

People talk about demo product vs production system, but Waymo's, Cruise's, etc. cars are not a DEMO product. They are an R&D product. The reason Tesla's stock is $300+ is their R&D and what they are working on for the future. Same reason Waymo is valued at $175 billion. The world values R&D, yet in here R&D is supposed to be laughed at.

A demo product, for example, is the video Tesla released 2 years ago. But their actual R&D progress is their CA disengagement report.

If your R&D product goes 30,000 miles without disengagement (which Waymo's has as of Dec 2017) and a production system can barely go 50 miles without disengagement while handling basic stuff, who do you think is ahead?

The one who released a beta product at the end of 2018, or the one who releases a finished product from their R&D in 2019/2020?

But this is sort of shifting the goal posts already. just a week ago what we have now was a bit of a pipe dream all by itself and if you told me last week that we'll have pretty much fully autonomous drive by nav on pretty much any highway in US and beyond (granted, in mostly simple conditions of no debris/tires and such) - I'd laugh in your face.

Your boy Elon certainly didn't think it was laughable. In fact he said Level 5 FSD would be here by the end of 2017.
Besides, did you really call changing lanes into the emergency shoulder lane and closed lanes fully autonomous?

December 2015: "We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years."

January 2016: "In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY"
 
Other cars can do this, for example Waymo
Yeah, I just need to drop by my local Waymo dealership tomorrow to try it out. Oh wait!

There's no hidden futuristic EAP/FSD somewhere.
Well, v9 is the biggest evidence we have that there IS a version in development that's more advanced than what's on customer cars.

And now that lends credence to the idea that all the other bits we see in the code but disabled are actually working in the non-prod versions. It might not be FSD, but it's even more than what's in v9.

That's actually a bad thing. As outlined by @BigD0g, if it can't execute in a timely manner, you can cause an accident.
Yup, no argument about that, other than that people do it too in a very similar manner: they just slow down and slow down and slow down and even stop (even illegally) because they want to make their exit (I even saw people going backwards on a highway for an exit they missed).

the one who releases a finished product from their r&d in 2019/2020
Would you mind if I borrow your time machine for a bit? ;)

Your boy Elon certainly didn't think it was laughable. In fact he said Level 5 FSD would be here by the end of 2017.
You don't need to convince me that Elon lied back in 2016, I know it all by myself. The shocking part is there really WAS some quite advanced development (compared to v8.1), and it appears there's more in store (who knows when, obviously).
 
People talk about demo product vs production system, but Waymo's, Cruise's, etc. cars are not a DEMO product. They are an R&D product.

To avoid going down an unnecessary semantic rabbit hole, we should just talk about production systems and non-production systems. There is an important difference between the two, and the best evidence of any company's autonomous driving technology is what they have in production today. There is other evidence, to be sure, but company outsiders have no means of testing technology that isn't in production. That's the key point.

Your boy Elon certainly didn't think it was laughable. In fact he said Level 5 FSD would be here by the end of 2017.

Sergey Brin actually said the same thing in 2012. Elon is probably the most extreme or at least the most well-known example, but he's definitely not the only one who makes predictions that turn out to be wrong.

You don't need to convince me that Elon lied back in 2016, I know it all by myself.

There is a difference between lying and being wrong. People are wrong a lot more often than they lie, so the baseline probability is that someone is wrong, rather than lying. In Elon's case, it wouldn't serve his self-interest to lie. He is holding the stock long-term, so any temporary increase doesn't help him.

Moreover, for Elon to have lied, he would have to have known in December 2015/January 2016 that Tesla wouldn't develop full autonomy in "approximately two years" or "~2 years" from that point. Today, do we know what will happen 2 years from now? No, we can only make our best prediction. And people, including experts, are bad at predicting the future. Ask a sample of TMC users or even AI/robotics experts when full autonomy will arrive, and they will disagree vehemently, meaning that logically a good number of them will turn out to be wrong.

It is important not to accuse people of malicious intent when there is no evidence of it. People are wrong all the time, and that doesn't mean they lied. Plus, it's not clear to me that it's even possible to lie about the future when nobody actually knows how the future will turn out.

If Elon says today, "We will have full self-driving in 2 years"...

...and then someone disagrees and says, "Tesla won't have full self-driving in 2 years"...

...I don't think it's actually possible for either of them to lie. Neither of them actually know what's going to happen in 2 years, so it's impossible for them to intentionally say something they know to be false.

People also genuinely think they know what they don't know. There is false certainty.
 
Sergey Brin actually said the same thing in 2012. Elon is probably the most extreme or at least the most well-known example, but he's definitely not the only one who makes predictions that turn out to be wrong.


No he didn't... there's a huge difference (a heaven-and-earth gap) between the geofenced commercial Level 4 in a city that Google was aiming for, versus the Level 5 fairy tale that Elon has been fabricating.

Please don't conflate the two.
 
With regard to fleet learning and shadow mode: there is enough ambiguity in everyday language that there is room to interpret almost any statement as either true or false, if that's what you want to do. This is why it's so difficult to prove perjury, for example.

Or why a lot of statements are either false or meaningless when you interpret them too literally. For example, the expression "boys will be boys". Taken literally, that's a meaningless tautology. Yes, boys will be boys, and cats will be cats, dogs will be dogs, trees will be trees, chairs will be chairs, and tables will be tables. There is some subtlety required for language to make any sense at all.

I find that when people hold a grudge against someone, they are more likely to interpret their statements as false, and when people like someone, they are more likely to interpret their statements as true. Just as it's exhausting and aggravating to listen to Kellyanne Conway or Sarah Huckabee Sanders twist Donald Trump's statements far beyond reason with the goal of spinning virtually any statement as correct, I find it equally exhausting when people twist and nitpick statements beyond reason in order to find them incorrect. It is a subtle thing to determine where exactly the boundaries of reason are, but you can't use language without doing it.

Fleet learning has a well-understood meaning, and it is not that neural network training is happening under the hood of your car. People might hear a simple description of fleet learning and misunderstand it that way, but it is not too hard to look into the topic further and find out how it actually works. And whether the training occurs in the car or at Tesla HQ, the essential idea is still the same.
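To illustrate that "training at Tesla HQ" framing, here is a hypothetical sketch of fleet learning as a collect/retrain/push loop. Every class, field, and method name is invented for illustration; a real system would involve actual model training rather than a version bump:

```python
# Hypothetical sketch of fleet learning as centralized training:
# cars upload flagged clips, HQ aggregates them into a training set,
# and an updated model is pushed back to the fleet.

class Fleet:
    def __init__(self, cars):
        self.cars = cars          # each car is a dict holding flagged clips
        self.model_version = 1

    def collect(self):
        """HQ pulls flagged clips from every car."""
        dataset = []
        for car in self.cars:
            dataset.extend(car.pop("clips", []))
        return dataset

    def retrain_and_push(self):
        """Retrain at HQ (stubbed as a version bump) and push to all cars."""
        data = self.collect()
        if data:                  # only ship a new model if there is new data
            self.model_version += 1
        for car in self.cars:
            car["model"] = self.model_version
        return self.model_version

fleet = Fleet([{"clips": ["near_miss.mp4"]}, {"clips": []}])
print(fleet.retrain_and_push())   # -> 2: one car contributed data
```

The design point is that no training happens in the cars themselves; they only contribute flagged data and receive the resulting model, which is the "essential idea" being described regardless of where the computation physically runs.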

Same with shadow mode. If BigD0g is correct that Tesla is "most definitely tracking failure scenarios where you do one thing and the car wanted to do something else", then to me that is shadow mode as Elon described it.
 
there's a huge difference (a heaven-and-earth gap) between the geofenced commercial Level 4 in a city that Google was aiming for, versus the Level 5

Sergey Brin was still wrong — that's my point. Company leaders make wrong predictions about their companies' technology all the time. Waymo didn't have robotaxis available to the general public in 2017. That doesn't mean Sergey lied, it just meant he was over-optimistic about how fast progress would happen.
 
Sergey Brin was still wrong — that's my point. Company leaders make wrong predictions about their companies' technology all the time. Waymo didn't have robotaxis available to the general public in 2017. That doesn't mean Sergey lied, it just meant he was over-optimistic about how fast progress would happen.

That's not how it works. Sergey's statement was based on actual progress and current status. It's not about being right or wrong.
It's about whether your statement was based on material information or you made it up. This is why the SEC sued and exposed Elon as a fraud.

Elon stood up at a conference and said EAP would be done in 3 months, and here we are 2 years later and still nothing. He made a statement while Tesla had zero software written for EAP, without consulting ANYONE. This goes in line with the SEC investigation.

In fact, all his Level 5 statements have been made without consulting anyone and with zero software written.

In fact his head of Autopilot, Sterling, was shocked when the blog post went out referring to AP2 as fully self-driving. This was another Elon call, made without even consulting his head of Autopilot.

There's a difference between being wrong and fabricating statements.