
.48 feels like AP2 finally passed AP1

You sure about that being a problem? Unless you're hauling a trailer behind an MX, I know my MS, and I'm sure yours as well, accelerates fast enough that on-ramp merge concerns are a thing of the past. Maybe don't come to a full stop first? :cool:

Isn't that what the cameras on the front fenders are for? What am I missing?

1. Yes, I've been uncorked and the acceleration is fantastic =). I'm never worried about myself when merging; I can pivot my head and look. My car, not so much.

2. As for the front fender cameras: they look directly behind the car and out, so if you're coming around a curve, your fender cams are basically looking at the backside of that curve, not out toward the road you are approaching, until you hit the straightaway. However, on some roads where I live that straightaway is almost nonexistent, so if you don't crane your neck to look, you're not going to see that car flying up the sidelines to meet you.
 
I'm a little concerned about that too. Is there enough information to safely change lanes? On parts of German highways there are no speed restrictions; you can go as fast as your car will go. So if you're driving behind a truck at 80 km/h (~50 mph) and want to pass it, it's possible that another car is coming at 250 km/h (~155 mph) or even faster (yes, sadly there are some "folks" who think they can handle 200 mph on a public highway).
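Just to put rough numbers on that, here's a quick back-of-envelope sketch (the gap and speeds are made up for illustration):

```python
# Back-of-envelope: how fast does a 250 km/h car close on you while
# you're passing a truck at, say, 100 km/h? Speeds assumed constant.

def closing_time_s(gap_m: float, v_rear_kmh: float, v_ego_kmh: float) -> float:
    """Seconds until the faster car behind you covers the gap."""
    closing_speed_ms = (v_rear_kmh - v_ego_kmh) / 3.6  # km/h -> m/s
    return gap_m / closing_speed_ms

print(f"{closing_time_s(200, 250, 100):.1f} s")  # ~4.8 s to close a 200 m gap
```

A car that looks comfortably far back in the mirror, 200 m, is on your bumper less than five seconds after you pull out. Whatever is looking rearward had better see a long way.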

I think once you're actually on the highway it's fine; it's getting onto the highway that gives me trepidation.
 
2. As for the front fender cameras: they look directly behind the car and out, so if you're coming around a curve, your fender cams are basically looking at the backside of that curve, not out toward the road you are approaching, until you hit the straightaway. However, on some roads where I live that straightaway is almost nonexistent, so if you don't crane your neck to look, you're not going to see that car flying up the sidelines to meet you.

I'm sorry, but I'm struggling with your example. Perhaps I'm just dense that way.

You are driving forward on a curved road segment, and suggest that somehow neither rear-facing fender camera is capable of seeing a vehicle approaching from behind you. Is that correct? My take on what you're suggesting is that, on a road that curves to the right, the left-side camera is not able to look "around" the car to see any approaching traffic. What about the camera on the right front fender?

What is confusing is that you then state the rear-facing cameras cannot look "out towards the road you are approaching." These two statements do not support your position.

I'm really trying to understand your point, but I can't envision it.
 
Yes; because there's no rear-facing radar, I figure they need something more than monocular vision looking back. I haven't done the math to know for sure, but I imagine at highway speeds the gaps close pretty quickly if, say, you're coming from an on-ramp onto a highway that's running full bore at 77 mph.
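Actually, here's a quick stab at that math; a rough sketch, with the speeds and the time budget made up for illustration:

```python
# Rough merge math: highway traffic at 77 mph, me merging at 45 mph.
# How far back must the car see to have a given time budget to react?

MPH_TO_MS = 0.44704  # miles/hour -> meters/second

def required_look_back_m(v_traffic_mph: float, v_ego_mph: float,
                         budget_s: float) -> float:
    """Distance the faster traffic closes on us during budget_s seconds."""
    closing_ms = (v_traffic_mph - v_ego_mph) * MPH_TO_MS
    return closing_ms * budget_s

# Give the car 6 seconds to detect, decide, and complete the merge:
print(f"{required_look_back_m(77, 45, 6.0):.0f} m")  # ~86 m of rear visibility
```

~86 m on a straightaway doesn't sound too bad for a camera, but around one of those curved ramps, with the fender cams pointed at the backside of the curve, I'm not so confident.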

I think my biggest fear with the current arrangement is merge events and how they will be handled. It seems to me there's a huge blind spot in the system until the straightaway, as there is no radar and no camera pointed like the B-pillar cams but towards the rear. But again, I'm not an NN or vision guy; these are just my theories based on what I see, and more importantly, what I don't see.

The more time goes on, the more I feel swindled; these cars, including AP2.5, will never have FSD. It's just not going to happen. We "may" get Level 4 for highway driving, but I don't see this system taking us door to door, at least not without some serious hardware retrofits. I think the camera approach Tesla is taking can most certainly work, but for it to really work they need more coverage of the rear of the vehicle, not just perpendicular to the backside of the car. And, sadly, I do think they need at least one rear-facing radar, but that's probably for another thread.

Yes, I'm also aware of the depth-from-context stuff; it's actually the comma.ai job interview. You solve the problem, to better than .4 I believe, and you're in like Flynn. Though I've spent the last 30 minutes or so trying to find it, no luck.

But thank you for your insights; the deep expertise you bring to the community is very enlightening!

Man, it's so nice to have a respectable exchange with someone who has a different opinion. Thanks for restoring my faith that people of good conscience can disagree amicably.

I wish there were a way to know what it's going to take to make FSD a reality. The physics and computation are complicated enough, but then we get all of this company posturing and PR and FUD making it impossible for regular people to know what to think. Even experts are bitterly divided about the essential requirements, but mainstream articles are constantly presenting one view or the other as if it were the consensus in the field.

I think the general sense that, if you had software that matched human capabilities, you could get by with just the senses humans have isn't particularly controversial. But how hard that might be is plenty controversial. And then there are folks who argue that we shouldn't settle for 'human level' abilities because you might still get crashes, but not a lot of people seem to seriously argue that we should not deploy good systems while waiting for perfect ones.

Of course we don't have human-level software yet and we don't know when that's coming. Probably you can drive a car with a lot less than fully human capabilities, but we don't know how much less. Adding hardware might make the job easier, and there are many, many fans of adding hardware. I say 'might' because some problems definitely get easier in the lab when you add more sensors or more kinds of sensors, but increasing complexity is also a problem, and so far it's not clear whether it gets more complicated faster than it gets easier. About half the pros think the answer is a no-brainer, but some of them think it's a no-brainer that it's too complicated, and some think it's a no-brainer that more sensors always win. The best-funded efforts throw lots of hardware at the problem because they can, and they know they can always cut back later. Of course, when you see all these 'leaders' out there with sensor-festooned SUVs it gives the sense that a lot of hardware is needed, but really those guys are just trying out lots of things in parallel. It's still an open question which bits are needed and which bits are superfluous. And of course the 'leaders' want the problem to look hard and expensive to scare away competition.

But then maybe it really is hard and expensive. The jury's still out.

And there's this elephant in the room that nobody really talks about because it's kind of hard to explain: getting along smoothly with other road users (drivers, bikers, and pedestrians) has turned out to be a much harder problem than anyone was expecting. As a human you can do a pretty good job of predicting the actions of other road users, but can software do that? Several years ago, when Google first started fielding cars in Mountain View, they discovered that not hitting anything wasn't good enough. Famously, they got rear-ended a lot because their vehicle didn't blend with traffic well, and so they made this shift from trying to perceive the environment well enough to avoid obstacles to trying to interact smoothly and predictably. They've been focused mainly on that for years now and they still can't do it well enough to field a car in a real urban environment. Cruise tests in SF, which is probably about the toughest environment the U.S. has to offer, but they have been very tight-lipped about their real capabilities, and the general sense is that it's not because they're doing unexpectedly *well*.

Eventually we'll have the 'Kitty Hawk' moment where someone puts something out there that more or less does the job, and then the world will have one example of how to do it. And after that things will get better fast. But in 2006, when I was at the DARPA Urban Challenge, I thought for sure we would see Kitty Hawk before 2016, maybe even before 2011. But now it's 2018 and I'm still waiting. Until we have that moment, and probably for a while after it, we won't really know. I'm still optimistic, but I don't trust my ability to predict it anymore.

The delays in EAP are demoralizing and the lack of transparency from Tesla is not helping. It certainly makes FSD feel really far away. I totally get that and I feel the same way. But I have this other window on the problem that not many people at TMC get to enjoy, and that's a detailed understanding of what's happening in AI right now. It's not an exaggeration to call the pace of improvement in AI techniques shocking and unprecedented. Internally the field is being turned inside out as all these 50-year-old obstacles are finally being overcome. Newbies coming into the field are doing stuff that the old timers considered practically impossible just a couple of years ago. Every couple of months I see something happen that just takes my breath away. That 'breathtaking' pace of advancement makes me optimistic that we'll see solutions to FSD. And if the solutions are good enough then the hardware doesn't have to be overwhelming.

Going with cameras alone is definitely, *definitely* a bet that the software is going to get a lot better really soon. When I think about the magnitude of the bet that Musk is placing on this I've got to imagine that he's the bravest guy to ever run a large company - or the craziest. I could never make a bet like that.

It's a bet that might not turn out. But my feeling these days is that if the software doesn't get a lot better it's not going to matter how many sensors you have. Driving in the real world is a really hard problem.
 
While this wordy, meandering reply might seem like a nail-covered baseball bat, I sincerely intend it as an olive branch.
 
or baseball-covered nails

[image: baseballnails.jpg]
 
While this wordy, meandering reply might seem like a nail-covered baseball bat, I sincerely intend it as an olive branch.

Not at all; I think it was and is a fantastic reply, and very insightful. I come from the software-services world, not the vision world, so I try not to pretend I know what I'm talking about with the NN stuff. I think I understand enough to be dangerous, but past that I'm way out over my skis, as they say.

I look forward to these exchanges! It's never my way or your way that always wins or is always right; I find the answer lies somewhere in the middle. And if you can't have open, honest conversations without personal attacks, you'll never understand the field, and you'll certainly never advance your own understanding of the problem.
 
@jimmy_d I was having the exact same thoughts while reading James Barrat's "Our Final Invention", on the dangers of AI.

FSD would probably be a piece of cake if cars only had to communicate with other cars/maps/traffic lights etc., and there were no human drivers for them to consider. Making super-fast rational decisions is what AI is very good at. It's getting a lot better at other things as well (such as translation; but even there it's obvious that AI is better than humans at translating difficult things while making incredibly stupid mistakes with translations that are easy for us).

One thing is that we don't really know how an AI learns; we can't look into its brain. Another is that we don't know how the 100 billion neurons of our own brains learn either; a large (subconscious) part is probably somewhat pre-programmed by 200 million years of evolution, and we don't know how that works or how to replicate it. Any two-year-old can probably read the facial expression of an animal and tell whether it would like to be patted on the back or is more likely to bite your nose off. For an AI, that's probably much more difficult to work out (and in any case not something it can subconsciously feel).

I avoided an accident a few weeks ago because, approaching an intersection where another driver was supposed to yield me the right of way, I saw that he was vehemently arguing with someone over his mobile phone (not even hands-free...) and never looked my way, let alone made eye contact with me. So I wisely decided to let him cross :).
 
@jimmy_d I was having the exact same thoughts while reading James Barrat's "Our Final Invention", on the dangers of AI.

FSD would probably be a piece of cake if cars only had to communicate with other cars/maps/traffic lights etc., and there were no human drivers for them to consider. Making super-fast rational decisions is what AI is very good at. It's getting a lot better at other things as well (such as translation; but even there it's obvious that AI is better than humans at translating difficult things while making incredibly stupid mistakes with translations that are easy for us).

I don't see that V2V communication would enable FSD. It would certainly help driver assistance, but for it to enable FSD implies a dependency and that would be dangerous, in the same way that blindly trusting turn signals would be dangerous.
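To make the "assist, don't depend" point concrete, here's a toy sketch of the rule I have in mind (entirely my own framing, not any real system's logic): V2V input may only ever make the car more cautious, never less.

```python
# Toy rule: V2V data is advisory. It can tighten the plan that onboard
# perception produced (slow down, hold back), but it can never loosen it.

def plan_speed(perception_safe_mps: float, v2v_advised_mps: float | None) -> float:
    """Onboard perception sets the ceiling; V2V can only lower it."""
    if v2v_advised_mps is None:           # no V2V around? drive on perception alone
        return perception_safe_mps
    return min(perception_safe_mps, v2v_advised_mps)

print(plan_speed(30.0, 15.0))   # V2V warns of a hazard ahead -> 15.0
print(plan_speed(30.0, 40.0))   # V2V claims all clear -> still 30.0
```

Same as turn signals: a blinker can make you more careful, but you'd never pull out purely because someone else's blinker says it's safe.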
 
Interesting observation today.
The swerving in the tunnel (slope down to slope up) was gone today! What has happened?! Still on 50.3...

I've been thinking recently about how the behavior of AP seems to change, even on the same firmware version. I can't decide if it's just normal variation and my sample size is too small, or if maybe there are dynamic changes happening to the maps or some other 'real time downloadable' aspect of AP's driving that are changing.
 
I've been thinking recently about how the behavior of AP seems to change, even on the same firmware version. I can't decide if it's just normal variation and my sample size is too small, or if maybe there are dynamic changes happening to the maps or some other 'real time downloadable' aspect of AP's driving that are changing.
My observation has definitely been that it will improve without the firmware being upgraded.
 
It definitely improves at times and degrades at times. I think it's environmental factors, like lighting changes or more soot on the road, etc.; it doesn't make any sense otherwise. Tesla is not, and I repeat not, driving based on the maps today; it's all AP1 emulation so far, at least for our cars. Who knows what the dev cars are currently doing.
 
It definitely improves at times and degrades at times. I think it's environmental factors, like lighting changes or more soot on the road, etc.; it doesn't make any sense otherwise. Tesla is not, and I repeat not, driving based on the maps today; it's all AP1 emulation so far, at least for our cars. Who knows what the dev cars are currently doing.

But, and I'm not alone in observing this, when you first get a firmware update AP's performance is more variable and (generally) worse than several days later. Why? I agree it's difficult to reproduce identical conditions, but even under similar conditions it just improves. Something must be causing that improvement, but it's hard to point at any one thing (other than "calibration", which we know isn't what is going on but feels like what is happening).
 
But, and I'm not alone in observing this, when you first get a firmware update AP's performance is more variable and (generally) worse than several days later. Why? I agree it's difficult to reproduce identical conditions, but even under similar conditions it just improves. Something must be causing that improvement, but it's hard to point at any one thing (other than "calibration", which we know isn't what is going on but feels like what is happening).
I totally observe this too. It's not after EVERY update, only after certain updates. Like .42 to .44 and .44 to .48 didn't feel that way, but .28 to .34, .17 to .28, and .48 to .50.3 all did to me.

It seemed to me like some sort of steering-responsiveness calibration. At first, I'd characterize the car as making incorrect steering angles for a given condition: for example, applying too strong a steering nudge to correct a slight lane shift, or too weak a steering angle, causing lane departure around curves. This gets gradually better as you let the car fail in these conditions.
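Purely to illustrate what I mean by "responsiveness calibration", here's a toy sketch (pure speculation on my part, not anything Tesla has described) of a controller that slowly tunes its own steering gain from the corrections it observes:

```python
# Toy model: proportional steering whose gain adapts slowly over time.
# Repeated overshoot pulls the gain down; persistent drift pushes it up.
# Behavior would then improve over days on the same firmware, which is
# roughly what we seem to observe.

class AdaptiveSteering:
    def __init__(self, gain: float = 1.0, learn_rate: float = 0.02):
        self.gain = gain
        self.learn_rate = learn_rate
        self.prev_error = 0.0

    def correct(self, lane_error_m: float) -> float:
        if lane_error_m * self.prev_error < 0:
            # Error flipped sign: we steered too hard, so soften the gain.
            self.gain *= 1.0 - self.learn_rate
        elif abs(lane_error_m) > abs(self.prev_error):
            # Error growing the same way: too timid, so stiffen the gain.
            self.gain *= 1.0 + self.learn_rate
        self.prev_error = lane_error_m
        return -self.gain * lane_error_m  # steering command, arbitrary units
```

Something in that family would explain why it feels like calibration even though we're told calibration isn't what's actually happening.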
 
I totally observe this too. It's not after EVERY update, only after certain updates. Like .42 to .44 and .44 to .48 didn't feel that way, but .28 to .34, .17 to .28, and .48 to .50.3 all did to me.

It seemed to me like some sort of steering-responsiveness calibration. At first, I'd characterize the car as making incorrect steering angles for a given condition: for example, applying too strong a steering nudge to correct a slight lane shift, or too weak a steering angle, causing lane departure around curves. This gets gradually better as you let the car fail in these conditions.

It's definitely steering issues, but there are sometimes other effects too (like losing sight of cars and lurching, or issues with auto lane change that smooth out).
 
I don't know what's in the secret sauce, but I find I come to appreciate the car most in two scenarios. One: I take my car for granted while on Autopilot and accidentally lose Autopilot functionality by taking the bait from some guy in a BMW trying to get me to race him (not that I would race him, but I just want to say: don't even think about it!). But what happens for my hubris? I floor it for half a second while doing 75, and guess what? I can't use Autopilot again until stopping... nothing makes you appreciate Autopilot like having to actually steer through a 30-mile drive through Hillsboro, Texas, because you gunned it for half a second! The second scenario is realizing the car is improving on the same firmware version.
 
I posted the following in the "2017.50.3.f3425a1 is out!" thread, but I think it could be of value here too. It's a demonstration of how (well) AP2 is doing on Swiss roads, which are not as wide and don't have the long straights you find in, e.g., the States. There are still quirks, but IMHO they are predictable if you're used to driving with AP.

I just finished and published my first Autopilot video. It shows an ~18-minute drive on a highway with a construction zone, some town-connecting rural roads, and some local roads through towns here in Switzerland. As we are a small and dense country with not-that-wide streets in general, it's maybe a different view of Autopilot for those coming from the States.


Here are some situations you'll see in the video:

@0:54 AP engaged in construction zone with red temporary markings (over/beside regular white markings)
@2:10 Auto lane change suspended until "Holding Wheel" confirmed
@5:44 Smooth stop behind car at red light
@7:35 AP recognises bicycle lane
@8:21 Autosteer heads for the curb in a long S-curve
@8:45 Quite good uphill crest handling by AP
@9:14 Some tricky passages with a traffic refuge followed by bicycle lane beginnings
@9:45 Safely passing a cyclist in the bicycle lane (no slowdown; maybe recognised as not in the lane, or completely missed by AP)
@10:06 Following a car in a tricky situation
@12:24 Stopping behind a car for a red light directly after a blind bend
@13:46 Passing a road-parked car (wide lane, AP stays to the left), followed by a smooth stop behind a car for a pedestrian crossing the street
@15:52 Autosteer hugs the inner line of a wide left curve with lane-dividing traffic poles (I think it would have hit some cones if I had not disengaged)

Happy to receive feedback ;)
 
Yes, these look like pretty normal situations where Autopilot would fail. .52 is not out, but I doubt .50 would do much better. You'll soon learn where it is and is not good, and anticipate it.

I finally got a call back from a Tesla engineer, who spent quite a while on the phone with me going over the incidents and explaining what was happening from the car's point of view. According to the tech, both incidents were "normal" given the current state of Autopilot, but both are situations they are looking to improve in.

In the first incident, the car had already dropped to the _slowest_ speed Autopilot was willing to go (evidently it thought the speed limit on the curve was still freeway speed) -- the tech referred to this lower limit as "ego speed", and said that the steering wheel had already turned to the maximum extent Autopilot would allow at that speed, and that the car was a split-second away from automatically disengaging when I yanked the wheel. So technically Autopilot would have disengaged a moment before hitting the guardrail, but at that point it would have been too late to react and avoid a collision. At least Autopilot did recognize the guardrail and curve, although it erred on the speed limit.
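For what it's worth, here's my back-of-envelope reconstruction of the constraint he described (my numbers and math, definitely not Tesla's actual algorithm): at a given speed there's a maximum curvature the car can take without exceeding a lateral-acceleration limit, so a curve effectively has a speed ceiling.

```python
import math

A_LAT_MAX = 3.0  # assumed comfort limit, roughly 0.3 g, in m/s^2

def max_curve_speed_kmh(radius_m: float) -> float:
    """v^2 / r <= a_lat  =>  v <= sqrt(a_lat * r). Returns km/h."""
    return math.sqrt(A_LAT_MAX * radius_m) * 3.6

print(f"{max_curve_speed_kmh(150):.0f} km/h")  # ~76 km/h for a 150 m radius ramp
```

If the car believed the limit was still freeway speed and had already hit both its speed floor and its steering-angle ceiling, it simply could not generate enough curvature, which matches what the tech described.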

Another detail he mentioned (common knowledge I think) was that the Autopilot is currently using two of the three forward cameras plus forward radar and ultrasonics; the other six cameras are in "shadow" mode only.

The second incident was a determination by Autopilot that the subtle color change in the pavement should overrule the bright white line and posts in terms of defining the lane, and that left to its own devices the car would have driven into the posts. (It would not have detected a problem and disengaged before then.) I am still pretty surprised by this; I would have thought a bright white lane line on the right (not to mention the posts) should always overrule a subtle pavement change, particularly if there are no obstructions on the left. Have you seen or can you think of a counterexample to this? But that's their current algorithm.
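To picture how that arbitration could go wrong, here's a toy version of lane-cue scoring (my guess at the flavor of the logic, certainly not their actual algorithm): if the prior weight on pavement edges is too high, a subtle color change can outvote a bright painted line.

```python
# Toy lane-boundary arbitration: each cue's confidence is scaled by a
# prior weight, and the highest score wins. The weights here are made up.

WEIGHTS = {"painted_line": 1.0, "pavement_edge": 1.4, "posts": 0.8}

def pick_boundary(cues: dict[str, float]) -> str:
    """cues maps cue name -> detection confidence in [0, 1]."""
    return max(cues, key=lambda c: cues[c] * WEIGHTS.get(c, 0.0))

# Bright white line (confidence 0.9) vs. subtle pavement change (0.7):
print(pick_boundary({"painted_line": 0.9, "pavement_edge": 0.7}))
# -> "pavement_edge": 0.7 * 1.4 = 0.98 beats 0.9 * 1.0 = 0.90
```

If it's something like that, it would be a weighting problem rather than a perception failure, which would square with "that's their current algorithm".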

Anyway, very interesting to beta test. Looking forward to many more miles of fun!
 
I finally got a call back from a Tesla engineer, who spent quite a while on the phone with me going over the incidents and explaining what was happening from the car's point of view. According to the tech, both incidents were "normal" given the current state of Autopilot, but both are situations they are looking to improve in.

In the first incident, the car had already dropped to the _slowest_ speed Autopilot was willing to go (evidently it thought the speed limit on the curve was still freeway speed) -- the tech referred to this lower limit as "ego speed", and said that the steering wheel had already turned to the maximum extent Autopilot would allow at that speed, and that the car was a split-second away from automatically disengaging when I yanked the wheel. So technically Autopilot would have disengaged a moment before hitting the guardrail, but at that point it would have been too late to react and avoid a collision. At least Autopilot did recognize the guardrail and curve, although it erred on the speed limit.

Another detail he mentioned (common knowledge I think) was that the Autopilot is currently using two of the three forward cameras plus forward radar and ultrasonics; the other six cameras are in "shadow" mode only.

The second incident was a determination by Autopilot that the subtle color change in the pavement should overrule the bright white line and posts in terms of defining the lane, and that left to its own devices the car would have driven into the posts. (It would not have detected a problem and disengaged before then.) I am still pretty surprised by this; I would have thought a bright white lane line on the right (not to mention the posts) should always overrule a subtle pavement change, particularly if there are no obstructions on the left. Have you seen or can you think of a counterexample to this? But that's their current algorithm.

Anyway, very interesting to beta test. Looking forward to many more miles of fun!

That’s great that you got feedback like that! FWIW, they seem to be giving more/better feedback on Autopilot issues nowadays. There was a time when my emails were answered... quite generically... but nowadays the tone for Autopilot feedback feels different. Excellent that you had a video! Thanks for posting!