
Tesla Autopilot HW3

Random musing from a dark rain soaked Autopilot commute: Autopilot has no object permanence.

It seems to me like the current state of the software is only analyzing the current frame and reacting to it - hence the vehicles that switch back and forth between classes, and occasionally appear and disappear, and the lane lines that rapidly jump back and forth when a merge happens.

The car is clearly quite good at analyzing and reacting to the current frame, but it seems to me like there is significant value in connecting those together - in comparing the cars and lines it sees in this frame and their locations to the cars and lines it saw in the last frame.

I would expect that this has the potential to make things smoother, to react earlier to some threats, and to help with cases where conditions obscure some of the necessary data.
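To make the idea concrete, here's a toy sketch of the kind of frame-to-frame association I mean - purely illustrative, with made-up thresholds, and certainly not how Tesla's actual stack works. The point is just that matching this frame's detections to last frame's tracks (instead of starting from scratch each frame) is what stops objects from flickering and switching classes:

```python
# Toy frame-to-frame association sketch -- illustrative only, not Tesla's
# actual approach. A detection is (x, y, cls) in meters relative to the car.
import math

MAX_MATCH_DIST = 2.0  # meters; assumed gating threshold for "same object"
SMOOTHING = 0.5       # assumed blend factor between old and new position

def update_tracks(tracks, detections):
    """Match this frame's detections to existing tracks by nearest
    neighbor, smoothing positions so objects don't jump or flicker."""
    unmatched = list(detections)
    for track in tracks:
        if not unmatched:
            break
        # Closest detection to this track's last known position.
        best = min(unmatched, key=lambda d: math.hypot(d[0] - track["x"],
                                                       d[1] - track["y"]))
        if math.hypot(best[0] - track["x"], best[1] - track["y"]) <= MAX_MATCH_DIST:
            # Smooth instead of snapping, and keep the class label stable
            # even if this frame's classifier momentarily disagrees.
            track["x"] += SMOOTHING * (best[0] - track["x"])
            track["y"] += SMOOTHING * (best[1] - track["y"])
            track["age"] += 1
            unmatched.remove(best)
    # Detections with no nearby track become new tracks.
    for x, y, cls in unmatched:
        tracks.append({"x": x, "y": y, "cls": cls, "age": 1})
    return tracks

tracks = update_tracks([], [(0.0, 30.0, "car")])        # frame 1
tracks = update_tracks(tracks, [(0.3, 29.0, "truck")])  # frame 2: same object,
print(tracks)  # ...but the track keeps cls "car" and moves smoothly
```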

YUP, this is almost exactly what I tell people Autopilot is like. It can see and react, but it cannot predict. When you see someone in your rear view coming up next to you way too fast, they are not going to plow into the person in front of them, they are going to change lanes and keep speeding along.

I had a 60 day trial of self driving, and it was terrifying. The car would be in the slow lane, and would not slow down or speed up to allow other people to merge into traffic. The car only cares about what is in its lane. Made me look like a complete jerk.

This is why I don't think that the current Teslas will ever truly be self driving. The hardware and computing power needed to see and accurately predict what other drivers will do is not there.
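To put a number on "see and react but cannot predict": a predictive system would at minimum extrapolate other cars forward in time, which is nearly a one-liner. A rough sketch with made-up numbers:

```python
# Minimal constant-velocity prediction sketch (illustrative numbers only).
# A car closing fast from behind with no room to brake is exactly the one
# a human predicts will change lanes -- so leave it room.
def time_to_contact(gap_m, closing_speed_ms):
    """Seconds until the follower reaches us at the current closing speed."""
    return float("inf") if closing_speed_ms <= 0 else gap_m / closing_speed_ms

ttc = time_to_contact(gap_m=40.0, closing_speed_ms=15.0)  # ~34 mph faster
print(f"time to contact: {ttc:.1f} s")                    # 2.7 s
if ttc < 3.0:
    print("predict: follower will likely change lanes -- leave room")
```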
 
I think a leap forward in Autopilot will come about when there is uniform agreement, based on Federal standards, for those ubiquitous plastic reflective road markers. Different colors and reflectivity could encode information on lane exits, lane mergers, speed limits... you name it.

Elon, if you're reading this, a thank you note will suffice.
 
...It can see and react, but it cannot predict... The hardware and computing power needed to see and accurately predict what other drivers will do is not there.

The new FSD (AP3) computer can probably do this.
 
...I had a 60 day trial of self driving, and it was terrifying. The car would be in the slow lane, and would not slow down or speed up to allow other people to merge into traffic...

When was this 60 day trial? My Raven definitely slows down to allow folks to merge - more generously than I would prefer in many cases.

It seems premature to say they don't have the computing power, since we really haven't seen what HW3 can do yet. Even if that proves correct, Tesla is already working on another generation, which will presumably also be retrofittable...
 
Random musing from a dark rain soaked Autopilot commute: Autopilot has no object permanence... I would expect that this has the potential to make things smoother, to react earlier to some threats, and to help with cases where conditions obscure some of the necessary data.
AP will become smoother only after anticipation is built in. That's how human drivers become smooth.
 
...The car would be in the slow lane, and would not slow down or speed up to allow other people to merge into traffic. The car only cares about what is in its lane....

When was this 60 day trial? My Raven definitely slows down to allow folks to merge - more generously than I would prefer in many cases.

I fully back up what Saghost observes, as I've had this (my Tesla automatically allowing other cars to merge) happen on my recent trip from IL to NC. I drive a 2017 AP 2.0 X.
 
When was this 60 day trial?...

The trial was way back in the April timeframe.
 
The trial was way back in the April timeframe.

A lot has changed since April. A lot has changed since my big road trip in July, including Tesla fixing both of the issues that really annoyed me back then.

I actually feel like we're in a pretty dangerous period right now - the car is amazingly good at handling anything that normally happens, good enough it's easy to trust the car completely and get distracted. But it isn't yet smart enough to recognize when something exceptional happens with road debris or deer or potholes - either to deal with it or to alert a distracted driver.

So there's a one or two percent chance on any given drive that folks who trust too much will pay for it. And yet it is so good, it's easy to trust.

The folks who talk about less capable systems being safer because folks know they can't be trusted aren't entirely wrong, but it's a really slippery slope of an argument. The only real answer is for Tesla to get us to Level 3 sooner rather than later - to teach the car enough to know when something exceptional is happening.
 
The car would be in the slow lane, and would not slow down or speed up to allow other people to merge into traffic. The car only cares about what is in its lane.

I have found that the car does in fact yield to cars merging from an on-ramp. In fact I find it's TOO yielding - it will slow down well ahead of the moment that the incoming car would enter the lane, even if the incoming car is currently going slower than I am. This is much to the dismay of drivers behind me - I have had to take over more than once.
 
...there is significant value in connecting those together - in comparing the cars and lines it sees in this frame and their locations to the cars and lines it saw in the last frame...
AND incorporate the map to analyze road changes in advance.
You make a solid point!
 
Probably, but it's the range of the current ultrasonic sensors, of the cameras (plus their lack of HD resolution), of the front radar, and the lack of a rear radar altogether that I feel are now the limiting FSD factors. :)

I share similar concerns. I'm firmly in the camp now that the current sensors (+AP3 computer) will probably be good enough for L4 autonomy but not L5 autonomy. I say this because I think the hardware can probably handle autonomous driving under certain conditions like fair weather (hence L4) but not all conditions needed for L5.
 
I share similar concerns. I'm firmly in the camp now that the current sensors (+AP3 computer) will probably be good enough for L4 autonomy but not L5 autonomy...
Not sure I agree, as the side cameras have a max range of 60 feet, I think. Picture AP pulling out onto a road where the speed limit is higher than, say, 45 mph. With a 60-foot sight range, the car can't respond quickly enough, as the limited range doesn't give it enough time. Make sense or not?
 
Not sure I agree, as the side cameras have a max range of 60 feet, I think... With a 60-foot sight range, the car can't respond quickly enough, as the limited range doesn't give it enough time...

The forward looking side cameras actually have a max range of 80 m according to Tesla's diagram. At 45 mph (about 20 m/s), that would give the car about 4 seconds to respond. I would guess that 4 seconds should probably be enough time for the AP3 computer. That is a pretty long time for modern computer chips, which can do calculations in a fraction of a second.
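The back-of-the-envelope in code, using the figures from this thread (80 m camera range, 45 mph cross traffic; the mph-to-m/s conversion is standard):

```python
# Reaction window = camera range / closing speed, using the thread's figures.
camera_range_m = 80.0
cross_traffic_mph = 45.0

cross_traffic_ms = cross_traffic_mph * 0.44704        # mph -> m/s, ~20.1 m/s
print(f"{camera_range_m / cross_traffic_ms:.1f} s")   # ~4.0 s window
# For comparison, the 60 ft (~18.3 m) figure above gives only ~0.9 s.
```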

I do drive a similar situation on my daily commute and I have thought a lot about whether FSD will be able to handle it. There are mailboxes and telephone poles that partially block the view to the left, making it sometimes tricky to see cars coming from the left. I have to make an unprotected left turn to get onto the road. I can do it because I am cautious, inch my way forward, and have good vision to recognize incoming cars. So I imagine FSD could do it with good enough vision and a cautious approach. Also, school buses often stop on the side of the road as they pick up kids. So there's that too. I may have to do that part manually for a while.

[Image: Tesla_AP2_Hardware.jpg - Tesla's AP2 camera coverage diagram]
 
...The forward looking side cameras actually have a max range of 80 m according to Tesla's diagram. That would give the car about 4 seconds to respond...
Time will tell! Hopefully a short amount of time at that!
 
...The forward looking side cameras actually have a max range of 80 m according to Tesla's diagram...
Ya, well that diagram doesn't show blind spots close to the car, and for some reason shows overlap on the rear-looking side cameras... That isn't possible unless they can see around the car, so I call BS on that diagram. The other question is what they can actually see at 60 or 250 m. If a car fits in 1 pixel at that distance, I don't think that will work lol. Unfortunately, only heavy T knows these answers... If they need new cameras that's not really a big deal, I guess, assuming they don't need MORE or in a different location. I still don't know how they continue to deal with blind spots closer to the car though, so what do I know!
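For what it's worth, the "1 pixel" worry can be ballparked with simple angular math. The resolution and field of view below are ASSUMED round numbers, not published Tesla specs:

```python
# How many pixels wide is a car at range, for an assumed camera?
# 1280 px across a 50-degree horizontal FOV are illustrative guesses.
import math

sensor_px   = 1280    # assumed horizontal resolution
fov_deg     = 50.0    # assumed horizontal field of view
car_width_m = 1.8     # typical sedan width

px_per_rad = sensor_px / math.radians(fov_deg)
for range_m in (60, 80, 250):
    angle = 2 * math.atan(car_width_m / (2 * range_m))  # subtended angle
    print(f"{range_m:>3} m: ~{angle * px_per_rad:.0f} px wide")
# ~44 px at 60 m, ~11 px at 250 m: more than one pixel, but not much detail.
```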
 
Ya, well that diagram doesn't show blind spots close to the car, and for some reason shows overlap on the rear-looking side cameras... That isn't possible unless they can see around the car, so I call BS on that diagram.

Welll... theoretically they could. The side repeater cameras are outboard of the body, so their sightlines would converge eventually (note the dead zone covered by the rear camera only) if they were mounted out far enough - which they aren't on this model: https://www.teslarati.com/tesla-dashcam-side-cameras-video/

Now, if governments drop the side mirror requirement (why have mirrors on a self-driving car?), I could see them replacing the mirrors with further-outboard cameras, like the Semi prototype.
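The geometry is easy to sanity-check, too. With all dimensions assumed (not measured from any actual Tesla), the two rear-facing sightlines, each grazing its own rear corner, cross the centerline at:

```python
# Where could the left/right rear-facing camera views start to overlap?
# All dimensions are assumed for illustration, not measured Tesla values.
def overlap_start(cam_offset_m, half_body_m, cam_to_tail_m):
    """Distance behind the rear bumper where the two inboard sightlines
    (each grazing its own rear corner) cross the centerline."""
    protrusion = cam_offset_m - half_body_m   # how far the camera sticks out
    if protrusion <= 0:
        return float("inf")                   # flush or inboard: always blocked
    return cam_to_tail_m * half_body_m / protrusion

# Fender repeater barely outboard of the body: overlap, but only far back.
print(overlap_start(0.95, 0.90, 3.5))   # 63.0 m behind the bumper
# A mirror-replacement camera ~25 cm further out: overlap within ~13 m.
print(overlap_start(1.15, 0.90, 3.5))   # 12.6 m
```

So the diagram's overlap isn't strictly impossible; it just requires the cameras to sit outboard of the body's widest point, and the overlap starts a long way back unless they stick out mirror-distance.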
 