Tesla AP and the basic speed law

It works fine to use the pedal to exceed the set speed with AP engaged, as long as you don't go over 90 mph. If you do, AP is disabled for the remainder of your trip.
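The lockout described above amounts to a simple latch. Here's a minimal Python sketch of that behavior; the 90 mph threshold is from the post, while the class and method names are invented for illustration:

```python
# Toy model of the lockout behavior described above: exceeding 90 mph
# with AP engaged disables AP until a new trip starts. The threshold is
# from the post; the structure and names are illustrative only.

class ApLockout:
    MAX_OVERRIDE_MPH = 90

    def __init__(self):
        self.ap_available = True

    def on_speed_update(self, ap_engaged: bool, speed_mph: float) -> bool:
        """Return whether AP is still available after this speed sample."""
        if ap_engaged and speed_mph > self.MAX_OVERRIDE_MPH:
            self.ap_available = False  # locked out for the rest of the trip
        return self.ap_available

    def on_new_trip(self):
        self.ap_available = True  # lockout clears when a new trip begins
```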

I'd be scared to drive 90 mph. I had the Roadster up to 90 mph once, briefly. The Roadster handles really well, but 90 mph scared me and I backed off after just a few seconds. But the above is good to know. Thanks for posting that.

... it is likely that in fog the camera could not see, and AP would disengage? I don't think the car would blindly cruise along with the camera obscured?

Agreed. Driving onto a road without lane markings, the car will disengage AP. Clearly, if the cameras get blinded they cannot see lane markings, so the car would disengage AP then also.

Setting it lower does nothing for the issue the OP is talking about, which is momentary slowing in neighboring lanes. The same thing happens to me on my morning commute, and I have to manually turn AP off because it's dangerous and not acting as a human driver would.

Edit: I don't want to have to manually and temporarily adjust my speed by 20-30 mph for adjacent-lane monitoring that Tesla can already see/analyze, and just needs to update its rule sets for.

A big flick down on the right-hand thumbwheel lowers the set speed by 5 mph, so four in succession lower it by 20 mph. This takes a few eyeblinks to do. I do it any time conditions warrant a slower speed, and then speed back up. Driving in the mountains it happens often when there's a slower speed limit on curves that isn't in the car's database. It's really easy to slow down and then speed back up.
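The arithmetic is simple enough to sketch; here's a toy Python model of the thumbwheel behavior described above, with the 5 mph per-flick step taken from the post and the names invented for illustration:

```python
# Toy model of the thumbwheel adjustment described above: each big
# flick moves the set speed by 5 mph (per the post).

FLICK_STEP_MPH = 5

def adjust_set_speed(set_speed_mph: float, flicks: int) -> float:
    """Positive flicks raise the set speed, negative flicks lower it."""
    return set_speed_mph + flicks * FLICK_STEP_MPH

# Four big flicks down: 75 mph -> 55 mph, the 20 mph drop from the post.
print(adjust_set_speed(75, -4))  # 55
```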

The software has not reached the stage where it can distinguish between driving by parked cars and driving by a lane where the cars have stopped. Some day it will be there. But this is the most advanced partial self-driving system available to the consumer today, and you have noted one of its limitations. Either manage the speed manually with the thumbwheel, or don't use AP under these conditions.

Another thing I frequently do is disengage AP when I judge that conditions are too complicated for the car, and then re-engage once conditions are back to normal. It's super-easy to do with a flick of the stalk. This is how things are at Level 2 autonomy. I'd rather have true full self-driving, but I'm as happy as a clam to have what we have now. (I can imagine that there are probably people whose driving conditions are so complex that the car cannot handle them, and for whom AP is not worth the cost.)
 
Oh, and by the way, for those of us with HOV/HOT lanes, that very situation happens just about every day. It takes a little self-training to do 60 next to 10, but the law doesn't allow cars to cross the double white lines in Georgia, so they shouldn't move over. And if you don't feel comfortable driving this way, there are a lot of other lanes that aren't moving as fast.
Hmmm, yes, “they shouldn’t move over”, but that’s not going to protect you if they do. And they most certainly do. I frequently see people do that here – even across HOV/express lane double white separators with a wide gap between the lanes. 60 mph alongside 10 mph is not giving you much time if somebody does something idiotic.
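For a sense of scale, the closing-speed arithmetic is easy to run. Here's a quick Python sketch, where the 60 and 10 mph figures come from the posts above and the 100 ft cut-in gap is just an assumed example:

```python
# Closing-speed arithmetic for 60 mph alongside 10 mph traffic.
# The 100 ft cut-in gap is an assumed example, not from the posts.

FT_PER_SEC_PER_MPH = 5280 / 3600   # ~1.467 ft/s per mph

closing_mph = 60 - 10              # 50 mph of closing speed
closing_fps = closing_mph * FT_PER_SEC_PER_MPH   # ~73 ft/s

gap_ft = 100                       # assumed distance of the cut-in car
time_to_gap_s = gap_ft / closing_fps             # ~1.4 s to react and brake

print(f"{closing_fps:.0f} ft/s closing; {time_to_gap_s:.1f} s to the gap")
```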
 
There's no way the car can see better than a person. Our resolution, sensitivity, focusing ability, etc., can't be touched by commercial cameras and processing. Maybe if they were using lidar and scanning a range/wavelength we can't see, but they're not.

Human and computer visual systems are so different that it's difficult to compare the two; however, a Tesla's cameras are far superior to human vision in at least one important way: There are multiple Tesla Autopilot cameras that face in different directions. Humans, OTOH, rely on mirrors and head turns to detect what's going on around the car. Even flicking your eye briefly to the rear-view mirror draws your attention away from the forward view of the road for a moment; and of course when you're not looking at the mirrors or over your shoulder, you have no idea of what's happening behind you or far enough out to your side. This fact alone gives the Tesla a huge advantage; it doesn't need to redirect its attention, thus blinding itself to what's happening in most directions. In my experience, most of my driving "close calls" and accidents have happened because I was unaware of something important that was happening outside of my field of view. Tesla's multiple AP cameras ensure that its computers don't suffer from this drawback.

As to resolution, that's comparing apples and oranges. The human eye produces a very clear image in the center of the viewing area, roughly equivalent to about 7 megapixels; however, our vision gets blurrier and loses color ability outside of the central foveal area. You'd need only another megapixel or so to cover that peripheral vision. In fact, Tesla uses three forward-facing cameras with different fields of view, so when the three are combined they produce a similar effect: a higher-resolution view in the center and lower resolution moving outward. The documentation I found suggests that Tesla's cameras are lower in resolution than the human eye (about 1.2 MP each, at least for AP 2.0), but I'm not sure that's important -- a vehicle, even in the distance, will still consist of enough pixels to figure out what it is. The fact that Tesla uses eight cameras (for AP 2.5, and presumably also AP 3 once it's available) means that it's processing about as many megapixels as a human does, if not more.
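A quick back-of-envelope check on those figures (all of the numbers below are the ones quoted above, not independently verified):

```python
# Back-of-envelope pixel budget using the figures quoted above.

human_foveal_mp = 7.0      # rough central-vision equivalent (from the post)
human_peripheral_mp = 1.0  # rough peripheral-vision equivalent (from the post)
tesla_camera_mp = 1.2      # per-camera figure cited for AP 2.0
tesla_camera_count = 8

human_total_mp = human_foveal_mp + human_peripheral_mp   # ~8 MP, one direction
tesla_total_mp = tesla_camera_mp * tesla_camera_count    # ~9.6 MP, all around

print(f"human ~{human_total_mp:.1f} MP vs Tesla ~{tesla_total_mp:.1f} MP")
```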

Focusing isn't likely to be an issue. Tesla's cameras all use very small sensors, and most are wide-angle, which means they experience extremely wide depth of field. What's more, most of what they must see is tens or hundreds of feet in the distance, which is effectively infinity focus. We humans need to shift our eyes' focus while driving so that we can check our cars' instruments and then shift back to the road, but the car's cameras don't need to do this.
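That depth-of-field claim can be sanity-checked with the standard hyperfocal-distance formula, H = f^2/(N*c) + f. The focal length, aperture, and circle of confusion below are assumed values for a generic small wide-angle camera, not published Tesla specs:

```python
# Hyperfocal distance: H = f^2 / (N * c) + f. Focused at H, everything
# from H/2 out to infinity is acceptably sharp. These are assumed values
# for a generic small wide-angle camera, NOT published Tesla specs.

f_mm = 6.0     # assumed focal length
n_stop = 2.0   # assumed f-number
c_mm = 0.005   # assumed circle of confusion for a small sensor

h_mm = f_mm**2 / (n_stop * c_mm) + f_mm
print(f"hyperfocal ~{h_mm / 1000:.1f} m")   # ~3.6 m: sharp from ~1.8 m out
```

With numbers in that ballpark, everything from a couple of meters out to infinity is in focus at once, which is why fixed-focus cameras are fine for this job.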

Tesla's sensors don't include LIDAR, but they do include RADAR and ultrasonic sensors, neither of which we have. Teslas also have instantaneous access to mapping data, which we can access only by shifting attention away from the road and to a cell phone or car screen.

Overall, then, and IMHO, a Tesla's sensors are likely superior to those of a human for the purpose of driving. We likely have an advantage in foveal acuity, which might be important when reading a speed-limit sign in the distance; but in just about every other area, a Tesla's sensor suite blows human eyes out of the water. Where we have an advantage is in how we integrate and process the data we do have. Our visual cortex and higher brain centers evolved over millions of years to help us identify opportunities and avoid dangers, and these abilities enable us to drive at highway speeds or navigate city streets with a level of safety that our society accepts. Reproducing (much less improving on) those abilities in silicon is proving to be challenging. Tesla's approach of using neural networks attempts to replicate what our brains do, vs. a more traditionally programmed system, which is what most competitors are using -- and to accomplish that goal, most of Tesla's competitors have turned to LIDAR in order to get more easily-parsed data into the system. I don't have an opinion on which approach is more likely to succeed. I am confident that, with the current sensors, a Tesla Model 3 could eventually drive better than a human could in 99% of situations, and know enough to hand control back to its human driver for the remaining 1% of the time. Whether Tesla's computers and software will ever be able to reach that state is another matter.
 
As others have already said, it's driver assistance, not driver replacement.
If you think it’s going too fast, dial the speed down. If it’s going too slow, press the accelerator. Dead easy.
Don’t fall into the trap of thinking it’s full self drive.
 
Had a nice 20-mile drive today involving one complicated interchange section (multiple highways merging and splitting). NoA handled it well, except for one situation where a large bus with a trailer cut in front of me. Had to slam on the brakes. In retrospect, I should have lowered the speed to the flow of traffic, which in this complex interchange section was about 20 mph slower than my setting and 10-15 mph below the speed limit. Totally agree that in its current state it's the driver's responsibility, but eventually, in preparation for true FSD, the car should slow down (or speed up) to match the average speed of the cars in adjacent lanes. Maybe some sort of %-difference setting would be great: say, don't go more than 10% faster or slower than the average of the cars in adjacent lanes.
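That %-difference idea could be as simple as clamping the set speed to a band around the measured flow. Here's a minimal Python sketch, where the 10% figure comes from the post and the function name and inputs are hypothetical:

```python
# Sketch of the suggested setting: keep the AP set speed within a
# +/-10% band around the average speed of cars in adjacent lanes.
# Function name and inputs are hypothetical.

def clamped_set_speed(desired_mph: float,
                      adjacent_speeds_mph: list[float],
                      max_diff_pct: float = 10.0) -> float:
    if not adjacent_speeds_mph:
        return desired_mph               # no nearby traffic to match
    avg = sum(adjacent_speeds_mph) / len(adjacent_speeds_mph)
    lo = avg * (1 - max_diff_pct / 100)
    hi = avg * (1 + max_diff_pct / 100)
    return min(max(desired_mph, lo), hi)

# Interchange example from the post: set speed 70, traffic flowing ~50.
print(clamped_set_speed(70, [48, 52, 50]))   # 55.0 -> pulled toward the flow
```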
 
Thank you for the clarification that AP remains incredibly narrowly focused on its tasks. Even with offloading some tasks to hardware in the new revision, it seems a remarkable ask to get from today to FSD (sleep to your destination) by the end of 2020, as Elon recently discussed publicly. Even in the best of circumstances of weather and roadways, there is so much more they need to add to the codebase.
 
[Quoting the full cameras-vs-human-vision post above.]
Appreciate the in-depth reply. We're 90% in agreement, so I don't understand why you down-voted me? FWIW, I'm an electrical engineer who deals with microwave and fiber telecom systems.