Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Model 3 EAP Experience

The kiss of death for AP2.0 is once lidar becomes affordable. Competition (the entire tech and automotive industry is betting on lidar) will provide solutions that are 100x more reliable, and Tesla will have no choice but to create AP3.0 hardware that incorporates lidar, and we will be screwed.

Tesla can hire the best engineers on the planet. It will be no substitute for what ordinary engineers can accomplish with lidar.

Lidar is used for sensing obstacles, especially for city driving. It can’t see lane lines, that’s always done with cameras.
 
I turned on my 3's Autopilot last night for about 15 miles. The first thing I noticed was that the "hold the steering wheel" notification is VERY obvious. I really like how they show it in the 3, and I hope that big blue gradient makes it into the S. The current little flag at the bottom of the screen can sometimes be missed.
 
It's not bad, but it's definitely more apparent in night mode, at least to me. Blue on white doesn't grab your attention.
 
Tesla may indeed be betting on the wrong sensor suite, but incorporating LIDAR doesn’t make any of this easy. That’s nonsense. You still need machine vision, you still need deep learning.
You still need cameras and radar... but Lidar is a major component.

With Lidar you get a 3D cartography of the environment.

The Josh Brown accident would never have happened with lidar. Lidar would have immediately picked up a large object at the precise location and stopped the car. It doesn't even need to recognize what the freaking object is, unlike cameras, which depend heavily on image recognition.

Google would never, ever leave their autonomous cars on the road without Lidar.
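
The "3D cartography" claim above refers to a point cloud: each lidar return is a range plus the beam's angles, which converts directly to an x/y/z point. A minimal sketch of that conversion (simplified sensor geometry, illustrative only, not any real lidar's API):

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) to Cartesian
    coordinates. A real sensor produces thousands of these per sweep;
    together they form the 3D point cloud described above."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return 20 m straight ahead at sensor height:
print(lidar_return_to_xyz(20.0, 0.0, 0.0))  # (20.0, 0.0, 0.0)
```

Note this only tells you that *something* reflected the beam at that point; deciding what the cluster of points is remains a separate problem.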
 
We shall see. I've done some work with machine learning and vision, and I am not convinced. The Josh Brown-type incident has also been addressed by altering how the radar works.
 
I just got to try Model 3 AP for the first time on my commute this morning... I have to say, I really, really prefer the Model S's dedicated AP control stalk. Driving the 3 will be a far more manual experience.

Yes, I wish they had a dedicated stalk.

Am I the only one who routinely changes the follow-distance setting? They buried follow distance in the Autopilot settings, requiring multiple touches to adjust from, say, "5" to "2". Same thing with the speed setting, though at least that one is visible and adjustable without going into menus. They should put both adjustments right on the main screen.

Ideally, I hope they make the scroll wheels on the steering wheel configurable. As of today, the right scroll wheel is used for voice commands. Ugh.
 
You still need cameras and radar... but Lidar is a major component.

With Lidar you get a 3D cartography of the environment.

The Josh Brown accident would never have happened with Lidar. Lidar would have immediately picked up a large object at the precise location and stopped the car. It doesn't even need to recognize what the freaking object is unlike the cameras which heavily depend on image recognition.

Google would never, ever leave their autonomous cars on the road without Lidar.
Sorry, but presuming that changing the detection method somehow improves recognition is extremely naive.
Detection != recognition.
Remember that AP1 still had radar, which detected the obstacle, so detection was never the issue.
Recognizing the obstacle as a semi trailer rather than an overhead sign gantry or bridge was (and is) the real problem, and that has nothing to do with radar vs. camera vs. lidar.
As Tesla has pointed out countless times, humans manage to drive (mostly) without running into things using nothing more than the equivalent of a pair of cameras (eyes). No lidar or radar, just an incredible processing engine behind them.
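
The detection/recognition split in this post can be made concrete in a few lines. A toy sketch (the `classify` rule is a made-up stand-in for a vision model, not anything Tesla or anyone else ships):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Where something is: a position, with no identity attached.
    # Radar, lidar, or stereo cameras can all produce this.
    x: float
    y: float
    z: float

@dataclass
class Recognition:
    # What the detected thing is. This step needs a perception model,
    # regardless of which sensor produced the detection.
    detection: Detection
    label: str

def classify(det: Detection) -> Recognition:
    # Toy rule standing in for a vision model: anything centred well above
    # the road is guessed to be an overhead structure. A crossing trailer's
    # underside is also high off the road, so geometry alone misclassifies
    # it -- the recognition problem described above, independent of sensor.
    label = "overhead_structure" if det.z > 3.0 else "vehicle"
    return Recognition(det, label)

# By position alone, a trailer underside ~3.5 m up looks like a sign gantry:
print(classify(Detection(x=30.0, y=0.0, z=3.5)).label)  # overhead_structure
```

Swapping radar for lidar changes how `Detection` is produced, not the hard part of getting `label` right.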
 
Indeed. The magical thinking around LIDAR is just silly.
 
The non-lidar approach: radar detects something. Radar is low resolution and has a ton of false positives, so the camera is then used for vetting via image recognition... Image recognition! Do you think something like image recognition is 100 percent bulletproof?

The lidar approach: object at coordinate X, Y, Z. Boom. Simple and straightforward.
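
The contrast this post draws can be caricatured in a few lines of toy decision logic (entirely hypothetical, not any vendor's actual pipeline):

```python
def should_brake_radar_camera(radar_sees_obstacle: bool,
                              camera_confirms: bool) -> bool:
    # Radar alone throws many false positives (overpasses, signs, debris),
    # so in this sketch its detections are only acted on when vision agrees.
    return radar_sees_obstacle and camera_confirms

def should_brake_lidar(points_in_path: int, min_points: int = 50) -> bool:
    # With lidar, a dense cluster of returns inside the driving corridor is
    # treated as an obstacle without knowing what it is. Real lidar stacks
    # still filter, cluster, and classify, so "boom, simple" overstates it.
    return points_in_path >= min_points

print(should_brake_radar_camera(True, False))  # False: radar hit vetoed by vision
print(should_brake_lidar(200))                 # True: dense cluster in path
```

The first function shows where the vision veto can suppress a real obstacle; the second shows why pure geometric gating trades that risk for false-brake events on harmless returns.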
 
Wow. No.
 
You are still conflating detection and recognition.