Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Firmware 9 in August will start rolling out full self-driving features!!!

You mean AP, right (no “E”)? What feature of EAP has been released?

Interesting, what part of EAP has been delivered to you?? ;)

Like I said... all this hand wringing...
EAP and FSD were my choices, and I chose them. ;-)
Already enjoying the EAP choice.
Expect to be enjoying the second choice 'real soon now'.
(Anyone else read the Chaos Manor column back in the day? real soon now - Wiktionary.)

I expect the E arguments to evolve into F arguments while I am happily enjoying a modicum of both. ;-)
 
No local auto lane change. Hopefully "soon".

Yeah, this trips me up on every commute, since I keep trying this on the undivided portion of my commute. I've started just hanging in the right lane, though, since the +5 limit makes it tough to keep up with traffic anyway.

I am getting pretty proficient with signalling for auto lane change on the divided portion of my commute. I like to half-press until I see the dotted blue line, then full-press to commit. It is embarrassing to let go without a full press, since that aborts the maneuver even when the lane change is almost complete.

BTW, it has been a long wait, but the 3 is my first Tesla. I kept considering a CPO, but even an AP car would have cost as much as the 3 did. So I have no interest in the E arguments.
 
No it's not.

It is, because the first company to bring ADAS to mass production using a monocular camera and computer vision said so.

No they're not.

Then you're calling Amnon a liar and EyeQ4 a fake. If Amnon is a liar, I wonder what you call Elon and AP then?

Now here I agree with you; Tesla is behind and furthermore fundamentally has the wrong hardware in the vehicles to achieve even L3; they might sorta get L3 but I personally wouldn't trust it.

They can't; there's no camera to ensure the driver is not asleep. At least in the Model X/S.

Well, you've got a lot of wiggle room there only because you included "near". What does "near 0" mean? If it means "for all practical purposes 0; you can pretty much always trust it", then I'm afraid you've been sold a bill of goods. The best systems around today make "near 0" errors only in favorable environments. All of them get it wrong sometimes. Just like humans.

Under what conditions and what is the false negative? False negative is way more interesting in this particular case. What is the minimum pedestrian size? What about on Halloween when all the pedestrians are small and in costumes? What about a 4-year-old dressed as a fire hydrant?
This is where redundant sensors come into play, particularly lidar: to take near 0 toward absolute zero.
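The redundancy point can be put in rough numbers. A minimal sketch, with made-up miss rates (not anyone's published specs) and one big assumption: that the sensors fail independently. Correlated failures, like heavy rain degrading camera and lidar at once, break the math.

```python
# Back-of-envelope sketch of sensor redundancy. Miss rates below are
# invented for illustration. Key assumption: statistically independent
# failures, so the probability that *every* sensor misses is the product.

def combined_miss_rate(miss_rates):
    """Probability that every sensor misses the same object at once,
    assuming the sensors' failures are independent."""
    p = 1.0
    for m in miss_rates:
        p *= m
    return p

camera_only = combined_miss_rate([1e-4])          # "near 0"
camera_plus_lidar = combined_miss_rate([1e-4, 1e-3])

# A second, independent sensor multiplies the miss rates: near 0 gets
# dramatically closer to 0, though never absolute zero.
print(camera_only, camera_plus_lidar)
```

The catch, of course, is the independence assumption: an obscured 4-year-old in a fire-hydrant costume may be a hard case for every sensor at the same time.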

Oh yeah, then it's all easy. Smooth sailing once you've solved detection, right?
Nope. As Amnon said and as I reiterated, sensing is easy; driving policy, aka path planning, is where the struggle is REAL!

People always point to Andrej Karpathy's talk and say, see, look, AMAZING!
I'm like... those are barebones basics. You are amazed at that?
 
Sensing is easy; driving policy, aka path planning, is where the struggle is REAL!
As a programmer, I think the exact opposite.

Give me an API to a 100% reliable vision system that can tell me each object's type, metadata, position in 3D coordinates, rotation, and speed, as well as surface plane data and surface type. Then I will code you a 100% reliable full self-driving system that can handle all possible situations.
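For concreteness, that hypothetical API could be sketched like this. Every class and field name here is invented for illustration; no real perception stack exposes such an interface:

```python
# Hypothetical interface for the "100% reliable vision system" described
# above. All names are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "car", "cyclist"
    position: tuple    # (x, y, z) in meters
    rotation: tuple    # (yaw, pitch, roll) in radians
    velocity: tuple    # (vx, vy, vz) in m/s
    metadata: dict = field(default_factory=dict)  # anything else observed

@dataclass
class SurfacePatch:
    plane: tuple       # (a, b, c, d) for the plane ax + by + cz + d = 0
    surface_type: str  # e.g. "asphalt", "gravel", "ice"

@dataclass
class WorldFrame:
    """One fully labeled snapshot of the environment -- the input the
    poster claims would make the planning problem tractable."""
    timestamp: float
    objects: list
    surfaces: list

frame = WorldFrame(
    timestamp=0.0,
    objects=[DetectedObject("pedestrian", (3.0, 1.5, 0.0),
                            (0.0, 0.0, 0.0), (1.2, 0.0, 0.0))],
    surfaces=[SurfacePatch((0.0, 0.0, 1.0, 0.0), "asphalt")],
)
print(frame.objects[0].kind)  # pedestrian
```

Whether such a frame could ever be produced with 100% reliability is, of course, the entire disagreement in this thread.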
 
As a programmer, I think the exact opposite.

Give me an API to a 100% reliable vision system that can tell me each object's type, metadata, position in 3D coordinates, rotation, and speed, as well as surface plane data and surface type. Then I will code you a 100% reliable full self-driving system that can handle all possible situations.

Oh really? Then show us that leet 100% reliable full self-driving system in a simulator.
Or why aren't you working for Waymo, Intel, BMW or Audi... (honest question)
Or better yet, get yourself a Mobileye EyeQ4 and show us what you can do.

Sensing is easy, says the company whose prototype ran a red light and then blamed a GoPro in the cabin for "interference"...
Mobileye autonomous vehicle runs red light in Jerusalem

The system was solely using V2I and was wirelessly connected to the traffic lights; it had nothing to do with vision sensing.

And no, they put stuff on their cars.

Mobileye CEO Amnon Shashua says that the wireless transmitters used by the TV crew cameras created electromagnetic interference that disrupted the transponder on the traffic light. The car’s camera identified the red light but ignored that information and drove through the light.
 
Like how Uber's FSD ignored pending collisions. Not having integrated systems totally worked out for them too. It's not like anyone keeps devices that create interference, or drives by other things that create interference, such as high-voltage power lines...
 
Sorry, but this is not true. The point that getting quality input is the primary factor is a good one, but your claim of 100% reliability is simply not accurate, especially in a machine learning / AI based system. What you would need is enough data on the combinatorics of those particular objects to replicate every situation... ever, and in enough quantity to be able to determine outcomes based on the various choices made.

This is the core of "decision making": knowing the situation is not enough (for a computer); it has to make choices over ultimately millions of combinations which define the current context (what those objects are doing, what they were doing, and all their metrics: speed, size, conformity to the road rules, etc.).

Maybe what you mean is that in time, with enough data, having clarity on the inputs can theoretically lead to such reliability, given infinite storage and infinite computing power. But we are still far from that.
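A toy calculation makes the combinatorics point concrete. The counts below are invented, and real scenes are continuous, which is strictly worse than any discretization:

```python
# Even a crude discretization of a single driving scene explodes
# combinatorially. Both numbers here are made up for illustration.
objects_per_scene = 10   # cars, pedestrians, cyclists in view
states_per_object = 50   # coarse buckets of position x speed x intent

scene_contexts = states_per_object ** objects_per_scene
print(f"{scene_contexts:.2e} distinct scene contexts")  # ~9.77e+16
```

Enumerating or training over every context is hopeless; the open question is whether generic rules (as the other poster argues) or learned policy generalize across them.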

As a programmer, I think the exact opposite.

Give me an API to a 100% reliable vision system that can tell me each object's type, metadata, position in 3D coordinates, rotation, and speed, as well as surface plane data and surface type. Then I will code you a 100% reliable full self-driving system that can handle all possible situations.
 
Oh really? Then show us that leet 100% reliable full self-driving system in a simulator.
Or why aren't you working for Waymo, Intel, BMW or Audi... (honest question)
Or better yet, get yourself a Mobileye EyeQ4 and show us what you can do.



The system was solely using V2I and was wirelessly connected to the traffic lights; it had nothing to do with vision sensing.

And no, they put stuff on their cars.

You're both wrong; both parts are difficult and remain unsolved. If he were right, there'd be no simulation competitions, and if you were right, nobody would bother with ImageNet.
 
Yeah, this trips me up on every commute, since I keep trying this on the undivided portion of my commute. I've started just hanging in the right lane, though, since the +5 limit makes it tough to keep up with traffic anyway.

I am getting pretty proficient with signalling for auto lane change on the divided portion of my commute. I like to half-press until I see the dotted blue line, then full-press to commit. It is embarrassing to let go without a full press, since that aborts the maneuver even when the lane change is almost complete.

BTW, it has been a long wait, but the 3 is my first Tesla. I kept considering a CPO, but even an AP car would have cost as much as the 3 did. So I have no interest in the E arguments.

It’s lacking on more than undivided routes. I have a divided, six-lane state route nearby that still doesn’t do lane changes, while AP1 has no problem with it.
 
It’s lacking on more than undivided routes. I have a divided, six-lane state route nearby that still doesn’t do lane changes, while AP1 has no problem with it.

Geez, now I understand.

I haven't been able to change lanes around Naples, even though most roads are six-lane divided, such as US 41, Vanderbilt Dr, etc.

At first, I was thinking it was the wrap or a vehicle sensor failure.

But... it worked on a 90-minute trip on I-75 from Naples to Sarasota the other night, so your comment hits home. Is this a DB problem on Tesla's part (my 2016 X works fine)? Is there a fix coming?
 
Geez, now I understand.

I haven't been able to change lanes around Naples, even though most roads are six-lane divided, such as US 41, Vanderbilt Dr, etc.

At first, I was thinking it was the wrap or a vehicle sensor failure.

But... it worked on a 90-minute trip on I-75 from Naples to Sarasota the other night, so your comment hits home. Is this a DB problem on Tesla's part (my 2016 X works fine)? Is there a fix coming?
Not sure it should be classified as a FIX. AP2 only works on what we in CA call FREEWAYS. I'm not sure what is always meant by a "divided" highway. Our large "surface streets" here have 3 lanes going each way with a divider in the middle. Is that a divided highway? They also have traffic lights and left/right turn lanes, which I think is the difference. I.e., not like FREEWAY interchanges and FREEWAY EXITS with no traffic lights.
 
Not sure it should be classified as a FIX. AP2 only works on what we in CA call FREEWAYS. I'm not sure what is always meant by a "divided" highway. Our large "surface streets" here have 3 lanes going each way with a divider in the middle. Is that a divided highway? They also have traffic lights and left/right turn lanes, which I think is the difference. I.e., not like FREEWAY interchanges and FREEWAY EXITS with no traffic lights.

US 84 in West Texas is a good example of a divided highway. It's not controlled-access like a freeway, but it seems like AP should handle it the same. There are lights, but only in towns, so you can easily go 30 miles without a light. Speed limits are 75 mph.

 
Makes me wonder why AP1 and AP2 are different when it comes to lane changes.
As long as you're on any divided multi-lane road, AP1 will auto lane change? But AP2 depends on a variable in the ADAS map tiles. Any anecdotes of AP1 ever allowing you to lane change into an oncoming lane? Once, my AP2 wanted to change me into the dividing wires, i.e. to the left of the left lane. A sensing problem, or better data?
 
My AP1 vehicle would work on any road, and I never experienced a crossover into an oncoming lane (although it did periodically want to visit a home or two off a divided highway :)). US 17 in Brunswick County has large stretches of divided highway, and some parts of it have homes on the sides.

It worked fabulously on local streets here in Naples, which have a minimum of two lanes in each direction, most with three. There are multiple lights, of course, and all have dividers.

I must have missed this shortcoming of AP2.

I hope it gets fixed soon......
 
Sorry, but this is not true. The point that getting quality input is the primary factor is a good one, but your claim of 100% reliability is simply not accurate, especially in a machine learning / AI based system. What you would need is enough data on the combinatorics of those particular objects to replicate every situation... ever, and in enough quantity to be able to determine outcomes based on the various choices made.

This is the core of "decision making": knowing the situation is not enough (for a computer); it has to make choices over ultimately millions of combinations which define the current context (what those objects are doing, what they were doing, and all their metrics: speed, size, conformity to the road rules, etc.).

Maybe what you mean is that in time, with enough data, having clarity on the inputs can theoretically lead to such reliability, given infinite storage and infinite computing power. But we are still far from that.
Not using any AI / training for path planning and decision making. AI is only needed for visual perception.

Pathing can be entirely solved with "Software 1.0" code. I wrote a huge post somewhere back here with an example of how that can be done. Basically, once you have all the data about your environment and detailed observed metadata about the objects in it, the rules are simple:
1. Don't kill people
2. Don't damage property (except if you have to for #1)
3. Don't break the law (except if you have to for #1-2)
4. Don't make passengers uncomfortable (except if you have to for #1-3)

To achieve this you:
1. Gather data (difficult part)
2. Evaluate possible future movements based on object types (e.g. kids take up more space due to unpredictability, so slow down to avoid a possible intersect).
3. Fill in unverified/blank spots with worst-case objects (e.g. a cyclist behind a wall corner) and predict their paths too.
4. Plan all possible driving routes/speeds that avoid intersecting these entities, ranked by probability of collision.
5. Prioritize according to the list above, with thresholds for the most acceptable risk. If the acceptable risk were 0%, the car would probably need to drive really slowly everywhere, so you must accept some risk (much lower than a human's). Risk can probably be reduced even further once all cars are autonomous and roads/other cars can provide info.
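The four-rule priority order maps naturally onto lexicographic cost comparison. A minimal sketch, where the candidate trajectories and all the risk numbers are invented for illustration:

```python
# Each candidate trajectory gets a cost tuple ordered by the rules:
# (risk of injuring people, risk of property damage, law violation,
# passenger discomfort). Python compares tuples element by element,
# so min() enforces rule #1 over #2 over #3 over #4 automatically.

def pick_trajectory(candidates):
    """candidates: dict of name -> (p_injury, p_damage, p_violation,
    discomfort). Returns the lexicographically cheapest candidate."""
    return min(candidates, key=lambda name: candidates[name])

candidates = {
    "maintain_speed":   (0.30, 0.10, 0.0, 0.0),  # risks hitting someone
    "swerve_onto_curb": (0.00, 0.90, 1.0, 0.8),  # damages property, breaks law
    "brake_hard":       (0.00, 0.00, 0.0, 0.9),  # legal, merely uncomfortable
}
print(pick_trajectory(candidates))  # brake_hard
```

The hard part this sketch hides is exactly what the other posters object to: producing trustworthy probabilities for each tuple in the first place.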

The algorithm is covered in more detail in the other post.

The number of situations/combinations is not really an issue. Even modern open-world games have billions upon billions of combinations. Programming is not about enumerating the combinations, but about defining the generic rules.