caligula666
Member
You mean AP, right (no “E”)? What feature of EAP has been released?
I find all of this hand wringing amusing. I ordered FSD in my M3LR and have been expecting to wait a 'while'. Enjoying EAP in the meantime.
You mean AP, right (no “E”)? What feature of EAP has been released?
Interesting, what part of EAP has been delivered to you??
No local auto lane change. Hopefully "soon".
No it's not.
No they're not.
Now here I agree with you; Tesla is behind and furthermore fundamentally has the wrong hardware in the vehicles to achieve even L3; they might sorta get L3 but I personally wouldn't trust it.
This is where redundant sensors come into play, particularly lidar. To take near 0 to absolute zero.
Well, you've got a lot of wiggle room there, only because you included "near". What does "near 0" mean? If it means "for all practical purposes 0; you can pretty much always trust it", then I'm afraid you've been sold a bill of goods. The best systems around today make "near 0" errors only in favorable environments. All of them get it wrong sometimes. Just like humans.
Under what conditions and what is the false negative? False negative is way more interesting in this particular case. What is the minimum pedestrian size? What about on Halloween when all the pedestrians are small and in costumes? What about a 4-year-old dressed as a fire hydrant?
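To put some rough numbers on why "near 0" and "absolute zero" are worlds apart, here is a back-of-the-envelope sketch. Every number in it is a made-up illustrative assumption, not a measured rate from any real system:

```python
# Illustrative only: how a "near 0" per-frame false-negative rate behaves
# at fleet scale. All numbers below are assumptions for the sake of argument.

per_frame_miss = 1e-5          # assumed false-negative rate per camera frame
frames_per_encounter = 30      # assumed frames in which a pedestrian is visible

# If frame errors were independent, missing a pedestrian in EVERY frame of
# one encounter would be vanishingly unlikely...
encounter_miss = per_frame_miss ** frames_per_encounter

# ...but frame errors are usually correlated (same occlusion, same costume,
# same lighting), so a pessimistic bound treats the whole encounter as a
# single trial with the per-frame rate:
encounters_per_day = 1_000_000   # assumed fleet-wide pedestrian encounters/day
daily_misses = per_frame_miss * encounters_per_day

print(f"independent-frame model: {encounter_miss:.3g} missed encounters")
print(f"correlated (worst-case) model: {daily_misses:.1f} misses/day fleet-wide")
```

The gap between the two models is the whole argument: "near 0" only stays near 0 if errors are uncorrelated, and Halloween costumes are exactly the kind of thing that correlates them.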
nope, as Amnon said and as I reiterated, sensing is easy. driving policy aka path planning is where the struggle is REAL!
Oh yeah, then it's all easy. Smooth sailing once you've solved detection, right?
I think the exact opposite as a programmer.
sensing is easy. driving policy aka path planning is where the struggle is REAL!
I think the exact opposite as a programmer.
Give me an API to a 100% vision system that can tell me the type of objects, their metadata, position in 3D coordinates, rotation data, speed, as well as surface plane data and type of surface. Then I will code you a 100% reliable full self-driving system that can handle all possible situations.
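For the sake of discussion, the "100% vision system" API being asked for might look something like this sketch. Every name in it is hypothetical; no real perception stack exposes anything this clean or this certain, which is rather the point:

```python
# Hypothetical sketch of the perfect-perception API described above.
# All class and field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    object_type: str                      # e.g. "pedestrian", "fire_hydrant"
    position: tuple                       # (x, y, z) in metres, 3D coordinates
    rotation: tuple                       # (roll, pitch, yaw) in radians
    speed: float                          # metres/second
    metadata: dict = field(default_factory=dict)  # free-form attributes

@dataclass
class SurfacePatch:
    plane: tuple                          # plane equation coefficients (a, b, c, d)
    surface_type: str                     # e.g. "asphalt", "gravel", "ice"

def get_world_state():
    """The hypothetical oracle: perfect, complete, never wrong.

    Returns (list[DetectedObject], list[SurfacePatch])."""
    raise NotImplementedError("this is exactly the part nobody has built")
```

The catch, as the replies below argue, is that such an oracle is assumed into existence; delivering it is the unsolved problem.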
Sensing is easy, says the company whose prototype runs a red light and then blames a GoPro in the cabin for "interference"...
Mobileye autonomous vehicle runs red light in Jerusalem
Mobileye CEO Amnon Shashua says that the wireless transmitters used by the TV crew cameras created electromagnetic interference that disrupted the transponder on the traffic light. The car’s camera identified the red light but ignored that information and drove through the light.
I think the exact opposite as a programmer.
Give me an API to a 100% vision system that can tell me the type of objects, their metadata, position in 3D coordinates, rotation data, speed, as well as surface plane data and type of surface. Then I will code you a 100% reliable full self-driving system that can handle all possible situations.
Oh really, so show us that leet 100% reliable full self-driving system in a simulator?
Or why ain't you working for Waymo, Intel, BMW or Audi... (honest question)
Or better yet, get yourself a Mobileye EyeQ4 and show us what you can do.
The system was solely using V2I; it was wirelessly connected to the traffic lights, so the failure had nothing to do with vision sensing.
And no, they put the stuff on their cars.
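The reported failure mode (camera correctly saw red, V2I transponder said otherwise, V2I won) suggests a fusion rule worth sketching. This is a hypothetical illustration of a conservative policy, not how Mobileye's system actually arbitrates:

```python
# Hypothetical sketch: when the camera and a V2I traffic-light transponder
# disagree, take the MORE restrictive reading instead of letting one
# source silently override the other.
from typing import Optional

# Higher value = more restrictive signal state
RESTRICTIVENESS = {"green": 0, "yellow": 1, "red": 2}

def fused_light_state(camera_state: str, v2i_state: Optional[str]) -> str:
    # Transponder absent or jammed (e.g. by EM interference): fall back
    # to vision alone rather than trusting a corrupted V2I channel.
    if v2i_state is None:
        return camera_state
    # Disagreement: choose the safer (more restrictive) of the two.
    return max(camera_state, v2i_state, key=RESTRICTIVENESS.__getitem__)
```

Under this rule the Jerusalem scenario (camera: red, V2I: green) resolves to red. The trade-off is availability: a conservative rule will sometimes stop on a stale red, but it fails safe.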
Yeah, this trips me up on every commute, since I keep trying this on the undivided portion of my commute. I've been starting to just hang in the right lane though, since the +5 limit makes it tough to keep up with traffic anyway.
I am getting pretty proficient with signalling for auto lane change on the divided portion of my commute. I like to half-press until I see the dotted blue, then full-press to commit. It is embarrassing to let go without a full press, since it aborts even when the lane change is almost complete.
BTW, it has been a long wait, but the 3 is my first Tesla. I kept considering a CPO, but even an AP car would have been as much as the 3 was. So I have no interest in the E arguments.
It’s lacking on more than undivided routes. I have a divided, six-lane state route nearby that still doesn’t do lane changes, while AP1 has no problem with it.
Not sure it should be classified as a FIX. AP2 only works on what we call FREEWAYS in CA. Not sure what is always meant by a "divided" highway. Our large "surface streets" here have 3 lanes going each way with a divider in the middle. Is that a divided highway? They also have traffic lights and left/right turn lanes, which I think is the difference. I.e., not like FREEWAY interchanges and FREEWAY EXITS with no traffic lights.
Geez, now I understand.
I haven’t been able to change lanes around Naples even though most roads are six-lane divided, such as US 41, Vanderbilt Dr, etc.
At first, I was thinking it was the wrap or a vehicle sensor failure.
But it worked on a 90-minute trip on I-75 from Naples to Sarasota the other night, so your comment hits home. Is this a DB problem on Tesla’s part (my 2016 X works fine)? Is there a fix coming?
Not sure it should be classified as a FIX. AP2 only works on what we call FREEWAYS in CA. Not sure what is always meant by a "divided" highway. Our large "surface streets" here have 3 lanes going each way with a divider in the middle. Is that a divided highway? They also have traffic lights and left/right turn lanes, which I think is the difference. I.e., not like FREEWAY interchanges and FREEWAY EXITS with no traffic lights.
Not using any AI / training for path planning and decision making. AI is only needed for visual perception.
Sorry, but this is not true. The point that input quality is the primary factor is a good one, but your claim of 100% reliability is simply not accurate, especially in a machine learning / AI based system. What you would need is to have seen enough combinations of those particular objects to replicate every situation, ever, and in enough quantity to be able to determine outcomes based on the various choices made.
This is the core of "decision making": knowing the situation is not enough (for a computer); it has to make choices over ultimately millions of combinations which define the current context (what those objects are doing, what they were doing, and all their metrics: speed, size, conformity to the road rules, etc.).
Maybe what you mean is that in time, with enough data, having clarity on the inputs can theoretically lead to such reliability given infinite storage and infinite computing power. But we are far from that still.
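The "millions of combinations" point is easy to make concrete with rough arithmetic. All the counts below are illustrative assumptions, and they are deliberately generous to the enumeration approach:

```python
# Back-of-the-envelope: how fast the scene-state space explodes if you
# tried to have "seen" every combination. All counts are assumptions.

object_types = 20        # pedestrian, cyclist, car, hydrant, ...
behaviours = 10          # crossing, stopped, swerving, conforming, ...
speed_buckets = 10       # coarse speed discretisation
objects_in_scene = 5     # a modest urban scene

# Distinct configurations for ONE object
per_object = object_types * behaviours * speed_buckets   # 2,000

# Naive enumeration over a 5-object scene
scene_states = per_object ** objects_in_scene

print(f"{scene_states:.2e} scene states to cover for 100% by-example reliability")
```

Even with this toy discretisation the count lands around 3.2e16 states, before adding positions, weather, road geometry, or interactions over time, which is why "given perfect inputs I'll code 100% reliability" doesn't survive contact with the combinatorics.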