Yep, that's the vision that we/I bought into, subject to the potential risks. Thanks for linking that. " Full Self-Driving Capability All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you. The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates. " Basically an A-to-B journey, in most circumstances, with no action required by the person in the driver's seat. It doesn't say 'unsupervised' in the first part, so that would be L3+ (somewhere between L3 and L4, since it says "almost all circumstances"). Later it goes on to talk about use "without supervision", which would be the elevation to L4, and caveats that with more risk.
The difference between the "supervised" functionality and the "unsupervised" is really just a march of the 9's to prove adequate reliability, rather than any additional functionality. (In Musk's terms, it would be functionally complete when it can do the journey, but then needs to be refined and proven to reach that level.) BTW, this isn't trivial or risk-free. I'm pretty sure we will get to the "supervised" vision, which would be fantastic. How far we get into "unsupervised" territory is probably where the major risk (and payback) is.
Very interesting. Is anyone keeping track of who is getting picked or even location? Anyone I can bribe? Ha
I wonder if this is why Traffic Control still requires stalk confirmations? It would seem that there are still edge cases that can cause traffic light detection to be unreliable.
I see the wrong color randomly too. Sometimes I see it changing colors, like it can't decide. Not sure why it happens sometimes. It's not the sun.
Me too, but I find that it's been getting better over time. For example, incorrect green light notifications are less frequent nowadays compared to a few months ago. I've also never had a situation where it ran a red or overran a yellow (after stalk confirm). I've also never seen the FSD beta run an obvious red. It has run a red before, but not because it didn't recognize it; it was more of a planning issue. Honestly, I think the traffic control recognition feature is better than an average human right now. I believe it will stop at the appropriate traffic controls more consistently than the average human does, although I have no data from Tesla to back this up lol. This is a good time to bring up this video:
It's not the new rewrite FSD beta, it's the normal FSD, and soon all cars will be receiving the new 360° "4D" FSD. See Elon's latest post.
Add very dim lights as well. Eastbound El Camino turning north on Shoreline in Mountain View. The green lights for the two left-most turn lanes are super dim and AP can't detect them.
The HW3 computer has 2 parallel sides of the processor, which can run independently? It's been stated that the existing software doesn't need the processing power of both sides. Do we know/think that the FSD software may be running on one side, collecting data and making predictions, while the other side runs the existing software (for those not in the Beta program) and does the control? If possible, that seems like a smart way of collecting data without having the new stuff active.
It has 2 for redundancy. Otherwise your robotaxi crashes if one chip fails. IIRC Green found that for a long time the B side didn't do anything, but in a late-2019 update it began running an exact copy of the A-side code, presumably so it can fail over if the A side crashes.
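Running an exact copy on the second side and only switching to it when the first goes silent is the classic hot-standby pattern. Here's a minimal toy sketch of that idea; every name, the heartbeat mechanism, and the timeout values are invented for illustration and say nothing about how Tesla's actual chips coordinate:

```python
import time

class RedundantController:
    """Toy hot-standby pair: side B computes the same commands as
    side A every cycle, but its output is only used if side A stops
    heartbeating. This is an illustrative sketch, not Tesla's design."""

    def __init__(self, heartbeat_timeout=0.05):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_a_heartbeat = time.monotonic()
        self.active_side = "A"

    def heartbeat_from_a(self):
        # Side A calls this every control cycle while it is healthy.
        self.last_a_heartbeat = time.monotonic()

    def select_output(self, a_cmd, b_cmd):
        # Both sides produce a command; only the active side drives.
        if time.monotonic() - self.last_a_heartbeat > self.heartbeat_timeout:
            self.active_side = "B"  # A went silent: fail over to B
        return a_cmd if self.active_side == "A" else b_cmd

ctrl = RedundantController(heartbeat_timeout=0.05)
ctrl.heartbeat_from_a()
print(ctrl.select_output("steer_A", "steer_B"))  # A alive -> steer_A
time.sleep(0.06)  # simulate side A crashing (heartbeats stop)
print(ctrl.select_output("steer_A", "steer_B"))  # timed out -> steer_B
```

The key property is that the standby already has full, up-to-date state (it ran the same code on the same inputs), so the takeover is immediate rather than a cold restart.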