This article by Forbes completely misses the point of how autopilot development works and essentially ignores the fact that other cars don’t even attempt red light stops or allow users to opt in to testing the feature.
Specifically, the following are the points I take issue with.
“Any manufacturer of any product that pushes out a safety critical feature that is not fully developed, tested and validated is guilty of negligence”
It’s not negligence to release this feature unless it were released on by default, with no way to disable it. There are also multiple warnings telling drivers not to use the feature in ways that impede or affect the drivers around them. The majority of Tesla drivers know when and when not to test this.
Their lack of understanding of Tesla’s tech comes into play when they say “One solution to this is vehicle to infrastructure (V2I) communications” and “One of the reasons almost every company developing AD that isn’t Tesla uses high definition maps is so that they know exactly where to look for these signals”. That is GPS, not HD maps; the author proves over and over that he doesn’t understand this to the level required to write articles on the subject. It is highly reminiscent of the old idea that we should embed magnets in all the streets to guide vehicles, wasting billions retrofitting infrastructure for something computer vision will solve.
“There isn’t another major automaker in the world that sends beta versions of safety critical systems to customers.” He admits this is a beta, yet clearly doesn’t know that it is not hard-coded to be always on, and so completely misses the nature of the release.
“Customers vary widely in knowledge and experience and should never, ever be used as testers for these sorts of systems.” This is a self-selecting sample: people who don’t know about these systems will not turn them on, or even know how to (assuming they won’t read to the end of both warning statements, where it shows how to enable the feature).
I agree the feature has to be turned on first, so Tesla is not forcing an untested feature on unsuspecting owners. That would be worse. But the article holds that it is still negligence even to give owners the option to turn on an untested "beta" feature, warnings or not. The article argues that the automaker should fully test and validate the feature and only release it when it is reliable and safe to use.
But aren't you approaching it from the assumption that HD maps are useless and that Tesla has the right approach to focus on camera vision only? Not everybody agrees with that approach. The article is expressing the other view that HD maps should be used.
Again, the issue is not that the feature can be opted in. You seem to be taking the view that as long as Tesla owners know that it is "beta" then it is ok to release it as such. The article disagrees. The article is arguing that a "beta" feature should never be released in the first place, regardless of whether it can be opted-in or not.
We have no way of knowing that owners who don't know how to use the system will choose not to turn it on. The point of the article is well taken: Tesla owners have a wide range of knowledge and experience, and we can't assume that all owners will be smart about the feature. There are plenty of owners who shoot youtube videos who, if you ask me, should never be allowed to use AP, because they use it very irresponsibly. Heck, I've seen a youtube video of one owner who reads the release notes on camera and still mistakenly thinks the update is for "city NOA," repeatedly expecting the car to make turns automatically. So you cannot assume anything about owners. The article is simply saying that owners, who are not professional testers and who come in various levels of knowledge, should not be put in the position of being "beta testers."
I don't think HD maps are useless per se in certain scenarios, and they would certainly not hurt. But they are a crutch that Tesla aims to bypass entirely with a more generalized approach that works everywhere, not just within pre-scanned areas. He also doesn't understand that an HD map is only for delta navigation: the car gets the data, matches that dataset's radial segment to its current surroundings to know how to navigate through them, and then reacts to changes relative to the dataset to route around new obstacles appropriately. Tesla is going after the larger, more difficult problem of a generalized solution that reacts in real time to novel situations. Optimizing something that isn't needed (HD maps) is a waste of time when the goal is to solve the vision problem. Stop signs can easily be mapped to GPS coordinates without the overhead of huge HD maps, but if that stop sign moves, the dataset is out of date. The author clearly doesn't understand that nuanced difference. To his credit, HD maps are indeed a requirement for systems whose computer vision is not as advanced as Tesla's presumably is, and they would add safety to a final system built on them.
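To make the distinction concrete, here is a minimal sketch of the GPS-waypoint approach described above: stop signs stored as plain lat/lon coordinates, with a simple haversine proximity check instead of a full HD map. The coordinates, the 100 m radius, and the function names are all illustrative assumptions, not anything from Tesla.

```python
import math

# Hypothetical mapped stop-sign coordinates (lat, lon in degrees) -
# illustrative values only, not real map data.
STOP_SIGNS = [(37.7749, -122.4194), (37.7755, -122.4180)]

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def nearby_stop_sign(lat, lon, radius_m=100):
    """Return the closest mapped stop sign within radius_m, or None."""
    candidates = [(haversine_m(lat, lon, s_lat, s_lon), (s_lat, s_lon))
                  for s_lat, s_lon in STOP_SIGNS]
    dist, sign = min(candidates)
    return sign if dist <= radius_m else None
```

The staleness problem mentioned above falls out directly: if a sign is physically moved, `STOP_SIGNS` is wrong until someone re-surveys it, whereas a vision system sees the sign where it actually is.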
OEM car systems depending on maps and geofencing to get them to FSD are dead in the water.
As someone with a MY on order (no FSD), I actually agree with the article: maybe not with its tone, but with its conclusions. There is a difference between a Tesla owner reading the warnings and accepting the risks, and other drivers and pedestrians who are affected but unaware of what's going on. Pushing out unfinished automation that can easily kill or injure is irresponsible. We already know there are people who won't read warnings and won't understand the risks.
I don't mind underdeveloped, beta features, as long as they are not safety related. Would anyone accept a car with experimental brakes, airbags, or seat belts? What about an experimental child seat?
In essence that's what we have here.
I would love nothing more than for this tech to succeed, but if it's pushed out irresponsibly, all it's going to do is increase insurance rates for Tesla owners, generate bad publicity for Tesla, and force regulations that will nerf the tech into the ground. Not even mentioning potential injuries.
To add: while the car does stop by default on green, drivers behind the Tesla will be making certain assumptions, such as the car not stopping at a green light. They may be forced to drive around it, increasing the risk to themselves, or even hit it from behind. While technically they would be responsible, the risk of injury to both drivers is still there.
My overall contention is that the feature being released adds protections to the car; it doesn't take them away. Before the update, the car (like any other with cruise control) would drive right through stop signs and traffic lights; now it will slow to a stop at them by default if the user enables the feature. The only way the user will know how to enable it is by reading to the end of the warning stack, where it says how to do so. Then, as the car drives, it pops up a warning at each intersection stating it will slow to a stop in X feet. I think the confusion here is over the behavior of the feature.
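The stop-by-default behavior described above can be sketched as a tiny decision function: the car slows to a stop at every detected traffic control unless the driver explicitly confirms it is safe to proceed. This is a hedged illustration of the described logic only; the threshold value and names are my own assumptions, not Tesla's code.

```python
from dataclasses import dataclass

# Hypothetical detection range; the real value is not published.
WARNING_RANGE_FT = 600.0

@dataclass
class Intersection:
    distance_ft: float       # distance to the detected traffic control
    driver_confirmed: bool   # driver confirmed it is safe to proceed

def traffic_control_action(ix: Intersection) -> str:
    """Stop-by-default logic: warn and slow unless the driver confirms."""
    if ix.distance_ft > WARNING_RANGE_FT:
        return "cruise"      # nothing detected in range yet
    if ix.driver_confirmed:
        return "proceed"     # driver explicitly took over the decision
    return f"slowing to a stop in {ix.distance_ft:.0f} ft"
```

The point of the sketch is the default branch: absent any driver input, the outcome is always the warning-and-stop path, which is why I see this as adding protection rather than removing it.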
How is adding this feature any more dangerous than a car with basic cruise control that doesn't steer or stop at all? One could argue that driver complacency could be an issue, but no more than in the former example. Complacency with cruise control that has no active monitoring is infinitely worse than a system, beta or not, that is literally programmed to stop by default at traffic controls. The article is hyperbolic at best, and having used this system since its release, I can say the experience allays the fears it presents. And I agree, the tone of it is ridiculous.
Now, if I am missing some other feature implemented in this beta, I will of course correct my statement. My issue is that people see the word “beta” or “developmental” and automatically assume the worst (justifiably). But in reality, even in its beta form, the audience that wants to test this in the wild has a higher likelihood of being responsible with it, especially in a new car.
What else should I think about going wrong here?
Couple of things:
1. Slamming on the brakes at a green light, or any other erratic driving behavior, is not safe. Liability aside, it's something that can cause injury or confusion for other drivers. Sudden stops and erratic behavior are supposed to be reserved for emergency response, not the common behavior of a vehicle.
2. It sounds like Tesla is not yet confident in 100% detection and accuracy of traffic conditions; that's why it requires confirmation. In that case, it can lead to driver complacency, and the car may blow past a traffic control device or an intersection.
Accepting and understanding risks is fine when the driver accepting them is the only one affected in case of failure. That's not the case here.
How many miles have you driven on this version? My car does not “slam brakes on green lights.” When it wants to slow, it does so gradually, with plenty of warning on the screen.
I haven't driven any - I'm responding to the Forbes article which references such occurrences.
Try driving with it before being so hyperbolic. The system is conservative to a fault.
Edit: Is your car HW2.5 or HW3?
1. I am responding to the topic of this discussion (the Forbes article), specifically the claim that the article is somehow biased. I'm sure I'm not the only one who agrees with its premise.
2. As I mentioned up-thread, I have a reservation on a MY but have not received it yet. My friend, with a M-S and the older computer (I think HW1, pre-FSD), has had a mixed experience with Autopilot, and has stopped using it after owning the car for many years.
3. I did not order FSD, as I agree with the posted article's premise that a beta/incomplete feature is not acceptable when it can jeopardize lives. I have no intention of using Autopilot either, but then most of my driving is on city streets anyway.
So you have no car, no experience with autopilot, and no personal miles driven with this software version (let alone any version).
Is it not obvious how an article written by someone with the exact same lack of actual experience can then lead to further bias in readers who likewise have no first-hand experience?
Autopilot has plenty of actual faults and rough edges but it sure would be nice if those opining the loudest had actually experienced the product firsthand.