Biased Forbes article again proves misunderstanding of how Tesla's betas work.

This article by Forbes completely misses the point of how Autopilot development works, and it essentially ignores the fact that other cars don't even attempt red-light stops, let alone let users opt in to testing such a feature.

I don't think the article misses the point of how AP development works. The article is not against the feature itself, and it is not against rolling out more advanced automated driving systems to consumer cars. But it is arguing that Tesla's way of developing AP is unsafe: it does not matter if owners opt in; a feature as potentially risky as traffic-light response should not be distributed to owners while it is still largely untested. Specifically, the article argues that a feature like traffic-light response deserves more robust redundancy (such as HD maps) and more validation and testing before wide release. You can still release OTA features to customers; just make sure they are reliable and safe first. I think those are sensible arguments to make.
 
Specifically, the following are the passages I point to.
“Any manufacturer of any product that pushes out a safety critical feature that is not fully developed, tested and validated is guilty of negligence”

It's not negligence to release this feature unless it is released on by default and without a way to disable it. Also, there are multiple warnings that say not to use the feature in ways that impede or endanger the drivers around you. The majority of Tesla drivers know when and when not to test this.

Their lack of understanding of Tesla's tech comes into play when they say “One solution to this is vehicle to infrastructure (V2I) communications” and “One of the reasons almost every company developing AD that isn’t Tesla uses high definition maps is so that they know exactly where to look for these signals”. This is GPS, not HD maps - the author proves over and over that he doesn't understand the subject at the level required to write articles about it. It is highly reminiscent of the old idea that we should put magnets in all the streets to guide vehicles - wasting billions retrofitting infrastructure for something computer vision will solve.
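
To make that concrete: knowing where to look for a signal takes a plain coordinate lookup, not an HD map. Here's a rough sketch of the idea - every coordinate, name, and radius below is made up for illustration, not anything from Tesla's actual stack:

```python
import math

# Hypothetical sketch: fixed traffic controls stored as plain GPS points.
# All coordinates and names are invented for illustration.
KNOWN_SIGNALS = [
    {"lat": 37.4430, "lon": -122.1600, "kind": "traffic_light"},
    {"lat": 37.4418, "lon": -122.1585, "kind": "stop_sign"},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def signals_ahead(car_lat, car_lon, radius_m=150):
    """All known controls within radius_m of the car - where vision should look."""
    return [s for s in KNOWN_SIGNALS
            if haversine_m(car_lat, car_lon, s["lat"], s["lon"]) <= radius_m]

print(signals_ahead(37.4428, -122.1598))  # only the nearby traffic light
```

The coordinate hint is cheap and optional; the hard part - actually reading the light - is a vision problem either way.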

“There isn’t another major automaker in the world that sends beta versions of safety critical systems to customers” - he admits this is beta, yet clearly doesn't know that it is not hard-coded to be always on, completely missing the nature of the release.

“Customers vary widely in knowledge and experience and should never, ever be used as testers for these sorts of systems.” This is an example of a self-selecting sample: people who don't know about these systems will not turn them on, or even know how to (assuming they don't read to the end of both warning statements, where it shows how to enable it).

The author is 100% correct to be cautious about this release, but it's clear he doesn't know the nuances behind it well enough to write an informed article without deeming it "reckless". Also, the car defaults to stopping and only goes forward when the user tells it to. Hardly reckless.

Now, had this been released turned on by default, that would have been reckless and irresponsible. Tesla released this in a way where people can test out the feature, participate in the supervised machine-learning training, and help develop it further so that it works well one day.
 
Specifically, the following are the passages I point to.
“Any manufacturer of any product that pushes out a safety critical feature that is not fully developed, tested and validated is guilty of negligence”

It's not negligence to release this feature unless it is released on by default and without a way to disable it. Also, there are multiple warnings that say not to use the feature in ways that impede or endanger the drivers around you. The majority of Tesla drivers know when and when not to test this.

I agree the feature has to be turned on first, so Tesla is not forcing an untested feature on unsuspecting owners. That would be worse. But the article holds that it is still negligence to even give owners the option to turn on an untested "beta" feature, even with warnings. The article is arguing that the automaker should fully test and validate the feature and only release it when it is reliable and safe to use.

Their lack of understanding of Tesla's tech comes into play when they say “One solution to this is vehicle to infrastructure (V2I) communications” and “One of the reasons almost every company developing AD that isn’t Tesla uses high definition maps is so that they know exactly where to look for these signals”. This is GPS, not HD maps - the author proves over and over that he doesn't understand the subject at the level required to write articles about it. It is highly reminiscent of the old idea that we should put magnets in all the streets to guide vehicles - wasting billions retrofitting infrastructure for something computer vision will solve.

But aren't you approaching it from the assumption that HD maps are useless and that Tesla has the right approach to focus on camera vision only? Not everybody agrees with that approach. The article is expressing the other view that HD maps should be used.

“There isn’t another major automaker in the world that sends beta versions of safety critical systems to customers” - he admits this is beta, yet clearly doesn't know that it is not hard-coded to be always on, completely missing the nature of the release.

Again, the issue is not whether the feature is opt-in. You seem to be taking the view that as long as Tesla owners know it is "beta", it is OK to release it as such. The article disagrees: it argues that a "beta" feature should never be released in the first place, regardless of whether it is opt-in or not.

“Customers vary widely in knowledge and experience and should never, ever be used as testers for these sorts of systems.” This is an example of a self-selecting sample: people who don't know about these systems will not turn them on, or even know how to (assuming they don't read to the end of both warning statements, where it shows how to enable it).

We have no way of knowing that owners who don't know how to use the system will choose not to turn it on. The point of the article is well taken: Tesla owners have a wide range of knowledge and experience, and we can't assume that all owners will be smart about the feature. There are plenty of owners who shoot YouTube videos who, if you ask me, should never be allowed to use AP, because they use it very irresponsibly. Heck, I've seen the YouTube video of one owner who reads the release notes on camera and still mistakenly thinks the update is "city NOA", and keeps expecting the car to make turns automatically. So you cannot assume anything about owners. The article is simply saying that owners, who are not professional testers and who come with varying levels of knowledge, should not be put in the position of being "beta testers".
 
I agree the feature has to be turned on first, so Tesla is not forcing an untested feature on unsuspecting owners. That would be worse. But the article holds that it is still negligence to even give owners the option to turn on an untested "beta" feature, even with warnings. The article is arguing that the automaker should fully test and validate the feature and only release it when it is reliable and safe to use.

To some extent I agree that there is some risk that someone may enable the feature, then go out and fall asleep while the car is on the road. But realistically, no one in their right mind is going to test a beta feature that drives this roughly without a foot hovering over the brake pedal, ready to take control immediately. My only assertion is that this is an internally tested feature that needs a bunch more polish and real-world supervised fleet data to get better faster. A soccer mom with a car full of kids is not going to be testing this out.

But aren't you approaching it from the assumption that HD maps are useless and that Tesla has the right approach to focus on camera vision only? Not everybody agrees with that approach. The article is expressing the other view that HD maps should be used.

I don't think HD maps are useless per se in certain scenarios, and they certainly wouldn't hurt. But they are a crutch that Tesla aims to bypass entirely with a more generalized approach that works everywhere, not just within pre-scanned areas. He also doesn't understand that an HD map is only used for delta navigation: the car gets the data, matches that dataset's radial segment to its current surroundings to know how to navigate through them, and then reacts to changes relative to the dataset to navigate around new obstacles appropriately. Tesla is going after the larger, more difficult problem of a generalized solution that reacts in real time to novel situations. Optimizing something that isn't needed (HD maps) is a waste of time when the aim is to solve the vision problem. Stop signs can easily be mapped to GPS coordinates without the overhead of huge HD maps, but if that stop sign moves, the dataset is out of date. The author clearly doesn't understand that nuanced difference. To his credit, HD maps are indeed a requirement for systems whose computer vision is not as advanced as Tesla's presumably is, and they would add to the safety of a final system built on them.
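
If it helps, here's the "delta" idea in sketch form - a toy illustration with feature names I invented, not anyone's real map format. The car diffs what its sensors currently see against what the stored map segment predicts, and only the disagreements need real-time handling:

```python
# Toy sketch of delta navigation against an HD map segment.
# Feature names are invented for illustration.

def diff_against_map(map_features: set, observed_features: set):
    """Split the scene into map-confirmed, newly appeared, and missing features."""
    confirmed = map_features & observed_features   # map and sensors agree
    appeared = observed_features - map_features    # e.g. a cone or stalled car
    missing = map_features - observed_features     # e.g. a relocated stop sign
    return confirmed, appeared, missing

# The map says there is a stop sign here; vision no longer sees it.
# The stored dataset is now stale until the area is re-scanned.
confirmed, appeared, missing = diff_against_map(
    map_features={"stop_sign@node_17", "lane_edge_left"},
    observed_features={"lane_edge_left", "traffic_cone"},
)
print(missing)   # {'stop_sign@node_17'}
print(appeared)  # {'traffic_cone'}
```

A vision-first system skips the stored prediction entirely and has to classify everything in the "appeared" bucket in real time - a harder problem, but one that never goes stale.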

Again, the issue is not that the feature can be opted in. You seem to be taking the view that as long as Tesla owners know that it is "beta" then it is ok to release it as such. The article disagrees. The article is arguing that a "beta" feature should never be released in the first place, regardless of whether it can be opted-in or not.

Not only do users know it is in beta, they also learn the implications of relying on it while commuting almost immediately, because it reacts and tells you what it is going to do before it does so. I agree with the article to the extent that releasing a beta feature could prove dangerous, since its use cannot be controlled, but enabling this feature puts the user on the highest of alerts. In the worst-case scenario, someone could forget it is on, get into trouble in an edge case, and learn a lesson very quickly. I have personally tried the feature, and there is no way I will be testing it with family in the car or in dense traffic, given how often it stops and slows; but it was clearly tested internally, which negates the assertion that it was released with zero testing. A car that stops by itself for traffic controls is arguably safer than a car that lacks the feature entirely, yet the no-stop car was released anyway. I get his point of view, but he is being pretty hyperbolic about the dangers - clearly not researching how the car defaults to slowing down and stopping unless the user inputs otherwise. He has not even experienced the feature yet. The user still needs to pay attention, just as in any and all scenarios, but he ignores that even in its beta form this adds a layer of safety over the feature not being available at all.

We have no way of knowing that owners who don't know how to use the system will choose not to turn it on. The point of the article is well taken: Tesla owners have a wide range of knowledge and experience, and we can't assume that all owners will be smart about the feature. There are plenty of owners who shoot YouTube videos who, if you ask me, should never be allowed to use AP, because they use it very irresponsibly. Heck, I've seen the YouTube video of one owner who reads the release notes on camera and still mistakenly thinks the update is "city NOA", and keeps expecting the car to make turns automatically. So you cannot assume anything about owners. The article is simply saying that owners, who are not professional testers and who come with varying levels of knowledge, should not be put in the position of being "beta testers".

Agreed - we also don't know if irresponsible owners will put their phones down and not text while driving, or even pay attention at all while any car is on cruise control. Tesla didn't release a beta feature that accelerates when it sees people crossing the road, or one that races to beat trains at railroad crossings. The feature literally tries to capture traffic controls and stop by default. If the user gives no input, they will be sitting stopped at a green light, highly annoyed by the people honking behind them. The incentive to pay attention far outweighs the urge not to - beta-testing veteran or not. We have to draw the line somewhere: if someone is really dumb enough to skip all the warnings, manually enable the feature (ignoring another warning), and not pay attention to all the flashing warnings popping up while using it, they probably shouldn't be driving anyway. (And I have seen the same idiotic YouTube videos, which are vastly outnumbered by videos from responsible users who don't want their cars damaged and so don't take unnecessary risks for a few online clicks.)
 
As someone with a MY on order (no FSD), I actually agree with the article - maybe not with its tone, but with its conclusions. There is a difference between a Tesla owner reading the warnings and accepting the risks, and other drivers and pedestrians who would be impacted and are unaware of what's going on. Pushing out unfinished automation that can easily kill or injure is irresponsible. We already know there are people who won't read warnings and won't understand the risks.
I don't mind underdeveloped beta features, as long as they are not safety related. Would anyone accept a car with experimental brakes, airbags, or seat belts? What about an experimental child seat?
In essence, that's what we have here.
I would love nothing more than for this tech to succeed, but if it's pushed out irresponsibly, all it's going to do is increase insurance rates for Tesla owners, generate bad publicity for Tesla, and force regulations that will nerf the tech into the ground. And that's not even mentioning potential injuries.

To add: while it does stop by default on green, drivers behind the Tesla will be making certain assumptions - like the car not stopping at a green light. They may be forced to drive around it, increasing the risk to themselves, or may even hit it from behind. While technically they would be responsible, the risk of injury to both drivers is still there.
 
Having used the traffic-light feature now, I will say that Tesla has certainly taken some precautions for safety. There are, of course, multiple warnings about the feature being beta, and the driver has to opt in. The speed is also restricted to the speed limit, so the car is going slower and the driver has more time to react if necessary. And there are plenty of notifications on the screen to tell the driver what the car is doing. Having said that, I still agree that releasing "beta" features to the general public that could potentially cause harm or death is very risky and unwise.
 
It’s hilarious and pathetic that people will write about or argue about something they have less than zero experience with.

I've been extensively testing this feature since my car received it, and it works exceptionally well. In any case where the car wasn't entirely sure, it chose to err on the side of caution and began to slow down.

Guess what? As a reasonably intelligent person using common sense, I also began testing the capabilities of the new firmware in minimal-to-no-traffic situations until I had a full understanding of it.

No, the car does not brake so suddenly and so hard that a reasonably responsible driver would hit you from behind. Never mind the FACT that the driver behind is responsible for NOT hitting the car in front of it, per rules of the road such as not tailgating and paying attention, EVEN IF the car ahead brakes suddenly.

So much ridiculousness in the human race. I'm ashamed on all our behalves.
 
I complained a bunch when Tesla released the lane-departure feature that defaulted to on and yet was clearly not ready. It got to the point where I held back all software upgrades until it could be defaulted off. Just one opinion.

I would normally agree that proper information about the beta nature of the software should be sufficient, but then I was (not completely, really) surprised at the number of people who did the Tide-Pod thing and then, just in the last few days, reported to poison control centers after trying to disinfect themselves internally.

It's a really tough call when common sense is so uncommon and a product manufacturer has to deal with such a wide range of users. The reality is that any producer of a product with the potential to cause harm, even due to the incompetence, inattention, or naivety of its user, has a pretty high bar to meet in today's world.

I don't know if I'd call it negligence, but it may be reckless in its potential blowback to the company's image (not that the media seems fair in any way with Tesla). I just know that if I were in charge, I'd be a bit more cautious with some features. And I do agree a lot with:

"Any system that mandates human supervision is neither autonomous nor self-driving. It is at best partially automated. "

My opinion, at least today with the current system capabilities.
 
I don't think HD maps are useless per se in certain scenarios, and they certainly wouldn't hurt. But they are a crutch that Tesla aims to bypass entirely with a more generalized approach that works everywhere, not just within pre-scanned areas. He also doesn't understand that an HD map is only used for delta navigation: the car gets the data, matches that dataset's radial segment to its current surroundings to know how to navigate through them, and then reacts to changes relative to the dataset to navigate around new obstacles appropriately. Tesla is going after the larger, more difficult problem of a generalized solution that reacts in real time to novel situations. Optimizing something that isn't needed (HD maps) is a waste of time when the aim is to solve the vision problem. Stop signs can easily be mapped to GPS coordinates without the overhead of huge HD maps, but if that stop sign moves, the dataset is out of date. The author clearly doesn't understand that nuanced difference. To his credit, HD maps are indeed a requirement for systems whose computer vision is not as advanced as Tesla's presumably is, and they would add to the safety of a final system built on them.

The author is an idiot, and here's why: the ‘beta’ version has already proven itself in this regard.

I drove through a non-traffic-lighted intersection (a two-way stop, with the stop signs on the cross street, not in my direction). The car drove through the intersection on TACC, as it should.

FOUR hours later I drove back the other way to discover traffic lights being erected. Only half the light poles were up, and of course none of the traffic lights were working.

At first it seemed like the car would just go through the intersection again, but at about 300 ft it started to ‘compute’. By 100 ft it had decided there was enough ‘evidence’ in the incomplete, non-working set of traffic lights to slow for a ‘traffic condition’.
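
For the curious, that "compute, then decide" pattern looks a lot like evidence accumulation as the intersection approaches. Here's a rough guess at the shape of such logic - every threshold and score below is invented by me for illustration, not taken from Tesla:

```python
# Guesswork sketch of "accumulate evidence, then err toward slowing".
# All thresholds and scores are invented for illustration.

def should_slow(frame_scores, distance_ft, slow_at=0.8, unsure_floor=0.3, commit_ft=100):
    """frame_scores: per-frame detector confidence that a signal is present,
    in the order seen while approaching. Returns True if the car should
    treat this as a traffic condition and slow."""
    evidence = 0.0
    for score in frame_scores:
        evidence = max(evidence, score)  # keep the strongest sighting so far
        if evidence >= slow_at:
            return True                  # confident: slow well in advance
    # Still ambiguous at the commit distance: partial evidence wins (fail safe).
    return distance_ft <= commit_ft and evidence >= unsure_floor

# Half-built, unlit lights: weak but growing evidence, decided by 100 ft.
print(should_slow([0.1, 0.2, 0.4, 0.5], distance_ft=100))  # True
print(should_slow([0.0, 0.05], distance_ft=100))           # False - nothing there
```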

OEM car systems depending on maps and geofencing to get them to FSD are dead in the water.
 
As someone with a MY on order (no FSD), I actually agree with the article - maybe not with its tone, but with its conclusions. There is a difference between a Tesla owner reading the warnings and accepting the risks, and other drivers and pedestrians who would be impacted and are unaware of what's going on. Pushing out unfinished automation that can easily kill or injure is irresponsible. We already know there are people who won't read warnings and won't understand the risks.
I don't mind underdeveloped beta features, as long as they are not safety related. Would anyone accept a car with experimental brakes, airbags, or seat belts? What about an experimental child seat?
In essence, that's what we have here.
I would love nothing more than for this tech to succeed, but if it's pushed out irresponsibly, all it's going to do is increase insurance rates for Tesla owners, generate bad publicity for Tesla, and force regulations that will nerf the tech into the ground. And that's not even mentioning potential injuries.

To add: while it does stop by default on green, drivers behind the Tesla will be making certain assumptions - like the car not stopping at a green light. They may be forced to drive around it, increasing the risk to themselves, or may even hit it from behind. While technically they would be responsible, the risk of injury to both drivers is still there.

My overall contention is that the feature being released adds protections to the car; it doesn't take them away. Before the update, the car (like all others with cruise control) would drive right through stop signs and traffic lights; now it will slow to a stop at them by default if the user enables the feature. The only way the user will know how to enable it is by reading to the end of the warning stack, where it says how to do so. Then, as the car drives, it pops up a warning for each intersection stating it will slow to a stop in X feet. I think the confusion here is about the behavior of the feature.

How is adding this feature any more dangerous than a car with just basic cruise control that doesn't steer OR stop at all? One could argue that driver complacency could be an issue, but no more so than in the former example. Complacency with cruise control that has no active monitoring is infinitely worse than a system, beta or not, that is literally programmed to stop by default at traffic controls. The article is hyperbolic at best, and having used this system since its release, I can say experience allays the fears it presents. And I agree, the tone of it is ridiculous.

Now, if I am missing some other feature implemented in this beta, I will of course correct my statement. My issue is that people just see the word "beta" or "developmental" and automatically (if justifiably) assume the worst. But in reality, even in its beta form, the target audience wanting to test this in the wild is more likely to be responsible with it - especially in a new car.

What else should I worry about going wrong here?
 
The thing with this feature is that it stops by default, and if you override it and a light turns red, it ignores your override.

This rollout requires a LOT of attention by design. Ignoring it would result in a car permanently stopped at a traffic junction. It inherently fails safe.

I can’t think of a lower-risk rollout option. If anything it forces the driver to pay more attention than they normally would without any ADAS features.
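
For anyone who wants it spelled out, that fail-safe structure reduces to a tiny decision table. This is my reconstruction from the behavior described in this thread, not actual Tesla code:

```python
from enum import Enum, auto

class Light(Enum):
    GREEN = auto()
    RED = auto()
    UNKNOWN = auto()

# Reconstruction of the fail-safe behavior described in this thread
# (not Tesla's actual code): the default at any detected traffic control
# is STOP; a driver confirmation permits proceeding, but red revokes it.

def action(light: Light, driver_confirmed: bool) -> str:
    if light is Light.RED:
        return "STOP"          # red always wins, even over a confirmation
    if driver_confirmed:
        return "PROCEED"       # driver takes responsibility for going
    return "STOP"              # no input: stay stopped, even on green

assert action(Light.GREEN, driver_confirmed=False) == "STOP"  # honking ensues
assert action(Light.RED, driver_confirmed=True) == "STOP"     # override ignored
assert action(Light.GREEN, driver_confirmed=True) == "PROCEED"
```

With no input at all, the car just sits there - annoying at a green light, as noted above, but it fails stopped rather than failing into motion.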
 
My overall contention is that the feature being released adds protections to the car; it doesn't take them away. Before the update, the car (like all others with cruise control) would drive right through stop signs and traffic lights; now it will slow to a stop at them by default if the user enables the feature. The only way the user will know how to enable it is by reading to the end of the warning stack, where it says how to do so. Then, as the car drives, it pops up a warning for each intersection stating it will slow to a stop in X feet. I think the confusion here is about the behavior of the feature.

How is adding this feature any more dangerous than a car with just basic cruise control that doesn't steer OR stop at all? One could argue that driver complacency could be an issue, but no more so than in the former example. Complacency with cruise control that has no active monitoring is infinitely worse than a system, beta or not, that is literally programmed to stop by default at traffic controls. The article is hyperbolic at best, and having used this system since its release, I can say experience allays the fears it presents. And I agree, the tone of it is ridiculous.

Now, if I am missing some other feature implemented in this beta, I will of course correct my statement. My issue is that people just see the word "beta" or "developmental" and automatically (if justifiably) assume the worst. But in reality, even in its beta form, the target audience wanting to test this in the wild is more likely to be responsible with it - especially in a new car.

What else should I worry about going wrong here?

A couple of things:
1. Slamming on the brakes at a green light, or any other erratic driving behavior, is not safe. The issue of liability aside, it's something that can cause injury to, or confusion in, other drivers. Sudden stops and erratic maneuvers are justified only as an emergency response, not as the routine behavior of a vehicle.
2. It sounds like Tesla is not yet confident in 100% detection and accuracy of traffic conditions - that's why it requires confirmation. In that case, it can lead to driver complacency, and the car may blow past a traffic control device or an intersection.

Accepting and understanding risks is fine if the driver doing the accepting is the only one who will be affected in case of failure. That's not the case here.
 
A couple of things:
1. Slamming on the brakes at a green light, or any other erratic driving behavior, is not safe. The issue of liability aside, it's something that can cause injury to, or confusion in, other drivers. Sudden stops and erratic maneuvers are justified only as an emergency response, not as the routine behavior of a vehicle.
2. It sounds like Tesla is not yet confident in 100% detection and accuracy of traffic conditions - that's why it requires confirmation. In that case, it can lead to driver complacency, and the car may blow past a traffic control device or an intersection.

Accepting and understanding risks is fine if the driver doing the accepting is the only one who will be affected in case of failure. That's not the case here.

How many miles have you driven on this version? My car does not “slam the brakes at green lights.” When it wants to slow, it does so gradually, with plenty of warning on the screen.
 
Try driving with it before being so hyperbolic. The system is conservative to a fault.

Edit: Is your car HW2.5 or HW3?

1. I am responding to the topic of this discussion - the article in Forbes - specifically, the claim that the article is somehow biased. I'm sure I'm not the only one who agrees with its premise.
2. As I mentioned up-thread, I have a reservation for a MY but have not received it yet. My friend, with a Model S and an older computer (I think HW1; this is pre-FSD), has had mixed experiences with Autopilot and has stopped using it after owning the car for many years.
3. I did not order FSD, as I agree with the posted article's premise that a beta/incomplete feature is not acceptable when it can jeopardize lives. I have no intention of using Autopilot either, but then most of my driving is on city streets anyway.
 
1. I am responding to the topic of this discussion - the article in Forbes - specifically, the claim that the article is somehow biased. I'm sure I'm not the only one who agrees with its premise.
2. As I mentioned up-thread, I have a reservation for a MY but have not received it yet. My friend, with a Model S and an older computer (I think HW1; this is pre-FSD), has had mixed experiences with Autopilot and has stopped using it after owning the car for many years.
3. I did not order FSD, as I agree with the posted article's premise that a beta/incomplete feature is not acceptable when it can jeopardize lives. I have no intention of using Autopilot either, but then most of my driving is on city streets anyway.

So you have no car, no experience with Autopilot, and no personal miles driven with this software version (let alone any version).

Is it not obvious how an article written by someone with the exact same lack of actual experience can then produce further bias in readers who also have no first-hand experience?

Autopilot has plenty of actual faults and rough edges, but it sure would be nice if those opining the loudest had actually experienced the product first-hand.
 
So you have no car, no experience with Autopilot, and no personal miles driven with this software version (let alone any version).

Is it not obvious how an article written by someone with the exact same lack of actual experience can then produce further bias in readers who also have no first-hand experience?

Autopilot has plenty of actual faults and rough edges, but it sure would be nice if those opining the loudest had actually experienced the product first-hand.

This exactly illustrates my point to a T. The article has an underlying negative tone that is as loud as the author's lack of experience. The numbers and stats show Autopilot not only to be very safe, but also well tested and vetted, while continually improving.

Also, this latest stopping-for-traffic-control feature is so overtly naggy (by design) in its first iteration that it was obviously meant to be rolled out with training as a byproduct, for users to opt in to by choice. I will also add that had the author known anything about Autopilot (or how its AI-based development compares to other systems), he would not have suggested that a "few engineers should have tested it out first" (which they did, and always do). Twenty engineers versus tens of thousands of in-the-wild users in a relatively controlled testing environment isn't even close to adequate, as the wider user base can train the system exponentially faster - and safely. The twenty-engineer method is what legacy auto does, and it's why they are so far behind. There are responsible ways to do this, and that is exactly how Tesla is doing it.