
Biased Forbes article again proves a misunderstanding of how Tesla's betas work.

Couple of things:
1. Slamming on the brakes on a green, or any other erratic driving behavior, is not safe. The issue of liability aside, it's something that can cause injury or confusion on the part of other drivers. Sudden stops and erratic behavior are justified only as an emergency response, not as the common behavior of a vehicle.
2. It sounds like Tesla is not yet confident in 100% detection and accuracy of traffic conditions - that's why it requires confirmation. In that case, it can lead to driver complacency, and the car may blow past a traffic control device or an intersection.

Accepting and understanding risks is fine if the driver accepting them is the only one who will be affected in case of failure. That's not the case here.

I have been using this feature since release, and it has never once slammed on its brakes, even for last-second red lights. It gives a 600-foot warning, then at around 200 feet starts slowing down, then comes to a soft stop (albeit not as smoothly as I would personally do it, but not jerky at all).
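For anyone who hasn't seen it, that staging is easy to picture as a simple distance-gated rule. Here's a minimal Python sketch of the behavior described above; the distances, names, and taper math are my own toy illustration, not anything from Tesla's actual software:

```python
# Rough sketch of the staged behavior described above (600 ft warning,
# ~200 ft slow-down, soft stop). All distances, names, and the taper math
# are illustrative only, not Tesla's implementation.

WARN_DISTANCE_FT = 600   # distance at which the on-screen warning appears
SLOW_DISTANCE_FT = 200   # distance at which the car begins to slow

def control_step(distance_to_line_ft: float, current_speed_mph: float) -> dict:
    """One control cycle while approaching a stop line / traffic light."""
    if distance_to_line_ft > WARN_DISTANCE_FT:
        return {"warning": False, "target_speed_mph": current_speed_mph}
    if distance_to_line_ft > SLOW_DISTANCE_FT:
        # Warning is shown, but speed is held until the slow-down zone.
        return {"warning": True, "target_speed_mph": current_speed_mph}
    # Inside the slow-down zone: taper target speed linearly toward zero
    # at the line, which produces a gradual stop rather than a brake slam.
    fraction_remaining = max(distance_to_line_ft, 0.0) / SLOW_DISTANCE_FT
    return {"warning": True, "target_speed_mph": current_speed_mph * fraction_remaining}

# Example: 150 ft out at 40 mph -> warning shown, target speed eased to 30 mph.
print(control_step(150, 40))
```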

For point 2, the system is no different from simple cruise control when it comes to complacency: someone who stops paying attention on Autopilot, or on plain cruise control, would be crashing anyway. And someone complacent enough to drive like this with total disregard for all the warnings, the nagging at every stoplight and sign, and the deliberate work required to turn it on is not likely to be in the target group of testers willing to participate - it is a self-selecting sample, so to speak. Tesla is of course not 100% confident in these systems, which is why we give the systems input so they learn over time. That is what machine learning is, and as the confidence level goes up, the supervision goes down. Twenty engineers aren't going to solve this without tons of data.
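To make the "supervision goes down as confidence goes up" point concrete, here is a tiny Python sketch of that feedback loop. The threshold, function name, and logging scheme are purely hypothetical illustrations of the general idea, not Tesla's code:

```python
# Minimal sketch of the "confirm until confidence is high enough" loop
# described above. Threshold, names, and logging are hypothetical.
from typing import List, Dict

PROCEED_CONFIDENCE = 0.999          # hypothetical bar for unsupervised behavior
feedback_log: List[Dict] = []       # driver responses collected as training signal

def handle_green_light(confidence: float, driver_confirmed: bool) -> str:
    """Decide what to do at a detected green light and record the driver's response."""
    # Every interaction is logged; fleet data like this is what lets the model improve.
    feedback_log.append({"confidence": confidence, "driver_confirmed": driver_confirmed})
    if confidence >= PROCEED_CONFIDENCE:
        return "proceed"            # supervision is relaxed only once confidence is high
    if driver_confirmed:
        return "proceed"            # a human confirmation overrides the uncertainty
    return "slow_to_stop"           # default: no input means erring toward a stop

print(handle_green_light(confidence=0.92, driver_confirmed=True))   # -> proceed
print(handle_green_light(confidence=0.92, driver_confirmed=False))  # -> slow_to_stop
```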

If the third point about driver acceptance held any water, no cruise control system would ever have been released by any manufacturer. Imagine that meeting: "You want to release a feature where the user just taps a button and the car drives forward without steering, stopping, monitoring the user's state of consciousness, or even verifying the user is going the speed limit and not on a winding road?" At some point the user has to take responsibility for the 4,000-pound box they are piloting. Tesla's system, even in beta form, adds a layer of protection above the everyday "dumb and blind" cruise control systems on other cars that no one seems to have a problem with.
 
The author is an idiot and here’s why. The ‘beta’ version has already proven itself in this regard.

I drove through an intersection without traffic lights (a two-way stop - the cross street stops, not my direction). The car drove through the intersection on TACC like it should.

FOUR hours later I drove back the other way to discover traffic lights were being erected. Only half the light poles were up and of course none of the traffic lights were working.

At first it seemed like the car would just go through the intersection again, but at 300ft it started to ‘compute’. By 100ft it had decided there was enough ‘evidence’ in the incomplete, non-working set of traffic lights to indicate it should slow for a ‘traffic condition’.

OEM car systems depending on maps and geofencing to get them to FSD are dead in the water.

There aren't enough zeros in the universe for me to type to convey how much I agree with this point, so I will stop at 100% agreement. The author's ignorance is absolutely glaring and shows he is still massively tied to the old world and old methods of doing this. If humans don't need HD maps, neither do cars (though it couldn't hurt, as long as it was lightweight enough and used only as a suggestion, later validated by what the car sees and interprets).

Reacting to novel situations in a generalized fashion is how this gets solved, and Tesla is doing exactly that. HD maps are cool and all (and GPS-based event triggers are too), but a smart car needs none of that. It's in the same crutch category as lidar on cars and magnets in the roads from way back when. Computer vision is what sets Tesla ahead of the pack, over the spinning-toilet-paper-tube nonsense of their "competition".
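The "map as a suggestion, validated by vision" idea is simple to sketch. A toy Python example, where every name and value is my own illustration rather than any real stack:

```python
# Toy sketch of "maps as a lightweight suggestion, validated by what the car
# sees," as described above. All names and values are illustrative only.
from typing import Optional

def resolve_intersection(map_hint: Optional[str], camera_detection: Optional[str]) -> str:
    """Vision wins whenever it detects something; the map prior only fills gaps."""
    if camera_detection is not None:
        return camera_detection          # live perception overrides the prior
    return map_hint or "unknown"         # fall back to the (possibly outdated) hint

# The anecdote earlier in the thread: the map still says "no traffic control,"
# but the cameras spot newly erected (even non-working) lights and slow down.
print(resolve_intersection(map_hint="no_traffic_control",
                           camera_detection="traffic_light"))  # -> traffic_light
```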
 
I complained a bunch when Tesla released the lane-departure feature that always defaulted to on and yet was clearly not ready. It got to the point where I held back all software updates until it could be defaulted off. Just one opinion.

I would normally agree that proper information about the beta nature of the software should be sufficient, but then I was (not completely, really) surprised at the number of people who did the Tide Pod challenge, or whatever it was, and then, just in the last few days, reported to poison control centers after trying to disinfect themselves internally.

It's a really tough call when common sense is so uncommon and a product manufacturer has to deal with such a wide range of users. The reality is that any producer of a product that has the potential to cause harm, even if due to the incompetence/inattention/naivety of its user, has a pretty high bar to meet in today's world.

I don't know if I'd call it negligence, but it may be reckless in its potential blowback to the company's image (not that the media is fair in any way with Tesla, it seems). I just know that if I were in charge, I'd be a bit more cautious with some features. And I do agree a lot with:

"Any system that mandates human supervision is neither autonomous nor self-driving. It is at best partially automated. "

My opinion, at least today with the current system capabilities.

Agreed that common sense (read: "basic/default sense") is lacking in people. The system is naggy now, while developing and learning, and eventually will not prompt when a customer is in the car. The way they implemented it here is actually very good and errs on the side of absolute caution - no input = slowing to a stop. I guess we should have all the manufacturers take away "dumb" cruise control because a few idiots ghost ride their whips?

"If you try too hard to protect idiots, they will just make a better, more advanced idiot."
 
1. I am responding to the topic of this discussion - the article in Forbes - specifically, the claim that it is somehow biased. I'm sure I'm not the only one who agrees with its premise.
2. As I mentioned up-thread, I have a reservation on a Model Y but have not received it yet. My friend, with a Model S and an older computer (I think HW1; this is pre-FSD), has had mixed experiences with Autopilot and has stopped using it, after owning the car for many years.
3. I did not order FSD, as I agree with the posted article's premise that a beta/incomplete feature is not acceptable when it can jeopardize lives. I have no intention of using Autopilot either, but then most of my driving is on city streets anyway.

If you have ever used traffic-aware cruise control on any car and have not crashed because you understood its limitations, you are not part of the group of morons who abuse these systems by sitting on their hoods, hanging outside the doors, etc. Why on earth would this article not once mention that smarter cruise control is infinitely better than just allowing a car to drive on its own because a button was pushed? That is what is dangerous. This release from Tesla is wildly naggy (as it has to be at first while training) and will gradually get amazing. Wouldn't you consider the cruise control on all other cars massively incomplete? I have driven cars with zero lane-keeping ability and no awareness of the cars/objects in front of them - yet no complaints. It baffles my mind how that works. Once you see this feature in action, I think your fears will lessen.
 
Tesla has certainly taken some precautions for safety.
....
Having said that, I still agree that releasing "beta" features to the general public that could potentially cause harm or death is very risky and unwise.
1000% disagree with the final sentence.
This is why Waymo is in the state that it is: they are so petrified of the risk that they would take a few more decades to get a solution to market (if there were no competition pushing the boundaries).

Slamming on the brakes on a green, or any other erratic driving behavior, is not safe. The issue of liability aside, it's something that can cause injury or confusion on the part of other drivers.

It does not "slam on brakes" for a green; it starts slowing down, and it is pretty gradual. Before it starts slowing down, the notification is already on the screen.
Find a friend with a Tesla with FSD and ask to test the feature; all this fear will start dissolving after a 15-minute drive.
To me, the notifications are WELL in advance and are difficult to ignore, and if you ignore the visual notification then the car starts gently slowing down -- giving you a physical notification.

If you can't respond to both of those, then you should not be on the road anyway.
 
1000% disagree with the final sentence.
This is why Waymo is in the state that it is: they are so petrified of the risk that they would take a few more decades to get a solution to market (if there were no competition pushing the boundaries).

What you call being "petrified of the risk" others might call being responsible and safe. And no, it will not take Waymo decades to get a solution to market. It will happen sooner than that. The difference is that when Waymo releases a product to market, it won't kill anybody.
 
Anytime you get in your car and drive, be it on “beta” autopilot or manual driving, you incur some sort of risk not only for yourself, but also for people around you. There is no driving that never poses any risk to people sharing the road with you.