FSD Beta Videos (and questions for FSD Beta drivers)

Speaking of which, @verygreen - I presume from experience that's how the NAND is set up. Any way you can confirm?
something along those lines. they have dm-verity measured FW A and FW B partitions that are read-only, and then separate /var for logs, /home for various snapshots, and /maps for maps (on infotainment they have /var/log for logs and /home is for settings, media cache and whatnot)
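For anyone curious what that kind of layout looks like from userspace, here's a minimal sketch (plain Python, nothing Tesla-specific) that splits /proc/mounts into read-only and writable mount points. On a scheme like the one described, the dm-verity FW A/B partitions would show up in the read-only list and /var, /home, /maps in the writable one.

```python
# Minimal sketch (not Tesla tooling): split the mount table into
# read-only and writable mount points.
def split_mounts(path="/proc/mounts"):
    ro, rw = [], []
    with open(path) as f:
        for line in f:
            _dev, mountpoint, _fstype, options = line.split()[:4]
            (ro if "ro" in options.split(",") else rw).append(mountpoint)
    return ro, rw

if __name__ == "__main__":
    ro, rw = split_mounts()
    print("read-only:", ro)
    print("writable:", rw)
```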
 
Almost 9 months ago you were touting that Tesla would have L5 in 6-9 months with ~150k miles per disengagement on avg and that it was "game over" for the competition... Now you are saying that Tesla solved vision with <10 miles per safety disengagement on avg.

The mental gymnastics of Tesla fans never ceases to amaze me.

I think @powertoold is competing with Elon for most missed FSD predictions.
 
The challenge for Tesla is to develop NNs with patterns that safely get a robotaxi through a high enough percentile of situations to be financially viable.
this. Andy is spot on. The key is to be "financially viable." Once that occurs, a virtuous cycle begins. Namely, a self-reinforcing effect that makes more resources (including capital, mind share, general brand recognition, user base and other network effects) available to improve on the solution.

However, as I have suggested elsewhere on the tmc forum, Tesla has a natural advantage over pure robotaxi plays. In contrast to Waymo and other competitors, Tesla enters this financially viable FSD virtuous cycle even before robotaxis are viable, by selling to those for whom the purchase of FSD already provides enough perceived benefit in the pre-robotaxi stage of improvement. And that earlier entry point into a virtuous cycle increases the likelihood that they also dominate robotaxi.
 
Comments like these tell me you don't understand the enormity of what v9 is accomplishing. They've rewritten the way the car understands the world from the ground up. That it's better out of the gate than the previous instance tells you that it's a rock-solid foundation to build upon in the near future via machine learning.
declaring it's rock solid is a bit premature, dontchathink?

each increment is just that. some increments are forklift upgrades (rewrites). some are tweaks. some are adjustments to changes in hardware (cough, cough).

the evolution of self driving software is not even midway to where it needs to go.

as for machine learning, it remains to be seen if this is really the ultimate way to accomplish this. some mix of it, I suspect, but perhaps a lot less than most here seem to have bought into.
 
If there's something there, I don't care what it is. I don't want to hit it. Object detection should be used to determine whether it is something to go around or something that is likely to move on its own. But if an object is detected, with almost zero exceptions (e.g. an empty plastic bag blowing in the air, a small pile of paper trash, etc.) those are the only two choices. "Run right straight into it" is never an option unless you're playing a role in the next Cannonball Run movie.
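In pseudo-code terms, the rule being described reduces to something like this toy sketch (the class names and whitelist are hypothetical, not any vendor's actual perception categories):

```python
# Toy sketch of the "two choices" rule above; classes are hypothetical.
IGNORABLE = {"plastic_bag_in_air", "small_paper_trash"}  # the rare exceptions

def plan_for_detected_object(obj_class: str, can_steer_around: bool) -> str:
    if obj_class in IGNORABLE:
        return "proceed"        # the "almost zero exceptions" case
    if can_steer_around:
        return "steer_around"   # go around it
    return "brake"              # slow/stop for it; never drive into it

assert plan_for_detected_object("plastic_bag_in_air", False) == "proceed"
assert plan_for_detected_object("unknown_debris", True) == "steer_around"
assert plan_for_detected_object("unknown_debris", False) == "brake"
```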
in the big picture, the philosophy ('trolley problem') must be well documented and agreed upon.

I'm not aware of any firm statements from any vendor who is willing (or to be honest, has thought thru the trolley problem well enough) to take a stand on what the decision tree should be for various situations.


in fact, I don't see much discussion on this site about what the right thing to do is in those situations. of course, you can take the self-centered view and say 'my car should never hit anything or damage itself' but what if 10 or 100 lives can be saved?

this is the 'last mile' (so to speak, lol) that has to be solved before we have true level 5. there will be valuations on who should die and who should get hurt, if SOMEONE has to (cars to the left and right and you have to do something to avoid the brick wall, etc).

btw, I'd love to understand how 'vision' can be used to determine which life is worth more, etc. I'm not even sure a full sensor array is enough, but that's for another thread ;)
 
Yes, idiots exist. They always will. Cars go up to, I don't know, 200+mph today. Why? Should we revoke those, because there are idiots who will street race?

My answer is no. Let's give people what they want. Most are not idiots. And just like how fast your car can go, FSD is just another feature that requires common sense and responsibility.
one thing the last few years taught me - americans, as a whole, are much dumber than we thought. just look at the pandemic and how we have people actively working against their own best interests.

to 'child proof' a mass market item is a challenge. it has to be done for 'self driving' too and you cannot assume or demand expert drivers. you can design prototypes for your expert users, but once you make it general purpose and buyable, that's a whole other level.

the stupidity of the userbase will always surprise you. just when you thought you'd seen the lowest, some new user comes along and proves the bar is lower, still.

yes, this is the riskiest time for tesla. turning a bunch of yahoos loose on this tech can have irreversible consequences. everyone is watching, so the pressure is on. and tesla tends to take chances that not everyone thinks are prudent.

betting on users to 'do the right thing' often backfires. just saying.
 
To clarify, what I was proposing there (assuming it's not a PLC with dedicated wiring for talking to a single device) was taking a pretty fast CAN bus device and turning it into a single-digit-bits per minute device that literally gets a boolean flag sent once in a while to switch between "enable camera 3" mode and "enable camera 1" mode. :D The high-speed data would go through the existing camera input.
a low speed control signal to switch sources? I didn't get that on the first read, thanks.

if that's what you have in mind, that would not be a huge problem, network-wise. that was my fear; swapping out one device with one traffic nature for another with a radically different one.
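If it helps make that concrete, here's a minimal sketch using the python-can library; the arbitration ID and one-byte payload are invented for illustration, not real Tesla bus IDs. The whole "control channel" is one occasional frame, so the traffic profile barely changes:

```python
import can  # pip install python-can

CAMERA_SELECT_ID = 0x7E0  # hypothetical arbitration ID, not a real Tesla one

def select_camera(bus: can.BusABC, camera: int) -> None:
    """Send the occasional single-byte 'enable camera N' flag."""
    msg = can.Message(arbitration_id=CAMERA_SELECT_ID,
                      data=[camera], is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # socketcan on Linux; swap interface/channel for your adapter
    with can.Bus(channel="can0", interface="socketcan") as bus:
        select_camera(bus, 3)  # flip to "enable camera 3" mode
```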
 
in the big picture, the philosophy ('trolley problem') must be well documented and agreed upon.

I'm not aware of any firm statements from any vendor who is willing (or to be honest, has thought thru the trolley problem well enough) to take a stand on what the decision tree should be for various situations.


in fact, I don't see much discussion on this site about what the right thing to do is in those situations. of course, you can take the self-centered view and say 'my car should never hit anything or damage itself' but what if 10 or 100 lives can be saved?
Realistically, there is never a situation where slamming on the brakes should cause harm unless the person behind you is doing something stupid, in which case that person is at fault. So the "don't react at all" response is always a mistake. Always.

Whether to swerve or not does, in fact, depend on what's next to you, but it is almost always possible to combine steering with either braking or acceleration to cut into a gap.

More importantly, though, if you can see an obstacle early enough to avoid it, traffic is unlikely to be so dense that you'd be prevented from adjusting your speed rapidly and slipping into a gap in traffic. If traffic is dense enough for you to be unable to steer around something, odds are either A. it happened suddenly, so you don't have time to react at all, B. everybody else has already hit it and survived, so you can safely ignore it, or C. everybody else is steering around it, so traffic is already slow.

In practice, the situations where the only option is to slam on your brakes while traveling at a high speed are rare. Most of those sorts of problems are caused by driver inattention, rather than an actual sudden problem.
 
Realistically, there is never a situation where slamming on the brakes should cause harm unless the person behind you is doing something stupid,
Do you believe Tesla's current FSD could handle the car in front going full ABS brakes all the way to a stop in every case without hitting the car in front, or is FSD "stupid"?

A follow distance of 2 seconds in many areas of the USA will end up with multiple cars moving into that space. It's 176 feet at 60 MPH. To say that it should always be safe to just jam your brakes and anyone that hits you is "stupid" is not a very honest view of driving. People drive closer than this because they have done it for decades with no issues; humans simply do not drive this way. This is why the automotive industry and NHTSA actually consider inadvertent brake application to be quite dangerous, rather than fully safe as you argue.
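(The 176-foot figure is straightforward unit conversion; a quick sketch for anyone checking the math:)

```python
# Quick check of the 2-second follow gap quoted above.
def follow_gap_feet(speed_mph: float, gap_seconds: float) -> float:
    return speed_mph * 5280 / 3600 * gap_seconds  # mph -> ft/s, then * gap

print(follow_gap_feet(60, 2))  # 176.0 ft at 60 MPH
```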

All of this is the same thing that will happen with AP if it only crashes once every 50,000 miles. People will get complacent and pay less attention, and then it will steer into a barrier one day, and you'll call them "stupid." But it's not stupid, it's human nature, and good systems designed for humans take human nature into account. You're effectively saying that everyone that has ever been harmed by complacency was stupid. Yet if you search google for "safety systems complacency" you find articles on how to battle this aspect of human nature, not just a bunch of safety experts calling users stupid.
 
in the big picture, the philosophy ('trolley problem') must be well documented and agreed upon.

I'm not aware of any firm statements from any vendor who is willing (or to be honest, has thought thru the trolley problem well enough) to take a stand on what the decision tree should be for various situations.


in fact, I don't see much discussion on this site about what the right thing to do is in those situations. of course, you can take the self-centered view and say 'my car should never hit anything or damage itself' but what if 10 or 100 lives can be saved?

this is the 'last mile' (so to speak, lol) that has to be solved before we have true level 5. there will be valuations on who should die and who should get hurt, if SOMEONE has to (cars to the left and right and you have to do something to avoid the brick wall, etc).

btw, I'd love to understand how 'vision' can be used to determine which life is worth more, etc. I'm not even sure a full sensor array is enough, but that's for another thread ;)
Trolley problem already discussed in other threads. It's brought up a lot, really, but the capabilities of self-driving cars (and also how a human would react in the first place) make it irrelevant at this point.
Autonomous Car Progress
 
I'd actually kind of go further; there needs to be 2 people in the car if fsd beta is being tested. sorry, but I feel that way about this and so should lots of you.

one ENTHUSIAST who is trying to show tesla and his new shiny toy in the best light is not paying enough attention to ANYTHING. they are overwhelmed with inputs and not only that, they are talking at the same time, trying to advertise their channels and all that. don't like this aspect of the public beta at all. personally, no one should be ENCOURAGED to run cameras and be a youtuber. this is not the way to run a safe public beta test on public roads, dammit.

sorry to rain on y'alls parade, here; and I know I'm in the minority here. everyone wants to SEE this evolve. I want to, too; but not like this.

a responsible ceo would stop this behavior and run proper beta tests. yes, with less publicity. it's how it should be done.

eventually, one of the YTers is gonna hit someone and that will be the end. lawmakers may even go quite far to put us all backwards. the red states would just LOVE to see this experiment fail.

I hope tesla rethinks how this is being done and makes serious changes. this is NOT an advertising stunt, guys. grow the hell up, tesla. stop being frat boys.
I agree that there are numerous issues with how it's being tested, but I don't think there needs to be 2 people in the car. The main reason for two people in a vehicle being tested for L4 autonomous driving is that there is too much work for a single person to handle. A single person can't simultaneously stay situationally aware of the driving environment, watch what the car is doing and seeing, and document everything necessary.

But L2 driving doesn't require seeing what the car is doing or any documentation. All a person driving an L2 vehicle has to do is make sure the vehicle is doing what they want it to do.

They have ONE job to do, and that's it.

The biggest danger is complacency that sets in when an autonomous system performs well. It's exceptionally hard to oversee a system that works well 99% of the time. Sometimes this happens unconsciously when a person starts to lose situational awareness, but isn't purposely distracted. In other cases, like the Uber fatality, the safety driver completely ignored their responsibility. In any case, FSD Beta is nowhere near a point of inducing complacency. I simply can't see anyone of sound mind trusting it at all.

As to the FSD beta I think the entire thing needs to be revamped.
  • The FSD Beta testing shouldn't be tied to the early access program at all. That's fine for things like UI changes or SW features, but not driving features.
  • The internal testing needs to be expanded considerably, even to the point of contracting FSD beta owners to do testing. Sure, some will find this problematic because there is only one person in the vehicle.
  • Make a major effort at fixing maps and navigation issues. Release this separately from the FSD beta, with a way for people to report issues with navigation, NoA, etc., and to track whether a reported issue has been fixed or is in the process of being fixed.
  • Limit FSD Beta both in what it can do and in where it works, with most of the focus on improvements to NoA, rest stop offramps, reverse summon, etc. NoA is already geofenced, so expand from it to include different road types connected to roads that already work with NoA.
 
Do you believe Tesla's current FSD could handle the car in front going full ABS brakes all the way to a stop in every case without hitting the car in front, or is FSD "stupid"?
You're ignoring the possibility that the answer is "both". :D

A human takes, on average, 2.3 seconds to start braking. The Tesla should take, on average, maybe 100 milliseconds or so. So it should be possible for the Tesla to handle another car panic braking without too much trouble, whereas a human driver would probably cause a rear-end collision if not leaving a proper following distance.
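Working those reaction times out as distance (using the post's 2.3 s human average and its ~100 ms guess for the computer; both numbers come from the paragraph above, not from measurement):

```python
# Distance covered before braking even begins, at 60 mph (88 ft/s).
def reaction_distance_feet(speed_mph: float, reaction_s: float) -> float:
    return speed_mph * 5280 / 3600 * reaction_s

for label, t in [("human, 2.3 s", 2.3), ("computer, 0.1 s", 0.1)]:
    print(f"{label}: {reaction_distance_feet(60, t):.0f} ft")
# human, 2.3 s: 202 ft; computer, 0.1 s: 9 ft
```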


A follow distance of 2 seconds in many areas of the USA will end up with multiple cars moving into that space. It's 176 feet at 60 MPH. To say that it should always be safe to just jam your brakes and anyone that hits you is "stupid" is not a very honest view of driving.
The law says that if you rear-end me, it is your fault, period, with zero exceptions unless there was inadequate time for you to slow down between when I got into the lane and when I slammed on the brakes. So from a liability perspective, I have no liability if I bury it in the brakes to avoid hitting someone in the road, whereas if I allow myself to hit someone in the road, I have 100% liability.

The correct choice is always "evade if safe; if not, slow down as much as humanly possible to minimize the damage and injuries". Period. That's what the law says. You can disagree with the law if you want to, but that's what the law says, and I happen to think that it's the correct call. Better to have a hundred vehicles piling up with a small speed differential between each pair of cars than to have a single collision with a 70 MPH speed differential into an immovable object, a person, or just about anything else that could go flying off and hit someone's windshield.


People drive closer than this because they have done it for decades with no issues; humans simply do not drive this way. This is why the automotive industry and NHTSA actually consider inadvertent brake application to be quite dangerous, rather than fully safe as you argue.
Inadvertent. That's the key word. That's not the same thing as using the brakes intentionally to avoid a legitimately unsafe situation. People hate Tesla's phantom braking because it is stupid.

If a car going 3 MPH slower than you intrudes into your lane, AP slows down as though a car going 3 MPH had intruded into your lane. In those circumstances, it is never necessary to slow down below the speed of the vehicle in question, but AP frequently does.

And it confuses things outside of the lane (e.g. guard rails) with cars, and suddenly thinks that part of the car in front of you has stopped, despite that being largely nonsensical (because part of a car doesn't generally suddenly stop).

Those problems are bugs that need to be fixed. But that is independent of the principle that the vehicle should stop for or avoid any actual object in the road unless the time between detection and impact makes it impossible to do so safely.

The critical thing is to react quickly, reassess quickly, and then adjust quickly. If you think there's something wrong that requires braking, start braking a little bit, verify your data in the next frame, and either increase or decrease braking in a rapid feedback loop. Don't brake at 100% instantly, because 1/30th of a second to verify the data with another frame is almost never critical, but don't wait to start ramping the brakes in case it's real. Then adjust quickly, and if you were wrong, get quickly back up to speed.
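As a toy sketch of that ramp-and-verify loop (the frame time comes from the 1/30 s figure above; the ramp rates are made-up numbers, not anything from Tesla's stack):

```python
# Toy sketch of "start braking a little, re-verify each frame, adjust".
FRAME_DT = 1 / 30                 # ~one camera frame, per the post
RAMP_UP, RAMP_DOWN = 0.15, 0.30   # hypothetical per-frame adjustment rates

def update_brake(brake: float, obstacle_confirmed: bool) -> float:
    """One control-loop step: ramp the brake toward full if the detection
    holds up in the new frame, release quickly if it was spurious."""
    if obstacle_confirmed:
        return min(1.0, brake + RAMP_UP)    # keep ramping in
    return max(0.0, brake - RAMP_DOWN)      # false alarm: back off fast

# Example: detection confirmed for 4 frames, then turns out spurious.
b = 0.0
for confirmed in [True, True, True, True, False, False]:
    b = update_brake(b, confirmed)
    print(f"brake command: {b:.2f}")
```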


All of this is the same thing that will happen with AP if it only crashes once every 50,000 miles. People will get complacent and pay less attention, and then it will steer into a barrier one day, and you'll call them "stupid." But it's not stupid, it's human nature, and good systems designed for humans take human nature into account. You're effectively saying that everyone that has ever been harmed by complacency was stupid. Yet if you search google for "safety systems complacency" you find articles on how to battle this aspect of human nature, not just a bunch of safety experts calling users stupid.
Agreed. That's very much human nature. I wouldn't be surprised if AP occasionally acted like it was going to do something stupid (but then didn't) entirely to scare drivers into paying attention. Maybe that's the real story behind phantom braking. :)
 
And yet we assume and demand expert drivers to drive vehicles without AP/FSD.
We don't.

We design cars and roads for normal humans that have learned how to operate a car in a relatively short period of time. We have a realistic view of how good drivers are after this process. Our driving tests don't test for "expert" levels of driving, just basic competency. You can argue whether we should have higher standards, but we clearly don't, so we do not demand expertise of all drivers, and we should not design systems which can only be safely used by experts unless those systems somehow limit themselves to experts via additional licensing or other processes. We already have this at some level with commercial driver's licenses, but even that probably doesn't rise to the level of "expert." We're talking race car drivers here, who know the art of vehicle control at a level that most of us will never come close to.

If we only allowed experts to drive cars, Elon would not have been able to say only experts get FSD, as that would be every owner. Speed limits would/could be much higher. Fewer road signs would be needed. Cars might not need stability control or ABS.
 
Apparently I’m not welcome here anymore, because vaccines are "political" not depending on what you think about them, but simply when you state that people have all sorts of beliefs about them.

So have fun. I hope you achieve very much debating safety 1st to 6th. And I’ll share my thoughts and insights somewhere else without an agenda beyond Teslas/EVs/AVs.
 
And yet we assume and demand expert drivers to drive vehicles without AP/FSD. As has been stated, people can do plenty of stupid things with a car.
we are long past the point of considering cars 'experimental'. we are not going back to horses. even though there are horses on the latest software visuals, from what I've seen in some threads ;)

but we're not past the point of this level of automation being a 'given', and it's not at all clear that it will be here to stay. if enough bad things happen during this most public testing period, it might be an uphill battle just to get back to where we are now. it's a very real possibility.

we're not yanking back the car revolution and horses are not going to be fertilizing our streets any time soon. but forward progress to higher SAE levels is not at all guaranteed; even if the tech is able, people may be the stopper (which is kind of ironic).
 
I agree that there are numerous issues with how it's being tested, but I don't think there needs to be 2 people in the car. The main reason for two people in a vehicle being tested for L4 autonomous driving is that there is too much work for a single person to handle. A single person can't simultaneously stay situationally aware of the driving environment, watch what the car is doing and seeing, and document everything necessary.

But L2 driving doesn't require seeing what the car is doing or any documentation. All a person driving an L2 vehicle has to do is make sure the vehicle is doing what they want it to do.
let me clarify, then. if someone is making a video, I would really prefer seeing 2 people in the car so that the video guy can be the passenger and the driver can be 100% on the driving (road, some screen and hands near the wheel). let the guy who does not have the wheel do the yapping.

I think it's fair. it puts a responsible driver behind the wheel and lets someone else entirely concentrate on talking and pointing at the LCD display, etc.

yeah, twice the staff. less of that sweet sweet ad-money if split multiple ways. oh, such burdensome first-world problems.