Off topic galore

Happy to see Gary Gensler massively screwing up with his Twitter post regarding the BTC ETF launch. What a fool. What a sham. The “protector of investors” winds up doing perhaps millions of dollars in damage to investors.

Tesla's BTC investment will be going great in 12 months' time
 
  • Disagree
Reactions: replicant
I would like to see some of those studies and evidence of Teslas crashing more because AP is now so much better.

Why are you asking for studies on something I never claimed?

You took what I actually said and used it to make up a bunch of numbers and specifics I said nothing remotely like.


Additionally, neither AP (nor even the wide-release FSDb) is remotely as near-but-not-totally automated as anything we were actually discussing.... which, as a reminder, the standard suggested in the post my reply was actually written for was:

V12 FSD intervention rates down to rare cases, essentially one intervention per week or one intervention per every 3 days for a high frequency user.

ANYWAY, I'd again suggest you head to the FSD forum for actual discussion, but since you seem to want some reading, here's some, mostly about the car stuff, though it does point out this is a field that's been studied for decades in OTHER areas: the rarer intervention becomes, the WORSE the human gets at being available for the rare times intervention is both needed AND safety critical.

So systems eventually approach the most dangerous convergence: a safety-critical task that is VERY VERY CLOSE to fully automated, but not QUITE at 100%, that still requires a human to be vigilant full time, yet actually needs intervention so rarely that the human gets very poor at that task.
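To make the shape of that argument concrete, here's a toy Python sketch. It is entirely my own illustration: the miss_probability function, the steepness exponent k, and every number in it are hypothetical, not taken from any study quoted below. The only idea it encodes is the one above, that the human's per-event miss probability rises steeply as interventions get rarer.

# Toy model ONLY -- made-up parameters, not data from any study.
# Combined risk = (chance automation needs help) x (chance the human misses it)

def miss_probability(r: float, k: float = 20) -> float:
    """Assumed human miss probability; k is a hypothetical steepness knob."""
    return r ** k  # higher reliability -> rarer practice -> worse human

def unhandled_risk(r: float, k: float = 20) -> float:
    """Expected fraction of safety-critical events neither party handles."""
    return (1 - r) * miss_probability(r, k)

if __name__ == "__main__":
    for r in (0.50, 0.90, 0.95, 0.99, 0.999):
        print(f"automation handles {r:7.2%} -> unhandled risk {unhandled_risk(r):.5f}")

With these made-up numbers the unhandled risk peaks around 95% reliability, i.e. near-but-not-at full automation, and falls off on both sides. The shape, not the specific values, is the point.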



behavioral adaptation may begin to occur and overreliance and over-trust in the automation features potentially develop, including a greater willingness to look away from the forward road (as shown in the VCC data set).

(note: most quotes below from that link are from various DIFFERENT studies on this topic; references to each are at the link)
When target stimuli are rare and the task is performed infrequently, then vigilance decrements are likely to occur
The driver experiences a vigilance decrement during PAD because scenarios of monotonous PAD cause a state of cognitive underload in the driver

A state of cognitive underload is especially common during continuous, monotonous, and low demand driving scenarios
the driver is performing such a low demand task, and is experiencing so little arousal, that they disengage from the task and their task performance actually suffers because of it. This is the vigilance decrement which arises from a state of cognitive underload, which inhibits their ability to supervise and intervene with the automation in an emergency

In addition to passive fatigue, vigilance decrements may be a result of mind-wandering
during PAD, drivers are more likely to use a smartphone or engage in other non-driving activities behind the wheel
Responses to this safety critical event were slower during PAD compared to manual driving

comparing PAD to manual driving, responses to safety critical events also tend to slow over the course of an entirely PAD scenario
 
  • Informative
Reactions: EVCollies
Oh okay, so your counterpoint doesn't apply to Tesla but for whatever reason ended up here. It applies to hypothetical events that may result in a lack of arousal from the babysitter, but has no real data on crashes, injuries, or deaths. It also doesn't apply if the car has any countermeasures to prevent complacent drivers from crashing. Pretty much a moot point.
 
Oh okay, so your counterpoint doesn't apply to Tesla but for whatever reason ended up here.

Your inability to read the words I actually wrote, making up your own story instead, is truly impressive.

Here's the forum where any "discussion" of what I actually said belongs, BTW, should you decide to actually attempt any (and there are many threads where it's already been covered, so maybe read those and get back to me over there after):

 

Guys, before the mods step in, let’s keep this thread on track: “Being the House” and options selling.
