Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Push your hand through the steering wheel at the spot noted in yellow (or the opposite side), and literally hang your arm on your wrist.
[Attached image: steering wheel hand position]


The weight of your arm will keep nags away
But you must keep your eyes on the road.
 
I think what Mistral AI is doing is important. They're addressing the "why" question in LLMs: helping the model understand context, reasoning, relationships, and meaning rather than just memorizing patterns. It's called "concept-based learning."

Concept-based learning breaks text down into core concepts and then teaches the model the relationships between those concepts; hence, logic and reasoning.

Mistral is also working on "causal inference," which helps the model understand cause and effect between actions and events.

To me, this is the bridge between LLMs and FSD. Baby AGI.

I'd be very surprised if Tesla doesn't have similar capability.
Why would you be surprised to find out that Tesla doesn't have any causal-inference mechanisms? Have you ever watched AI Day or any FSD presentations? What has ever been said to make you believe that FSD's machine learning is based on any higher-level abstractions than simple correlations? "Photons in, steering and acceleration out" neural network(s), all trained from video sourced from the fleet, has been repeated over and over again by the lead members of the FSD team. No mention of AGI or causal-inference networks or any other high-level learning systems. I think you are giving an imprecise quip from Elon regarding "baby AGI" way too much weight.

If someone ever creates an AGI with reasoning and conceptual understanding, then, sure, it will learn to drive much faster than FSD has developed. But, FSD is not that at all.
 
It can make very sudden moves, and having a secure hold of the wheel on the spokes ensures this will result in near-immediate disengagement
Yeah, hands on wheel could have prevented this sudden steering jerk to the left and subsequent straightening:
[Attached image: 12.1.2 park jerk]


Seems like 12.1.2 got confused by the destination just past the intersection: perhaps it initially learned that some people pull over earlier when it looks like there isn't enough curb access to park, but as it got closer, it could see there was an available spot immediately after the intersection. At least I would have expected the car to drive straight toward the intersection rather than suddenly swinging left.
 
When the idea of overfitting to the Bay Area first came up (I think in an Elon post)
Maybe it was this conversation?

@WholeMarsBlog: Drove around San Francisco on FSD Beta 9.2 with @kimpaquette and she said it definitely seems to work better in California than it does in Rhode Island
@elonmusk: In general, we overfit to SF Bay Area
Maybe I'm misunderstanding the usage, but I believe there's a difference between FSD Beta simply driving better in California and the FSD Beta neural networks making predictions as though the roads and vehicles were in the SF Bay Area, which would be the underlying reason it drives poorly in other areas. For example, I believe California typically uses solid white lines to indicate left-turn lanes, whereas other places might switch from dashed to solid white lines just before the intersection even for straight lanes, and an overfit 12.x neural network could incorrectly change lanes out of what it believes is a California left-turn-only lane when it's actually in a not-California straight lane.

[Attached image: California vs. non-California lane lines]
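As a loose illustration of what "overfitting" means here (a hypothetical toy sketch, nothing to do with Tesla's actual training code): a model with enough capacity to memorize one region's data can look near-perfect on that region while making wildly wrong predictions on data from outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training region": x in [0, 1] (think: one region's road markings)
x_train = rng.uniform(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 15)

# "Deployment region": x in [1, 1.5] (inputs the model never saw)
x_test = rng.uniform(1.0, 1.5, 15)
y_test = np.sin(2 * np.pi * x_test)

# High-degree polynomial: enough capacity to memorize the training set
coeffs = np.polyfit(x_train, y_train, deg=12)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_err:.4f}")  # tiny: the fit nails the seen data
print(f"test  MSE: {test_err:.4f}")   # much larger: the fit breaks outside [0, 1]
```

The analogy to the lane-line example: the "model" is extremely accurate on California-style inputs but its learned rule doesn't transfer to roads drawn from a different distribution.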


Presumably 12.x has had enough diverse training to realize that solid white lines indicate turn lanes only in some areas, and to look for other signals like painted road markings, signage, and even map data when those are available. Of course, these signals can be complicated by other vehicles occluding the view or by different weather and lighting conditions, so I wonder whether a reasonable end-to-end training approach is to start with "cleaner" data and then add in trickier scenarios, versus training from scratch with all the easy and hard scenarios mixed together.
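The "cleaner data first, then trickier scenarios" idea is essentially what the ML literature calls curriculum learning. A minimal hypothetical sketch (the clip structure and difficulty scores are made up for illustration, not anything Tesla has described):

```python
from typing import Iterator

def curriculum_batches(clips: list[dict], stages: int = 3,
                       batch_size: int = 4) -> Iterator[list[dict]]:
    """Yield training batches easy-first: stage 1 draws only from the
    easiest slice of clips, later stages widen the pool to harder ones."""
    ranked = sorted(clips, key=lambda c: c["difficulty"])  # assumed pre-scored
    for stage in range(1, stages + 1):
        pool = ranked[: len(ranked) * stage // stages]     # grow pool each stage
        for i in range(0, len(pool), batch_size):
            yield pool[i : i + batch_size]

# Toy usage: occluded/rainy clips would carry higher difficulty scores
clips = [{"id": i, "difficulty": d}
         for i, d in enumerate([0.1, 0.9, 0.3, 0.7, 0.2, 0.5])]
first_batch = next(curriculum_batches(clips))
print([c["difficulty"] for c in first_batch])  # → [0.1, 0.2]
```

The open question in the post maps to whether this staged ordering actually trains better than shuffling all clips together from the start; how difficulty would even be scored for driving clips is its own hard problem.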
 
How many total strikes have you received since owning your car?
I think 6. Two were my fault - I was focused on the car negotiating a turn and didn’t notice the warning beeps until it was too late. The remaining 4 were all unprovoked; on 3 of them I was actively holding the steering wheel. On one, I had just silenced a nag a few seconds before and was taking a sip of my coffee. At least 3 of them came with no warning whatsoever.
Because hands on the wheel are critical for controlling the car. It can make very sudden moves, and having a secure hold of the wheel on the spokes ensures this will result in near-immediate disengagement. There’s no way around this; you cannot just be looking at the road. Not holding the wheel also makes disengagement slower and less likely if anything unusual occurs, which can be dangerous.

Anyway, I tried today with hands off in my lap (a very bad place for them) and everything seemed fine. Occasional prompts to torque the wheel. You just can’t look at the screen - it can directly warn you to pay attention, and I think it may increase the likelihood of a wheel-torque input requirement (not sure about that).
FSD is controlling the car - I’m monitoring it and need to be ready to take over at any time, which I am. My hands are less than an inch away from the wheel, which I can grab in an instant (and have on many occasions). I’ve tried holding the wheel continuously and I’ve been unable to find a position that prevents nags but is still comfortable for any period of time. Every time I’ve tried it, I find most positions to be either ineffective or more fatiguing because I’m too tense from trying to prevent disengagements.
 
So why is Ford allowed to use a camera but Tesla isn’t?
I have this same question and mentioned it in the V11 thread a few months back. My son has a Mach-E and he is able to drive pretty much hands-free on the highways. Even if the reason Tesla is not allowing full use of the cabin camera is FSD city-street driving, why not just allow hands-free using the cabin camera on highways?
 
I have this same question and mentioned it in the V11 thread a few months back. My son has a Mach-E and he is able to drive pretty much hands-free on the highways. Even if the reason Tesla is not allowing full use of the cabin camera is FSD city-street driving, why not just allow hands-free using the cabin camera on highways?
Because the car can make turns at will. Blue Cruise cannot.

Blue Cruise is basically EAP without NoA, though it can make turns prompted by the user.

It's not an apples-to-apples comparison.
 
Push your hand through the steering wheel at the spot noted in yellow (or the opposite side), and literally hang your arm on your wrist.
[Attached image: steering wheel hand position]

The weight of your arm will keep nags away
But you must keep your eyes on the road.
This seems imprecise but probably will serve the purpose of immediate disengagement at least 50% of the time.

I don’t like the idea of having my arm through the wheel in the event of an unexpected collision, though; it seems like a good way to get a dislocated shoulder or a fractured arm.

And it seems less comfortable to have that wrist friction than just having hands at 9 and 3.

I’ve tried holding the wheel continuously and I’ve been unable to find a position that prevents nags but is still comfortable for any period of time.
What did you do before ADAS? I would just do that, if you don’t have physical impairments of some form.

At least try it to see if you can get better nag performance in the situations you describe. Remember there is an expectation to keep both of your hands on the wheel (as it says when you engage, and in the manual “Your hands must be on the steering wheel at all times while Full Self-Driving (Beta) is engaged”), so it may function better that way in some situations where it perceives higher risk or uncertainty.
 
I think it's both.

It's possible, but I'm just saying there's no evidence that Tesla is hiring drivers to go out and provide training videos for V12. This is because:

1) there's already "free" data from millions of cars
2) if Tesla were hiring drivers to, for example, come to a complete 0 mph stop at stop signs, I think Elon would have mentioned it, as he often shares ironic/silly regulatory things that Tesla and SpaceX have to do
3) the vehicle operator position mongo posted seems to be related to testing out vehicle systems and hardware (like suspension tuning, brake/regen, vehicle software updates relating to driving/features, etc.)
4) there are different Tesla positions relating to ADAS and FSD testing, but those are likely to test V12/fsd/AP, not to create training dataset videos
 
It's possible, but I'm just saying there's no evidence that Tesla is hiring drivers to go out and provide training videos for V12. This is because:

1) there's already "free" data from millions of cars
2) if Tesla were hiring drivers to, for example, come to a complete 0 mph stop at stop signs, I think Elon would have mentioned it, as he often shares ironic/silly regulatory things that Tesla and SpaceX have to do
3) the vehicle operator position mongo posted seems to be related to testing out vehicle systems and hardware (like suspension tuning, brake/regen, vehicle software updates relating to driving/features, etc.)
4) there are different Tesla positions relating to ADAS and FSD testing, but those are likely to test V12/fsd/AP, not to create training dataset videos
What a strange hill to die on. Along with the job postings, there are literally videos of drivers repeating Chuck's turn.

Your only argument is that Tesla hasn't directly stated they are doing this.

The most logical path is not what you are on...and I'm not saying it's 100%, just the most logical conclusion.

To each their own.
 
The most logical path is not what you are on...and I'm not saying it's 100%, just the most logical conclusion.

The most logical path is actually that Tesla just curates the data from the fleet.

The test drivers have been testing out Chuck's turn since before V12 was ever mentioned.

They're more likely there to test the heuristics of a V11 version (like stopping in the median) than to provide training-data videos for V12.
 
The most logical path is not what you are on...and I'm not saying it's 100%, just the most logical conclusion.
I don’t really care who “wins” this discussion. But wouldn’t it be hard to generate enough training data for v12 from such an effort? It just seems like not that many clips could be generated, and you’d still have to curate them for quality since it would not be guaranteed. That being said, I don’t know exactly how they select video clips for training even in the absence of these drivers, or how many they actually use out of the total clips available. So it’s not clear to me whether this would be helpful (I’m not saying it would not be; I’m saying I don’t know).

Just no clue how it is all done and what sort (or amount) of clips would be sufficient.
 