
Suggestions for Enhanced Verbal Feedback in FSD

I wanted to share some ideas that I believe could further enhance the FSD experience, particularly relating to verbal feedback.
I worked this out with ChatGPT, so the following is its summary of the ideas I gave it.

**1. Enhanced Verbal Cues**

- Implement real-time verbal cues explaining the FSD system’s actions. For example:

  - “Another car is coming.”

  - “Changing lanes to avoid slower vehicle.”

  - “Slowing down for pedestrian crossing.”

- Provide contextual awareness cues such as:

  - “I see a cyclist on the right side.”

  - “There’s a car approaching from behind.”

  - “The traffic light ahead is red, preparing to stop.”



**2. Customizable Feedback Options**

- Offer different levels of feedback detail: Verbose, Standard, Minimal, and Silent.

- Allow drivers to adjust the feedback level based on their comfort and familiarity with the FSD system.

- Enable personalization options for selecting different voices and tones.

- Implement adaptive feedback where the system learns from driver behavior and adjusts feedback automatically.

- Support user profiles for different drivers with preferred settings.
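A rough sketch of how the tiered feedback levels above might gate individual cues. This is purely illustrative: the cue names, priorities, and the whole mechanism are assumptions for discussion, not anything from Tesla's actual software.

```python
from enum import IntEnum

class FeedbackLevel(IntEnum):
    SILENT = 0
    MINIMAL = 1
    STANDARD = 2
    VERBOSE = 3

# Hypothetical priorities: each cue is tagged with the lowest feedback
# level at which it should still be spoken aloud.
CUE_PRIORITY = {
    "going_around_stopped_vehicle": FeedbackLevel.MINIMAL,  # surprising maneuver, announce early
    "changing_lanes": FeedbackLevel.STANDARD,
    "cyclist_detected": FeedbackLevel.VERBOSE,              # informational only
}

def should_speak(cue: str, level: FeedbackLevel) -> bool:
    """Speak a cue only if the driver's chosen level meets the cue's priority."""
    if level == FeedbackLevel.SILENT:
        return False
    # Unknown cues default to VERBOSE-only, so new cue types stay quiet
    # until someone deliberately assigns them a priority.
    return level >= CUE_PRIORITY.get(cue, FeedbackLevel.VERBOSE)
```

The per-driver profiles mentioned above would then just store a `FeedbackLevel` (plus voice preferences) per driver.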



**Benefits**

- Improved comfort and trust: Clear verbal cues build transparency and help drivers make informed decisions.

- Reduction of unnecessary interventions: Verbal explanations prevent unnecessary overrides, enhancing safety.

- Collaborative driving experience: Creates a sense of working together with the FSD system, reducing surprises and making driving more predictable.



I believe these enhancements would significantly improve the FSD experience for all users. I hope Tesla comes to the realization that FSD should, in the early stages anyway, be a working relationship between FSD and the “Supervisor”.
 
No matter how much, or how detailed, the info you feed it, the black hole can always devour more.

Screenshot 2024-06-29 at 8.02.05 AM.png
 
> Implement adaptive feedback where the system learns from driver behavior and adjusts feedback automatically.

I understand the intent of your ideas and think it needs to be very simple for it to work and be adopted. So many variables and so many distractions. Elon would say no distraction is a good distraction.

Adaptive feedback, for example, needs to be simple. How would you envision this one idea being implemented?

Just teasing here ... maybe not:
Imagine FSD and my wife giving verbal directions at the same time.
 
This feedback would naturally occur in proportion to the density of the traffic situation. This already happens with navigation, which, I believe, we all accept: in a dense, busy driving situation we allow for the fact that verbal navigation cues come rather closely spaced, while on long expressway travel we hear little from the navigation voice. So too we would not hear much, if anything, from the FSD feedback.
For one, this feature would be tunable, and a ‘normal’ setting would only say things like: “Going around the stopped vehicle.”
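The density-proportional spacing described above could be sketched as a simple throttle: cues are allowed to come closer together as the traffic situation gets busier. A hypothetical illustration only; the density scale and intervals are assumptions, not anything from Tesla's code.

```python
class CueThrottle:
    """Suppress spoken cues that would come too fast, allowing closer
    spacing as traffic density rises (mirroring how nav prompts cluster
    in dense areas and go quiet on the open expressway)."""

    def __init__(self, base_interval_s: float = 30.0):
        self.base_interval_s = base_interval_s
        self.last_spoken_at = float("-inf")

    def try_speak(self, now_s: float, density: float) -> bool:
        # density in [0, 1]: 0 = empty expressway, 1 = busy intersection.
        # Required gap between cues shrinks from the base interval (30 s)
        # down to about 3 s as density approaches 1.
        min_gap = self.base_interval_s * (1.0 - 0.9 * density)
        if now_s - self.last_spoken_at >= min_gap:
            self.last_spoken_at = now_s
            return True
        return False
```

On a quiet highway this stays silent for long stretches; in a busy downtown grid it would permit the closely spaced cues the situation warrants.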

The other day:
I was in the left lane of two lanes in a busy area with FSD engaged. I came up behind a car stopped for a left turn. The car slowed down (where I would have just waited for the car to make its turn), but then it quickly swerved into the right lane, passed the stopped car, swerved back into the left lane, and proceeded down the road. It was done suddenly and quickly, without me being mentally prepared. The danger here is if someone had been approaching from behind in the right lane as my car swerved into it. I’m “mostly” confident that the car’s excellent situational awareness meant that there was, in fact, no danger of a collision, but gee whiz, I would have liked to have been warned!
 
I wouldn’t mind seeing additional feedback displayed as messages on the screen, but I don’t want the car talking to me constantly. How would you listen to music or other audio content if the car was constantly interrupting?
I think this could work if we think of the frequency and triggers for these cues the same way as the visual alerts on the screen. Those visual cues aren’t considered annoying, but if the argument for visual-only cues is that we don’t notice them, then the counterargument is: why have visual cues that you don’t notice? Audio cues would be just as frequent as visual ones if the visual ones are at all helpful. In other words, if we’re willing to have visual cues, then we should be willing to have audio cues, especially if we’re able to reduce or turn them off.
 
> The danger here is if someone had been approaching from behind in the right lane as my car swerved into it. I’m “mostly” confident that the car’s excellent situational awareness meant that there was, in fact, no danger of a collision, but gee whiz, I would have liked to have been warned!
In this situation how would you envision your new adaptive alert system (NAAS) to have worked? Break it down for us. What would you have done if NAAS said "ego will be slowing down and then quickly swerving to the right lane"?
 
Is using Voice Commands the only way to provide FSD feedback? Found this in the Tesla Model 3 Owner's Manual under "Consumer Information" (see the Note below on using Voice Commands).

View attachment 1060890
That's good info. However, the OP is talking about verbal cues to the driver from FSD, not how the driver can give feedback to Tesla about FSD.
 
Tesla has many more important things to do than most of these ideas. In particular, making an end-to-end driving model supply explanations of its actions is not at all trivial. Large AI models don't know (per se) why they do things; they aren't logic-based like traditional software, so you can't just add some code to some part of the system to show a message on screen when certain conditions occur.
 
  • Informative
Reactions: zoomer0056
> Tesla has many more important things to do than most of these ideas. In particular, making an end-to-end driving model supply explanations of its actions is not at all trivial. Large AI models don't know (per se) why they do things; they aren't logic-based like traditional software, so you can't just add some code to some part of the system to show a message on screen when certain conditions occur.

Yes. But there may be methods used by Tesla's AI team for providing supplemental end-to-end model training in the form of video clips with negative or positive consequences. To enforce a rule-based approach, such supplemental training videos could be AI-generated from a script.
 
> In this situation how would you envision your new adaptive alert system (NAAS) to have worked? Break it down for us. What would you have done if NAAS said "ego will be slowing down and then quickly swerving to the right lane"?
FSD: “It is clear to go around now — proceeding…” or “Waiting for it to become clear to go around” or if in ‘chill’ mode “Waiting for obstructing vehicle to clear the way”
For an unprotected turn (which really makes me nervous) — FSD: “waiting for an opportunity” or “going after this vehicle”
 
> Tesla has many more important things to do than most of these ideas. In particular, making an end-to-end driving model supply explanations of its actions is not at all trivial. Large AI models don't know (per se) why they do things; they aren't logic-based like traditional software, so you can't just add some code to some part of the system to show a message on screen when certain conditions occur.
‘Flinging’ my car around an obstructing vehicle seems a whole lot like a single “unit of work” to me. The AI watched traffic from behind, decided to go, swerved out from behind the vehicle after initially stopping behind it, passed the vehicle and, just as quickly, swerved back in front of the just-passed vehicle.
This is what I would call a “unit of work” that was executed. I believe it could have verbally informed me that it was looking for an opportunity to go around and, when that opportunity arrived, informed me that it was going around. If it were in ‘Chill’ mode, for example, it would say something like: “Holding till clear.” These cues would reassure me of its intentions so I would be less surprised.
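One way to picture the “unit of work” idea is a phrase table keyed by maneuver and phase, with a variant per driving mode. Everything here is an illustrative assumption; the maneuver names are hypothetical and the phrases are taken from the examples in this thread, not from any real system.

```python
from typing import Optional

# Hypothetical phrase table: (maneuver, phase) -> phrase per driving mode.
PHRASES = {
    ("pass_stopped_vehicle", "waiting"): {
        "standard": "Waiting for it to become clear to go around.",
        "chill": "Holding till clear.",
    },
    ("pass_stopped_vehicle", "executing"): {
        "standard": "It is clear to go around now; proceeding.",
        "chill": "Going around the stopped vehicle.",
    },
}

def announce(maneuver: str, phase: str, mode: str = "standard") -> Optional[str]:
    """Return the phrase to speak when a maneuver enters a new phase,
    or None if this maneuver/phase/mode combination has no cue."""
    return PHRASES.get((maneuver, phase), {}).get(mode)
```

The point is that the announcement hooks onto the start and end of a whole unit of work, not onto every low-level steering decision, so the chatter stays manageable.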
 
> I’m not disagreeing with the usefulness of a feature like this, but rather the feasibility of it in E2E AI.

Somehow, even with end-to-end neural-net AI, they have managed to get the AI to drive the on-screen visualizations. This visualization includes the “worm” projecting from the front of the illustrated vehicle, showing its intent/thinking.
Also, they have to be able to ‘control’ the AI with rules-of-the-road knowledge that is not derived from training videos. IMHO there is only so much knowledge you can impart to any intelligence by showing it training videos. At some point, there has to be language-based training on the actual rules.