Welcome to Tesla Motors Club

FSD rewrite will go out on Oct 20 to limited beta

The Q3 Earnings Report also has a pic of the intersection prediction NN that I think is new:
[Image: intersection prediction NN screenshot from the Q3 Earnings Report]


We can see a lot of info from this pic. We can see the road limits in red dots. We can see lane lines and crosswalks in white dots. We can also see 3D bounding boxes for cars in purple. And we see planning too. We can see green dots that show the possible paths the Tesla can take depending on what lane it needs to turn into.
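If it helps to think about, the color coding amounts to a simple class-to-color legend for the debug overlay. A hypothetical sketch in Python (the class names and the legend structure are my guesses from the screenshot, not anything Tesla has published):

```python
# Hypothetical legend for the debug overlay described above.
# Class names and colors are inferred from the screenshot, not Tesla's actual code.
OVERLAY_LEGEND = {
    "road_edge":    "red",     # road limits
    "lane_line":    "white",   # lane lines
    "crosswalk":    "white",   # crosswalks
    "vehicle_bbox": "purple",  # 3D bounding boxes for cars
    "planned_path": "green",   # possible paths, per target lane
}

def color_for(detection_class: str) -> str:
    """Return the overlay color for a detection class, defaulting to grey."""
    return OVERLAY_LEGEND.get(detection_class, "grey")
```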
 
The Neural Net view has a set of photos right above it showing what the cameras are seeing.
Here it is together.
[Image: camera views alongside the Neural Net view]


This is from SpaceX HQ or Hawthorne Design Studio (that overhead bridge is to the parking deck that SpaceX uses/built)

There is A LOT of detail to process in that Neural Net view.
Hard to tell, but cars going left are in blue 3D bounding boxes, while cars going right are in magenta 3D bounding boxes.
The pedestrian is also indicated.
The traffic controls are in yellow boxes.
The possible paths the car can take are shown with green and grey options.
 
We can also see 3D bounding boxes for cars in purple.

I think there is more to it than that. The blue dots/bounding boxes are cars. The purple dots/bounding boxes are cars that will potentially be in the path of the Tesla. (At least that is what I get from it.)

And we see planning too. We can see green dots that show the possible paths the Tesla can take depending on what lane it needs to turn into.

I think the grey dots show possible paths as well. With green being most likely? What are the white dots in the middle of the intersection? The predicted path of a car waiting at the turn light? (But there is no bounding box that I can see.)
 
It's interesting, because that seems to be the prevailing opinion here, but to me it still seems possibly unethical.

I don't mean to get all icky/off-topic here, but I'll give you a similar situation and then you tell me if it's ethical or not:
You're about to have sex with someone. They tell you to put on a condom. You agree to this by putting on the condom, then turn out the lights, remove the condom, and proceed to have sex. Ethical? To my mind, telling someone you won't engage AP and then doing so is no different from this scenario. The other scenario, then: you're in bed with someone. They don't give explicit permission to have sex with them, but they don't say no. You proceed to have sex with them. Ethical? I see this as very similar to engaging AP while driving without the passengers' explicit consent. Why? Because both activities involve some amount of risk that the second party might not wish to take.

First off, your example is a moral, not ethical, decision. Second, it's way off the point. When you drive the car, you have a moral (not ethical, though there is some overlap here) responsibility to drive safely, as you are deemed responsible, as the driver, for the safety of yourself, your passengers, and those in other cars around you (and on foot). With this responsibility also comes an assumption of authority in making decisions about how to execute on your responsibility. This is an outcome of your moral responsibility; you cannot be blamed for something over which you have no control, and so you must also have the authority of choice when driving.

It is therefore for you, not others, to choose how you drive. If they don't like it, they can choose not to go with you, or to drive themselves. They may advise you, or indicate their preferences, but ultimately you are in the driving seat, literally and figuratively. This applies equally to your choice to use any and all driver assists, from the windshield wipers to Autopilot. You may decide, for familial harmony, to modify some driving habits to accommodate your passengers' comfort, but that does not alter the basics of the situation. And that is not a moral issue; it would only become one if you agreed not to use AP and then sneakily used it and hoped no one noticed.
 
I think there is more to it than that. The blue dots/bounding boxes are cars. The purple dots/bounding boxes are cars that will potentially be in the path of the Tesla. (At least that is what I get from it.)

I think the grey dots show possible paths as well. With green being most likely? What are the white dots in the middle of the intersection? The predicted path of a car waiting at the turn light? (But there is no bounding box that I can see.)

It is also worth noting that if you look closely you can see little vectors on the dots indicating direction of motion.

The grey dots seem to be possible paths that are not safe or not likely.
 
First off, your example is a moral, not ethical, decision. Second, it's way off the point. When you drive the car, you have a moral (not ethical, though there is some overlap here) responsibility to drive safely, as you are deemed responsible, as the driver, for the safety of yourself, your passengers, and those in other cars around you (and on foot). With this responsibility also comes an assumption of authority in making decisions about how to execute on your responsibility. This is an outcome of your moral responsibility; you cannot be blamed for something over which you have no control, and so you must also have the authority of choice when driving.
Agree with all that. But I'm fairly certain our collective mothers-in-law aren't questioning our authority, nor are they questioning our responsibility to drive safely. They're questioning whether or not the AI we're delegating some driving responsibilities to is safe. They just don't trust it. You and I both know we're still (mostly) in control when AP is engaged. But others don't, and they fear the AI driver.
 
From the Earnings Call: Elon says that the release of the FSD beta will be slow and careful because the "world is messy". He hopes to release the FSD beta to more people this weekend and next week, with wide release going out by the end of this year. He emphasized that with 1M cars, Tesla can catch crazy edge cases that you could never find with a simulation. He also expects a feedback loop of gathering data that will continue to improve the NN. Lastly, he emphasized that this is a general solution using neural networks, so there is no need for HD maps, a cell connection, or seeing the area before. A Tesla with no cell connection can go to an area that no Tesla has ever been before and AP can still work.
 
I'm too lazy to google right now, but it's my understanding that Teslas driven with AP engaged are FAR safer than when driven without. Like, exponentially so.

No, that's not what the data says. The data provided by Tesla allow no conclusion to be drawn about relative safety of Teslas with and without autopilot in use, nor does it allow meaningful comparison to other vehicles.

All it does is provide accident rates in 3 or 4 different categories:
1) Tesla Rate with AP on
2) Tesla Rate with AP off
3) Tesla Rate with AP off and no enhanced safety features (or something like that...this is from memory).
4) Rate for all other vehicles.

There's no way to compare these numbers in any meaningful way. A whole bunch of other information would have to be provided.
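To make the confounding concrete: AP is engaged disproportionately on highways, where accident rates are lower for everyone, so the headline categories mix very different road types. A toy Python example, with entirely made-up numbers, shows how an overall comparison can even reverse once you split by road type:

```python
# Toy numbers (entirely made up) illustrating how road mix confounds the comparison.
# Assumption: AP miles skew heavily toward highways, where rates are low for ALL drivers.
miles = {
    ("AP on",  "highway"): 9_000_000, ("AP on",  "city"): 1_000_000,
    ("AP off", "highway"): 2_000_000, ("AP off", "city"): 8_000_000,
}
accidents = {
    ("AP on",  "highway"): 9, ("AP on",  "city"): 5,
    ("AP off", "highway"): 4, ("AP off", "city"): 32,
}

def rate(mode, road=None):
    """Accidents per million miles for a mode, optionally filtered by road type."""
    keys = [k for k in miles if k[0] == mode and (road is None or k[1] == road)]
    return 1e6 * sum(accidents[k] for k in keys) / sum(miles[k] for k in keys)

# Headline rates: AP looks ~2.5x safer overall (1.4 vs 3.6 per million miles)...
print(rate("AP on"), rate("AP off"))
# ...but per road type the gap shrinks on highways (1.0 vs 2.0)
print(rate("AP on", "highway"), rate("AP off", "highway"))
# ...and actually reverses in the city (5.0 vs 4.0).
print(rate("AP on", "city"), rate("AP off", "city"))
```

With these invented numbers, AP looks far safer overall yet is worse in the city — which is exactly why the raw category rates can't be compared without knowing the underlying mileage mix.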
 
That Neural Net vision has a set of photos right above of what they are seeing with the cameras.
Here it is together.

This is from SpaceX HQ or Hawthorne Design Studio (that overhead bridge is to the parking deck that SpaceX uses/built)

There is A LOT of detail to process in that Neural Net view.
Hard to tell, but cars going left are in blue 3D bounding boxes, while cars going right are in magenta 3D bounding boxes.
The pedestrian is also indicated.
The traffic controls are in yellow boxes.
The possible paths the car can take are shown with green and grey options.
Even considering the car is first in line at the light and the camera views are not obstructed by adjacent vehicles, this doesn't seem possible.

I am ready to eat humble pie if the current sensor suite is up to the task.
 
The grey dots seem to be possible paths that are not safe or not likely.
It is actually a little different.
The Green dots with vector arrows are the "desired" direction of the planned route.
The Grey dots with vector arrows seem to suggest the "alternative" paths that are still legal/safe in the current environment.
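If that reading is right, the visualization is basically a candidate-path set with one path selected by the planner. A hypothetical sketch of such a rendering rule (my guess at the idea, not Tesla's code):

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    legal: bool   # allowed by road rules in the current environment
    cost: float   # lower = preferred by the planner

def classify(paths):
    """Green = the single lowest-cost legal path; grey = other legal alternatives.
    Illegal/unsafe candidates simply aren't drawn. (Hypothetical rendering rule.)"""
    legal = [p for p in paths if p.legal]
    chosen = min(legal, key=lambda p: p.cost)
    return {p.name: ("green" if p is chosen else "grey") for p in legal}

paths = [
    CandidatePath("left turn into near lane", True, 1.0),
    CandidatePath("left turn into far lane",  True, 2.5),
    CandidatePath("straight through",         False, 0.5),  # lane is turn-only here
]
print(classify(paths))
# {'left turn into near lane': 'green', 'left turn into far lane': 'grey'}
```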
 
So what SW version is the mythical FSD beta beast in? No one in all of TeslaFi land is one of the safe drivers? 40.8 is all that is showing, mostly for MY range extensions. It will be HW3, but maybe only full-color cams (M3s, MYs, newer Ss and Xs?). Hmm. I keep telling myself patience is a virtue.