I don’t think this is true. There are a lot of sensors and pretty sure they have talked about multimodal inputs (might not just be sensors).

Well, our cars have an internal GPS if you count that as a sensor. And they might also have an accelerometer. But in terms of perception sensors, our Teslas only use cameras right now. In the future, Tesla might add radar, according to some rumors.
 
It takes input from the GPS, the IMU, maps, the routing engine, and presumably whichever mode (Chill, Average, Assertive) is selected, etc.

So it is not only taking cameras as an input.

Of course, the cars have GPS for localization and nav maps for routing. But Elon said V12 is "photons in, control out". So the end-to-end is trained on camera data only.
 
But in terms of perception sensors
Perception is not the same as vision.

The vehicle has tons of sensors as described above.
So the end-to-end is trained on camera data only.
I don’t think we know that. It’s pretty common to train on multimodal inputs. What Tesla is doing I have no idea.

Elon understandably focuses on vision since that is what people think about, but his statements don’t explicitly rule out other sensor use.
 
Obviously an oversimplification, as that could never work: if it had no input for a route, it would never know when to change lanes or make a turn.

Obviously, Tesla needs a router to know where to go. But in most AVs, the router is separate from the perception-planner stack. In other words, the perception stack sees the world, while the router informs the planner where the car needs to go. I imagine it would be similar with V12: the end-to-end takes in camera data and does the perception-planner tasks, while the router is a separate module that tells the end-to-end where the car needs to go.
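
Here's a minimal sketch of that split in PyTorch (all names and dimensions are hypothetical, not Tesla's actual architecture): the camera encoder is the part learned end-to-end, and the router's command joins as a separate conditioning input at the planning stage.

```python
import torch
import torch.nn as nn

class E2EPlanner(nn.Module):
    # Toy perception-planner: cameras feed a learned encoder ("photons in"),
    # while the router's command arrives as a separate conditioning input.
    def __init__(self, cam_feat_dim=256, route_dim=8, hidden=128):
        super().__init__()
        # perception: learned from camera data alone
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, cam_feat_dim), nn.ReLU(),
        )
        # planner: fuses camera features with the route command
        self.planner = nn.Sequential(
            nn.Linear(cam_feat_dim + route_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # e.g. steering + acceleration ("control out")
        )

    def forward(self, images, route_cmd):
        feats = self.encoder(images)               # perception sees only cameras
        x = torch.cat([feats, route_cmd], dim=-1)  # router input joins at planning
        return self.planner(x)

# e.g. one camera frame plus a one-hot "turn right ahead" route command
controls = E2EPlanner()(torch.randn(1, 3, 128, 128),
                        nn.functional.one_hot(torch.tensor([3]), 8).float())
```

The point of the sketch: during training, the gradient flows through the camera encoder, while the route command is just another conditioning vector, which is roughly how conditional imitation learning setups handle navigation.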
 
The router informs the planner where the car needs to go. I imagine it would be similar with V12: the end-to-end takes in camera data and does the perception-planner tasks, while the router is a separate module that tells the end-to-end where the car needs to go.
Huh? o_O You seem to be contradicting yourself.

How can the router tell the end-to-end where to go, if it isn't providing input to the end-to-end?
 
Well, our cars have an internal GPS if you count that as a sensor. And they might also have an accelerometer. But in terms of perception sensors, our Teslas only use cameras right now. In the future, Tesla might add radar, according to some rumors.
They also have a compass - FSD actively uses both the GPS and compass in its navigation.
 
Huh? o_O You seem to be contradicting yourself.

How can the router tell the end-to-end where to go, if it isn't providing input to the end-to-end?

It does provide input to the end-to-end; it's just a separate input from the cameras. The end-to-end gets input from two sources: cameras and the map/router. But the router input is navigational input, not perception input.

EDIT: My earlier point is that I don't think the end-to-end is trained with the router input; it is trained only on the camera input. The router input is separate.
 
It does provide input to the end-to-end; it's just a separate input from the cameras. The end-to-end gets input from two sources: cameras and the map/router. But the router input is navigational input, not perception input.
Now we are getting somewhere. This all started because someone suggested adding a non-perception AP-vs.-FSD input to the planner, and you said it wouldn't be possible because it only takes camera input. That seems like it could be possible, but it might make more sense to have separate networks for AP, EAP, and FSD. (Or, more likely, AP and EAP stay with the procedural code they have now and only FSD gets the end-to-end NN.)
 
I don’t think this is true. There are a lot of sensors and pretty sure they have talked about multimodal inputs (might not just be sensors).
I believe that what Goose66 meant was to provide an input to the self-driving E2E NNs for which mode the car is using (TACC, Autosteer, FSD) and have that be part of the training that determines which capabilities the car uses. Then Tesla can ditch the old AP software.

This is likely in the plan, assuming that V12 pans out and becomes The Way.

And the NNs do take more than camera inputs. Speed sensors, GPS location, and accelerometers, to name a few, are all likely inputs. There are other user inputs as well, such as the autospeed selection and, I presume, the Assertive, Average, and Chill settings. All of these need to be inputs to the NN system.
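
As a rough illustration of that kind of conditioning (PyTorch; the input names and sizes are made up for the example), categorical settings like the mode and driving style can enter as embeddings alongside camera features and scalar sensor readings:

```python
import torch
import torch.nn as nn

MODES = ["TACC", "Autosteer", "FSD"]        # assumed mode vocabulary
STYLES = ["Chill", "Average", "Assertive"]  # assumed style vocabulary

class ConditionedPolicy(nn.Module):
    def __init__(self, cam_feat_dim=256, emb=8, n_scalars=4):
        super().__init__()
        self.mode_emb = nn.Embedding(len(MODES), emb)
        self.style_emb = nn.Embedding(len(STYLES), emb)
        # head fuses camera features, both embeddings, and scalar inputs
        self.head = nn.Sequential(
            nn.Linear(cam_feat_dim + 2 * emb + n_scalars, 128), nn.ReLU(),
            nn.Linear(128, 2),  # e.g. steering + acceleration
        )

    def forward(self, cam_feats, mode_id, style_id, scalars):
        x = torch.cat([cam_feats, self.mode_emb(mode_id),
                       self.style_emb(style_id), scalars], dim=-1)
        return self.head(x)

policy = ConditionedPolicy()
out = policy(torch.randn(1, 256),                    # camera features
             torch.tensor([MODES.index("FSD")]),     # active mode
             torch.tensor([STYLES.index("Chill")]),  # driving style
             torch.randn(1, 4))  # e.g. speed, set speed, accel, heading
```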
 
I don’t think this is true. There are a lot of sensors and pretty sure they have talked about multimodal inputs (might not just be sensors).
LOL, what sensors? They removed everything else from the cars. The radar on S/X was a test and is not really used AFAIK. GPS isn't a sensor, and the one in the car doesn't have helpful precision. The speed is known and is also not a sensor.
 
LOL, what sensors? They removed everything else from the cars. The radar on S/X was a test and is not really used AFAIK. GPS isn't a sensor, and the one in the car doesn't have helpful precision. The speed is known and is also not a sensor.
Torque sensor
Inside cabin temperature sensor
Outside temperature sensor
Seatbelt fasten sensor
Ambient light sensor
Crumple zone airbag deployment sensor
Accelerator sensor
GPS sensor
Etc... 😂
 
LOL, what sensors? They removed everything else from the cars. The radar on S/X was a test and is not really used AFAIK. GPS isn't a sensor, and the one in the car doesn't have helpful precision. The speed is known and is also not a sensor.
Accelerometers, wheel speed sensors, microphones, etc.

I have no idea what they are actually using. But clearly the car has tons of sensors which would be useful for understanding what the car is doing and how to adjust what FSD is doing accordingly.

For example, it is important to know whether the car is going downhill or uphill.

What they specifically make use of, I make no claims about. They may take some processed output of these sensors rather than just feeding the raw data. Or they may not make use of any of that and leave that up to a separate subsystem. I have no idea.

It seems silly not to use some combination of sensor inputs, regardless of whether one is doing end-to-end or whatever. But I have no idea what they are using.
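
For the uphill/downhill example, here's one simple way it could fall out of existing sensors (plain Python; the helper and sign convention are my own illustration, not anything Tesla has described): the longitudinal accelerometer reads true acceleration plus the gravity component along the road, while wheel speeds give true acceleration alone, so the difference recovers the pitch.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def estimate_road_grade(accel_long, v_prev, v_now, dt):
    # a_wheel: true longitudinal acceleration from wheel-speed sensors
    a_wheel = (v_now - v_prev) / dt
    # whatever the IMU senses beyond that is g * sin(pitch)
    gravity_component = accel_long - a_wheel
    return math.asin(max(-1.0, min(1.0, gravity_component / G)))

# Uphill: the IMU reads more than the wheel-speed derivative suggests.
pitch = estimate_road_grade(accel_long=1.2, v_prev=20.0, v_now=20.3, dt=1.0)
print(f"grade ≈ {math.degrees(pitch):.1f}°")  # ≈ 5.3° uphill
```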
 
Torque sensor
Inside cabin temperature sensor
Outside temperature sensor
Seatbelt fasten sensor
Ambient light sensor
Crumple zone airbag deployment sensor
Accelerator sensor
GPS sensor
Etc... 😂
So which of these do you suggest Tesla is using for FSD? The ambient light sensor? The airbag deployment sensor will be used a lot if they remove the driver, but not for driving. 😂