I'm aware of the following:
1. We have a gigantic thread about the REST API.
2. We have a Wiki that has links to some implementations.
3. The telemetry API changed/extended at some point and I never updated my logger.
4. I have a logger that I've been running for approximately two years with very few changes; it mostly works, but sometimes "upsets" the server (such that it starts rejecting my logins for 24hr or so).
5. Running my logger in parallel for two different cars triggers the upset described in #4 almost every time (meaning more rejected logins than accepted ones).
6. I'd like to revisit my logger (and perhaps update it for the API changes).
7. I'm hopeful the community might have some best practices to recommend.
Goals:
- 24x7 logging for multiple vehicles as allowed by general internet stability, uptime for my Azure VM, uptime for Tesla's servers, and 3G/wifi reliability for the vehicles
- Follow expressed or implied "rules" Tesla has for its telemetry API, as a "well behaved" client app.
- Obtain "snapshot" command data (see step 8 below) at least a few times an hour.
- Obtain streaming data continuously (when possible) while the vehicle is not parked.
- (Bonus) Obtain streaming data continuously (when possible) while supercharging.
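On the "well behaved client" goal: one practice that may help avoid the 24-hour lockouts is backing off exponentially on failed logins instead of retrying at a fixed 15-second interval. A minimal sketch; the base delay, cap, and jitter fraction are arbitrary choices of mine, not anything Tesla documents:

```python
import random

def backoff_delay(attempt, base=15, cap=900):
    """Seconds to wait before retry number `attempt` (0-based).

    Doubles the base delay each attempt, capped at `cap`, plus a small
    random jitter so multiple logger instances don't retry in lockstep.
    """
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, delay * 0.1)
```

The jitter matters most when two loggers fail at the same instant (e.g. a server outage ends): without it, both would hammer the login endpoint on the same schedule.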
Here's what I'm currently doing for each instance of my logger:
1. fetch the login info and vehicle index from command-line args
2. start a new log file; post via https to portal.vn.teslamotors.com/login; look for user_credentials in the cookies
3. if step 2 fails, wait 15 seconds and return to step 2
4. get via https from portal.vn.teslamotors.com/vehicles; fetch the id, vehicle_id, and tokens for the index-th vehicle
5. if id is missing from step 4, wait 15 seconds and return to step 2
6. get via https from portal.vn.teslamotors.com/vehicles/<id>/mobile_enabled
7. if step 6 fails, wait 15 seconds and return to step 2
8. invoke 5 commands using get via https from portal.vn.teslamotors.com/vehicles/<id>/command/<command> for each of the following: charge_state, climate_state, drive_state, gui_settings, and vehicle_state
9. if step 4 found a token, then get via https from streaming.vn.teslamotors.com/stream/<token> in a while loop until it fails
10. if step 4 found a vehicle_id, then get via https from portal.vn.teslamotors.com/vehicles/<id>/command/wake_up [Sidenote: not sure why my code checks _vehicleId here rather than _id; that looks like a bug, since the URL uses id, not vehicle_id.]
11. if step 10 fails, wait 15 seconds
12. go to step 2
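For reference, the snapshot portion (step 8) is just five GETs against the same base URL. A sketch in Python where the HTTP layer is passed in as a callable, so the cookie/retry handling from steps 2-3 stays separate; nothing here is assumed beyond the endpoints listed above:

```python
BASE = "https://portal.vn.teslamotors.com"
SNAPSHOT_COMMANDS = ("charge_state", "climate_state", "drive_state",
                     "gui_settings", "vehicle_state")

def snapshot(vehicle_id, fetch):
    """Run step 8: collect all five snapshot commands for one vehicle.

    `fetch` is any callable mapping a URL to a response body, so the
    actual https GET (with the user_credentials cookie attached) lives
    outside this function.
    """
    return {cmd: fetch(f"{BASE}/vehicles/{vehicle_id}/command/{cmd}")
            for cmd in SNAPSHOT_COMMANDS}
```

Keeping the fetcher injectable also makes it easy to dry-run the loop without touching Tesla's servers at all.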
As indicated earlier, I'm trying to run multiple instances of the above, and I think together they are making the server unhappy by issuing paired requests at step 2 (one for each running logger) rather than sharing a single login attempt. As I restructure to address that presumed issue, I'd welcome suggestions on any additional tuning I should do.
Thanks in advance for assistance.