
TeslaLog.com - Your hosted Tesla Data Logger - Announcement / Support threads

The more users there are, the cheaper it gets for everyone (up to a certain point).

I've begun contacting Tesla, so I'll let you know what comes out of it.

As I write this, I noticed my data logger was overloaded (it was running off a $5 Google Cloud instance), so I started another one with a little more punch (barely: a $15 one).

It was already built so I can scale across multiple servers. Tesla used to limit the number of queries per IP, not just per account, which forced me to set up a LOT of different instances to support multiple users, so multi-instance logging was written in from the start (well, from v2).

The data-logging process and the web server both require very little in the way of resources. The database is what requires the most (right now).

I run off Google Cloud, so I can scale easily. But it does become 'expensive' quickly.

Deleting older data is an option, but like others who have been using the tracker for months now, what's really interesting is the historical data. Though some users will only be interested in the business mileage tracking...

I do know of ways to optimize it so the DB stays smaller, with more 'precompiled' content, but that takes time to do, and it only pushes back the inevitable: the DB will grow and demand will increase.

Running it just for myself is pretty much free... I have VMs I can run this off that take very little in resources, and I can store data for a very long time before the growth becomes an issue.

I've had offers of donated servers/hosting, but I don't want to do that. It would mean your data ends up on someone else's hardware that I don't really control, and you're putting trust in me to keep this as secure as possible. Which is why I run it off Google Cloud right now. Sure, Google could be hacked; sure, Google could access the data; but there's some trust that a corporation like that has no interest in doing so.

I might be paranoid, but this does store information you don't want anyone to get.

As for fees, I'm thinking of maybe a base fee with some 'free' recorded mileage, which would act as a trial of sorts. Then different cost brackets for the amount of mileage data recorded, just so it's fair to people who don't put a lot of mileage on their car versus someone who drives a lot.
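To illustrate the bracket idea, here's a minimal Python sketch. The mileage ceilings and fees are entirely invented for illustration; this is not the actual billing scheme.

```python
# Hypothetical brackets: (mileage ceiling in km, monthly fee in $).
# None means "no ceiling". All numbers are made up for illustration.
BRACKETS = [(500, 0.0), (5_000, 2.0), (20_000, 5.0), (None, 10.0)]

def monthly_fee(km_recorded: float) -> float:
    """Return the fee for the first bracket whose ceiling covers the mileage."""
    for ceiling, fee in BRACKETS:
        if ceiling is None or km_recorded <= ceiling:
            return fee
    return BRACKETS[-1][1]

print(monthly_fee(300))     # within the 'free trial' mileage -> 0.0
print(monthly_fee(12_000))  # heavier driver -> 5.0
```

The first bracket effectively acts as the free trial described above.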

I see there is interest, which is good... I'll probably send a survey to everyone who has car tracking enabled and gauge it a bit more.

Just to ease some of your worries: if I ever charge for the service, I'll provide a way for you to download your currently recorded data at no cost. I'm not here to screw anyone!

But in a way I'm the one getting screwed right now, and I just want to make it fair for me as well!

- - - Updated - - -

Some hooligans have made off with my wheels!!

Also, my odo and rated range are in KMs, not miles.

Really nice app. Thanks!

View attachment 105783

I've tried to understand as many of the Tesla car codes as possible, but I haven't tracked them all... Also, there are new ones I haven't checked, AND I'm working off an old dump of the Android app images, so I'm missing a lot of car colors...

Yes, it's in metric there... This page doesn't have my standard code for metric / dark-ages format conversion yet... But I'll get to it soon ;)
 
Some hooligans have made off with my wheels!!

LOL.

I don't know how I feel about charging extra for long-term data access... to me, that is a core feature of the service (note I am not against a fee, just not one geared specifically to how long your data stays there).

A couple other thoughts FWIW...

Assuming your usage grows... would it be possible for someone looking to buy a used Tesla to pay you a one-time fee to leverage historic data about that same VIN, to show some tidbits that might be helpful to a buyer (e.g., what has the battery's 'full' charge looked like over time)? Of course, if it is helpful to the buyer (i.e., "Mr. Seller, I see the battery is degrading faster than normal, sell it for less"), the user of your site/seller of the car may not be too happy... but just food for thought.

Linking the service to other internet systems (IFTTT, etc) could allow for some interesting opportunities, and would be a great feature to charge for (as Pushbullet learned, trying to take features away that were initially free doesn't sit well with users, although I think you might be in a more forgiving situation for the next short while).
 
I'm curious how you got to this calculation and what your data storage architecture is.

Twitter handles more than 500 million tweets per day. Even just 1% of twitter volume is an absolutely massive architecture. Do you expect to be storing 5 million records per day in a year from now? How many users are you projecting to have?

With about 30 cars tracked in early December, I was at about half a million entries recorded per day. I have over 100 cars tracked now, so I'm already at 1.5 million entries per day. Though that depends on the actual usage of the new users.

Every minute you're driving, the car generates about 240 entries. So if 100 of you drive for 1 hour per day, that's 1.4 million data entries right there.

The statistics from the DB since I started tracking come out to this: the average driving time per day across all the cars I track is 42 minutes, traveling 22 km per day.

There is more being recorded than that, but that's the bulk of it.
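For what it's worth, the arithmetic behind those volumes can be sketched like this (the 240 entries/minute rate and the 42-minute fleet average come straight from the posts above; everything else is just multiplication):

```python
ENTRIES_PER_MIN = 240   # approx. entries a car generates per minute of driving
AVG_DRIVE_MIN = 42      # average driving time per car per day (from the DB stats)

def daily_entries(num_cars: int) -> int:
    """Projected data-log entries per day across the whole fleet."""
    return num_cars * AVG_DRIVE_MIN * ENTRIES_PER_MIN

print(daily_entries(100))  # 100 cars at the fleet average -> 1,008,000/day
```

That lines up with the roughly 1.5 million entries/day quoted for 100+ cars once charging and idle polling are included.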
 
Every minute you're driving, the car generates about 240 entries. So if 100 of you drive for 1 hour per day, that's 1.4 million data entries right there.

So that's 4 times a second. Do you need to poll and store that level of resolution? I would think maybe once every 5 or 10 seconds would be adequate for the information generated. Or maybe just store new key frames and durations whenever a metric changes by x%. (just brainstorming here). So if I'm sitting at a traffic light for 5 minutes not moving, that's not generating 1200 new entries where one entry would probably work just as well.
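The key-frame idea above could look roughly like this: keep a sample only when some metric moves by more than a threshold since the last kept sample. This is purely illustrative; the metric names and threshold are made up, and it's not how TeslaLog actually stores data.

```python
def keyframe_filter(samples, threshold=0.02):
    """Keep a sample only when any metric changes by more than `threshold`
    (relative) versus the last *kept* sample."""
    kept, last = [], None
    for s in samples:  # each sample: dict of metric name -> value
        if last is None or any(
            abs(s[k] - last[k]) > threshold * max(abs(last[k]), 1e-9)
            for k in s
        ):
            kept.append(s)
            last = s
    return kept

# Sitting at a light for 5 minutes at 4 samples/s: 1200 identical samples
idle = [{"speed": 0.0, "soc": 80.0}] * 1200
print(len(keyframe_filter(idle)))  # -> 1 (one key frame covers the whole stop)
```

In a real pipeline you'd also store the duration each kept frame covers, as the post suggests.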
 
LOL.
Assuming your usage grows... would it be possible for someone looking to buy a used Tesla to pay you a one-time fee to leverage historic data about that same VIN, to show some tidbits that might be helpful to a buyer (e.g., what has the battery's 'full' charge looked like over time)? Of course, if it is helpful to the buyer (i.e., "Mr. Seller, I see the battery is degrading faster than normal, sell it for less"), the user of your site/seller of the car may not be too happy... but just food for thought.

If I do have the data, yes, but I would require permission from the actual Tesla owner... If it's owned by Tesla itself, then I would need an OK from them.

This policy might change, but until there's a clear one that most of you are happy with, I'm not going to share your data.

I will parse through the data to get interesting statistics about the user base and share them from time to time, like I did in my previous post.

- - - Updated - - -

So that's 4 times a second. Do you need to poll and store that level of resolution? I would think maybe once every 5 or 10 seconds would be adequate for the information generated. Or maybe just store new key frames and durations whenever a metric changes by x%. (just brainstorming here). So if I'm sitting at a traffic light for 5 minutes not moving, that's not generating 1200 new entries where one entry would probably work just as well.

If I start averaging information, I lose precision in the data... As for being stopped at a light, the number of data entries does drop in general.

For example, to be able to get the acceleration log, I can't average any of the information received. If I want to get the Wh/km (or miles), I have to get as much information as possible so that value is as accurate as possible. A solution I mentioned is to precompile some of this data into separate entities and move the actual raw data out. Reparsing that raw data later would be more expensive though. Because who knows what information we might find useful in the end.
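As a sketch of why sample density matters for Wh/km: the energy figure is effectively an integral of power over the trip, approximated from discrete samples, so coarser sampling means a coarser sum. The tuple layout (seconds, power in kW, odometer in km) is invented for illustration; it's not the real log schema.

```python
def wh_per_km(samples):
    """Trapezoidal estimate of Wh/km from (t_seconds, power_kw, odo_km) samples."""
    if len(samples) < 2:
        return 0.0
    wh = 0.0
    for (t0, p0, _), (t1, p1, _) in zip(samples, samples[1:]):
        wh += (p0 + p1) / 2 * (t1 - t0) / 3600 * 1000  # kW * s -> Wh
    km = samples[-1][2] - samples[0][2]
    return wh / km if km else 0.0

# 10 s at a steady 18 kW covering 0.25 km: 18 kW * 10 s = 50 Wh -> 200 Wh/km
samples = [(t, 18.0, 0.025 * t) for t in range(11)]
print(round(wh_per_km(samples)))  # -> 200
```

With steady power any sample rate gives the same answer, but real power fluctuates quickly, which is where fewer samples lose accuracy.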

Keeping that data is precious so we can go back in time and test new theories against it. I wish I actually had MORE data on the car from Tesla!!!

When you look at the video of my car, you can see a lot of interesting information that you won't see on your car while it has very little data in it... It's going to take a lot of charge cycles and driving before interesting trends show up. So if I add a new feature, I'd rather you get to see it over your historical data than just from now on.

I do have solutions, but as time goes on, what's really going to be more costly is actual work time on optimization rather than hosting, though they both add up.

 
If I start averaging information, I lose precision in the data... As for being stopped at a light, the number of data entries does drop in general.

For example, to be able to get the acceleration log, I can't average any of the information received. If I want to get the Wh/km (or miles), I have to get as much information as possible so that value is as accurate as possible.

I wasn't suggesting averaging, just lowering the sample rate. If, say, you sample once per second instead of four times, how much accuracy are you going to lose? Would it even be measurable or noticeable on the graphs? If you did that, you'd cut your data needs by 75%! I can't imagine that four samples per second offers significantly more accuracy than once per second, or even once every other second (a data savings of nearly 90%). I fully understand wanting to keep every single data point, but at that data rate, your site and project will become unsustainable very, very quickly! (I work in this space every day, so I'm not just saying that.)
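The savings percentages quoted above follow directly from the rate ratio; a trivial sketch, assuming the 4 Hz baseline discussed in this thread:

```python
BASE_HZ = 4  # current poll rate discussed in the thread

def savings(new_hz: float) -> float:
    """Fraction of storage saved by sampling at new_hz instead of BASE_HZ."""
    return 1 - new_hz / BASE_HZ

print(f"{savings(1):.0%}")    # 1 Hz (once per second)        -> 75%
print(f"{savings(0.5):.1%}")  # 0.5 Hz (every other second)   -> 87.5%
```

So "nearly 90%" for the every-other-second case is really 87.5%.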
 
I wasn't suggesting averaging. Just lowering the sample rate. If, say, you sample once per second instead of four, how much less accuracy are you going to have? Would it even be measurable or noticeable on the graphs? If you did that, you'd cut your data needs by 75%! I can't imagine that four samples per second offers significantly more accuracy than once per second, or even once every other second (a data savings of nearly 90%). I fully understand wanting to keep every single data point, but at that data rate, your site and project will become unsustainable very, very quickly! (I work in this space everyday, so I'm not just saying that.)

It actually does make a huge difference for certain graphs, especially the Wh usage calculation. Also, for the acceleration graph, once per second would be pretty much useless. Even 250 ms is pretty crappy resolution.

 
Great app! Very impressed. Thanks!

Will you, at some point, be able to release aggregated stats from all the cars your app is following? E.g. Rated range at full charge vs. odometer, Wh/km vs speed, outside temperature, etc. That would be very interesting.
 
I'm experimenting with a darker theme for the site (once logged in). Let me know what you think!

- - - Updated - - -

Will you, at some point, be able to release aggregated stats from all the cars your app is following? E.g. Rated range at full charge vs. odometer, Wh/km vs speed, outside temperature, etc. That would be very interesting.

I have 2 cars with considerable time and mileage on the tracker so far. It will take some time until I get more data from other cars to give a general idea of everyone's stats (on battery stats).
 
It actually does make a huge difference for certain graphs, especially the Wh usage calculation. Also, for the acceleration graph, once per second would be pretty much useless. Even 250 ms is pretty crappy resolution.

Just a thought if storage becomes an issue: couldn't you run a batch every night to throw away data intelligently? I understand it's important to grab frequent data for acceleration. Couldn't you backscan the data and, when the car is not accelerating (or showing rapid data changes), save only 1/4 or even 1/8 of the samples?
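The nightly thinning idea might look something like this. It's only a sketch: "low activity" is detected here with a simple speed-delta check, and the tuple layout (timestamp, speed) is invented for illustration.

```python
def thin_low_activity(samples, keep_every=4, speed_eps=0.5):
    """Keep every sample while speed is changing (acceleration events),
    but only 1-in-`keep_every` samples during steady stretches."""
    out, steady_run, prev_speed = [], 0, None
    for s in samples:  # each sample: (timestamp, speed_kmh)
        speed = s[1]
        if prev_speed is None or abs(speed - prev_speed) > speed_eps:
            steady_run = 0
            out.append(s)              # keep all rapidly-changing samples
        else:
            if steady_run % keep_every == 0:
                out.append(s)          # decimate steady stretches
            steady_run += 1
        prev_speed = speed
    return out

# 5 minutes stopped at a light at 4 samples/s -> 1200 raw samples
stopped = [(t, 0.0) for t in range(1200)]
print(len(thin_low_activity(stopped)))  # -> 301 kept instead of 1200
```

Acceleration runs survive at full resolution, while the parked/cruising bulk shrinks by roughly the 1/4 factor suggested above.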
 
If you wanted to save on storage space... once per month, reduce down to just the derived data, i.e., what is shown on the graphs... that's all averages, totals, and logs of a few interesting events... 99% of the events are likely not that. The only downside is that when you develop new statistics, you may not be able to back-propagate them.
 
Even if I were to drop a lot of the data, it would still mean increased server load, since it's not just a matter of how much data is being streamed. Again, like I said, I could clean some data out and keep pre-parsed stats... But creating new statistics, or fixing the math in previously generated ones, is going to be system-intensive. So whatever path I take, it will require more resources: either more servers, or more time optimizing the code to handle the extra load (whatever the solution may be).
 
@mochouinard you've obviously spent a lot of time figuring out how to read & interpret the Tesla data. Thanks very much for doing this. The data and graphs are very insightful, and will certainly get even better as time goes by and additional data is collected. Keep up the good work!
 
It looks like it does not reset the battery charging counters when several charging sessions happen without driving in between.

This screenshot explains:

Screen Shot 2016-01-04 at 01.09.37.png