Welcome to Tesla Motors Club

TeslaMate [megathread]

Mine's up and running now.

Imported all my TeslaFi data (Sept 19 to now); each file was around 25 MB, which took some time to upload and then import!
Set up the backup - all working now, thanks @DaveW - Google Drive shows a file in a folder.
Imported the Supercharger & Destination chargers - one to be careful with is the destinations list; it doesn't like being loaded in one go and crashes the SSH session. I had to do it in small batches to get it to work.

Will monitor to see if I get the same errors as @simon.c, but so far so good.


I've also now installed Tasker (£3.19) and the Tesla add-on (£1.89) for Android to run scheduled tasks.

Thanks to both @DaveW & @simon.c, and not forgetting @Durzel, who started this adventure off!
 

Glad it's all working

You might like to have some nice icons for your shortcuts then :) - Tesla shortcut icons - TeslaEV.co.uk
 
Hi all! I wonder if someone may be able to help. I've increased the GC instance to small to import my TeslaFi data; it managed to import around 10 files, but I now get the following icon. I've tried docker-compose stop then docker-compose start, but I still get the error icon. It looks like a "skipped" icon, but the data from those files is not within TeslaMate.
 

Attachments

  • Teslamate skip.JPG (78.7 KB)
Sorry, not one I've seen in my many adventures! Mine just showed as rotating grey until they went to green ticks - I didn't see any show that icon. If the import has gone wrong, I think you may be best off restarting the process from scratch. Maybe start off with a "small" from the outset so you don't have to stop and restart the whole machine. I would also suggest leaving a few minutes between the docker-compose down and sudo docker-compose up -d commands each time, just to allow things to settle. Once it is all running you could try changing back down to a micro, however...

On the subject of my resource warning: of course, after I said TeslaMate and Grafana didn't seem to be affected, it then stopped working! I stopped the instance, changed it up to a small as recommended by the Google console, and on starting it again everything seems to be working OK. If I leave it as is, I obviously won't get through the Google account credit before it expires, but long term it will end up having a monthly cost.

I decided to install the monitoring agent to see just how much memory it is using, but it is a bit confusing for someone of my amateur-level interest! On the face of it, I may be better off switching to an e2-micro machine type instead: although it still isn't a persistently free instance, it is about half the monthly cost of an n1 small. An e2-micro has 2 vCPUs and 1 GB memory, vs the 1 vCPU and 614 MB / 1.7 GB of an n1 micro / small respectively. I have switched over to the e2-micro for the time being, so I will see how it gets on over the next few days.
 

Cheers Simon. I have tried this process within GCP on 3 separate instances, and also on a Raspberry Pi twice, and they all fail with the same icon/issue. No one seems to know what the icon means, which is a tad frustrating. I even asked in the TeslaMate Discord channel, but they didn't have a clue either.

Hopefully TeslaMate will gain the ability to pull information directly from TeslaFi going forward, which may resolve this issue, but for the time being it looks like I'm sticking with TeslaFi.
 
I’d say the reason is threefold:

1) Total cost
2) Total number of charges and kWh added
3) OCD!

In an ideal world I’d also be able to log the 75 miles I didn’t log on day 1 of ownership. I presume the data produced by the API is transient (hence the need for real-time loggers)? i.e. once it is gone it is gone, and there is no way of interrogating historical data from it?

I'm a bit time-limited, but I have taken a look at the absolute minimum needed to achieve this. You might also have seen I posted a picture of the TeslaMate schema earlier in this thread. I've tried this on a test database and taken screenshots so you can see what you do and don't get. It depends on how much your OCD about charge data is nagging you!

The good news is that a charging_processes record has no outward dependency on records in the charges table; the latter has to refer to a parent charging_processes record, but not vice versa.

If all you want is for the Charging Stats dashboard to show the right numbers for total cost, total number of charges and total kWh added, then you can achieve that by manually inserting a single record for each missing charge session. The Charging Stats dashboard totals will then reflect your manually added charge(s), though no entries will be shown in the Charges dashboard. A Charges dashboard entry is possible, but it will need more work on your side.

WARNING: make sure you take a backup of your database before you try any of this, just in case you need to revert. No warranty given or implied :D so do it at your own risk.

Example of a faked charging session for the following desired values:
29th June 2020, 23:00 - 23:30, 30 mins duration (the SQL statement below inserts the date/time in UTC, hence the 1-hour offset from BST)
Charge added = 2.75 kWh
Cost = 0.12

Assumptions:
- Only a single car in TeslaMate (car_id = 1).
- The charge is linked to the first position ever logged in TeslaMate (position_id = 1); you can set this to a different value if you find the id of your desired position in the database, but I haven't described that here.
- It only populates the values that cannot be left null as per the database design, plus those needed to get your charge costs correct.

Code:
INSERT INTO public.charging_processes(
 id,
 start_date,
 end_date,
 charge_energy_added,
 duration_min,
 car_id,
 position_id,
 cost
)
VALUES(
 nextval('public.charging_processes_id_seq'),
 TIMESTAMP '2020-06-29 22:00:00',
 TIMESTAMP '2020-06-29 22:30:00',
 2.75,
 30,
 1,
 1,
 0.12
);
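If you want to sanity-check the result before looking at Grafana, a query along these lines should show the row you just inserted. This is my own sketch against the columns used in the INSERT above, not anything from the TeslaMate docs:

```sql
-- Hypothetical check: show the most recently inserted charging session
SELECT id, start_date, end_date, charge_energy_added, duration_min, cost
FROM public.charging_processes
WHERE car_id = 1
ORDER BY id DESC
LIMIT 1;
```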

Before view of Charging Stats dashboard
Screenshot 2020-07-04 at 13.52.47.png
Screenshot 2020-07-04 at 13.53.04.png
After view of Charging Stats dashboard, with changes highlighted in red:
- 1 charge session added
- 30 minutes added to the total duration
- Total charged kWh increased (TeslaMate always rounds to a whole kWh value)
- Total charging cost increased
Screenshot 2020-07-04 at 13.53.21.png


The faked charging session defaults to a DC-based charge.
No named location is shown, because my example database uses position_id = 1.
As we've not created any corresponding charge data stream records in the public.charges table, there is no SOC data shown.
You won't be able to drill into the faked charging session; it only appears on the Charging Stats dashboard and contributes to your totals for number of charges, kWh, duration and cost.
Screenshot 2020-07-04 at 13.53.41.png
 
Further to the above, I have an entry in my “charging_processes” table but no corresponding entries in the “charges” table (for some reason), and it DOESN’T appear on the “Charges” Grafana page, if that matters.
 
Yes, the Charges dashboard seems robust to this type of situation, from the experimentation I did.

There are details in the docs for deleting a charge.

I did try to create a manual charging_processes record with the minimum matching charges records. From what I could determine, the minimum needed in the charges table for it to work is two records: one for the start time and one for the end time. There are then a number of mandatory values that must be set in each charges record to conform to the database design.

The code to create a charging_processes record and the start/end charges records is something like this. Note that the code refers to some sequence generators, so you need to run it as a single code block, and only when the car is NOT charging:
Code:
INSERT INTO public.charging_processes(
 id,
 start_date,
 end_date,
 charge_energy_added,
 duration_min,
 car_id,
 position_id,
 cost
)
VALUES(
 nextval('public.charging_processes_id_seq'),
 TIMESTAMP '2020-06-29 22:00:00',
 TIMESTAMP '2020-06-29 22:30:00',
 2.75,
 30,
 1,
 1,
 0.12
);

INSERT INTO public.charges(
 id,
 date,
 charge_energy_added,
 charger_power,
 ideal_battery_range_km,
 charging_process_id
)
VALUES(
 nextval('public.charges_id_seq'),
 TIMESTAMP '2020-06-29 22:00:00',
 2.75,
 7,
 235,
 currval('public.charging_processes_id_seq')
);

INSERT INTO public.charges(
 id,
 date,
 charge_energy_added,
 charger_power,
 ideal_battery_range_km,
 charging_process_id
)
VALUES(
 nextval('public.charges_id_seq'),
 TIMESTAMP '2020-06-29 22:30:00',
 2.75,
 7,
 235,
 currval('public.charging_processes_id_seq')
);
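After running the three statements, a quick check like this should confirm that both charges rows are attached to the new session. Again, this is my own sketch using only the columns from the INSERTs above; expect a count of 2 if everything worked:

```sql
-- Hypothetical check: count the charges rows attached to the newest session
SELECT cp.id, cp.start_date, cp.end_date, COUNT(c.id) AS charge_rows
FROM public.charging_processes cp
LEFT JOIN public.charges c ON c.charging_process_id = cp.id
GROUP BY cp.id, cp.start_date, cp.end_date
ORDER BY cp.id DESC
LIMIT 1;
```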

I did see some strange behaviour when doing this, though it may just have been my test database. Sometimes the entry didn't appear in the Charges dashboard the first time I ran the code above after a database restore. If I ran it a second time without restoring the database, tweaking the start/end times so I could spot the difference, it appeared in the Charges dashboard OK. I didn't have time to investigate further.

Given you won't realistically have the data stream for the individual charge session, I don't think it's worth trying to do that anyway. An entry in the charging_processes table seems good enough to get the main stats updated, if someone is that bothered.
 

@NickName When you say you tweaked the start/end times the second time you tried the Charges dashboard, did you update the timestamp values up or down? I'm wondering if the query that pulls the data for the Charges dashboard uses a > or < when joining the charges and charging_processes tables. Just a thought based on your description of what you did to get it to work; I didn't test this theory in any way, and truth be told I'm really just sticking my nose in somewhere it doesn't belong. What can I say - as a DBA, I saw some database code and was intrigued.
 
@BlackCatt I know what you’re getting at. I actually tried up and down just in case, and also tried a quick reindex. I was also inserting with start/end timestamps that fell within the existing range of records covering several weeks. My tweaks were a couple of hours either way, rather than something that would take it outside any query’s datetime range.

I didn’t spend much time looking at it; I suspect it’s something dead simple to do with my test set-up. The charging_processes INSERT is the only one that matters, in my view. What I would look at next is the actual query in Grafana, as I know the data is in the underlying DB.

Not a problem I have, only looked into it as another member posted it was something they’d like to do.
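If anyone does want to poke at the join-direction theory, a diagnostic along these lines would show whether the charges rows sit on or inside their parent session's start/end boundaries. This is purely my own sketch against the columns used in the INSERTs earlier in the thread, not the actual Grafana query:

```sql
-- Hypothetical diagnostic: check whether each charges row falls inside
-- (inclusive) its parent session's start/end window. If the dashboard
-- query used a strict > or <, rows exactly on the boundary would be excluded.
SELECT cp.id,
       c.date,
       c.date >= cp.start_date AS on_or_after_start,
       c.date <= cp.end_date   AS on_or_before_end
FROM public.charging_processes cp
JOIN public.charges c ON c.charging_process_id = cp.id
ORDER BY cp.id DESC, c.date;
```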
 
Out of interest, since there are presumably a few people who have managed to get their GCP instance up and running now - is anyone happily running on the (N1 series) f1-micro without any memory warnings on their VM instances console page? I've left mine on an e2-micro for the time being, and according to the cloud monitoring agent I have a stable 40% memory free (out of the 1 GB you get on an e2-micro), which would be very close to fully using the 614 MB you get on the f1-micro. There does seem to be a bit of variation in memory usage across the machine types, and the monitoring agent itself uses a small amount of resource too. I'm tempted to try an experimental day or so back on the f1-micro to see if I get the warnings again.
 
Mine lasted a few days without warnings, just checked now and have one.

FABDD2B6-300F-4AFC-B6A1-30CBAC3DC1E3.jpeg
 
Not yet, but having seen @DaveW's post above I'll monitor and see if anything pops up. I noticed that CPU usage is in the low 4-6% range, so it's hardly touching that side of things.