TeslaMate [megathread]

Thanks again DaveW, I've now imported all of those too. Just be aware that the bulk list of Superchargers on your site has ended up with incorrectly formatted quote marks, which the database throws an error at (at least on my computer/browser anyway). The ones in your post above and the Destination chargers seem to be OK - it's just the bulk list of Superchargers, which have ended up as ‘Edinburgh Supercharger’ instead of 'Edinburgh Supercharger', for example (if you need a magnifying glass: the database doesn't like the curly "left/right" versions - it has to be the neutral straight up/down one). I just ended up putting the list into a text file and doing a find/replace-all on them.
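(If anyone wants to do that replace from a shell rather than an editor, something like this should work - a sketch, assuming GNU sed and a UTF-8 locale; the filename is a placeholder:)

Code:
# Replace curly 'left/right' quotes with the straight apostrophe, in place
sed -i "s/[‘’]/'/g" superchargers.txt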

Thanks for that - I must have broken the formatting when I pasted over them. I'll sort them tonight.
 
For importing TeslaFi data into your GCP TeslaMate instance...

After ensuring that TeslaMate and Grafana are working correctly with live data, SSH into your instance and shut them down using

Code:
docker-compose down

Then in TeslaFi go to Settings > Account and scroll down to Download TeslaFi Data. You will need to download the data month by month - you will get a .csv file for each month.

You then need to get the files into your GCP instance. I believe there are other ways which may be more efficient (one is sketched below), but since I only had 7 files to worry about I just uploaded them using the SSH browser window. To do so, click the cog in the top right, select "Upload file" and select each of the .csv files in turn (unfortunately you can only do them one at a time this way). These files will be put into your home directory.
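If you have the gcloud CLI installed locally, something like this should copy them all up in one go instead (a sketch - the instance name and zone are placeholders for your own):

Code:
# Copy all the TeslaFi exports to the instance's home directory in one command
gcloud compute scp *.csv teslamate-vm:~ --zone=europe-west2-a

Either way, with the files in your home directory, you then need to move them into the import directory, which should already exist.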

Code:
sudo mv *.csv import/
ls import/
That should move the files and then list the contents of the import directory to ensure they are there.

With the files in place it is now time to restart the TeslaMate service, but since the micro machine type seems to struggle with the import process I would recommend temporarily upgrading to a "small" machine type. To do so, go to your GCP console and from the Navigation menu open the Compute Engine > VM instances page (if you are not already on it), then click on the name of your instance. Click "STOP" at the top of the page and acknowledge any warning; once it has stopped, click "EDIT", change the Machine type to "g1-small", Save, and then click "START".
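Alternatively, if you have the gcloud CLI set up, the same stop/resize/start sequence can be done from the command line (a sketch - the instance name and zone are placeholders):

Code:
gcloud compute instances stop teslamate-vm --zone=europe-west2-a
# The machine type can only be changed while the instance is stopped
gcloud compute instances set-machine-type teslamate-vm --machine-type=g1-small --zone=europe-west2-a
gcloud compute instances start teslamate-vm --zone=europe-west2-a

Either way, with the instance back up and running you can reconnect your SSH browser window and restart TeslaMate: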

Code:
sudo docker-compose up -d

Allow a couple of minutes for it to get going and then go to your teslamate.domain.com address, where you should be greeted with a list of your import files. Just ensure the timezone is correct, then click Import and grab a cup of tea. If all is well the files should, one by one, get a green tick next to them. Once they all have green ticks, go to your SSH window and enter:

Code:
cd import
sudo rm *.csv
cd ..
docker-compose down

Once all the services are confirmed down, go back to the GCP console and do the reverse of earlier - Stop, edit the machine type back to micro, then Start again, reconnect the SSH window, and run one final:

Code:
sudo docker-compose up -d

After a few minutes you should then be able to view all your historic TeslaFi data in TeslaMate/Grafana.
 
Brilliant, thanks for this

@DaveW, add this to your how-to guide?
 
Yeah of course go ahead. Feel free to trim it down as you see fit.

I'm not sure how necessary the change of machine type is - my previous attempts had been on Debian 10, which appears to have been causing problems in general, so it is possible the micro with Debian 9 will manage the TeslaFi import OK. But considering it is a fairly intensive process, it is probably worth doing it on the "small" rather than the "micro" just to be safe. It only adds a few extra minutes to the process for shutting down and changing over the machine type.
 
Hi @simon.c, I have noticed that when I changed to the g1-small the external IP changes, and therefore your subdomain becomes inactive. I changed the IP, but I believe the DNS flush takes some time, as I can no longer connect using my teslamate.*****.com. I can ping the external IP fine but not the address. Not sure if it's something I'm doing wrong?
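(For anyone in the same boat, something like this shows what the record currently resolves to while you wait for propagation - swap in your own subdomain:)

Code:
# Compare the answer against the instance's new external IP
dig +short teslamate.example.com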
 
Hi @DaveW Just sorting out the backup to GDrive and I've hit a snag. I noticed your setup is for the Pi and not the Google platform, and all went well up to the point below, creating a folder with the backup & upload script. Here is the log:

Code:
root@tesladata:~# mkdir /home/pi/tmbackup
mkdir: cannot create directory ‘/home/pi/tmbackup’: No such file or directory
root@tesladata:~# ls -l
total 0

(I then decided to create the folder path, as it didn't exist)

Code:
root@tesladata:~# cd /home
root@tesladata:/home# mkdir /pi
root@tesladata:/home# cd /pi
root@tesladata:/pi# mkdir /tmbackup
root@tesladata:/pi# cd /tmbackup
root@tesladata:/tmbackup# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root@tesladata:/tmbackup# nano tmbackup.sh

(I copied and pasted your code in here for the script, as the $PATH was the same as yours)

Code:
root@tesladata:/tmbackup# chmod +x tmbackup.sh
root@tesladata:/tmbackup# ./tmbackup.sh
./tmbackup.sh: line 3: cd: /home/pi/tmbackup: No such file or directory
./tmbackup.sh: line 4: /home/pi/tmbackup/teslamate.bck_Friday: No such file or directory
2020/07/03 10:12:26 ERROR : : error reading source directory: directory not found
2020/07/03 10:12:26 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
2020/07/03 10:12:26 ERROR : : error reading source directory: directory not found
2020/07/03 10:12:26 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
2020/07/03 10:12:26 ERROR : : error reading source directory: directory not found
2020/07/03 10:12:26 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
2020/07/03 10:12:26 Failed to copy: directory not found
root@tesladata:/tmbackup#

I may be doing something stupid! :)
 
@davidmc

The directory is a little different on the Google Cloud setup

This is what you'll need (replacing YOURHOMEFOLDERNAME with whatever your home folder is actually called - mine's firstname_surname):

Code:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Name the backup after the day of the week, giving a rolling week of backups
now=$(date +"%A")
cd /home/YOURHOMEFOLDERNAME/tmbackup
# Dump the TeslaMate database out of the running container
sudo /usr/local/bin/docker-compose exec -T database pg_dump -U teslamate teslamate > /home/YOURHOMEFOLDERNAME/tmbackup/teslamate.bck_${now}
# Upload anything created in the last 24h to Google Drive
rclone copy --max-age 24h /home/YOURHOMEFOLDERNAME/tmbackup --include 'teslamate.*' gdrive:TeslaMate
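
If it helps, a crontab entry along these lines would run it nightly (a sketch - the time is arbitrary and the path assumes the script lives in tmbackup; add it with sudo crontab -e so the dump runs with the right permissions):

Code:
# Run the backup script at 3am every day
0 3 * * * /home/YOURHOMEFOLDERNAME/tmbackup/tmbackup.sh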
 
I had the light bulb moment and realised what I did, then saw this post and it confirmed my thoughts! lol

Thanks again @DaveW
 
Ah, that's a good point - it looks like you've got round this anyway by just updating the A record and waiting for the re-propagation, but you can also set a fixed external IP within GCP and get your compute instance to use that...

In the GCP Console, go to Navigation Pane > VPC network > External IP addresses.
Click Reserve Static Address, give it a name (it can be the same as your instance if you like), and it is probably best to change the region to match that of your instance (I'm not sure if that is strictly necessary, or what impact, if any, it has if you don't), then click Reserve. The external IP will then be fixed and you can point your A records at it.
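(The reservation can also be done with the gcloud CLI if you prefer - a sketch, with the address name and region as placeholders:)

Code:
gcloud compute addresses create teslamate-ip --region=europe-west2
# List reserved addresses to see the IP you can point your A record at
gcloud compute addresses list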

Then set it on your VM instance - either when you first create it:

Just below where you tick to allow HTTP and HTTPS, click "Management, security, disks, networking, sole tenancy", then click the Networking tab. Click "default default" under Network interfaces, and under External IP change the selection from Ephemeral to the fixed IP reservation you just set. Click Done and continue creating the VM instance.

Or, if you already have your instance up and running, just use the Edit page for your instance: under Network interfaces click on your current connection and change the External IP from Ephemeral to your reserved one.

I'm not 100% sure on the costs around fixing your external IP - I know there is a fee if you leave it reserved and unused, but once your instance is using it I can't see anything suggesting it will cost more than the Ephemeral one it uses by default. I guess I'll see in next month's GCP invoice, but it will all be coming out of the free credit at this stage anyway.
 
Having just logged into my Console this morning to check on the above steps, I have noticed that I am getting a recommendation to upgrade to the small machine type based on memory usage, but I'm not sure what will happen if I just ignore it - I don't appear to be having any issues accessing the data at the moment. I may try installing the monitoring agent to see how bad the situation actually is.
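(Before going that far, a quick look from the SSH window gives a rough idea of memory pressure - something like:)

Code:
# Overall memory use on the instance
free -h
# One-off snapshot of per-container CPU/memory usage
sudo docker stats --no-stream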
 

Good shout that - when you already have an existing VM, you go into the VPC section and change the IP you've already got to static :)


Wonder if that memory-usage recommendation is based on volume of data? Mine doesn't have much in yet and isn't complaining so far.