Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

TeslaMate [megathread]

I’ve heard that Backblaze is great for backing up your entire machine, something I need to check out at some point!

Backblaze B2 is not the same service as Backblaze's computer backup product. Same company, but B2 is their S3-compatible object storage service. It rocks. And you only pay for what you use, of course. For TeslaMate's small backups, the bucket costs essentially nothing.
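As a back-of-envelope check (assuming B2's advertised price of roughly $0.005 per GB-month; check Backblaze's current pricing page), even a 500 MB backup set rounds to a fraction of a cent per month:

```shell
# Rough monthly storage cost for a 500 MB backup set at ~$0.005/GB-month.
# The price is an assumption -- verify against Backblaze's current B2 pricing.
awk 'BEGIN { gb = 0.5; price = 0.005; printf "%.4f USD/month\n", gb * price }'
```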
 
Good shout, I’ll stick that in there at some point over the weekend :)

The Google Drive backup solution does work well, and it's good for being free, too.

Maybe I sound paranoid, but most of us went with TeslaMate to avoid hosting on a third-party service and enhance privacy. Keep in mind that you are now uploading a "naked" database backup to Google Drive. You might want to add one step to your backup script to password-protect the file, as the last step before kicking off the rclone part. Something like:

$ zip -P passw0rd tmbackup.zip tmbackup.bck

or if you want to get really fancy you can encrypt it with openssl. You get the idea.
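For the fancier route, here's a sketch with openssl (symmetric AES-256 with a passphrase-derived key; the filenames follow the zip example above and are just placeholders):

```shell
# Encrypt the backup with AES-256-CBC; -pbkdf2 uses a stronger key derivation
# than openssl's legacy default (requires OpenSSL 1.1.1 or newer).
openssl enc -aes-256-cbc -salt -pbkdf2 -pass pass:passw0rd \
  -in tmbackup.bck -out tmbackup.bck.enc

# To restore, decrypt with -d and the same passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:passw0rd \
  -in tmbackup.bck.enc -out tmbackup.bck
```

Unlike zip -P, this encrypts the contents with a modern cipher; ideally keep the passphrase out of your shell history (e.g. with -pass file:/path/to/secret).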
 
I've put together the guide in one place now

How to setup and run TeslaMate on a Digital Ocean Droplet server - TeslaEV.co.uk

The restore step is fairly painless; it's all working on mine at the moment, and I've reinstalled the automated backup and monitoring too.
I have my DO instance up and running but the restore is stuck saying:

psql: error: could not connect to server: FATAL: role "teslamate" does not exist

after I executed:

Code:
docker-compose exec -T database psql -U teslamate << .
drop schema public cascade;
create schema public;
create extension cube;
create extension earthdistance;
CREATE OR REPLACE FUNCTION public.ll_to_earth(float8, float8)
RETURNS public.earth
LANGUAGE SQL
IMMUTABLE STRICT
PARALLEL SAFE
AS 'SELECT public.cube(public.cube(public.cube(public.earth()*cos(radians(\$1))*cos(radians(\$2))),public.earth()*cos(radians(\$1))*sin(radians(\$2))),public.earth()*sin(radians(\$1)))::public.earth';
.
 
I have my DO instance up and running but the restore is stuck saying:

psql: error: could not connect to server: FATAL: role "teslamate" does not exist


Are you using the same psql version as before? I'm on 12, I believe.
 
I have my DO instance up and running but the restore is stuck saying:

psql: error: could not connect to server: FATAL: role "teslamate" does not exist


Worth trying docker-compose up again, after first stopping only the TeslaMate instance.
 
Are you using the same version as before for psql? 12 is what I am on I believe.

Where can I check this?

Worth trying docker-compose up again, after first stopping only the TeslaMate instance.

Tried that but still complaining: psql: error: could not connect to server: FATAL: role "teslamate" does not exist

I also tried to create a backup first returning a similar error:
pg_dump: error: connection to database "teslamate" failed: FATAL: role "teslamate" does not exist
What is this role "teslamate" that does not exist?
 
Where can I check this?




It would be in your docker-compose file under this line for database:

image: postgres:12

Check the old source file and then the new one on DO

If it is different, you can use this procedure:

Upgrading PostgreSQL to a new major version | TeslaMate
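A quick way to compare is to check which Postgres major version each compose file pins (run this against the old host's file and the new one on DO; the path is just wherever your docker-compose.yml lives):

```shell
# Show the pinned Postgres image tag in the compose file.
grep 'image: postgres' docker-compose.yml
# On a running stack you could also ask the container itself:
#   docker-compose exec database postgres -V
```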
 
this is what the logs say:

Code:
root@docker-s-1vcpu-1gb-ams3-01:~# docker-compose logs
Attaching to root_teslamate_1, root_database_1, root_grafana_1, root_mosquitto_1, root_proxy_1
database_1 |
database_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
database_1 |
database_1 | 2020-09-11 23:13:32.861 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
database_1 | 2020-09-11 23:13:32.877 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2020-09-11 23:13:32.877 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2020-09-11 23:13:32.888 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2020-09-11 23:13:32.974 UTC [24] LOG: database system was shut down at 2020-09-11 23:13:03 UTC
database_1 | 2020-09-11 23:13:33.003 UTC [1] LOG: database system is ready to accept connections
database_1 | 2020-09-11 23:18:08.627 UTC [58] FATAL: role "teslamate" does not exist
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Starting Grafana" logger=server version=6.7.4 commit=8e44bbc5f5 branch=HEAD compiled=2020-05-26T17:35:38+0000
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana-plugins"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana-plugins"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SERVER_ROOT_URL=https://dograf.example.com"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_ANALYTICS_REPORTING_ENABLED=FALSE"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_USER=admin"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_PASSWORD=*********"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_DISABLE_GRAVATAR=true"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ALLOW_EMBEDDING=true"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_USERS_ALLOW_SIGN_UP=false"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_AUTH_ANONYMOUS_ENABLED=false"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_AUTH_BASIC_ENABLED=true"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana-plugins
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="App mode production" logger=settings
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing SqlStore" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Starting DB migration" logger=migrator
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing HTTPServer" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing BackendPluginManager" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing PluginManager" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Starting plugin search" logger=plugins
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Registering plugin" logger=plugins name="Pie Chart"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Registering plugin" logger=plugins name="Map Panel"
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Registering plugin" logger=plugins name=Discrete
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Registering plugin" logger=plugins name=TrackMap
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing HooksService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing OSSLicensingService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing InternalMetricsService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing RemoteCache" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing RenderingService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing AlertEngine" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing QuotaService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing ServerLockService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing UserAuthTokenService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing DatasourceCacheService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing LoginService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing SearchService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing TracingService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing UsageStatsService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing CleanUpService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing NotificationService" logger=server
grafana_1 | t=2020-09-11T23:13:31+0000 lvl=info msg="Initializing provisioningServiceImpl" logger=server
grafana_1 | t=2020-09-11T23:13:32+0000 lvl=info msg="Initializing Stream Manager"
grafana_1 | t=2020-09-11T23:13:32+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=
grafana_1 | t=2020-09-11T23:13:32+0000 lvl=info msg="Backend rendering via phantomJS" logger=rendering renderer=phantomJS
grafana_1 | t=2020-09-11T23:13:32+0000 lvl=warn msg="phantomJS is deprecated and will be removed in a future release. You should consider migrating from phantomJS to grafana-image-renderer plugin. Read more at Image rendering" logger=rendering renderer=phantomJS
mosquitto_1 | 1599866012: mosquitto version 1.6.12 starting
mosquitto_1 | 1599866012: Config loaded from /mosquitto/config/mosquitto.conf.
mosquitto_1 | 1599866012: Opening ipv4 listen socket on port 1883.
mosquitto_1 | 1599866012: Opening ipv6 listen socket on port 1883.
mosquitto_1 | 1599866012: mosquitto version 1.6.12 running
mosquitto_1 | 1599866030: New connection from 172.20.0.6 on port 1883.
mosquitto_1 | 1599866030: New client connected from 172.20.0.6 as TESLAMATE_302E3732323 (p2, c1, k60).
proxy_1 | time="2020-09-11T23:13:32Z" level=info msg="Configuration loaded from flags."
teslamate_1 | 01:13:38.069 [info] Already up
teslamate_1 | 01:13:48.313 [info] Version: 1.19.4
teslamate_1 | 01:13:49.224 [info] Refreshed api tokens
teslamate_1 | 01:13:49.224 [info] Scheduling token refresh in 40d
teslamate_1 | 01:13:49.232 [info] Running TeslaMateWeb.Endpoint with cowboy 2.7.0 at :::4000 (http)
teslamate_1 | 01:13:49.233 [info] Access TeslaMateWeb.Endpoint at http://dobeta.g358xh.nl
teslamate_1 | 01:13:50.068 [info] Starting logger for '** BATTERY LOW **'
teslamate_1 | 01:13:50.089 [info] MQTT connection has been established
teslamate_1 | 01:13:51.007 car_id=1 [info] Start / :offline
teslamate_1 | 01:13:52.014 [info] tzdata release in place is from a file last modified Wed, 11 Sep 2019 19:35:17 GMT. Release file on server was last modified Fri, 24 Apr 2020 04:15:20 GMT.
teslamate_1 | 01:13:53.902 [info] Tzdata has updated the release from 2019c to 2020a
 
this is what the logs say:
teslamate_1 | 01:13:53.902 [info] Tzdata has updated the release from 2019c to 2020a

Well, assuming that you DO have a valid backup you can still restore from, it wouldn't hurt to completely purge the database and start over. You can use the commands from here even though you are not upgrading the database:

Upgrading PostgreSQL to a new major version | TeslaMate

Delete the database, then do:

docker-compose up

Let it run for a bit BEFORE you restore your backup and watch the console for errors on a virgin install. If all looks good:

docker-compose down

Then do the restore of your backup and

docker-compose up -d

Hope it works out!
 
Not an issue for those running Teslamate docker-compose installs in the UTC timezone, but I found that my Grafana dashboards were only displaying UTC time even though the environment variable string is correctly set for Europe/Zurich. To resolve this the only way I found was to go to Grafana settings and modify directly. For some reason it is not picking up the TM-TZ string in the .env file. I even modified this in the docker-compose file to add this string to the grafana section. Kind of puzzled why it won't pick it up, but at least setting it manually fixes the time in the dashboards. If anyone knows how to resolve this in docker-compose that would be ideal, but it works for now.
 
Not an issue for those running Teslamate docker-compose installs in the UTC timezone, but I found that my Grafana dashboards were only displaying UTC time [...]

I should note that I am living on the "edge": I'm running the edge build to get the latest pushes from GitHub, not the latest release from master.
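For what it's worth, here's a docker-compose sketch of one thing worth trying (an untested assumption on my part: passing TeslaMate's TM_TZ through as the container-level TZ variable; note that Grafana's displayed timezone is ultimately an org/dashboard preference, which may be why the env var alone isn't enough):

```yaml
services:
  grafana:
    environment:
      # Hypothetical: reuse TeslaMate's TM_TZ from .env as the container TZ.
      - TZ=${TM_TZ}
```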
 
Thanks Roy! Can't see anything though?
Very odd. The link works for me! Here’s the page content:


Import from TeslaFi (BETA)
#

  • CREATE A BACKUP OF YOUR DATA‼️

  • If you have been using TeslaMate since before the 1.16 release, the docker-compose.yml needs to be updated. Add the following volume mapping to the teslamate service:

    services:
      teslamate:
        # ...
        volumes:
          - ./import:/opt/app/import
  • Export your TeslaFi data (for one car) as CSV by month: Settings -> Account -> Download TeslaFi Data.
    • If you have a ton of TeslaFi data and don't want to deal with the UI, you can run this python script to export all data: Export from TeslaFi #563
#
  1. Copy the exported CSV files into a directory named import next to the docker-compose.yml:

    .
    ├── docker-compose.yml
    └── import
        ├── TeslaFi82019.csv
        ├── TeslaFi92019.csv
        ├── TeslaFi102019.csv
        ├── TeslaFi112019.csv
        └── TeslaFi122019.csv
    TIP
    The path of the import directory can be customized with the IMPORT_DIR environment variable.

  2. Restart the teslamate service and open the TeslaMate admin interface. Now the import form should be displayed instead of the vehicle summary.

  3. Since the raw data is in the local timezone (assigned by the home address in the TeslaFi settings page) you need to select your local timezone. Then start the import. On low-end hardware like the Raspberry Pi, importing a large data set spanning several years will take a couple of hours.

  4. After the import is complete, empty the import directory (or remove it, making sure docker no longer has the volume mapping) and restart the teslamate service.
NOTE
If there is an overlap between the already existing TeslaMate and TeslaFi data, only the data prior to the first TeslaMate data will be imported.

NOTE
Since the exported CSV files do not contain addresses, they are added automatically during and after the import, so not all addresses are visible immediately after the import/restart. Depending on the amount of data imported, it may take a while before they appear. The same applies to elevation data.
 
Very odd. The link works for me! Here’s the page content: [...]
Thanks, the link appeared a few minutes after you posted it.
 

Getting there:

2020-09-12 18_52_20-Window.png


Edit: Sod's law, as soon as I posted that I got an error:

Code:
{%DBConnection.EncodeError{
   message: "Postgrex expected an integer in -32768..32767, got 43227. Please make sure the value you are passing matches the definition in your table or in your query or convert the value accordingly."
 },
 [
   {Postgrex.DefaultTypes, :encode_params, 3, [file: 'lib/postgrex/type_module.ex', line: 897]},
   {DBConnection.Query.Postgrex.Query, :encode, 3, [file: 'lib/postgrex/query.ex', line: 75]},
   {DBConnection, :encode, 5, [file: 'lib/db_connection.ex', line: 1148]},
   {DBConnection, :run_prepare_execute, 5, [file: 'lib/db_connection.ex', line: 1246]},
   {DBConnection, :parsed_prepare_execute, 5, [file: 'lib/db_connection.ex', line: 539]},
   {DBConnection, :prepare_execute, 4, [file: 'lib/db_connection.ex', line: 532]},
   {Postgrex, :query, 4, [file: 'lib/postgrex.ex', line: 202]},
   {Ecto.Adapters.SQL, :struct, 10, [file: 'lib/ecto/adapters/sql.ex', line: 630]}
 ]}
 
A couple of quick questions from a TeslaMate newbie: I installed it in a Docker container on a Synology NAS yesterday, and seemingly successfully imported all of my past TeslaFi data. The only problem I had was the initial lack of an /import folder, which didn't seem to be covered by the installation instructions. But maybe I missed it?

1. Anyway, having had a quick read through some of this thread, I thought that TeslaMate didn't need to periodically wake the car up to scrape data (unlike TeslaFi). But if this is so, why is there a Sleep Mode / Requirements / Vehicle must be locked check box in Settings? (FWIW I have ticked it).

2. How do I move a Dashboard from the Teslamate or Grafana folders to the Internal folder? Currently I have some in each and would prefer them all to be in one place.

TIA
 
1. Anyway, having had a quick read through some of this thread, I thought that TeslaMate didn't need to periodically wake the car up to scrape data (unlike TeslaFi). But if this is so, why is there a Sleep Mode / Requirements / Vehicle must be locked check box in Settings? (FWIW I have ticked it).

2. How do I move a Dashboard from the Teslamate or Grafana folders to the Internal folder? Currently I have some in each and would prefer them all to be in one place.

TIA

1. In reality that setting is there for when you don't have the Streaming API setting enabled. Today I believe it only helps TeslaMate decide when to mark the car as "asleep"; with streaming enabled, it'll pick up any changes anyway.

2. You'll need to log in to Grafana as an admin to manage dashboards: click the Sign In option in the bottom left and log in as the "admin" user. If you don't know the password for that, you can reset it (supply the new password as the final argument):
Code:
docker-compose exec grafana grafana-cli admin reset-admin-password <new-password>
 
Getting there: [...]

Edit: Sods law, as soon as I posted that I got an error: [...]

So three of the seven months of TeslaFi data imported, then I got the error above.

I deleted the import files, then reimported the ones that failed one by one. They all reported as imported successfully; however, when I check my drive history I'm missing the last three months.

I guess I need to find the row in the CSV that's causing the error? Then I think I'll need to restart the import. But given the import tool marked three imports as successful yet there's no data to show, should I nuke the existing imports? I suspect I can connect to TeslaMate's database (it's PostgreSQL) with psql or a GUI client and delete the existing imported rows?
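To hunt for the offending row, one option is to scan the CSVs for any bare integer field outside the smallint range the error complains about (a sketch only: the import/ path matches the directory layout from the guide above, and it won't catch numbers embedded inside quoted fields):

```shell
# Flag CSV fields that are integers outside Postgres's smallint range
# (-32768..32767), which is what the Postgrex error is rejecting.
awk -F, '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^-?[0-9]+$/ && ($i > 32767 || $i < -32768))
      print FILENAME ": line " NR ", field " i ": " $i
}' import/*.csv
```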