TeslaMate [megathread]

Since you know the drive ID, run the same delete command, replacing drives with positions and id with drive_id. That will remove the location data.
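Spelled out for the drive in question here (ID 37), that is:

Code:
DELETE FROM positions WHERE drive_id = 37;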
I just ran

DELETE FROM positions WHERE id = 37;

It shows DELETE 1

However, when I refresh the old drive's page, the "Locations" page, as well as the "Visited" map, is still showing that entry. When I scroll through the addresses under "Locations" I can see the entry there as well. Any thoughts? Thanks for your help; it's really annoying me.
 
You misread the quoted message. You needed to replace ‘id’ with ‘drive_id’. You just removed the single row in positions whose id is 37, not all the position rows that reference drive ID 37.

If you took a backup of your data and want that single position back, restore and run the modified command again.

Regardless, take a backup before running any commands.
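For reference, the backup one-liner from the TeslaMate docs is along these lines (container, user and database names per the standard docker-compose setup):

Code:
# dump the teslamate database from the database container into a local file
docker-compose exec -T database pg_dump -U teslamate teslamate > teslamate.bck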
 
I just ran

DELETE FROM positions WHERE drive_id = 37;

But it shows DELETE 0

Is it because I already ran DELETE FROM drives WHERE id = 37; earlier?
 
No. As @cwanja said, your earlier command just deleted the row with id 37 in the positions table.

Might be worth trying the proper command with the typo amended as I mentioned above.
I tried the proper command without the ! after Repo.get, but nothing seemed to happen, probably because the drive has already been deleted.

I believe @cwanja's earlier command (DELETE FROM positions WHERE drive_id = 37;) would have worked if I hadn't already run DELETE FROM drives WHERE id = 37; the page below points this out, and it should match what's on the official page (if you are going to delete a drive, you most likely want to delete the position data along with it?).


I did, however, manage to find the position row IDs of the drive before and the drive after, to work out the position row ID range of the drive I want to delete, then ran a delete over that range (sketched below), which removed all the position data of that drive. Now when I go to the Visited page, the map is correct!
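For anyone following along, that was presumably a range delete of this shape (the bounds here are hypothetical placeholders; the real ones come from the position row IDs of the neighbouring drives):

Code:
-- hypothetical bounds: remove the orphaned position rows of the bad drive
DELETE FROM positions WHERE id BETWEEN 123456 AND 123999;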

*However*, under the "Locations" page I can still see two address entries for that wrong location, which messes up the "Cities" graph above. How can I delete the wrong addresses? They must be stored in a different table somewhere. Sooo close to fixing this!
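A starting point for finding those stray rows, assuming TeslaMate's usual schema in which drives reference addresses via start_address_id/end_address_id and charging processes via address_id (column names from memory; verify with \d addresses before deleting anything):

Code:
-- list addresses no longer referenced by any drive or charging process
SELECT a.id, a.display_name
FROM addresses a
WHERE NOT EXISTS (SELECT 1 FROM drives d
                  WHERE d.start_address_id = a.id OR d.end_address_id = a.id)
  AND NOT EXISTS (SELECT 1 FROM charging_processes c
                  WHERE c.address_id = a.id);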
 
Is the option to export the contents of a dashboard to a CSV disabled in TeslaMate? I looked up how to export data from Grafana and I don't see it in the "more" menu where I would expect it. I'm trying to cross-reference my usage with my utility to see if a time-of-use rate is a sensible switch.
 
Hi, I'm hoping one of you TeslaMate gurus can help me through some self-inflicted TeslaMate issues...

I changed some networking gear and tidied up my DHCP reservations, which resulted in new IPs for both my TeslaMate host and my Home Assistant box (which is also my MQTT broker).

TeslaMate seemed to be working OK with the host IP updated, but MQTT wasn't, so I did the following:

  • updated docker-compose.yml with MQTT_HOST=192.168.1.3 (see the fragment below)
  • ran docker compose pull and docker compose up -d (figured I may as well grab the latest)
  • restarted the host for good measure
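For anyone making the same change, the edit lives in the teslamate service's environment block, roughly like this (service and variable names per the standard TeslaMate compose file):

Code:
services:
  teslamate:
    environment:
      - MQTT_HOST=192.168.1.3  # broker's new address; other variables unchanged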

Now the teslamate container keeps restarting, and I'm experiencing serious lag when logged in over SSH trying to figure this out with my basic command-line skills! The MQTT side seems right, as topics are being received in the few moments the container is online. The TeslaMate web UI does load, but it doesn't last long before the container crashes and I get an internal server error.

I've run docker-compose logs teslamate, which results in thousands of lines like those attached. I'm hoping that will mean something to someone...

Hoping for a magic fix; however, I do have backups.
 

Attachments

  • teslamate logs.txt
8.7 KB
DB connection errors. Is your database up?

I believe so.
Code:
lex@teslamate:~ $ docker container ls -a
CONTAINER ID   IMAGE                           COMMAND                  CREATED        STATUS              PORTS                                                                                            NAMES
65161d91338b   teslamate/teslamate:latest      "tini -- /bin/sh /en…"   15 hours ago   Up About a minute   0.0.0.0:4000->4000/tcp, :::4000->4000/tcp                                                        lex-teslamate-1
7eb4bce1573c   postgres:14                     "docker-entrypoint.s…"   15 hours ago   Up 12 hours         5432/tcp                                                                                         lex-database-1
f788cd0d36af   eclipse-mosquitto:2             "/docker-entrypoint.…"   15 hours ago   Up 12 hours         1883/tcp                                                                                         lex-mosquitto-1
be81f820ff99   teslamate/grafana:latest        "/run.sh"                4 months ago   Up 12 hours         0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                                                        lex-grafana-1
19b58744b70d   portainer/portainer-ce:latest   "/portainer"             8 months ago   Up 12 hours         0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp, 9000/tcp   portainer

Any errors in that log?
docker-compose logs database

Output is attached in 'tm database logs.txt'.
Output of docker-compose ps would be helpful

Code:
lex@teslamate:~ $ docker-compose ps
     Name                    Command               State                    Ports
---------------------------------------------------------------------------------------------------
lex-database-1    docker-entrypoint.sh postgres    Up      5432/tcp
lex-grafana-1     /run.sh                          Up      0.0.0.0:3000->3000/tcp,:::3000->3000/tcp
lex-mosquitto-1   /docker-entrypoint.sh mosq ...   Up      1883/tcp
lex-teslamate-1   tini -- /bin/sh /entrypoin ...   Up      0.0.0.0:4000->4000/tcp,:::4000->4000/tcp

Now might be a good time to mention that this is running on a Pi 3B+ with an SD card. I have an SSD for it, but for the life of me I could not get it detected (I even changed the SSD and the adaptor).
 
Nothing attached that I can see
Oops. Now it's attached.

Since posting, I have found another M.2 adaptor that works with the 3B+, so I'm setting up a new install to restore the backup to. The SD card that is having 'issues' is still intact, though.

edit: hmm. I'm attaching a file but it's not showing in the thread. I'll just paste a snip of the contents instead.

Code:
lex-database-1 | 2023-10-17 20:44:03.207 UTC [1] LOG:  server process (PID 10380) was terminated by signal 11: Segmentation fault
lex-database-1 | 2023-10-17 20:44:03.207 UTC [1] DETAIL:  Failed process was running: SELECT s0."drive_id" FROM (SELECT sp0."drive_id" AS "drive_id", count(*) FILTER (WHERE NOT (sp0."odometer" IS NULL) AND (sp0."ideal_battery_range_km" IS NULL)) AS "streamed_count" FROM "positions" AS sp0 WHERE (NOT (sp0."drive_id" IS NULL)) GROUP BY sp0."drive_id") AS s0 WHERE (s0."streamed_count" = 0)
lex-database-1 | 2023-10-17 20:44:03.207 UTC [1] LOG:  terminating any other active server processes
lex-database-1 | 2023-10-17 20:44:03.272 UTC [10395] LOG:  PID 10384 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:03.291 UTC [10396] LOG:  PID 10386 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:03.307 UTC [10397] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:03.317 UTC [10398] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:03.351 UTC [10399] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:03.515 UTC [1] LOG:  all server processes terminated; reinitializing
lex-database-1 | 2023-10-17 20:44:07.787 UTC [10400] LOG:  database system was interrupted; last known up at 2023-10-17 20:43:28 UTC
lex-database-1 | 2023-10-17 20:44:07.789 UTC [10401] LOG:  PID 10385 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.793 UTC [10402] LOG:  PID 10383 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.805 UTC [10404] LOG:  PID 10381 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.807 UTC [10403] LOG:  PID 10382 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.810 UTC [10405] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.830 UTC [10407] LOG:  PID 10393 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.833 UTC [10408] LOG:  PID 10387 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.837 UTC [10410] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.841 UTC [10409] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.851 UTC [10406] LOG:  PID 10389 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:44:07.866 UTC [10411] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.883 UTC [10412] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.895 UTC [10413] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.905 UTC [10414] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.911 UTC [10417] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.915 UTC [10416] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:07.924 UTC [10415] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:44:08.113 UTC [10400] LOG:  database system was not properly shut down; automatic recovery in progress
lex-database-1 | 2023-10-17 20:44:08.155 UTC [10400] LOG:  redo starts at 0/E5746510
lex-database-1 | 2023-10-17 20:44:08.155 UTC [10400] LOG:  invalid record length at 0/E5746548: wanted 24, got 0
lex-database-1 | 2023-10-17 20:44:08.155 UTC [10400] LOG:  redo done at 0/E5746510 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
lex-database-1 | 2023-10-17 20:44:08.353 UTC [1] LOG:  database system is ready to accept connections
lex-database-1 | 2023-10-17 20:45:28.470 UTC [1] LOG:  background worker "parallel worker" (PID 10438) was terminated by signal 11: Segmentation fault
lex-database-1 | 2023-10-17 20:45:28.470 UTC [1] DETAIL:  Failed process was running: SELECT s0."drive_id" FROM (SELECT sp0."drive_id" AS "drive_id", count(*) FILTER (WHERE NOT (sp0."odometer" IS NULL) AND (sp0."ideal_battery_range_km" IS NULL)) AS "streamed_count" FROM "positions" AS sp0 WHERE (NOT (sp0."drive_id" IS NULL)) GROUP BY sp0."drive_id") AS s0 WHERE (s0."streamed_count" = 0)
lex-database-1 | 2023-10-17 20:45:28.470 UTC [1] LOG:  terminating any other active server processes
lex-database-1 | 2023-10-17 20:45:28.828 UTC [10441] LOG:  PID 10430 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.248 UTC [10442] LOG:  PID 10433 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.263 UTC [10443] LOG:  PID 10432 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.268 UTC [10444] LOG:  PID 10428 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.836 UTC [10445] LOG:  PID 10436 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.842 UTC [10446] LOG:  PID 10427 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.846 UTC [10447] LOG:  PID 10431 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.857 UTC [10448] LOG:  PID 10429 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:29.863 UTC [10449] LOG:  PID 10434 in cancel request did not match any process
lex-database-1 | 2023-10-17 20:45:31.674 UTC [1] LOG:  all server processes terminated; reinitializing
lex-database-1 | 2023-10-17 20:45:32.645 UTC [10450] LOG:  database system was interrupted; last known up at 2023-10-17 20:44:08 UTC
lex-database-1 | 2023-10-17 20:45:32.647 UTC [10451] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.655 UTC [10452] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.658 UTC [10453] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.669 UTC [10455] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.676 UTC [10454] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.684 UTC [10456] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.689 UTC [10459] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.690 UTC [10458] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.707 UTC [10460] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.708 UTC [10457] FATAL:  the database system is in recovery mode
lex-database-1 | 2023-10-17 20:45:32.896 UTC [10450] LOG:  database system was not properly shut down; automatic recovery in progress
lex-database-1 | 2023-10-17 20:45:32.924 UTC [10450] LOG:  redo starts at 0/E57465C0
lex-database-1 | 2023-10-17 20:45:32.936 UTC [10450] LOG:  invalid record length at 0/E5748500: wanted 24, got 0
lex-database-1 | 2023-10-17 20:45:32.936 UTC [10450] LOG:  redo done at 0/E57484C8 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
lex-database-1 | 2023-10-17 20:45:33.132 UTC [1] LOG:  database system is ready to accept connections
 
Well, my Pi is in use restoring my backup, and I can't tell from Windows because of the file system (so I don't know). It's a 64GB card though, and the backup file is <1GB.
I did notice in my troubleshooting through Portainer a bunch of unused volumes and containers, which I removed. Last night's backup was significantly smaller, but these issues existed before that. I am restoring a backup from before the issues, so I'll likely have to remove those again. This is going onto a 256GB SSD, so fingers crossed it works smoothly once finished.
 
I've not been able to restore any of my backups from the past 7 days, and I don't keep any older than that.

The backup restore keeps failing while altering tables. Then the teslamate container starts looping again.

I was able to get a fresh instance of teslamate working fine.

This is the error when restoring the backup.
Code:
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
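
For context, the restore procedure in the TeslaMate docs is roughly the following, from memory (verify against the current backup/restore docs before running; it drops and recreates the schema before loading the dump):

Code:
# per the TeslaMate docs (roughly): recreate an empty schema, then load the dump
docker-compose exec -T database psql -U teslamate << .
drop schema public cascade;
create schema public;
create extension cube;
create extension earthdistance;
.
docker-compose exec -T database psql -U teslamate -d teslamate < teslamate.bck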


This seems relevant - I think I should not have pulled the latest. I'll go back to my SD card and see if I can get that working, then get a working backup over to my now-operating SSD.
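If the freshly pulled image does turn out to be the culprit, one option is pinning a known-good tag in docker-compose.yml instead of tracking :latest (the tag below is a hypothetical example, not a recommendation):

Code:
services:
  teslamate:
    image: teslamate/teslamate:1.27.2  # hypothetical known-good tag instead of :latest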

How much space do you have left on your disk (SD card)?
Code:
lex@teslamate:~ $ df -H
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        62G   13G   47G  22% /
devtmpfs        341M     0  341M   0% /dev
tmpfs           477M     0  477M   0% /dev/shm
tmpfs           191M  1.5M  190M   1% /run
tmpfs           5.3M  4.1k  5.3M   1% /run/lock
/dev/mmcblk0p1  268M   32M  236M  12% /boot
tmpfs            96M     0   96M   0% /run/user/1000
 