# Logging software on my own server



## Niki-and-I (Nov 18, 2018)

I want to keep data logs for my Model 3 but I don't want to use a third-party service. Essentially I want something like TeslaFi but running on my own computer.

I've looked around a bit and found DoctorMcKay's Tesla Data Recorder which is the closest to what I need. I am going to test this and would like to enhance it.

Since I don't really like JavaScript, I plan to write my own software. I would prefer to use C or Perl, but I haven't found API bindings for those languages. So I will likely do it in Ruby, the same language used in timdorr's API. (It will be my first program in Ruby, but it sure looks like a cool language.)

If anyone has done something like this I would like to hear about it. If I am successful in developing this project (i.e. if I have enough stamina and time to dedicate to it), I will be very happy to open source it, as I've been doing with my scientific software (for 30 years).

For now I am going to install DoctorMcKay's software and see how it works, before I embark on writing code. At least the database is likely to keep the same structure, so this will be good preparation.

So watch this space. I will use this thread to provide updates.


----------



## HeavyPedal (Oct 28, 2018)

I'll be interested to follow your progress. I do some rudimentary logging, command scheduling, voice control and geolocation using the Tesla API, but since my programming skills are now classified as '_really prehistoric, Dad_' I did most of my initial work via SmartThings/webCoRE. I've thought about trying Ruby, but haven't made the push just yet.


----------



## FF35 (Jul 13, 2018)

I’ve thought about this also and will be following this thread.


----------



## Niki-and-I (Nov 18, 2018)

So far so good: I've managed to get DoctorMcKay's Tesla Data Recorder to work. It is currently logging Niki's data to the SQL database on my server.

For the next step I will concentrate on creating useful reports from the database, since that is something DoctorMcKay's code doesn't do (it is simply a logger with no frontend). Adding reports will already expand the capability of that code.

Candidates for the reporting are a) the R statistical software or b) Perl+gnuplot. I guess R is probably better for people using Windows/Mac, as it may be easier to install, and I know there are many packages that may help with visualization (e.g. maps).


----------



## Niki-and-I (Nov 18, 2018)

DoctorMcKay's script has been running for a few days now, so I am getting to appreciate how it organizes things. One thing I don't like is that it stores four JSON structures verbatim in the database (i.e. JSON stored in text strings). This makes it harder to use SQL to make complex queries on the data inside those JSON objects. The first thing I would change in the logger script, therefore, would be to convert all the data inside the JSON objects into table columns. That way we could query things like charge_current_request, charger_voltage or battery_heater_on (though that last one may only be interesting to people with Models X and S).

So if I am to touch the logger software, the first thing will be to put everything in SQL, as it will make more queries possible (without complex code).
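
As a sketch of what that flattening would look like (the helper below is illustrative; the real JSON objects have many more keys than the three shown here):

```python
def flatten(obj, prefix=""):
    """Recursively flatten a nested JSON object into a single
    dict of column_name -> value, suitable for one SQL row."""
    row = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, prefix=f"{name}_"))
        else:
            row[name] = value
    return row

# A few fields like those in the charge_state object
charge_state = {"charge_current_request": 32, "charger_voltage": 240,
                "battery_heater_on": False}
row = flatten(charge_state)
# row == {"charge_current_request": 32, "charger_voltage": 240,
#         "battery_heater_on": False}
```

Each key of `row` then becomes a table column, so a plain `WHERE charger_voltage > 230` works without any JSON unpacking in the query.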

Another thing I am starting to grapple with is that SQL is not great for time-series data. But since I am writing the analysis in R, I think I am just going to pass large chunks of data to the R scripts and let them deal with the time series.

The first analysis script I am working on is a weekly charging summary (later it can easily change to monthly, yearly, etc.). Basically, the R script will do an SQL query to pull all the charging entries for the week and then do the rest of the work itself.
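
The plan above is for an R script, but the aggregation step is language-independent; here is a minimal sketch in Python, with hypothetical column names (`ts`, `charge_energy_added`) standing in for whatever the real schema uses:

```python
from datetime import datetime, timedelta

def weekly_charging_summary(rows, week_start):
    """Summarize charging rows (dicts with hypothetical keys 'ts' and
    'charge_energy_added') whose timestamps fall in one week."""
    week_end = week_start + timedelta(days=7)
    in_week = [r for r in rows if week_start <= r["ts"] < week_end]
    total_kwh = sum(r["charge_energy_added"] for r in in_week)
    return {"sessions": len(in_week), "total_kwh": total_kwh}

rows = [
    {"ts": datetime(2018, 11, 19), "charge_energy_added": 12.5},
    {"ts": datetime(2018, 11, 21), "charge_energy_added": 30.0},
    {"ts": datetime(2018, 11, 30), "charge_energy_added": 8.0},  # next week
]
summary = weekly_charging_summary(rows, datetime(2018, 11, 19))
# summary == {"sessions": 2, "total_kwh": 42.5}
```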


----------



## Niki-and-I (Nov 18, 2018)

Niki-and-I said:


> DoctorMcKay's script has been running for a few days now, so I am getting to appreciate how it organizes things. One thing I don't like is that it stores four JSON structures verbatim in the database (i.e. JSON stored in text strings).


Another thing I don't like is that the logging daemon needs to be 'awakened' before starting to drive; otherwise it may miss a chunk of the ride (or even the whole thing). This is because when the car is in sleep mode the logger only queries it every 6 hours (to avoid vampire drain), or every 30 minutes when it is parked and not asleep, etc.
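
That polling behaviour amounts to choosing a delay from the car's last observed state, roughly like this sketch (the 6-hour and 30-minute values come from the description above; the driving interval and the state names are assumptions):

```python
# Poll intervals in seconds, keyed by the car's last observed state.
INTERVALS = {
    "asleep": 6 * 3600,   # back off hard to avoid vampire drain
    "parked": 30 * 60,    # awake but parked: check twice an hour
    "driving": 60,        # poll often so the ride is captured
}

def next_poll_delay(state):
    """How long the logger should wait before querying the car again."""
    return INTERVALS.get(state, 60)
```

The downside described above follows directly: once the chooser has returned the 6-hour delay, a drive that starts inside that window goes unrecorded unless something wakes the logger early.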

Currently I have set up a web page that I call from my cell phone or from the car's browser just before starting. But this is not practical, and I am sure I will forget many times. Ideally this should be triggered automatically by the car or the phone app, but I have not figured out how to do this yet (there's a method that could be used with Android phones but does not work on iPhones).

It would be great if we could program the car to load some web page automatically as it started...


----------



## Climate Change Denier (Jan 2, 2019)

I chose python and found an existing package that supported the Tesla API. I then extended it with a polling/logging program and a text parser. I also created an RPC interface to support integration with a CLI or web server or whatever.

However, with my program you don't need to explicitly tell it to start polling. The polling daemon I wrote checks up on the car every (configurable) minute when it is asleep to see if something has changed (e.g. the car is awake because you started to drive). It also wakes up the car every (configurable) 2.8 hours to track temperature and vampire drain. It lets the car go back to sleep ~10 minutes after the last state change (stop driving/charging, etc.). My vampire drain is in the 4.6 to 5.3 miles per day range, which seems to be about par for Model 3s, so this isn't a big burden.

It currently doesn't have any packaged analytics for producing graphs or whatever, but graphing from json isn't too challenging.

My version is at https://github.com/tesladdicts/teslajson (was https://github.com/SethRobertson/teslajson) while the original version is at https://github.com/gglockner/teslajson


----------



## Bandit (May 5, 2018)

HeavyPedal said:


> I did most of my initial work via SmartThings/webCoRE.


Have you published your SmartThings/webCoRE handlers/pistons? I've written some python stuff to interface with the car but I haven't had success with a decent device for ST with the Tesla.


----------



## HeavyPedal (Oct 28, 2018)

Bandit said:


> Have you published your SmartThings/webCoRE handlers/pistons? I've written some python stuff to interface with the car but I haven't had success with a decent device for ST with the Tesla.


I haven't published all of my pistons, as much of the work is still in its infancy and the code is not as clean or complete as I'd prefer, but the foundation for the work that I've done is published here.


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> I chose python and found an existing package that supported the Tesla API. I then extended it with a polling/logging program and a text parser. I also created an RPC interface to support integration with a CLI or web server or whatever.
> 
> However, with my program you don't need to explicitly tell it to start polling. The polling daemon I wrote checks the car up every (configurable) minute when asleep to see if something has changed (like, the car is awake because you started to drive). It also wakes up the car every (configurable) 2.8 hours to track temperature and vampire drain. It lets the car go back to sleep after ~10m after the last state change (stop driving/charging etc). My vampire drain is in the 4.6 to 5.3 mile per day range, which seems to be about par for Tesla 3s so this isn't a big burden.
> 
> ...


Thanks for that. From what you describe and a quick look at your code, it seems that your approach has some advantages over what I am using, though there are also some things I'd change. I will evaluate it soon to get a better feel.

These are the things I like in your approach:

- no need to wake up the logger
- Python code (even though I don't write Python, I prefer it to JavaScript)

But there are some negatives:

- drain: 4.6-5.3 miles per day is a bit more than I'm getting (3.8-4.0)
- writing output as JSON to files (I'd rather use a database)

I have a question about the difference between "checking the car" versus "waking the car". I thought that checking the car would also wake it, no?
The current poller I am using waits 6 hours when it finds the car asleep (which is why it drains so little); of course this comes at the cost of having to manually wake up the poller (which will then wake up the car) before starting to drive.

Another question: are you following the polling approach used by TeslaFi? (Does anyone know what approach they follow? I guess it cannot be much different.)

I'm thinking that I may end up adapting your code to write out the data to a relational database; that would allow me to recycle the SQL/R code I've already written. I think I can write reports as good as those from TeslaFi that way, which is really my objective: the same functionality as TeslaFi, but without my data being outside my control (except at Tesla itself, of course).


----------



## Bokonon (Apr 13, 2017)

Niki-and-I said:


> I have a question about the difference between "checking the car" versus "waking the car". I thought that checking the car would also wake it, no?


The "vehicles" API endpoint provides a way of checking whether a given car is awake or asleep without waking it up. (Note that while the car is asleep, a minimal amount of information relating to the car's identity is available through the API, such as its name, configuration, and VIN.)

TeslaFi basically calls this "vehicles" endpoint every [n] minutes to check your car's status, except when TeslaFi pauses polling to allow your car to sleep (per your configured sleep settings). Once TeslaFi confirms that your car is asleep, it resumes polling every [n] minutes so it will know when your car wakes up. Also, when TeslaFi determines that your car is driving, it increases the polling frequency to 2-3 times a minute until the drive ends, in order to improve the accuracy of its data.
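
As a hedged sketch, reading that status out of the body of a `GET /api/1/vehicles` call might look like this (the response shape shown is an assumption based on the unofficial API documentation):

```python
def vehicle_state(vehicles_body, vin):
    """Return the 'state' field ('online', 'asleep', ...) for one car
    from the JSON body of a GET /api/1/vehicles call. This endpoint
    reports the state without waking the vehicle."""
    for car in vehicles_body.get("response", []):
        if car.get("vin") == vin:
            return car.get("state")
    return None

# Example body with the kind of identity data available while asleep
body = {"response": [{"vin": "5YJ3TESTVIN000000", "display_name": "Niki",
                      "state": "asleep"}]}
# vehicle_state(body, "5YJ3TESTVIN000000") == "asleep"
```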


----------



## Climate Change Denier (Jan 2, 2019)

I'd like to know if you find that a longer timeout gives you lower drain. I didn't notice a difference, but I was not doing data-driven analysis before I had the data, and other people online were reporting both lower and higher drains without monitoring, so I figured I was close enough. That is why I make all of the timeouts configurable on the command line.

I thought about throwing it into PostgreSQL, but I figured the ability to compress older files would probably be more valuable to save space. However, if you have a great analytics package, that would certainly make more sense.

I was looking at some of the posts by the TeslaFi people when they were first playing around, and their approach seems to be approximately the same. I'm sure there are specific details on query selection and timeouts which may vary (though my impression is the timeouts are configurable on their system as well, on a per-account basis, which agrees with what Bokonon just said).

I could probably hack up a quick JSON-to-SQL conversion if you want, provided the schema.


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> I'd like to know if you find that a longer timeout gives you lower drain. I didn't notice a difference, but I was not doing data-driven analysis before I had the data, and other people online were reporting both lower and higher drains without monitoring, so I figured I was close enough. That is why I make all of the timeouts configurable on the command line.
> 
> I thought about throwing it into PostgreSQL, but I thought that the ability to compress older files probably would be more valuable to save space. However, if you have a great analytics package, that would certainly make more sense.
> 
> ...


I think I am going to fork your project and add the code for SQL logging (probably using MariaDB but if you strongly prefer PostgreSQL I can also use that, let me know). Then I will send you a pull request and you can add my mods if you like them.

I will base the schema on DoctorMcKay's, except that I will flatten all of the JSON objects and put everything in columns.

I definitely don't want to concentrate on the logger; I would like to put most of my effort into reports. Python can also be a good option, and I may end up using that for reports (it would allow me to learn more Python too). But reporting doesn't have to be done in the same language as the logger.

I am going to do this on a few evening hours per week and some weekend time, so don't expect it to happen very fast...


----------



## Climate Change Denier (Jan 2, 2019)

I certainly prefer PostgreSQL. I just looked at his SQL and it is interesting how we have approximately the same number of "important data" elements (I have 14, he has 17) but only five things in common.

Probably I'd suggest a union of the two.


----------



## jat255 (Sep 12, 2017)

Niki-and-I said:


> I think I am going to fork your project and add the code for SQL logging (probably using MariaDB but if you strongly prefer PostgreSQL I can also use that, let me know). Then I will send you a pull request and you can add my mods if you like them.
> 
> I am going to do this on a few evening hours per week and some weekend time, so don't expect it to happen very fast...


@Niki-and-I is your code public anywhere? I'm somewhat capable of hacking around in SQL, R, and Python, and I'd love to contribute to the effort.


----------



## Niki-and-I (Nov 18, 2018)

jat255 said:


> @Niki-and-I is your code public anywhere? I'm somewhat capable of hacking around in SQL, R, and Python, and I'd love to contribute to the effort.


Not yet; the amount of code I've produced is still small. But I will get something on GitHub this weekend. I will post here when it's done.


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> I certainly prefer postgresql. I just looked at his sql and it is interesting how we have approximately the same number of "important data" elements (I have 14, he has 17) but we only have five things in common.
> 
> Probably I'd suggest a union of the two.


I've run your logger this morning and I am now looking over the json file it created. I'm now going through the data to see what I would like to log into a relational table.

What I am not sure about is whether a) I should just create a separate app that logs to SQL, independent of your app, or b) I should extend yours to give users the option of JSON or SQL data logging. What do you think?


----------



## Climate Change Denier (Jan 2, 2019)

Niki-and-I said:


> What I am not sure is whether a) I should just create a separate app that logs to SQL, independent from your app, or b) if I should extend yours to allow users the option to use json or SQL data logging. What do you think?


I think it should be extended. Ideally tesla_parselib.py should have the SQL output, since it already parses the JSON; it also would let people (me, I guess) switch from JSON to SQL. Obviously the parser would have to be extended to support whatever new variables you wanted to put directly into the SQL schema. Then the poller can just import that class, parse the JSON it retrieves, and export it. I would probably then convert the poller to use the parsed class data instead of directly accessing the JSON data like I'm doing now.
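
One way to sketch that design (the class name, the `FIELDS` tuple and the lookup logic here are illustrative, not the actual tesla_parselib.py API):

```python
# Subset of fields to flatten out of a status record (illustrative).
FIELDS = ("odometer", "battery_level", "charger_voltage")

class ParsedStatus:
    """Parse one JSON status record into flat attributes."""
    def __init__(self, record):
        for name in FIELDS:
            value = record.get(name)
            if value is None:  # also look inside nested sub-objects
                for sub in record.values():
                    if isinstance(sub, dict) and name in sub:
                        value = sub[name]
                        break
            setattr(self, name, value)

    def to_row(self):
        """Values in FIELDS order, ready for a parameterized INSERT."""
        return tuple(getattr(self, name) for name in FIELDS)

record = {"vehicle_state": {"odometer": 1234.5},
          "charge_state": {"battery_level": 80, "charger_voltage": 240}}
# ParsedStatus(record).to_row() == (1234.5, 80, 240)
```

Both the JSON writer and the SQL exporter can then consume the same parsed object, which is the point of putting the parsing in one shared class.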


----------



## Niki-and-I (Nov 18, 2018)

I've forked @Climate Change Denier's package to https://github.com/pmendes/teslajson and I am now working on adding SQL to it. Feel free to check my progress if you want; when I am done I will then issue a Pull Request. Right now I have pushed a preliminary SQL schema https://github.com/pmendes/teslajson/blob/master/create_tables.sql please discuss whether there are important missing features.

I decided to create two tables, one for cars and the other for their statuses. This is because this code can log several cars simultaneously, and I like to keep the data normalized. Currently the car table has only a few items of information. I'd like to hear whether other items are important to people, including those with Model X and S (the code should work for all Tesla models).

I'm going to use the psycopg2 package to connect from Python to PostgreSQL. If anyone here has very strong opinions in favor of another package, please come forward (I've already spent more time than I wanted reading about why there are so many PostgreSQL connectors for Python!)
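
A minimal sketch of what the psycopg2 side could look like (the `status` table and its columns are assumptions; the `%s` placeholders are psycopg2's parameter style, which keeps values safely quoted):

```python
def build_insert(table, row):
    """Build a parameterized INSERT using psycopg2's %s placeholders."""
    columns = ", ".join(row)
    placeholders = ", ".join(["%s"] * len(row))
    return f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"

def insert_status(conn, row):
    """Insert one flattened status row (a dict) via an open connection."""
    with conn.cursor() as cur:
        cur.execute(build_insert("status", row), tuple(row.values()))
    conn.commit()

# Usage (needs psycopg2 and a running PostgreSQL server):
# import psycopg2
# conn = psycopg2.connect(dbname="tesla", user="tesla")
# insert_status(conn, {"battery_level": 80, "charger_voltage": 240})
```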


----------



## Climate Change Denier (Jan 2, 2019)

Schema looks good at first blush. psycopg2 is the correct answer in my experience.


----------



## Niki-and-I (Nov 18, 2018)

I've finished updating the _tesla-parser.py_ script. It now has a command-line option to supply a file with database connection details; when that option is given, the script inserts all the data in the JSON files (specified on the command line) into that database. This included updating _tesla_parselib.py_ to provide the logged data as dictionaries. The code now requires the extra packages psycopg2 and tzlocal (plus the standard datetime module).

This setup is already usable for me (though not yet optimal). I continue to run the poller that saves JSON files, and then use the parser to insert the data into the database. However, for my own use I would rather get rid of the JSON storage entirely. So my next step will be to change the poller (or write a new one...) so it stores the data directly in the database. That should be fairly easy now that the bulk of the work is already done in _tesla_parselib.py_.

@Climate Change Denier if you want to keep your package using JSON (as its name implies!) then I think you could merge this code and be done with it. Let me know if this is what you want, and we can do the pull request. Then I will fork again and start a new package with a new name (derived from yours, properly acknowledged, etc.). My plan for this package is to have a poller logging directly into PostgreSQL, and then analysis scripts that query PostgreSQL. The analysis scripts will likely be in R (as I had already started), but I may still consider keeping everything in Python (to have fewer dependencies).

Anyone who may want to comment on the code, please do. It is at https://github.com/pmendes/teslajson . Even though I have >30 years of programming experience, this is actually the first time I've written Python! I would not be surprised if there were much better ways of doing this.


----------



## Climate Change Denier (Jan 2, 2019)

Niki-and-I said:


> if you want to keep your package using json (as its name implies!) then I think you could merge this code and be done with it. Let me know if this is what you want, and we can do the pull request.


Pull away. I'll try to look this weekend.

I see no reason to have divergent versions, though. Direct to db as option sounds great.

@Niki-and-I


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> Pull away. I'll try to look this weekend.
> 
> I see no reason to have divergent versions, though. Direct to db as option sounds great.
> 
> @Niki-and-I


I've sent the pull request already.

OK, I will then work on the poller to add the option to log directly to the DB. Another thing I would like to change in the poller is that it should run as a daemon rather than from the command line. I'll investigate how to do this in Python (I suspect there is some package to do this easily).

I see that the poller saves many JSON records that don't really contain much information (only 3 fields for the database: car, timestamp and status); I wonder if these could be filtered out? But that is not a major issue, as it is pretty simple to remove them from the database after the fact.
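
Such a filter could be as small as this sketch (assuming the three fields are named as described above):

```python
MINIMAL_KEYS = {"car", "timestamp", "status"}  # the 3 bare fields

def is_informative(record):
    """Keep only records carrying more than the bare identity fields."""
    return bool(set(record) - MINIMAL_KEYS)

records = [
    {"car": 1, "timestamp": 1542585600, "status": "asleep"},
    {"car": 1, "timestamp": 1542585660, "status": "charging",
     "battery_level": 80},
]
kept = [r for r in records if is_informative(r)]
# kept contains only the second record
```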

But before working on the poller, I will concentrate on creating some nice reports from the database. I'd like to get the kind of reports that come out of TeslaFi.


----------



## Climate Change Denier (Jan 2, 2019)

I accepted your patch, and then made a ton of changes, including a few which will cause conflicts for you. Pushed.

Specifically, I unified the column names between Python and PostgreSQL. I did this by updating the PostgreSQL columns, but I'm willing to go the other way (for time/ts I merged into timets to avoid the PostgreSQL reserved word) if you have a lot of code already written to use the database names. Note that you can use `alter table` to rename columns (easy) and change types (check to make sure the timestamps get converted into the desired timezone). Of course, if you have been saving everything as JSON, it is probably easier to reimport.

```
alter table foo alter COLUMN ts type timestamp with time zone;
alter table foo rename COLUMN ts TO tsx;
```

I also changed the timestamp types to include the timezone, which is almost always a very good idea.

This allows me to massively deduplicate code through the use of loops and getattr/setattr.

I didn't have a chance to hack the poller tonight, which means not until Friday at the earliest.


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> I accepted your patch, and then made a ton of changes, including a few which will cause conflicts for you. pushed.
> 
> Specifically, I unified the column names between python and postgresql--I did this by updating the postgresql columns, but I'm willing to go the other way (for time/ts I merged into timets to avoid the postgresql reserved word) if you have a lot of code already written to use the database names. Note that you can use `alter table` to rename columns (easy)
> and types (check to make sure the timestamps get converted into the desired timezone). Of course if you have been saving everything as json, it probably is easier to reimport.
> ...


Great! I was also thinking that it would be better for both the SQL and Python variables to have the same names.

I don't yet have much code, and I can easily convert. I will drop the tables and reload everything from the JSON files. I had already fixed a couple of bugs in my repository, but rather than doing another pull request I think it's easier for me to just start contributing directly to your GitHub repository. Could you enable "Issues" in the repository (under Settings)? It may be useful to comment on details of the code outside this thread...

By the way, I decided to stay with Python for the analysis scripts too. I'm going to use matplotlib and a few other libraries. For example, I think we can use the OpenStreetMap API to identify supercharger locations, etc.

One thing I would like is for us to name this package something other than "teslajson" (that was already the name of the package you forked from). Think about that.


----------



## Niki-and-I (Nov 18, 2018)

@Climate Change Denier
Of course I also need permission to contribute to your repository... can you enable that? or do you prefer that we just continue with pull requests?


----------



## Climate Change Denier (Jan 2, 2019)

Moved to https://github.com/tesladdicts/teslajson Invite sent


----------



## Niki-and-I (Nov 18, 2018)

Climate Change Denier said:


> Moved to https://github.com/tesladdicts/teslajson Invite sent


Invite accepted, and I cloned. I'm trying to push two small commits (one a small bug fix) but it seems I don't have permission. Is this something I should fix on my side or yours? I use git on the command line. I can send my SSH public key (though I thought that was already on my profile?)


----------



## Climate Change Denier (Jan 2, 2019)

Niki-and-I said:


> invite accepted and I cloned. I'm trying to push in two small commits (one small bug fix) but it seems I don't have permission to do it. Is this something I should do on my side or yours? I use git on the command line. I can send my ssh public key (though I thought that was already on my profile?)


Fixed.


----------



## Niki-and-I (Nov 18, 2018)

I just added the first prototype of a report for the *testatus* package reading from the SQL database. It generates a report for charging sessions (charging_stats.py), providing details of each charging session and a total summary. It covers a certain number of days (i.e. the last _n_ days). Currently it outputs a report in plain text to the console or, optionally, an HTML file (actually XHTML). Plans to improve this report script include:

- adding data plots of temperature and charging rate as a function of time for each charging session
- adding optional output in PDF, CSV and XLS formats
- adding names for locations (detecting charging locations, including superchargers)

This report script is also a bit of a prototype for other forthcoming ones, in terms of adopting the same command-line options, file formats, etc. Eventually the *testatus* package will provide the means to give you a detailed view of your Tesla car(s), while keeping the data on your own server.


----------



## cappas (Dec 3, 2018)

Thanks for doing this. I have not tried the SQL database yet; I did try tesla_poller to generate JSON. A couple of minor changes were needed to run it with Python 3.6 on Ubuntu 18.04:

- line 11: rename `import Queue` to `import queue`
- line 196: `W = open(pname, "a", 0)` raises "ValueError: can't have unbuffered text I/O"; removing the third parameter (`, 0`) gets rid of the error


----------



## Niki-and-I (Nov 18, 2018)

Since firmware 2019.8.5 (or maybe even 2019.8.3), the logger now prevents the car from sleeping, resulting in about 20 miles/day of charge lost while it is parked. It seems this has been confirmed by other apps, and some changes are required.

If anyone is using this software please be warned! I would suggest you switch the logger off when you are parked for a long while, particularly if not plugged in.

I will try to address this soon, but cannot promise when. First issue is understanding what has changed so that a solution can be adopted.


----------



## Niki-and-I (Nov 18, 2018)

So I have investigated the logger issue preventing the car from sleeping. Simply turning up one of the timings in the poller has resolved the problem. Instead of having "to_sleep": 150, the value 900 works well (15 minutes). This can be done without altering the code, as the poller allows us to pass timings on the command line, so adding this argument is enough:

```
--intervals "to_sleep=900"
```

I do think, however, that we should probably change the default of 150 to 900, as many users will just leave the default values, and then they may run their batteries down, as happened to me...


----------



## MelindaV (Apr 2, 2016)

Niki-and-I said:


> So I have investigated the logger issue preventing the car from sleeping. Simply turning up one of the timings in the poller has resolved the problem. Instead of having "to_sleep": 150, the value 900 works well (15 minutes). This can be done without altering the code, as the poller allows us to pass timings on the command line, so adding this argument is enough:
> 
> ```
> ...


I've always had the TeslaFi 'sleep' setting at 15 minutes and it has always worked flawlessly. It makes sense that having it at such a short interval, when the car should be attempting to sleep, would cause issues.


----------



## Pallieter (Jun 26, 2019)

Hi @Niki-and-I, have you looked at using https://graphite.readthedocs.io/en/latest/ to indeed be able to compress historic (time series) data?


----------



## Niki-and-I (Nov 18, 2018)

Pallieter said:


> Hi @Niki-and-I, have you looked at using https://graphite.readthedocs.io/en/latest/ to indeed be able to compress historic (time series) data?


Thanks for the pointer; I did not know about it. However, I am now sufficiently far into my code that I don't think I will change the architecture. But this could be a good solution for someone else. The main tool you would need is the logger, and I suppose you could use the testatus logger for this (it produces JSON).

If anything, working on this project is helping me develop my skills in python... I've got a charging report script done and I am now working on the driving report.


----------



## Niki-and-I (Nov 18, 2018)

And one other thing: Graphite seems to require a web server for interaction (using Django). My code is all local by design. The whole purpose of doing this is to *not* expose the data on the web; otherwise I would be using TeslaFi, which is great.


----------



## emilkje (Jul 8, 2019)

Niki-and-I said:


> And one other thing: Graphite seems to require a web server for interaction (using Django). My code is all local by design. The whole purpose of doing this is to *not* expose the data on the web; otherwise I would be using TeslaFi, which is great.


Sure, Graphite needs a web server, but any workstation/laptop is capable of running that server locally. Your code could easily push the telemetry to a loopback or LAN address and never expose the web server outside your local network.


----------



## Niki-and-I (Nov 18, 2018)

emilkje said:


> Sure, Graphite needs a web server, but any workstation/laptop is capable of running that server locally. Your code could easily push the telemetry to a loopback or LAN address and never expose the web server outside your local network.


True, but that is not the approach I'm taking. It also seems to me that it would be a bit of overkill, though I confess to not having spent more than 30 minutes assessing it; I've committed quite a bit of effort to the current approach using a simple SQL schema. Maybe someone else can try using Graphite.


----------

