
tconnectsync's Introduction

tconnectsync


Tconnectsync synchronizes data one-way from the Tandem Diabetes t:connect web/mobile application to Nightscout.

If you have a t:slim X2 pump with the companion t:connect mobile Android or iOS app, this will allow your pump bolus and basal data to be uploaded to Nightscout automatically. Together with a CGM uploader, such as xDrip+ or the official Dexcom mobile app plus Dexcom Share, this allows your CGM and pump data to be automatically uploaded to Nightscout!

If you have an Android phone, you can use tconnectpatcher to modify the t:connect Android app to upload more frequently. By default, pump data is uploaded to Tandem's servers every hour, but with tconnectpatcher the frequency can be brought down to as low as every five minutes! This allows for nearly real-time (albeit not fully instantaneous) pump data updates, almost like your pump uploads data directly to Nightscout!

How It Works

At a high level, tconnectsync works by querying Tandem's undocumented APIs to receive basal and bolus data from t:connect, and then uploads that data as treatment objects to Nightscout. It contains features for checking for new Tandem pump data continuously, and updating that data along with the pump's reported IOB value to Nightscout whenever there is new data.

When you run the program with no arguments, it performs a single cycle of the following, and exits after completion:

  • Queries for basal information via the t:connect ControlIQ API.
  • Queries for bolus, basal, and IOB data via the t:connect non-ControlIQ API.
  • Merges the basal information received from the two APIs. (If using ControlIQ, then basal information appears only on the ControlIQ API. If not using ControlIQ, it appears only on the legacy API.)
  • Queries Nightscout for the most recently created Temp Basal object by tconnectsync, and uploads all data newer than that.
  • Queries Nightscout for the most recently created Bolus object by tconnectsync, and uploads all data newer than that.
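The merge-and-filter logic of this cycle can be sketched as follows. This is a minimal illustration only; the function and field names are hypothetical, not tconnectsync's actual internals:

```python
def merge_basal(controliq_events, legacy_events):
    """Basal data appears on exactly one API: the ControlIQ API when
    Control-IQ is in use, the legacy API otherwise, so the merge just
    picks whichever list is populated."""
    return controliq_events if controliq_events else legacy_events

def newer_than(events, last_uploaded):
    """Keep only events created after Nightscout's most recent
    tconnectsync-created object (upload everything if none exists)."""
    if last_uploaded is None:
        return events
    return [e for e in events if e["created_at"] > last_uploaded]

basal = merge_basal([{"created_at": "2023-01-02", "rate": 0.8}], [])
print(newer_than(basal, "2023-01-01"))  # the one event survives the filter
```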

If run with the --auto-update flag, then the application performs the following steps:

  • Queries an API endpoint used only by the t:connect mobile app which returns an internal event ID, corresponding to the most recent event published by the mobile app.
  • Whenever the internal event ID changes (denoting that the mobile app uploaded new data to synchronize), perform all of the above mentioned steps to synchronize data.
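The polling loop above can be sketched like this. The helper names are hypothetical (tconnectsync's real loop lives in autoupdate.py), and the `max_cycles` parameter exists only to make the sketch terminate:

```python
import time

def auto_update(get_last_event_id, sync_once, interval=300, max_cycles=None):
    """Poll the mobile-app event ID and run a full sync cycle whenever
    it changes (including on the very first observation)."""
    last_seen = None
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        event_id = get_last_event_id()
        if event_id != last_seen:  # mobile app uploaded new data
            sync_once()
            last_seen = event_id
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
```

For example, if the event ID sequence is 100, 100, 101, the sync runs twice: once initially, and once when the ID changes.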

What Gets Synced

Tconnectsync is composed of individual so-called synchronization features, which are elements of data that can be synchronized between t:connect data from the pump and Nightscout. When setting up tconnectsync, you can choose to configure which synchronization features are enabled and disabled.

Here are a few examples of reasons why you might want to adjust the enabled synchronization features:

  • If you currently input boluses into Nightscout manually with comments, then you may wish to disable the BOLUS synchronization feature so that there are no duplicated boluses in Nightscout.
  • If you want to see Sleep and Exercise Mode data appear in Nightscout, then you may wish to enable the PUMP_EVENTS synchronization feature.
  • If you want to automatically update your Nightscout insulin profile settings from your pump, then you may want to enable the PROFILES synchronization feature.

These synchronization features are enabled by default:

  • BASAL: Basal data
  • BOLUS: Bolus data

The following synchronization features can be optionally enabled:

  • PROFILES: Insulin profile information, including segments, basal rates, correction factors, carb ratios, and the profile which is active.
  • PUMP_EVENTS: Events reported by the pump. Includes support for the following:
    • Site/Cartridge Change (occurs for both a site change and a cartridge change)
    • Empty Cartridge/Pump Shutdown (from my investigation, occurs either when the cartridge runs out of insulin OR you hard-shut off the pump)
    • User Suspended (occurs when you manually disable insulin delivery)
    • Exercise Mode (in Nightscout, appears with a start and end time)
    • Sleep Mode (in Nightscout, appears with a start and end time)
  • IOB: Insulin-on-board data. Only the most recent IOB entry is saved to Nightscout, as an "activity". The Nightscout UI does not currently display this information. In order to read this value, you need to query the Nightscout activity API endpoint. If you don't know what that means, then there is no reason to enable this option.
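A request against the activity endpoint can be built as follows. This is a sketch using standard Nightscout v1 REST conventions (the endpoint path and the SHA-1 `api-secret` header); verify both against your Nightscout deployment:

```python
import hashlib

def activity_request(ns_url: str, ns_secret: str, count: int = 1):
    """Return the URL and auth header for reading Nightscout's activity
    collection, where tconnectsync stores the most recent IOB entry."""
    url = f"{ns_url.rstrip('/')}/api/v1/activity.json?count={count}"
    # Nightscout authenticates with the SHA-1 hex digest of API_SECRET
    headers = {"api-secret": hashlib.sha1(ns_secret.encode()).hexdigest()}
    return url, headers

url, headers = activity_request("https://yournightscouturl/", "apisecret")
```

You could then issue the request with any HTTP client and read the IOB value from the returned JSON.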

The following synchronization features are considered to be in alpha, and haven't been widely tested. If you want to use them, set ENABLE_TESTING_MODES=true for them to show up:

  • BOLUS_BG: Adds BG readings which are associated with boluses on the pump into the Nightscout treatment object. It will determine whether the BG reading was automatically filled via the Dexcom connection on the pump or was manually entered by seeing if the BG reading matches the current CGM reading as known to the pump at that time. Support for this is nearly complete.
  • CGM: Adds Dexcom CGM readings from the pump to Nightscout as SGV (sensor glucose value) entries. This should only be used in a situation where xDrip/Dexcom Share/etc. is not used and the pump connection to the CGM will be the only source of CGM data to Nightscout. This requires additional testing before it should be considered ready.

To specify custom synchronization features, pass the names of the desired features to the --features flag, e.g.:

$ tconnectsync --features BASAL BOLUS PUMP_EVENTS PROFILES

If you're using tconnectsync-heroku, see this section in its README.

Setup

The following setup instructions assume that you have a Linux, MacOS, or Windows (with WSL) machine that will run the application continuously.

If you've configured Nightscout before, you may be familiar with Heroku. You can opt to run tconnectsync with Heroku by following these instructions.

To get started, you need to choose whether to install the application on your computer via Pip, Pipenv, or Docker.

After that, you can choose to run the program continuously via Supervisord or on a regular interval with Cron.

NOTE: If you fork the tconnectsync repository on GitHub, do not commit your .env file. If pushed to GitHub, this will make your tconnect and Nightscout passwords publicly visible and put your data at risk.

Installation

First, you need to create a file containing configuration values. The name of this file will be .env, and its location will be dependent on which method of installation you choose. You should specify the following parameters:

# Your credentials for t:connect
TCONNECT_EMAIL='[email protected]'
TCONNECT_PASSWORD='password'

# Your pump's serial number (numeric)
PUMP_SERIAL_NUMBER=11111111

# URL of your Nightscout site
NS_URL='https://yournightscouturl/'
# Your Nightscout API_SECRET value
NS_SECRET='apisecret'

# Current timezone of the pump
TIMEZONE_NAME='America/New_York'

This file contains your t:connect username and password, Tandem pump serial number (which is utilized in API calls to t:connect), your Nightscout URL and secret token (for uploading data to Nightscout), and local timezone (the timezone used in t:connect). When specifying the timezone, enter a TZ database name value.

(Alternatively, these values can be specified via environment variables.)
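Since an invalid TIMEZONE_NAME will cause timestamp errors, it can help to check the value against the TZ database before running. Here is a standalone check using Python's standard zoneinfo module (requires Python 3.9+; not part of tconnectsync):

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_tz(name: str) -> bool:
    """True if name is a recognized TZ database identifier."""
    try:
        ZoneInfo(name)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

print(is_valid_tz("America/New_York"))  # True
print(is_valid_tz("Eastern Time"))      # False: not a TZ database name
```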

Installation via Pip

This is the easiest method to install.

First, ensure that you have Python 3 with Pip installed:

  • On MacOS: Open Terminal. Install Homebrew, and then run brew install python3
  • On Linux: Follow your distribution's specific instructions.
    • For Debian/Ubuntu based distros, sudo apt install python3 python3-pip
    • For CentOS/Rocky Linux 8:
      • sudo dnf install python39-pip
      • sudo alternatives --set python /usr/bin/python3.9
  • On Windows:
    • With WSL: Install Ubuntu under the Windows Subsystem for Linux. Open the Ubuntu Terminal, then run sudo apt install python3 python3-pip. Perform the remainder of the steps under the Ubuntu environment.
    • Native: Alternatively, you can run tconnectsync in native Windows with no modifications. However, this is less well-tested (open a GitHub issue if you experience any problems).

Now install the tconnectsync package with pip:

$ pip3 install tconnectsync

To install into a user environment instead of system-wide for a more contained install:

$ pip3 install --user tconnectsync
  • This will place the tconnectsync binary file at /home/<username>/.local/bin/tconnectsync
  • For non-WSL Windows, it will be in <PYTHON DIRECTORY>\Lib\site-packages\tconnectsync

If the pip3 command is not found, run python3 -m pip install tconnectsync instead.

After this, you should be able to view tconnectsync's help with:

$ tconnectsync --help
usage: tconnectsync [-h] [--version] [--pretend] [-v] [--start-date START_DATE] [--end-date END_DATE] [--days DAYS] [--auto-update] [--check-login]
               [--features {BASAL,BOLUS,IOB,PUMP_EVENTS} [{BASAL,BOLUS,IOB,PUMP_EVENTS} ...]]

Syncs bolus, basal, and IOB data from Tandem Diabetes t:connect to Nightscout.

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --pretend             Pretend mode: do not upload any data to Nightscout.
  -v, --verbose         Verbose mode: show extra logging details
  --start-date START_DATE
                        The oldest date to process data from. Must be specified with --end-date.
  --end-date END_DATE   The newest date to process data until (inclusive). Must be specified with --start-date.
  --days DAYS           The number of days of t:connect data to read in. Cannot be used with --from-date and --until-date.
  --auto-update         If set, continuously checks for updates from t:connect and syncs with Nightscout.
  --check-login         If set, checks that the provided t:connect credentials can be used to log in.
  --features {BASAL,BOLUS,IOB,PUMP_EVENTS} [{BASAL,BOLUS,IOB,PUMP_EVENTS} ...]
                        Specifies what data should be synchronized between tconnect and Nightscout.

Move the .env file you created to the following folder:

  • MacOS: /Users/<username>/.config/tconnectsync/.env
  • Linux: $HOME/.config/tconnectsync/.env
  • Windows: $HOME/.config/tconnectsync/.env (inside WSL) OR C:\Users\<username>\.config\tconnectsync (native Windows)

Then verify that you can log in:

$ tconnectsync --check-login

If you receive no errors, then you can move on to the Running Tconnectsync Continuously section.

Installing with Pipenv

You can run the application using Pipenv.

First, ensure you have Python 3 and pip installed, then install pipenv with pip3 install pipenv.

Clone the Git repository for tconnectsync and cd into it with:

$ git clone https://github.com/jwoglom/tconnectsync
$ cd tconnectsync

Then install tconnectsync's dependencies with pipenv install. Afterwards, you can launch the program with pipenv run tconnectsync so long as you are inside the checked-out tconnectsync folder.

$ git clone https://github.com/jwoglom/tconnectsync && cd tconnectsync
$ pip3 install pipenv
$ pipenv install
$ pipenv run tconnectsync --help
usage: main.py [-h] [--version] [--pretend] [-v] [--start-date START_DATE] [--end-date END_DATE] [--days DAYS] [--auto-update] [--check-login]
               [--features {BASAL,BOLUS,IOB,PUMP_EVENTS} [{BASAL,BOLUS,IOB,PUMP_EVENTS} ...]]

Syncs bolus, basal, and IOB data from Tandem Diabetes t:connect to Nightscout.

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --pretend             Pretend mode: do not upload any data to Nightscout.
  -v, --verbose         Verbose mode: show extra logging details
  --start-date START_DATE
                        The oldest date to process data from. Must be specified with --end-date.
  --end-date END_DATE   The newest date to process data until (inclusive). Must be specified with --start-date.
  --days DAYS           The number of days of t:connect data to read in. Cannot be used with --from-date and --until-date.
  --auto-update         If set, continuously checks for updates from t:connect and syncs with Nightscout.
  --check-login         If set, checks that the provided t:connect credentials can be used to log in.
  --features {BASAL,BOLUS,IOB,PUMP_EVENTS} [{BASAL,BOLUS,IOB,PUMP_EVENTS} ...]
                        Specifies what data should be synchronized between tconnect and Nightscout.

Move the .env file you created earlier into the tconnectsync folder, and run:

$ pipenv run tconnectsync --check-login

If you receive no errors, then you can move on to the Running Tconnectsync Continuously section.

Installing with Docker

First, ensure that you have Docker running and installed.

To download and run the prebuilt Docker image from GitHub Packages:

$ docker pull ghcr.io/jwoglom/tconnectsync/tconnectsync:latest
$ docker run ghcr.io/jwoglom/tconnectsync/tconnectsync --help

Move the .env file you created earlier into the current folder, and run:

$ docker run --env-file=.env ghcr.io/jwoglom/tconnectsync/tconnectsync --check-login

If you receive no errors, then you can move on to the Running Tconnectsync Continuously section.

Building Locally

To instead build the image locally and launch the project:

$ git clone https://github.com/jwoglom/tconnectsync
$ cd tconnectsync
$ docker build -t tconnectsync .
$ docker run tconnectsync --help

Move the .env file you created earlier into this folder, and run:

$ docker run --env-file=.env tconnectsync --check-login

NOTE: If using the --env-file option to docker run, you may need to remove all quotation marks (' and "s) around values in the .env file for Docker to propagate the variables correctly.
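If you'd rather not edit the file by hand, the quotes can be stripped programmatically. This is a small hypothetical helper, not part of tconnectsync:

```python
import re

def strip_env_quotes(text: str) -> str:
    """Remove single or double quotes wrapping values in KEY=value lines,
    which docker run --env-file would otherwise pass through literally."""
    out = []
    for line in text.splitlines():
        m = re.match(r"^(\s*[A-Za-z_][A-Za-z0-9_]*=)(['\"])(.*)\2\s*$", line)
        out.append(m.group(1) + m.group(3) if m else line)
    return "\n".join(out)

print(strip_env_quotes("NS_URL='https://example/'"))  # NS_URL=https://example/
```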

If you receive no errors, then you can move on to the Running Tconnectsync Continuously section.

Running Tconnectsync Continuously

You most likely want tconnectsync to run either continuously (via the auto-update feature) or on a regular interval (via cron).

The supervisord approach is recommended for simplicity.

Running with Supervisord (recommended)

To configure tconnectsync to run continuously in the background using its --auto-update feature, you can use a tool such as Supervisord.

First, install supervisord via your Linux system's package manager. (For example, for Ubuntu/Debian-based systems, run sudo apt install supervisor)

Supervisord is configured by creating a configuration file in /etc/supervisor/conf.d.

Here is an example tconnectsync.conf which you can place in that folder:

[program:tconnectsync]
command=/path/to/tconnectsync/run.sh
directory=/path/to/tconnectsync/
stderr_logfile=/path/to/tconnectsync/stderr.log
stdout_logfile=/path/to/tconnectsync/stdout.log
user=<your username>
numprocs=1
autostart=true
autorestart=true

In order to create a run.sh file, see the section below which aligns with your choice of installation method.

After the configuration file has been created, ensure that Supervisor is running and configured to start on boot:

$ sudo systemctl daemon-reload
$ sudo systemctl start supervisord
$ sudo systemctl enable supervisord

Then use the supervisorctl command to manage the status of the tconnectsync program:

$ sudo supervisorctl status
tconnectsync                     STOPPED
$ sudo supervisorctl start tconnectsync
$ sudo supervisorctl status
tconnectsync                     RUNNING   pid 18810, uptime 00:00:05

You can look at the stderr.log and stdout.log files to check that tconnectsync is running and has started up properly:

$ tail -f /path/to/tconnectsync/stdout.log
Starting auto-update between 2021-09-30 00:06:39.942273 and 2021-10-01 00:06:39.942273
2021-10-01 00:06:39 DEBUG    Instantiating new AndroidApi
2021-10-01 00:06:39 DEBUG    Starting new HTTPS connection (1): tdcservices.tandemdiabetes.com:443
2021-10-01 00:06:40 DEBUG    https://tdcservices.tandemdiabetes.com:443 "POST /cloud/oauth2/token HTTP/1.1" 200 404
2021-10-01 00:06:40 INFO     Logged in to AndroidApi successfully (expiration: 2021-10-01T08:06:40.362Z, in 7 hours, 59 minutes)

With Pip Installation

In the tconnectsync.conf, you should set /path/to/tconnectsync to the folder containing your .env file.

Create a run.sh file containing:

#!/bin/bash

tconnectsync --auto-update

With Pipenv Installation

In the tconnectsync.conf, you should set /path/to/tconnectsync to the folder where you checked-out the GitHub repository.

An example run.sh which launches tconnectsync within its pipenv-configured virtual environment:

#!/bin/bash

PIPENV=/home/$(whoami)/.local/bin/pipenv
VENV=$($PIPENV --venv)

source $VENV/bin/activate

cd /path/to/tconnectsync
exec python3 -u main.py --auto-update

With Docker Installation

In the tconnectsync.conf, you should set /path/to/tconnectsync to the folder where you checked-out the GitHub repository.

An example run.sh if you installed tconnectsync via the GitHub Docker Registry:

#!/bin/bash

docker run ghcr.io/jwoglom/tconnectsync/tconnectsync --auto-update

An example run.sh if you built tconnectsync locally:

#!/bin/bash

docker run tconnectsync --auto-update

Running with Cron

If you choose not to run tconnectsync with --auto-update continuously, you can instead run it at a periodic interval (e.g. every 15 minutes) by invoking tconnectsync with no arguments via cron.

If using Pipenv or a virtualenv, make sure that you either prefix the call to main.py with pipenv run or source the bin/activate file within the virtualenv, so that the proper dependencies are loaded. If not using any kind of virtualenv, you can instead just install the necessary dependencies as specified inside Pipfile globally.

A system-wide example configuration in /etc/crontab which runs every 15 minutes, on the 15-minute mark:

# m         h  dom mon dow user   command
0,15,30,45  *  *   *   *   root   /path/to/tconnectsync/run.sh

An example user crontab entry (edited via crontab -e) if not running system-wide, which runs every 15 minutes:

*/15 * * * * /path/to/tconnectsync/run.sh

You can use one of the same run.sh files referenced above, but remove the --auto-update flag, since cron is now handling the periodic scheduling itself.

For Native Windows

Create a batch file named tconnectsync.bat containing:

python "C:\Users\<USERNAME>\AppData\Local\Programs\Python\<PYTHONVERSIONDIRECTORY>\Lib\site-packages\tconnectsync\main.py" --auto-update

If python does not exist in your path, specify the full path to python.exe.

If main.py doesn't exist in C:\Users\<USERNAME>\AppData\Local\Programs\Python\<PYTHONVERSIONDIRECTORY>\Lib\site-packages\tconnectsync\, create it to match the copy in this repository.

Use Windows Task Scheduler to run this batch file on a scheduled basis.

Tandem APIs

This application utilizes three separate Tandem APIs for obtaining t:connect data, referenced here by the identifying part of their URLs:

  • controliq - Contains Control:IQ related data, namely a timeline of all Basal events uploaded by the pump, separated by type (temp basals, algorithmically-updated basals, or profile-updated basals). Additionally includes CGM and Bolus data.
  • android - Used internally by the t:connect Android app, these API endpoints were discovered by reverse-engineering the Android app. Most of the API endpoints are used for uploading pump data, and tconnectsync uses one endpoint which returns the most recent event ID uploaded by the pump, so we know when more data has been uploaded.
  • tconnectws2 - More legacy than the others, this seems to power the bulk of the main t:connect website. It is used as a last resort due to severe performance issues with this API (see #43). We can use it to retrieve a CSV export of non-ControlIQ basal data, as well as bolus and IOB data. It is only used for bolus data as a fallback, and for pump-reported IOB data if requested. Full tracking of pump events also uses a limited version of this API.

I have only tested tconnectsync with a Tandem pump set in the US Eastern timezone. Tandem's (to us, undocumented) APIs are a bit loose with timezones, so please let me know if you notice any timezone-related bugs.

Backfilling t:connect Data

To backfill existing t:connect data in to Nightscout, you can use the --start-date and --end-date options. For example, the following will upload all t:connect data between January 1st and March 1st, 2020 to Nightscout:

python3 main.py --start-date 2020-01-01 --end-date 2020-03-01

In order to bulk-import a lot of data, you may need to use shorter intervals, and invoke tconnectsync multiple times. Tandem's API endpoints occasionally return invalid data if you request too large of a data window which causes tconnectsync to error out mid-way through.
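One way to split a large backfill into smaller requests is to generate the date windows and invoke tconnectsync once per window. The sketch below uses a one-week window size, which is an arbitrary choice; adjacent windows share a boundary day because --end-date is inclusive, which is harmless since tconnectsync only uploads data newer than the most recent Nightscout entry it created:

```python
from datetime import date, timedelta

def date_windows(start: date, end: date, days: int = 7):
    """Yield consecutive (start, end) windows covering [start, end]."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur, nxt
        cur = nxt

for s, e in date_windows(date(2020, 1, 1), date(2020, 3, 1)):
    print(f"python3 main.py --start-date {s} --end-date {e}")
```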

One oddity when backfilling data is that the Control:IQ specific API endpoints return errors if they are queried before you updated your pump to utilize Control:IQ. This is partially worked around in tconnectsync's code, but you might need to update the logic if you did not switch to a Control:IQ enabled pump immediately after launch.

t:connect API Testing

To test t:connect API endpoints in a Python shell, you can do something like the following:

import tconnectsync
tconnectsync.util.cli.enable_logging()
api = tconnectsync.util.cli.get_api()
# Make API calls, e.g.
therapy_timeline = api.controliq.therapy_timeline('2022-08-01', '2022-08-10')

tconnectsync's People

Contributors

bewest, jwalberg, jwoglom, jyaw, legendarygeek, policetonyr, rstutsman, t1diotac


tconnectsync's Issues

Feature request: support for multiple .env files

My wife and I both have t:slim pumps running control IQ. I would like to be able to natively support 2 .env files to be able to run this for both of our setups (this way I can run everything off a raspberry pi instead of leaving my desktop on all day)

As a work around, I think for now I can script it with cron to copy the first .env into the proper directory, run tconnectsync, copy the second .env into the proper directory and run tconnectsync again. I'm not sure if changing out the .env file every time it runs will create any issue.

Running tests in different timezone

I was troubleshooting some issues with the CGM features and wanted to run the tests...

It seems that's not possible outside of the assertequal-assumed timezone of America/New_York (-04:00). The called functions automatically use the .env-specified TIMEZONE_NAME, while the tests use a specific timezone.

It seems like we could...

  1. Adjust the timezone-related application functions to take timezone as an input so we could specifically call them with the America/New_York time zone...
  2. We could adjust the test functions to use the user-specified timezone.
  3. Other options?

After thinking about this a second.... I think I'd opt for modifying the tests "time" fields to read something like...

arrow.get("2021-10-12 00:00:30-04:00").replace(tzinfo=TIMEZONE_NAME).format()

It notes the original test case you had, but makes it clear you're acknowledging that the user's timezone should be used, since the application functions use it. Any thoughts before I do some search/replace? There are a lot of tests involving the timezone in here, and I didn't want to do this if there's a better way to approach it...

Crash in Docker version 0.6.6

2022-02-22 07:56:55 | stdout | TypeError: unsupported operand type(s) for //: 'str' and 'int'
2022-02-22 07:56:55 | stdout | "No new data has been detected via the API for %d minutes. " % (now - self.last_successful_process_time_range)//60 +
2022-02-22 07:56:55 | stdout | File "/home/appuser/tconnectsync/autoupdate.py", line 127, in process
2022-02-22 07:56:55 | stdout | sys.exit(u.process(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features))
2022-02-22 07:56:55 | stdout | File "/home/appuser/tconnectsync/init.py", line 87, in main
2022-02-22 07:56:55 | stdout | main()
2022-02-22 07:56:55 | stdout | File "/home/appuser/main.py", line 5, in 
2022-02-22 07:56:55 | stdout | Traceback (most recent call last):
2022-02-22 07:56:55 | stdout | 2022-02-22 07:56:55 INFO     No new reported t:connect data. (last event index: 557043)
2022-02-22 07:51:55 | stdout | 2022-02-22 07:51:55 INFO     Sleeping for 300.0 sec
2022-02-22 07:51:55 | stdout | 2022-02-22 07:51:55 INFO     No new reported t:connect data. (last event index: 557043)
2022-02-22 07:46:55 | stdout | 2022-02-22 07:46:55 INFO     Sleeping for 300.0 sec
2022-02-22 07:46:55 | stdout | 2022-02-22 07:46:55 INFO     No new reported t:connect data. (last event index: 557043)
2022-02-22 07:41:54 | stdout | 2022-02-22 07:41:54 INFO     Sleeping for 300.0 sec
2022-02-22 07:41:54 | stdout | 2022-02-22 07:41:54 INFO     Added 120 items from process_time_range
2022-02-22 07:41:54 | stdout | 2022-02-22 07:41:54 INFO     Wrote 120 events to Nightscout this process cycle
2022-02-22 07:41:54 | stdout | 2022-02-22 07:41:54 INFO       Processing bolus: {'description': 'Standard/Correction', 'complete': '1', 'completion': 'Completed', 'request_time': '2022-02-21 22:19:39-08:00', 'completion_time': '2022-02-21 22:20:46-08:00', 'insulin': '1.10', 'requested_insulin': '1.10', 'carbs': '0', 'bg': '214', 'user_override': '1', 'extended_bolus': '', 'bolex_completion_time': None, 'bolex_start_time': None} entry: {'eventType': 'Combo Bolus', 'created_at': '2022-02-21 22:20:46-08:00', 'carbs': 0, 'insulin': 1.1, 'notes': 'Standard/Correction (Override)', 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:54 | stdout | 2022-02-22 07:41:54 INFO       Processing bolus: {'description': 'Standard', 'complete': '1', 'completion': 'Completed', 'request_time': '2022-02-21 20:57:25-08:00', 'completion_time': '2022-02-21 20:58:19-08:00', 'insulin': '0.70', 'requested_insulin': '0.70', 'carbs': '0', 'bg': '181', 'user_override': '1', 'extended_bolus': '', 'bolex_completion_time': None, 'bolex_start_time': None} entry: {'eventType': 'Combo Bolus', 'created_at': '2022-02-21 20:58:19-08:00', 'carbs': 0, 'insulin': 0.7, 'notes': 'Standard (Override)', 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:53 | stdout | 2022-02-22 07:41:53 INFO       Processing bolus: {'description': 'Automatic Bolus/Correction', 'complete': '1', 'completion': 'Completed', 'request_time': '2022-02-21 20:45:46-08:00', 'completion_time': '2022-02-21 20:46:27-08:00', 'insulin': '0.37', 'requested_insulin': '0.37', 'carbs': '0', 'bg': '', 'user_override': '0', 'extended_bolus': '', 'bolex_completion_time': None, 'bolex_start_time': None} entry: {'eventType': 'Combo Bolus', 'created_at': '2022-02-21 20:46:27-08:00', 'carbs': 0, 'insulin': 0.37, 'notes': 'Automatic Bolus/Correction', 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:53 | stdout |
<...>
2022-02-22 07:41:07 | stdout | 2022-02-22 07:41:07 INFO       Processing basal: {'time': '2022-02-21 00:08:17-08:00', 'delivery_type': 'algorithmDelivery', 'duration_mins': 5.016666666666667, 'basal_rate': 0.203} entry: {'eventType': 'Temp Basal', 'reason': 'algorithmDelivery', 'duration': 5.016666666666667, 'absolute': 0.203, 'rate': 0.203, 'created_at': '2022-02-21 00:08:17-08:00', 'carbs': None, 'insulin': None, 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:06 | stdout | 2022-02-22 07:41:06 INFO       Processing basal: {'time': '2022-02-21 00:03:17-08:00', 'delivery_type': 'algorithmDelivery', 'duration_mins': 5.0, 'basal_rate': 0.194} entry: {'eventType': 'Temp Basal', 'reason': 'algorithmDelivery', 'duration': 5.0, 'absolute': 0.194, 'rate': 0.194, 'created_at': '2022-02-21 00:03:17-08:00', 'carbs': None, 'insulin': None, 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:06 | stdout | 2022-02-22 07:41:06 INFO       Processing basal: {'time': '2022-02-21 00:00:00-08:00', 'delivery_type': 'algorithmDelivery', 'duration_mins': 3.283333333333333, 'basal_rate': 0.24} entry: {'eventType': 'Temp Basal', 'reason': 'algorithmDelivery', 'duration': 3.283333333333333, 'absolute': 0.24, 'rate': 0.24, 'created_at': '2022-02-21 00:00:00-08:00', 'carbs': None, 'insulin': None, 'enteredBy': 'Pump (tconnectsync)'}
2022-02-22 07:41:06 | stdout | 2022-02-22 07:41:06 INFO     Last Nightscout basal upload: None
2022-02-22 07:41:05 | stdout | 2022-02-22 07:41:05 INFO     Last CGM reading from t:connect: 2022-02-21T14:00:56-08:00 (9 hours, 40 minutes ago)
2022-02-22 07:41:05 | stdout | 2022-02-22 07:41:05 INFO     Downloading t:connect CSV data
2022-02-22 07:41:04 | stdout | 2022-02-22 07:41:04 INFO     Logged in to ControlIQApi successfully (expiration: 2022-02-22T15:41:03.894Z, in 7 hours, 59 minutes)
2022-02-22 07:41:03 | stdout | 2022-02-22 07:41:03 INFO     Logging in to ControlIQApi...
2022-02-22 07:41:03 | stdout | 2022-02-22 07:41:03 INFO     Downloading t:connect ControlIQ data
2022-02-22 07:41:03 | stdout | 2022-02-22 07:41:03 INFO     New reported t:connect data. (event index: 557043 last: None)
2022-02-22 07:41:03 | stdout | 2022-02-22 07:41:03 INFO     Logged in to AndroidApi successfully (expiration: 2022-02-22T15:41:03.492Z, in 7 hours, 59 minutes)
2022-02-22 07:41:03 | stdout | Starting auto-update between 2022-02-21 07:41:03.119633 and 2022-02-22 07:41:03.119633
2022-02-22 07:41:03 | stdout | 2022-02-22 07:41:03 INFO     Enabled features: BASAL, BOLUS

invalid token error

Sadly, I lack the Python knowledge to make a PR. The Android API raises an ApiException when the token expires; the error is raised on line 77, and from what I can see there is no catch to check whether the failure is specifically an expired-token error. Anyway, here is the exact error from the Docker console:

Traceback (most recent call last):

File "/home/appuser/main.py", line 64, in <module>

main()

File "/home/appuser/main.py", line 57, in main

process_auto_update(tconnect, time_start, time_end, args.pretend)

File "/home/appuser/tconnectsync/autoupdate.py", line 23, in process_auto_update

last_event = tconnect.android.last_event_uploaded(PUMP_SERIAL_NUMBER)

File "/home/appuser/tconnectsync/api/android.py", line 91, in last_event_uploaded

return self.get('cloud/upload/getlasteventuploaded?sn=%d' % pump_serial_number)

File "/home/appuser/tconnectsync/api/android.py", line 77, in get

raise ApiException(r.status_code, "Internal API HTTP %s response: %s" % (str(r.status_code), r.text))

tconnectsync.api.common.ApiException: Internal API HTTP 401 response: {"statusCode":401,"status":401,"code":401,"message":"Invalid token: access token has expired","name":"invalid_token"}

Issues with --check-login

I am getting a couple API errors. Any feedback is appreciated. I've uploaded the --check-login log file for reference and pasted some of the error output below. Saw some issues related to API re-work in the last couple months, not sure if this is related. I've double-checked my passwords, etc.

Issue 1: when querying the Control IQ API for dashboard summary
Issue 2: getting a RemoteDisconnected error on the WS2 API

Error occurred querying ControlIQ API for dashboard_summary:
ControlIQ API HTTP 403 response:

<title>403 Forbidden</title> ...

Logging in to t:connect WS2 API...
Error occurred querying WS2 API:
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Querying WS2 therapy_timeline_csv...
Error occurred querying WS2 therapy_timeline_csv:
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
...

pip3 install tconnectsync - Permission Error CentOS/RHEL/RockyLinux 8

Hello,

To perform a pip3 install on CentOS/RHEL/Rocky Linux 8 (mostly an out-of-the-box server install), I had to install Python 3 as my "wheel" user:
sudo dnf install python3
I then created a non-privileged, limited user for tconnectsync:
sudo useradd -m t1d
And switched to that user:
sudo su - t1d

Finally, when initially installing tconnectsync I received:

PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6'

I was able to get past this with:
pip3 install --user tconnectsync
which keeps the entire install contained within my user's home directory. That makes it easier to compartmentalize apps, keep the OS clean, and try out different versions, installs, and cleanups.

Hopefully that helps someone!

401 api response

No event index change: 91532

Sleeping 60 seconds after unexpected no index change

New event index: 91631 last: 91532

Downloading t:connect ControlIQ data

Traceback (most recent call last):
  File "/home/appuser/main.py", line 69, in <module>
    main()
  File "/home/appuser/main.py", line 62, in main
    process_auto_update(tconnect, nightscout, time_start, time_end, args.pretend)
  File "/home/appuser/tconnectsync/autoupdate.py", line 31, in process_auto_update
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend)
  File "/home/appuser/tconnectsync/process.py", line 35, in process_time_range
    raise e
  File "/home/appuser/tconnectsync/process.py", line 26, in process_time_range
    ciqTherapyTimelineData = tconnect.controliq.therapy_timeline(time_start, time_end)
  File "/home/appuser/tconnectsync/api/controliq.py", line 73, in therapy_timeline
    return self.get('therapytimeline/users/%s' % (self.userGuid), {
  File "/home/appuser/tconnectsync/api/controliq.py", line 63, in get
    raise ApiException(r.status_code, "ControlIQ API HTTP %s response: %s" % (str(r.status_code), r.text))
tconnectsync.api.common.ApiException: ControlIQ API HTTP 401 response:

According to Google, a 401 means accessing something without proper credentials, so there is that. I really wish I could help, but I haven't got the time to learn Python right now; it's on my list of things to do!

Credential issues with t:connect Android API

The first build took all night on a Synology, installing dependencies from Pipfile.lock, but it persevered. Now that it builds using the cache, the build performs very well.

My .env file is set up adjacent to the dockerfile in /volume1/docker/tconnectsync/
I am Eastern Daylight Time too.

Now, I'm so close yet so far...

MyUserName@SERVERNAME:/volume1/docker/tconnectsync$ docker build -t tconnectsync .
Sending build context to Docker daemon 161.3kB
Step 1/19 : FROM python:3.9-slim as base
---> b2b5367cdfd4
Step 2/19 : ENV LANG C.UTF-8
---> Using cache
---> 19f4b9cd64b4
Step 3/19 : ENV LC_ALL C.UTF-8
---> Using cache
---> d9b7e61b5051
Step 4/19 : ENV PYTHONDONTWRITEBYTECODE 1
---> Using cache
---> a687a2cd0a25
Step 5/19 : ENV PYTHONFAULTHANDLER 1
---> Using cache
---> 544bc9df5dfb
Step 6/19 : FROM base AS python-deps
---> 544bc9df5dfb
Step 7/19 : RUN pip install pipenv
---> Using cache
---> 7eb38a3e0f6b
Step 8/19 : RUN apt-get update && apt-get install -y --no-install-recommends gcc
---> Using cache
---> cbf00d210bec
Step 9/19 : COPY Pipfile .
---> Using cache
---> 66e07482e49b
Step 10/19 : COPY Pipfile.lock .
---> Using cache
---> aed461982635
Step 11/19 : RUN PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy
---> Using cache
---> 690e8e891410
Step 12/19 : FROM base AS runtime
---> 544bc9df5dfb
Step 13/19 : COPY --from=python-deps /.venv /.venv
---> Using cache
---> 76c45c4c0944
Step 14/19 : ENV PATH="/.venv/bin:$PATH"
---> Using cache
---> 42f6080509b8
Step 15/19 : RUN useradd --create-home appuser
---> Using cache
---> fb81eef236e7
Step 16/19 : WORKDIR /home/appuser
---> Using cache
---> b9b5a398108c
Step 17/19 : USER appuser
---> Using cache
---> 61da1ca386a5
Step 18/19 : COPY . .
---> Using cache
---> 6ae19514c3d0
Step 19/19 : ENTRYPOINT ["python3", "-u", "main.py"]
---> Using cache
---> 2cac16cc5442
Successfully built 2cac16cc5442
Successfully tagged tconnectsync:latest

MyUserName@SERVERNAME:/volume1/docker/tconnectsync$ docker run tconnectsync --auto-update
Traceback (most recent call last):
  File "/home/appuser/main.py", line 355, in <module>
    main()
  File "/home/appuser/main.py", line 314, in main
    last_event = tconnect.android.last_event_uploaded(PUMP_SERIAL_NUMBER)
  File "/home/appuser/api/__init__.py", line 40, in android
    self._android = AndroidApi(self.email, self.password)
  File "/home/appuser/api/android.py", line 31, in __init__
    self.login(email, password)
  File "/home/appuser/api/android.py", line 56, in login
    self.patientObjectId = j["user"]["patientObjectId"]
KeyError: 'patientObjectId'

feature request: include site change data

Good afternoon!

Just some thoughts after using this for a bit. If it's possible, it would be nice to have other pump data included, such as cartridge replacements and cannula fills. With "fill cannula", a nice option would be some sort of flag that allows that step to populate a site change log entry in Nightscout, something like:
--site-change-when-fill-cannula
Ha, maybe that's a little wordy, just a thought though.

Battery level would be awesome too, I'm just not sure how much data is out there that can be pulled in from TConnect, so the "feature request" is just blind, more of a "wish list" I suppose.

Thanks!

Cannot parse argument of type None

My tconnectsync was working fine until I did a split bolus of 75% now and 25% later over 2 hours... 30 minutes in, I decided to cancel the rest of the bolus. Here is the verbose output from the command prompt (I am running using Docker on Windows 10):

DEBUG:root:Set logging level to DEBUG
INFO:tconnectsync.process:Downloading t:connect ControlIQ data
Processing data between 2021-07-08 07:09:01.450854 and 2021-07-09 07:09:01.450854
DEBUG:tconnectsync.api:Instantiating new ControlIQApi
INFO:tconnectsync.api.controliq:Logging in to ControlIQApi...
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): tconnect.tandemdiabetes.com:443
DEBUG:urllib3.connectionpool:https://tconnect.tandemdiabetes.com:443 "GET /login.aspx?ReturnUrl=%2F HTTP/1.1" 200 33507
DEBUG:urllib3.connectionpool:https://tconnect.tandemdiabetes.com:443 "POST /login.aspx?ReturnUrl=%2F HTTP/1.1" 302 148
DEBUG:urllib3.connectionpool:https://tconnect.tandemdiabetes.com:443 "POST /CookieCheck.aspx?ReturnUrl=%2F HTTP/1.1" 302 118
DEBUG:urllib3.connectionpool:https://tconnect.tandemdiabetes.com:443 "GET / HTTP/1.1" 200 167543
INFO:tconnectsync.api.controliq:Logged in to ControlIQApi successfully (expiration: 2021-07-09T15:09:02.176Z, in 7 hours, 59 minutes)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): tdcservices.tandemdiabetes.com:443
DEBUG:urllib3.connectionpool:https://tdcservices.tandemdiabetes.com:443 "GET /tconnect/controliq/api/therapytimeline/users/c6dfafb2-1dc0-47b4-9209-a985d311cf62?startDate=07-08-2021&endDate=07-09-2021 HTTP/1.1" 200 2049
INFO:tconnectsync.process:Downloading t:connect CSV data
DEBUG:tconnectsync.api:Instantiating new WS2Api
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): tconnectws2.tandemdiabetes.com:443
DEBUG:urllib3.connectionpool:https://tconnectws2.tandemdiabetes.com:443 "GET /therapytimeline2csv/c6dfafb2-1dc0-47b4-9209-a985d311cf62/07-08-2021/07-09-2021?format=csv HTTP/1.1" 200 32835
DEBUG:tconnectsync.process:{'DeviceType': 't:slim X2 Insulin Pump', 'SerialNumber': 'deleted', 'Description': 'EGV', 'EventDateTime': '2021-07-08T23:30:47', 'Readings (CGM / BGM)': '116'}
INFO:tconnectsync.process:Last CGM reading from t:connect: 2021-07-08T23:30:47-06:00 (1 hours, 38 minutes ago)
DEBUG:tconnectsync.process:No CSV basal data found
DEBUG:tconnectsync.sync.basal:ns_write_basal_events: querying for last uploaded entry
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): deleted.herokuapp.com:443
DEBUG:urllib3.connectionpool:https://deleted.herokuapp.com:443 "GET /api/v1/treatments?count=1&find%5BenteredBy%5D=Pump%20%28tconnectsync%29&find%5BeventType%5D=Temp%20Basal&ts=1625814557.7124944 HTTP/1.1" 200 269
INFO:tconnectsync.sync.basal:Last Nightscout basal upload: 2021-07-09T07:02:11+00:00
DEBUG:tconnectsync.sync.basal:ns_write_basal_events: added 0 events
DEBUG:tconnectsync.sync.bolus:ns_write_bolus_events: querying for last uploaded entry
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): deleted.herokuapp.com:443
DEBUG:urllib3.connectionpool:https://deleted.herokuapp.com:443 "GET /api/v1/treatments?count=1&find%5BenteredBy%5D=Pump%20%28tconnectsync%29&find%5BeventType%5D=Combo%20Bolus&ts=1625814558.1518533 HTTP/1.1" 200 198
INFO:tconnectsync.sync.bolus:Last Nightscout bolus upload: 2021-07-09T01:19:45+00:00
Traceback (most recent call last):
  File "/home/appuser/main.py", line 77, in <module>
    main()
  File "/home/appuser/main.py", line 73, in main
    added = process_time_range(tconnect, nightscout, time_start, time_end, args.pretend)
  File "/home/appuser/tconnectsync/process.py", line 72, in process_time_range
    added += ns_write_bolus_events(nightscout, bolusEvents, pretend=pretend)
  File "/home/appuser/tconnectsync/sync/bolus.py", line 46, in ns_write_bolus_events
    if last_upload_time and arrow.get(event["completion_time"]) <= last_upload_time:
  File "/.venv/lib/python3.9/site-packages/arrow/api.py", line 80, in get
    return _factory.get(*args, **kwargs)
  File "/.venv/lib/python3.9/site-packages/arrow/factory.py", line 228, in get
    raise TypeError("Cannot parse argument of type None.")
TypeError: Cannot parse argument of type None.

I also attached my CSV; the last entry is the extended bolus that I canceled. Since this error, no boluses have been uploaded to my Nightscout.
tt-08072021_074258.xlsx
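A hedged sketch of the guard the traceback suggests is missing: a cancelled extended bolus has no `completion_time`, so fall back to another timestamp instead of passing None to the date parser. The field names mirror tconnectsync's bolus events, but the `request_time` fallback is an assumption, and `datetime.fromisoformat` stands in for the `arrow.get()` call the project actually uses.

```python
from datetime import datetime

def event_cutoff_time(event):
    """Return the timestamp to compare against the last Nightscout upload,
    or None when the event has no usable timestamp (e.g. a cancelled
    extended bolus), in which case the caller should skip the comparison."""
    ts = event.get("completion_time") or event.get("request_time")
    if not ts:
        return None
    # tconnectsync parses with arrow.get(); fromisoformat stands in here.
    return datetime.fromisoformat(ts)

# A cancelled extended bolus: no completion time, so fall back.
print(event_cutoff_time({"completion_time": None,
                         "request_time": "2021-07-09T01:19:45+00:00"}))
```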

Write tconnect pump data to Tidepool

Tidepool's PC uploader obtains pump data from the pump. However, the uploader requires connecting the pump via USB to a computer running the Tidepool uploader.

A Tidepool uploader that obtains the pump data from tconnect would avoid the wait and hassle of connecting via USB.

The end state would be a fully automatic uploader.

Would this be a totally new tool 'tconnectsync-to-Tidepool' or an additional capability of tconnectsync?

Documentation Order

Hey,

Thank you so much for creating this project!

I have one small piece of feedback on the documentation for Docker using the local copy method. I had to first copy the .env file into the build directory and then run the build commands in order for tconnectsync to see/use my .env file inside the container once it was running. The documentation has that step happening after the build is complete.

I am unclear how to do a PR to update it and submit it, so I figure an issue report might be helpful.

Thanks again for all the work!
-Chad

Feature Request: Option to adjust values to correct for use of U-200 insulin

I use U-200 insulin in my pump (per endo's orders), and of course the pump doesn't understand anything other than U-100, so any values reported by the API that represent units of Insulin administered need to be doubled. The incorrect numbers in the t-connect site created some major confusion - it seems like it would be nice at least for NS to have the correct values.

Thanks!
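One way such an option could be applied, as a hedged sketch: a post-processing step that scales the insulin amounts before upload. U-200 is twice the concentration of U-100, so the factor is 2; the `insulin`/`units` key names are illustrative, not tconnectsync's actual field set.

```python
U200_SCALE = 2.0  # U-200 insulin is twice as concentrated as U-100

def scale_insulin(treatment, factor=U200_SCALE):
    """Return a copy of a treatment with any insulin amounts scaled,
    correcting values the pump reported in U-100 terms.
    (Key names here are hypothetical placeholders.)"""
    out = dict(treatment)
    for key in ("insulin", "units"):
        if out.get(key) is not None:
            out[key] = round(out[key] * factor, 2)
    return out

print(scale_insulin({"eventType": "Bolus", "insulin": 1.25}))
# {'eventType': 'Bolus', 'insulin': 2.5}
```

Basal rates would presumably need the same scaling, since they are also delivered in pump units per hour.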

Not Working

I have tried the latest version. I believe the CSV format was updated or something; I tried printing out the sections, but all I get is an empty array for each.


feature: track Sleep and Exercise modes in Nightscout

Example sleep mode schema from api.controliq.therapy_timeline:

'events': [{'continuation': None,
             'duration': 30660,
             'eventType': 1,
             'timeZoneId': 'America/Los_Angeles',
             'x': 1635150630}],

Event types are listed in TConnectEntry
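Given the schema above, a hedged sketch of what tracking could look like: convert one sleep event into a Nightscout-style treatment. The output field names follow the treatments tconnectsync already writes; the duration conversion assumes the API reports seconds (30660 s is 8.5 h, which is plausible for a sleep window) while Nightscout durations are minutes.

```python
from datetime import datetime, timezone

def sleep_event_to_treatment(event):
    """Map a ControlIQ sleep event (schema as above) to a Nightscout
    treatment. `x` is assumed to be a Unix timestamp for the start and
    `duration` to be in seconds."""
    start = datetime.fromtimestamp(event["x"], tz=timezone.utc)
    return {
        "eventType": "Sleep",
        "created_at": start.isoformat(),
        "duration": event["duration"] / 60,  # Nightscout durations are minutes
        "enteredBy": "Pump (tconnectsync)",
    }

t = sleep_event_to_treatment({"continuation": None, "duration": 30660,
                              "eventType": 1,
                              "timeZoneId": "America/Los_Angeles",
                              "x": 1635150630})
print(t["duration"])  # 511.0
```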

Feature Request: Dump useful info upon startup to stdout; read in command line parameters via environmental variables

Features for Docker environments that don't allow changes to the execution command

Synology (and others?) doesn't let you change the execution command once the container is created. The only way to change the execution command is to delete the container and create a new one.

So, upon boot up, write useful stuff to stdout so that we can leave the execution command alone. Useful info that could be dumped:
--version
--check-login

Other command line parameters could be read in as environmental variables, like:
--auto-update <yes|no>
--features
--start-date
--days
--verbose
Or do it with a single environmental variable. Something like: COMMAND_LINE_PARAMETERS
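The single-variable idea could be sketched like this: merge any existing arguments with extra ones pulled from the environment, so a container with a fixed execution command can still be reconfigured. `COMMAND_LINE_PARAMETERS` is this issue's suggested name, not an option tconnectsync currently reads.

```python
import os
import shlex
import sys

def args_from_env(argv=None):
    """Combine real command-line arguments with extras supplied via a
    hypothetical COMMAND_LINE_PARAMETERS environment variable, for
    container runtimes that cannot change the execution command."""
    argv = list(argv if argv is not None else sys.argv[1:])
    extra = os.environ.get("COMMAND_LINE_PARAMETERS", "")
    # shlex.split respects quoting, e.g. values containing spaces.
    return argv + shlex.split(extra)

os.environ["COMMAND_LINE_PARAMETERS"] = "--auto-update --features BASAL BOLUS"
print(args_from_env([]))
# ['--auto-update', '--features', 'BASAL', 'BOLUS']
```

The resulting list could then be handed straight to the existing argparse parser.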

Get CGMTherapyEvent processed and logged to Nightscout

We need to add logging for CGM data from the main Control-IQ API.

I was able to get it added with minimal updates, but had to do a funky workaround with features.py. I may not quite understand the design, but since WS2 is out, we may need to rethink how the features are grouped... I'll push my commits in a bit.

Fatal Parse Error on Date Format

I regularly encounter the following error:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arrow/parser.py", line 727, in _parse_multiformat
raise ParserError(
arrow.parser.ParserError: Could not match input '' to any of the following formats: YYYY-MM-DD, YYYY-M-DD, YYYY-M-D, YYYY/MM/DD, YYYY/M/DD, YYYY/M/D, YYYY.MM.DD, YYYY.M.DD, YYYY.M.D, YYYYMMDD, YYYY-DDDD, YYYYDDDD, YYYY-MM, YYYY/MM, YYYY.MM, YYYY, W.

It seems to occur only during an extended bolus. Once the extended bolus is over the error goes away. I'll put the full debug output here:
tconnectsync --auto-update --verbose
2021-12-11 11:39:17 DEBUG Set logging level to DEBUG
2021-12-11 11:39:17 INFO Enabled features: BASAL, BOLUS, IOB
Starting auto-update between 2021-12-10 11:39:17.665888 and 2021-12-11 11:39:17.665888
2021-12-11 11:39:17 DEBUG Instantiating new AndroidApi
2021-12-11 11:39:17 DEBUG Starting new HTTPS connection (1): tdcservices.tandemdiabetes.com:443
2021-12-11 11:39:18 DEBUG https://tdcservices.tandemdiabetes.com:443 "POST /cloud/oauth2/token HTTP/1.1" 200 345
2021-12-11 11:39:18 INFO Logged in to AndroidApi successfully (expiration: 2021-12-12T00:39:18.046Z, in 7 hours, 59 minutes)
2021-12-11 11:39:18 DEBUG Starting new HTTPS connection (1): tdcservices.tandemdiabetes.com:443
2021-12-11 11:39:18 DEBUG https://tdcservices.tandemdiabetes.com:443 "GET /cloud/upload/getlasteventuploaded?sn=880757 HTTP/1.1" 200 48
2021-12-11 11:39:18 INFO New reported t:connect data. (event index: 49149 last: None)
2021-12-11 11:39:18 INFO Downloading t:connect ControlIQ data
2021-12-11 11:39:18 DEBUG Instantiating new ControlIQApi
2021-12-11 11:39:18 INFO Logging in to ControlIQApi...
2021-12-11 11:39:18 DEBUG Starting new HTTPS connection (1): tconnect.tandemdiabetes.com:443
2021-12-11 11:39:18 DEBUG https://tconnect.tandemdiabetes.com:443 "GET /login.aspx?ReturnUrl=%2F HTTP/1.1" 200 33832
2021-12-11 11:39:19 DEBUG https://tconnect.tandemdiabetes.com:443 "POST /login.aspx?ReturnUrl=%2F HTTP/1.1" 302 148
2021-12-11 11:39:19 DEBUG https://tconnect.tandemdiabetes.com:443 "POST /CookieCheck.aspx?ReturnUrl=%2F HTTP/1.1" 302 118
2021-12-11 11:39:19 DEBUG https://tconnect.tandemdiabetes.com:443 "GET / HTTP/1.1" 200 165794
2021-12-11 11:39:20 INFO Logged in to ControlIQApi successfully (expiration: 2021-12-12T00:39:19.094Z, in 7 hours, 59 minutes)
2021-12-11 11:39:20 DEBUG Starting new HTTPS connection (1): tdcservices.tandemdiabetes.com:443
2021-12-11 11:39:20 DEBUG https://tdcservices.tandemdiabetes.com:443 "GET /tconnect/controliq/api/therapytimeline/users/081b3701-3f3f-4f11-9a9e-c7d4bd709905?startDate=12-10-2021&endDate=12-11-2021 HTTP/1.1" 200 2038
2021-12-11 11:39:20 INFO Downloading t:connect CSV data
2021-12-11 11:39:20 DEBUG Instantiating new WS2Api
2021-12-11 11:39:20 DEBUG Starting new HTTPS connection (1): tconnectws2.tandemdiabetes.com:443
2021-12-11 11:39:24 DEBUG https://tconnectws2.tandemdiabetes.com:443 "GET /therapytimeline2csv/081b3701-3f3f-4f11-9a9e-c7d4bd709905/12-10-2021/12-11-2021?format=csv HTTP/1.1" 200 51819
2021-12-11 11:39:24 DEBUG {'DeviceType': 't:slim X2 Insulin Pump', 'SerialNumber': '880757', 'Description': 'EGV', 'EventDateTime': '2021-12-11T11:12:58', 'Readings (CGM / BGM)': '73'}
2021-12-11 11:39:24 INFO Last CGM reading from t:connect: 2021-12-11T11:12:58-05:00 (26 minutes ago)
2021-12-11 11:39:24 DEBUG Creating basal event for unprocessed suspension: {'time': '2021-12-10 04:52:03-05:00', 'delivery_type': 'manual suspension', 'duration_mins': 26.2, 'basal_rate': 0.0}
2021-12-11 11:39:24 DEBUG Creating basal event for unprocessed suspension: {'time': '2021-12-10 10:33:35-05:00', 'delivery_type': 'manual suspension', 'duration_mins': 3.7, 'basal_rate': 0.0}
2021-12-11 11:39:24 DEBUG Creating basal event for unprocessed suspension: {'time': '2021-12-10 11:52:10-05:00', 'delivery_type': 'manual suspension', 'duration_mins': 0.6, 'basal_rate': 0.0}
2021-12-11 11:39:24 DEBUG No CSV basal data found
2021-12-11 11:39:24 DEBUG ns_write_basal_events: querying for last uploaded entry
2021-12-11 11:39:24 DEBUG Starting new HTTPS connection (1): .herokuapp.com:443
2021-12-11 11:39:25 DEBUG https://.herokuapp.com:443 "GET /api/v1/treatments?count=1&find%5BenteredBy%5D=Pump%20%28tconnectsync%29&find%5BeventType%5D=Temp%20Basal&ts=1639240764.892172 HTTP/1.1" 200 284
2021-12-11 11:39:25 INFO Last Nightscout basal upload: 2021-12-11T15:27:54+00:00
2021-12-11 11:39:25 DEBUG ns_write_basal_events: added 0 events
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/tconnectsync", line 8, in <module>
    sys.exit(main())
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/__init__.py", line 85, in main
    process_auto_update(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/autoupdate.py", line 39, in process_auto_update
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend, features=features)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/process.py", line 87, in process_time_range
    bolusEvents = process_bolus_events(bolusData)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/sync/bolus.py", line 21, in process_bolus_events
    parsed = TConnectEntry.parse_bolus_entry(b)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/parser/tconnect.py", line 116, in parse_bolus_entry
    "bolex_completion_time": TConnectEntry._datetime_parse(data["BolexCompletionDateTime"]).format() if complete and extended_bolus else None,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tconnectsync/parser/tconnect.py", line 65, in _datetime_parse
    return arrow.get(date, tzinfo=TIMEZONE_NAME)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arrow/api.py", line 91, in get
    return _factory.get(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arrow/factory.py", line 254, in get
    dt = parser.DateTimeParser(locale).parse_iso(arg, normalize_whitespace)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arrow/parser.py", line 298, in parse_iso
    return self._parse_multiformat(datetime_string, formats)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arrow/parser.py", line 727, in _parse_multiformat
    raise ParserError(
arrow.parser.ParserError: Could not match input '' to any of the following formats: YYYY-MM-DD, YYYY-M-DD, YYYY-M-D, YYYY/MM/DD, YYYY/M/DD, YYYY/M/D, YYYY.MM.DD, YYYY.M.DD, YYYY.M.D, YYYYMMDD, YYYY-DDDD, YYYYDDDD, YYYY-MM, YYYY/MM, YYYY.MM, YYYY, W.

Docker Hub images not being updated (Docker on Synology)

Reading the log below, I'm assuming self.patientObjectId = j["user"]["patientObjectId"] means that the environment variables aren't getting in.

Does the fix for reading the environment variables from $HOME/.config/tconnectsync/.env work for Docker? When I tried it, the path didn't exist. I tried mounting the .env file at /home/appuser/tconnectsync/.env and /home/.config/tconnectsync/.env. Neither one worked.

Running Docker on Synology using the GUI. GUI allows me to put in environment variables but I can't change the execution command after I create the container (annoying). The below uses execution command: "python3 -u main.py --auto-update"

2022-02-10 08:33:16 | stdout | Starting auto-update between 2022-02-09 08:33:16.338007 and 2022-02-10 08:33:16.338007
2022-02-10 08:33:16 | stdout | Traceback (most recent call last):
2022-02-10 08:33:16 | stdout |   File "/home/appuser/main.py", line 83, in <module>
2022-02-10 08:33:16 | stdout |     main()
2022-02-10 08:33:16 | stdout |   File "/home/appuser/main.py", line 76, in main
2022-02-10 08:33:16 | stdout |     process_auto_update(tconnect, nightscout, time_start, time_end, args.pretend)
2022-02-10 08:33:16 | stdout |   File "/home/appuser/tconnectsync/autoupdate.py", line 30, in process_auto_update
2022-02-10 08:33:16 | stdout |     last_event = tconnect.android.last_event_uploaded(PUMP_SERIAL_NUMBER)
2022-02-10 08:33:16 | stdout |   File "/home/appuser/tconnectsync/api/__init__.py", line 54, in android
2022-02-10 08:33:16 | stdout |     self._android = AndroidApi(self.email, self.password)
2022-02-10 08:33:16 | stdout |   File "/home/appuser/tconnectsync/api/android.py", line 41, in __init__
2022-02-10 08:33:16 | stdout |     self.login(email, password)
2022-02-10 08:33:16 | stdout |   File "/home/appuser/tconnectsync/api/android.py", line 72, in login
2022-02-10 08:33:16 | stdout |     self.patientObjectId = j["user"]["patientObjectId"]
2022-02-10 08:33:16 | stdout | KeyError: 'patientObjectId'

Sleep pump events duplicated in Nightscout

A single sleep event is split into multiple Nightscout treatments instead of being merged into one.

/api/v1/treatments?find[enteredBy]=Pump%20(tconnectsync)&find[created_at][$gte]=2022-07-18%2010:00&find[created_at][$lte]=2022-07-19%2012:00&find[eventType]=Sleep

[
  {
    "_id": "62d793ac89093289ef6df149",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 8.15,
    "created_at": "2022-07-19 09:53:19-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  },
  {
    "_id": "62d793ac89093289ef6df148",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 85.01666666666667,
    "created_at": "2022-07-19 07:53:19-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  },
  {
    "_id": "62d793ac89093289ef6df147",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 25,
    "created_at": "2022-07-19 07:08:20-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  },
  {
    "_id": "62d793ac89093289ef6df146",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 130,
    "created_at": "2022-07-19 04:18:20-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  },
  {
    "_id": "62d793ac89093289ef6df145",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 48.5,
    "created_at": "2022-07-19 02:59:50-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  },
  {
    "_id": "62d793ac89093289ef6df144",
    "eventType": "Sleep",
    "reason": "Sleep",
    "notes": "Sleep",
    "duration": 171.56666666666666,
    "created_at": "2022-07-19 00:00:28-04:00",
    "enteredBy": "Pump (tconnectsync)",
    "carbs": null,
    "insulin": null
  }
]
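A hedged sketch of one way the duplicates above could be merged, using plain (start-minute, duration-minutes) pairs rather than real timestamps for brevity: if a Sleep treatment starts within a small tolerance of the previous one ending, extend the previous event instead of emitting a new one.

```python
def merge_contiguous(treatments, tolerance_min=1.0):
    """Collapse back-to-back events into one. `treatments` is a list of
    (start_minute, duration_minutes) tuples; events whose gap from the
    previous event's end is within `tolerance_min` are merged."""
    merged = []
    for start_min, duration in sorted(treatments):
        if merged:
            prev_start, prev_dur = merged[-1]
            if start_min - (prev_start + prev_dur) <= tolerance_min:
                # Extend the previous event to cover this one.
                merged[-1] = (prev_start, start_min + duration - prev_start)
                continue
        merged.append((start_min, duration))
    return merged

# Two events that abut, then one after a long gap:
print(merge_contiguous([(0, 60), (60.5, 30), (300, 45)]))
# [(0, 90.5), (300, 45)]
```

The real fix would presumably do this either at upload time (before writing treatments) or by comparing against the most recent Sleep treatment already in Nightscout.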

extended bolus doesn't get added to nightscout

An extended bolus doesn't add the extended portion as a treatment; it just adds a note to the up-front bolus amount.
tconnectsync 0.5

12:36 PM | Combo Bolus |   | 4.42 | 76 |   |   |   |   |   |   | Pump (tconnectsync) | Extended 50.00%/8.84 (Extended)

(screenshots from t:connect and Nightscout attached)

Feature Suggestion: Upload Bolus Calculator Carbs as Note

Feature Suggestion: Add an option for carbs from bolus entries to be uploaded to Nightscout as a carb treatment as they are now or as a note along the lines of "calculated bolus for x carbs" on the insulin treatment.

Use cases: I log carbs as I consume them, so I end up with double entries in Nightscout. Doing this also lets me track pre-bolus duration. Other situations in which I find this useful include covering an estimated number of carbs and then logging carbs as I consume them, such as when grazing at a party; logging multiple parts of a meal separately, such as lunch and dessert, so I can see if the spike is related to when I ate the higher-carb portion; and eating something different than planned (possibly due to glucose trends).

In this example, I calculated a bolus for 37g carbs but then changed my mind about what I was eating and logged the 25g and 18g as I ate and then covered the 6g difference between what I'd eaten and what I'd originally intended to eat. So I ate 43g, not 86 as Nightscout implies.
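The suggested option could be sketched as a small branch when building the bolus treatment. The `carbs_as_note` flag is hypothetical; the output fields follow the shape of the treatments tconnectsync already writes.

```python
def bolus_treatment(insulin, carbs, carbs_as_note=False):
    """Build a Nightscout treatment for a bolus. With carbs_as_note=True
    (the suggested option), carbs go into the notes field instead of the
    carbs field, so manually logged carbs are not double counted."""
    t = {
        "eventType": "Combo Bolus",
        "insulin": insulin,
        "enteredBy": "Pump (tconnectsync)",
    }
    if carbs_as_note:
        t["notes"] = f"Calculated bolus for {carbs}g carbs"
    else:
        t["carbs"] = carbs
    return t

print(bolus_treatment(4.42, 37, carbs_as_note=True)["notes"])
# Calculated bolus for 37g carbs
```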

"Unable to import parser secrets from secret.py"

With:

  • up to date Arch Linux x86_64
  • Python 3.9.6
  • git cloned into a temporary directory
  • .env file filled with known good values as per instructions :

Running "pipenv run tconnectsync" from one directory above main.py results in: "Error: the command tconnectsync could not be found within PATH or Pipfile's [scripts]."

Also, the error message "Unable to import parser secrets from secret.py" is returned no matter what combination of arguments I put after "python3 main.py", even just "-h", during initial testing. secret.py exists in the second tconnectsync directory (one level below main.py), and its permissions are the same as all other .py files (-rw-r--r--).

Probably unrelated, but I had to manually 'pip install arrow' to get to this point.

Let me know what other troubleshooting data you might need.
Thanks

parser/tconnect.py does not account for BG therapy event types

Noticed this error in my first attempts at getting tconnectsync running. It seems that when split_therapy_events is called in ciq_therapy_events.py, parser/tconnect.py only allows for "CGM" and "Bolus" event types (data['type']). When I ran it on a day of my data, a few records were of type "BG". Not sure if this is just a new update in t:connect or what... We'll probably need to account for those entries and try to parse them, or at least acknowledge that they're there and continue processing known data types.

See below error log from another user (matches my terminal output):
$ tconnectsync --auto-update
2022-08-20 20:17:19 INFO Enabled features: BASAL, BOLUS
Starting auto-update between 2022-08-19 20:17:19.923273 and 2022-08-20 20:17:19.923273
2022-08-20 20:17:20 INFO Logged in to AndroidApi successfully (expiration: 2022-08-21T08:17:20.401Z, in 7 hours, 59 minutes)
2022-08-20 20:17:20 INFO New reported t:connect data. (event index: 817956 last: None)
2022-08-20 20:17:20 INFO Downloading t:connect ControlIQ data
2022-08-20 20:17:20 INFO Logging in to ControlIQApi...
2022-08-20 20:17:21 INFO Reported tconnect software version: t:connect 7.14.0.1
2022-08-20 20:17:24 INFO Logged in to ControlIQApi successfully (expiration: 2022-08-21T08:17:21.513Z, in 7 hours, 59 minutes)
2022-08-20 20:17:25 INFO Downloading t:connect therapy_events
Traceback (most recent call last):
  File "/usr/local/bin/tconnectsync", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/tconnectsync/__init__.py", line 87, in main
    sys.exit(u.process(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features))
  File "/usr/local/lib/python3.7/dist-packages/tconnectsync/autoupdate.py", line 48, in process
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend, features=features)
  File "/usr/local/lib/python3.7/dist-packages/tconnectsync/process.py", line 67, in process_time_range
    ciqBolusData, ciqReadingData = split_therapy_events(ciqTherapyEventsData)
  File "/usr/local/lib/python3.7/dist-packages/tconnectsync/parser/ciq_therapy_events.py", line 12, in split_therapy_events
    event = TConnectEntry.parse_therapy_event(e)
  File "/usr/local/lib/python3.7/dist-packages/tconnectsync/parser/tconnect.py", line 204, in parse_therapy_event
    raise UnknownTherapyEventException(data)
tconnectsync.parser.tconnect.UnknownTherapyEventException: Unknown therapy event type:
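A hedged sketch of the more tolerant behavior suggested above: log and skip unrecognized types (such as the new "BG" records) instead of raising, so one unknown record does not abort the whole sync. This is a simplified stand-in for the real split_therapy_events/parse_therapy_event pair, keeping only the type dispatch.

```python
import logging

def split_therapy_events(events):
    """Split therapy events into bolus and CGM-reading lists, as the real
    function does, but skip unknown data['type'] values with a warning
    rather than raising UnknownTherapyEventException."""
    bolus, readings = [], []
    for e in events:
        kind = e.get("type")
        if kind == "Bolus":
            bolus.append(e)
        elif kind == "CGM":
            readings.append(e)
        else:
            # e.g. the new "BG" entries: acknowledge and continue.
            logging.warning("Skipping unknown therapy event type: %r", kind)
    return bolus, readings

b, r = split_therapy_events([{"type": "CGM"}, {"type": "BG"}, {"type": "Bolus"}])
print(len(b), len(r))  # 1 1
```

A variant could collect the skipped records and parse "BG" entries properly once their schema is understood.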

Update Frequency: how?

First of all, great work. I followed the posted directions, and it pulled 24 hours of data from t:connect and displayed it in Nightscout. Awesome. However, the Android app only pushes data to t:connect every hour, so how are you able to reduce that to 5 minutes? I'd really like to try that someday.

Thanks,
-Ryan

Unknown basal suspension event type

I tried the --features option to include PUMP_EVENTS from the command line before adding it to my run.sh and I got the following error:

kristen_swick@instance-5:~$ tconnectsync --auto-update --features {BASAL,BOLUS,IOB,PUMP_EVENTS}
2021-12-27 20:01:28 INFO     Enabled features: BASAL, BOLUS, IOB, PUMP_EVENTS
Starting auto-update between 2021-12-26 20:01:28.755914 and 2021-12-27 20:01:28.755914 
2021-12-27 20:01:29 INFO     Logged in to AndroidApi successfully (expiration: 2021-12-28T04:01:29.065Z, in 7 hours, 59 minutes)
2021-12-27 20:01:29 INFO     New reported t:connect data. (event index: 36241 last: None)
2021-12-27 20:01:29 INFO     Downloading t:connect ControlIQ data
2021-12-27 20:01:29 INFO     Logging in to ControlIQApi...
2021-12-27 20:01:31 INFO     Logged in to ControlIQApi successfully (expiration: 2021-12-28T04:01:29.952Z, in 7 hours, 59 minutes)
2021-12-27 20:01:31 INFO     Downloading t:connect CSV data
2021-12-27 20:01:33 INFO     Last CGM reading from t:connect: 2021-12-27T14:41:13-05:00 (20 minutes ago)
2021-12-27 20:01:33 INFO     Last Nightscout basal upload: 2021-12-27T19:19:17+00:00
Traceback (most recent call last):
  File "/home/kristen_swick/.local/bin/tconnectsync", line 8, in <module>
    sys.exit(main())
  File "/home/kristen_swick/.local/lib/python3.8/site-packages/tconnectsync/__init__.py", line 85, in main
    process_auto_update(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features)
  File "/home/kristen_swick/.local/lib/python3.8/site-packages/tconnectsync/autoupdate.py", line 39, in process_auto_update
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend, features=features)
  File "/home/kristen_swick/.local/lib/python3.8/site-packages/tconnectsync/process.py", line 97, in process_time_range
    bsPumpEvents = process_basalsuspension_events(ws2BasalSuspension)
  File "/home/kristen_swick/.local/lib/python3.8/site-packages/tconnectsync/sync/pump_events.py", line 41, in process_basalsuspension_events
    parsed = TConnectEntry.parse_basalsuspension_event(event)
  File "/home/kristen_swick/.local/lib/python3.8/site-packages/tconnectsync/parser/tconnect.py", line 179, in parse_basalsuspension_event
    raise UnknownBasalSuspensionEventException(data)
tconnectsync.parser.tconnect.UnknownBasalSuspensionEventException: Unknown basal suspension event type: {'EventDateTime': '/Date(1640541521000-0000)/', 'SuspendReason': 'temp-profile'}

Note: I already had tconnectsync running continuously under supervisor with the default features, so all of the other data should have already been up to date in Nightscout.
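
The fix here is presumably teaching the parser about the `temp-profile` SuspendReason, but the logged event also shows the WS2 date encoding. A minimal sketch of parsing that `/Date(1640541521000-0000)/` value, assuming (from the logged string alone) that it is epoch milliseconds followed by a UTC offset:

```python
import re
from datetime import datetime, timezone

def parse_dotnet_date(raw: str) -> datetime:
    # Grammar inferred from the error above: epoch milliseconds, then an
    # optional +/-HHMM offset. The offset is ignored because the epoch value
    # is already absolute.
    m = re.match(r"/Date\((\d+)([+-]\d{4})?\)/", raw)
    if not m:
        raise ValueError(f"Unrecognized date format: {raw}")
    return datetime.fromtimestamp(int(m.group(1)) / 1000, tz=timezone.utc)
```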

Write tconnect pump data to Sugarmate

Sugarmate has no ability to automatically get pump data.

Writing pump data obtained from t:connect into Sugarmate would allow remote monitoring of the X2 in Sugarmate without care providers manually entering data, and would surface currently unavailable data from automatic boluses.

Example of usefulness:
We have 4 different care providers who can monitor blood sugar and pump actions via Sugarmate. Different care providers (school med-tech, daycare, relatives) manually input pump information into Sugarmate when they are with the child allowing other providers (parents) to stay informed and respond as necessary with guidance.

X2 runs ControlIQ and does automatic boluses. If remote monitoring (by parents) does not know what boluses are being done automatically, over-bolusing (stacking?) can occur.

Would this be a totally new tool 'tconnectsync-to-Sugarmate' or an additional capability of tconnectsync?

Any plans for Windows environments?

Apologies if this has been asked before, but I didn't see it as an open/closed issue.

Do you know of any way to run this under Windows? I was thinking of using a VM, but never had luck with VirtualBox.

TIA

ws2 api 500 error

New event index: 83187 last: 83092
Downloading t:connect ControlIQ data
Downloading t:connect CSV data

Traceback (most recent call last):
  File "/home/appuser/main.py", line 64, in <module>
    main()
  File "/home/appuser/main.py", line 57, in main
    process_auto_update(tconnect, time_start, time_end, args.pretend)
  File "/home/appuser/tconnectsync/autoupdate.py", line 31, in process_auto_update
    added = process_time_range(tconnect, time_start, time_end, pretend)
  File "/home/appuser/tconnectsync/process.py", line 38, in process_time_range
    csvdata = tconnect.ws2.therapy_timeline_csv(time_start, time_end)
  File "/home/appuser/tconnectsync/api/ws2.py", line 61, in therapy_timeline_csv
    req_text = self.get('therapytimeline2csv/%s/%s/%s?format=csv' % (self.userGuid, startDate, endDate), {})
  File "/home/appuser/tconnectsync/api/ws2.py", line 18, in get
    raise ApiException(r.status_code, "WS2 API HTTP %s response: %s" % (str(r.status_code), r.text))
tconnectsync.api.common.ApiException: WS2 API HTTP 500 response: Options,Error,Output,ResponseStatus
,,,"{ErrorCode:XmlException,Message:Root element is missing.,Errors:[]}"

I run into all the errors... anyway, here is the latest one.

Stop using ws2 therapytimeline endpoint

WS2 therapytimeline is currently used to fetch CGM, IOB, bolus, and non-ControlIQ basal data. The WS2 therapytimeline API endpoint is excruciatingly slow to respond to queries at times -- Tandem's API frontend sometimes returns HTTP 500s because it times out querying its backend. Responses can take multiple minutes, or fail to return results entirely. The end data also needs to be heavily processed.

Equivalent data should exist in the ControlIQ therapy_events endpoint, which contains at least Basal, Bolus, and CGM data.
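
Until the endpoint is dropped, one way to harden against the slow/500-ing backend is a session with bounded retries and an explicit timeout. A sketch under assumptions (the URL below is a placeholder, and `fetch_csv` is an illustrative helper, not the project's API):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient 500s with exponential backoff, and always pass a timeout so
# a stalled backend can't hang a sync cycle indefinitely.
session = requests.Session()
retries = Retry(total=3, backoff_factor=2, status_forcelist=[500, 502, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

def fetch_csv(url: str) -> str:
    resp = session.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text
```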

Basal/bolus data shows 3 hours early in PT

From @rwyler:

For user in PT time zone, on nightscout UX, it's displaying CGM info at correct time (e.g., 4pm PT) (from dexcom's api), but displaying (controliq) Bolus/Basal info (from tconnect's api) as having happened 3 hours earlier (eg., a 4pm PT bolus appears as happening at 1pm PT). Suspect timezone conversion problem with getting the tconnect bolus/basal info and pushing it to the nightscout db. Using tconnectsync-heroku with no modifications. Any fix/help would be appreciated. Thanks
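
A 3-hour shift for a PT user is consistent with a naive pump timestamp being treated as UTC (or as the server's zone) instead of the user's local zone. A minimal sketch of the kind of conversion that avoids this, assuming the timestamp arrives naive and the zone name comes from configuration (`tz_name` is an illustrative parameter, not an actual tconnectsync setting):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(naive_ts: str, tz_name: str = "America/Los_Angeles") -> str:
    # Treat the naive t:connect timestamp as local time in the user's
    # configured zone, then convert to UTC before writing to Nightscout.
    local = datetime.fromisoformat(naive_ts).replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC")).isoformat()
```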

Check for last uploaded entry relative to end date

Currently, if you run tconnectsync with a fixed end date in the past, but Nightscout already contains events uploaded after that date, it will not upload any new events: everything in the requested window is older than the most recent upload in Nightscout.

Calls to nightscout.last_uploaded_entry(eventType) should be wrapped by a function that also accepts the start and end dates of the synchronization, and only queries Nightscout for the "last uploaded entry" within that interval.

As a workaround, SKIP_NS_LAST_UPLOADED_CHECK=true can be used to bypass the check, but in this case tconnectsync may upload duplicate entries to Nightscout.
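
The proposed wrapper might look roughly like this (a hypothetical sketch; the function name, the `created_at` field, and the duck-typed `nightscout` object are illustrative assumptions):

```python
from datetime import datetime, timezone

def last_uploaded_entry_within(nightscout, event_type, time_start, time_end):
    # Only honor the most recent upload if it falls inside the sync window,
    # so a fixed end date in the past does not suppress uploads entirely.
    entry = nightscout.last_uploaded_entry(event_type)
    if not entry:
        return None
    created = datetime.fromisoformat(entry["created_at"].replace("Z", "+00:00"))
    return entry if time_start <= created <= time_end else None
```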

t:connect Issues with WS2 API (Prior to tconnectsync v0.8)

Some portions of t:connect, specifically interactions with the therapy timeline on tconnectws2.tandemdiabetes.com, are not functioning properly and tconnectsync has been receiving errors from Tandem when trying to synchronize data over the past 24 hours.

This is showing up as the following in tconnectsync debug logs:

2022-07-15 14:43:54 DEBUG    Starting new HTTPS connection (1): tconnectws2.tandemdiabetes.com:443
Traceback (most recent call last):
  File "/home/tconnectsync/.local/share/virtualenvs/tconnectsync-wjwwe9KF/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/home/tconnectsync/.local/share/virtualenvs/tconnectsync-wjwwe9KF/lib/python3.9/site-packages/urllib3/connectionpool.py", line 445, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/tconnectsync/.local/share/virtualenvs/tconnectsync-wjwwe9KF/lib/python3.9/site-packages/urllib3/connectionpool.py", line 440, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.9/http/client.py", line 1347, in getresponse
    response.begin()
  File "/usr/lib/python3.9/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.9/http/client.py", line 276, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

cli returns empty array for basal data

Not sure if I am missing something, but csvdata["basalData"] is always an empty array, even when the BASAL feature is specified. What could be going on here? IOB and boluses are returned without a problem.

API errors

EDIT: I see the note at the end of the readme page now. This issue is moot. I'll close it as I think that was probably my issue for the one set of errors in the original post.

I'm certain this is user error BUT...can you please help me?

I'm getting this problem in the setup when I do the tconnectsync --check-login command

Kathryns-iMac:~ iMac4K$ tconnectsync --check-login
Logging in to t:connect ControlIQ API...
2021-11-03 15:52:26 INFO     Logging in to ControlIQApi...
Error occurred querying ControlIQ API: Error logging in to t:connect. Check your login credentials. (HTTP 200)

Logging in to t:connect WS2 API...
2021-11-03 15:52:26 INFO     Logging in to ControlIQApi...
Error occurred querying WS2 API: Error logging in to t:connect. Check your login credentials. (HTTP 200)

Logging in to t:connect Android API...
Error occurred querying Android API: Received HTTP 400 during login: {"statusCode":400,"status":400,"code":400,"message":"Max attempted logins exceeded.","name":"invalid_request"} (HTTP 400)

Logging in to Nightscout...
Error occurred querying Nightscout API: HTTPSConnectionPool(host='yournightscouturl', port=443): Max retries exceeded with url: /api/v1/status.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x102ca6f40>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))

API errors occurred. Please check the errors above.

  • My .env file is located in the folder path I'm in
  • I double-checked my logins and passwords by logging in separately. Used copy and paste to avoid typos and really slowly checked spelling.

So someone want to give a second set of eyes to my .env file? Maybe that is wrong? Can I email it to someone privately?

Last minute thought...will the API error appear using the --check-login if tconnect doesn't have any data in it for the current day? Seems like an unlikely error message that logins are failing, if it truly is a data-related issue, but thought I'd mention it anyways. My daughter's tconnect app stopped working 15 hours ago randomly and we are waiting for the last 15 hours to sync.

Crash with "TypeError: get() got an unexpected keyword argument 'timeout'"

Been running v0.8.3 for about 3 weeks now. I have tconnectsync running in docker through systemd. I get data but it is crashing after every run.

The end of docker logs:

2022-09-17 14:11:06 DEBUG split_therapy_events: 0 bolus, 0 CGM, 0 BG
2022-09-17 14:11:06 WARNING No last CGM reading is able to be determined from CIQ
2022-09-17 14:11:06 WARNING Downloading t:connect CSV data
2022-09-17 14:11:06 WARNING Falling back on WS2 CSV data source because BOLUS is an enabled feature and CIQ bolus data was empty!!
2022-09-17 14:11:06 WARNING <!!> The WS2 data source is unreliable and may prevent timely synchronization
2022-09-17 14:11:06 DEBUG Instantiating new WS2Api
Traceback (most recent call last):
  File "/home/appuser/main.py", line 5, in <module>
    main()
  File "/home/appuser/tconnectsync/__init__.py", line 87, in main
    sys.exit(u.process(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features))
  File "/home/appuser/tconnectsync/autoupdate.py", line 48, in process
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend, features=features)
  File "/home/appuser/tconnectsync/process.py", line 102, in process_time_range
    csvdata = tconnect.ws2.therapy_timeline_csv(time_start, time_end)
  File "/home/appuser/tconnectsync/api/ws2.py", line 83, in therapy_timeline_csv
    req_text = self.get('therapytimeline2csv/%s/%s/%s?format=csv' % (self.userGuid, startDate, endDate), timeout=10)
TypeError: get() got an unexpected keyword argument 'timeout'

I'm trying to pull the BASAL and BOLUS data.

Remove remaining uses of ws2 therapytimelinecsv

These sync features still need the ws2 therapytimelinecsv:

  • BOLUS_BG
  • IOB

This sync feature uses ws2 basalsuspension, which is less problematic but would still be great to remove our use of:

  • PUMP_EVENTS

Possibility to sync blood sugar data as well?

Hello! Thanks for this tool, its pretty awesome!

I was just wondering - Is it possible to also sync the blood sugar data, and calibrations?

I enter my calibrations on the pump, and they don't seem to show up in the Dexcom data - only t:connect.

If the data is available, could you point me in the right direction and I could try to do it myself?

Thanks!

Inconsistent use of time/date filters

Tandem API methods only support filtering by date. However the sync time range contains a time element, and by default is expressed as (now - 24 hrs, now). When calling most API endpoints, the time is cut off from the datetime object passed to the function and just the raw dates are used to query the API and return results. This ends up fetching twice as much data as is necessary in cases where the API parses dates inclusively.
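
One way to keep the date-only API queries but restore a precise sync window is to filter the returned events against the full datetimes afterwards. A sketch under assumptions (the `eventDateTime` key and ISO timestamp format are illustrative, not the actual API field names):

```python
from datetime import datetime

def filter_to_window(events, time_start, time_end, ts_key="eventDateTime"):
    # The API is queried by whole dates, so the response can include up to a
    # full extra day of events; drop anything outside the real window.
    def in_window(ev):
        ts = datetime.fromisoformat(ev[ts_key])
        return time_start <= ts <= time_end
    return [ev for ev in events if in_window(ev)]
```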

Python 3.9.9 is Not Available for Heroku-22 Stack - Heroku Not Compiling App

I am getting the following error in my Heroku build log. I see that Heroku has started building apps on the Heroku-22 stack. I have Nightscout upgraded to the Heroku-22 stack and it is running well. It appears that runtime 'Python-3.9.9' is not available for the Heroku-22 stack and is rejecting the push so it is not compiling the app.

-----> Building on the Heroku-22 stack
-----> Determining which buildpack to use for this app
-----> Python app detected
-----> Using Python version specified in runtime.txt
! Requested runtime 'python-3.9.9' is not available for this stack (heroku-22).
! For supported versions, see: https://devcenter.heroku.com/articles/python-support
! Push rejected, failed to compile Python app.
! Push failed

To Reproduce
Steps to reproduce the behavior:

  1. Click Deploy to Heroku from Github
  2. Fill In the Config Vars in Heroku
  3. Click Deploy app
  4. Wait and watch the build log

Expected behavior
I expected Heroku to deploy the app, but it gave me the copied error above in the build log instead.

Have you followed the Troubleshooting steps in the README? Yes

Setup details

  • On what platform are you using the t:connect mobile app? Android
  • What version are you using of the t:connect mobile app? 2.1.3
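
A likely fix is pointing runtime.txt at a Python version the heroku-22 stack does support; the exact version below is an assumption, so check it against Heroku's supported-versions page linked in the build log:

```shell
# Replace the rejected 'python-3.9.9' pin with a heroku-22-supported runtime.
# ("python-3.10.12" is an assumed example; verify against Heroku's list.)
echo "python-3.10.12" > runtime.txt
```

After updating the file, commit it and redeploy so Heroku rebuilds with the new runtime.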

Can't get tconnectsync to work.

Any suggestions? I'm getting the following when running tconnectsync

smatthew@Scotts-iMac tconnectsync % pipenv run tconnectsync
Loading .env environment variables...
Processing data between 2021-03-21 16:07:20.254153 and 2021-03-22 16:07:20.254153
Downloading t:connect ControlIQ data
Downloading t:connect CSV data
Traceback (most recent call last):
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connection.py", line 169, in _new_conn
conn = connection.create_connection(
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/util/connection.py", line 73, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/local/Cellar/[email protected]/3.9.1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 953, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connection.py", line 353, in connect
conn = self._new_conn()
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connection.py", line 181, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x1042764c0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/urllib3/util/retry.py", line 573, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cgm-roberts.herokuapp.comapi', port=443): Max retries exceeded with url: /v1/treatments?count=1&find%5BenteredBy%5D=Pump%20%28tconnectsync%29&find%5BeventType%5D=Temp%20Basal&ts=1616454448.179522 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1042764c0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/smatthew/Documents/src/tconnectsync/tconnectsync/main.py", line 59, in
main()
File "/Users/smatthew/Documents/src/tconnectsync/tconnectsync/main.py", line 55, in main
added = process_time_range(tconnect, time_start, time_end, args.pretend)
File "/Users/smatthew/Documents/src/tconnectsync/tconnectsync/tconnectsync/process.py", line 54, in process_time_range
added += ns_write_basal_events(basalEvents, pretend=pretend)
File "/Users/smatthew/Documents/src/tconnectsync/tconnectsync/tconnectsync/sync/basal.py", line 68, in ns_write_basal_events
last_upload = last_uploaded_nightscout_entry(BASAL_EVENTTYPE)
File "/Users/smatthew/Documents/src/tconnectsync/tconnectsync/tconnectsync/nightscout.py", line 41, in last_uploaded_nightscout_entry
latest = requests.get(NS_URL + 'api/v1/treatments?count=1&find[enteredBy]=' + urllib.parse.quote(ENTERED_BY) + '&find[eventType]=' + urllib.parse.quote(eventType) + '&ts=' + str(time.time()), headers={
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/Users/smatthew/.local/share/virtualenvs/tconnectsync-MOJrdNEQ/lib/python3.9/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cgm-roberts.herokuapp.comapi', port=443): Max retries exceeded with url: /v1/treatments?count=1&find%5BenteredBy%5D=Pump%20%28tconnectsync%29&find%5BeventType%5D=Temp%20Basal&ts=1616454448.179522 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1042764c0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
smatthew@Scotts-iMac tconnectsync %
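
The mangled hostname in the traceback ('cgm-roberts.herokuapp.comapi') suggests the configured Nightscout base URL was concatenated with 'api/v1/...' without a trailing slash. A minimal sketch of normalizing the base URL before building request paths (`normalize_base_url` is an illustrative helper, not a tconnectsync function):

```python
def normalize_base_url(url: str) -> str:
    # 'https://cgm-roberts.herokuapp.com' + 'api/v1/...' yields the broken
    # host seen above; ensuring a trailing slash keeps the path separate.
    return url if url.endswith("/") else url + "/"
```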

After ~24 hours of running in auto-update mode, stops recognizing new t:connect treatments

Opening this issue to track an issue I have noticed where after ~24 hours of consecutive runtime, I need to restart tconnectsync in order for it to pick up new t:connect treatments.

The "Last CGM reading from t:connect" value mentioned in the logs stays the same on each cycle, but on restart of the app it immediately picks up several hours worth of data that it previously did not detect.

I will investigate to see why this is happening.

unclear running commands

It is hard to understand how to run this; for example, where do I put my .env file, and how do I put it there? I've tried what I could, but the only progress I've managed to make creates an error I don't know if I can fix.

If I tell Docker Desktop to put my .env anywhere else but /.venv, I get

Processing data between 2021-04-02 22:32:38.304414 and 2021-04-03 22:32:38.304414
Downloading t:connect ControlIQ data

Traceback (most recent call last):
  File "/home/appuser/main.py", line 64, in <module>
    main()
  File "/home/appuser/main.py", line 60, in main
    added = process_time_range(tconnect, time_start, time_end, args.pretend)
  File "/home/appuser/tconnectsync/process.py", line 35, in process_time_range
    raise e
  File "/home/appuser/tconnectsync/process.py", line 26, in process_time_range
    ciqTherapyTimelineData = tconnect.controliq.therapy_timeline(time_start, time_end)
  File "/home/appuser/tconnectsync/api/__init__.py", line 24, in controliq
    self._ciq = ControlIQApi(self.email, self.password)
  File "/home/appuser/tconnectsync/api/controliq.py", line 17, in __init__
    self.login(email, password)
  File "/home/appuser/tconnectsync/api/controliq.py", line 37, in login
    raise ApiLoginException(req.status_code, 'Error logging in to t:connect. Check your login credentials.')
tconnectsync.api.common.ApiLoginException: Error logging in to t:connect. Check your login credentials.

But if I do tell it to put the .env file in /.venv, I get

Traceback (most recent call last):
  File "/home/appuser/main.py", line 5, in <module>
    import arrow
ModuleNotFoundError: No module named 'arrow'

time offset between tsync/tconnect record and NS time parse

Observed a couple problems with my initial tconnectsync attempts. I'll paste the full content of relevant errors in the next comment. Highlighting 2 items here...

  1. Calculated "5 hours" offset between the similar log time and CIQ time? Not sure why this is happening. My timezone on my Linux system and the date command are set correctly (Central time)... (e.g. 2022-08-13 20:38:42 INFO Last bolus from t:connect CIQ: 2022-08-13T19:46:15 (5 hours, 52 minutes ago) )
  2. NS latest entry update shows an ISO-8601 parsing error; note that the NS site is a brand new site with no records. Tried adding a fake bolus manually and that didn't seem to change the error.

raise ApiException(latest.status_code, "Nightscout last_uploaded_entry response: %s" % latest.text)
tconnectsync.api.common.ApiException: Nightscout last_uploaded_entry response:
Error: Cannot parse 2022-08-12+20:38:37 as a valid ISO-8601 date
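
The rejected string `2022-08-12+20:38:37` looks like a timestamp whose space separator was form-encoded into '+' somewhere along the way. One sketch of sidestepping this (an assumption about the cause, not a confirmed diagnosis) is to emit the strict ISO-8601 'T' separator, which contains no space to mis-encode:

```python
from datetime import datetime

# datetime.isoformat() uses 'T' by default, so the value survives URL
# encoding intact; a space separator can come back as a literal '+',
# matching the string Nightscout rejects above.
dt = datetime(2022, 8, 12, 20, 38, 37)
iso = dt.isoformat()
print(iso)  # 2022-08-12T20:38:37
```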
