
Thermogravimetric Analysis in Python

Code style: black · Python 3.10 · Testing (CI) · Publishing (CI)

The tga_data_analysis tool automates the typical analysis of thermogravimetric analysis (TGA) data, saving time, avoiding human error, and increasing comparability of results from different groups.

Framework

File

A .txt or .csv file located in the project folder that contains the time, temperature, and mass loss information for a single measurement.

Depending on the instrument export parameters, the structure of Files can differ slightly. Project-Parameters ensure that loading produces the same data structure, so that all downstream computations can be performed.

A good naming convention for Files is to use _ ONLY to indicate replicates (e.g. "A_1", "A_2", or "long-sample-name_1", not "name_with_underscores_1").

Sample

A collection of Files that replicate the same measurement and ensure reproducibility.

If the Project-Parameters do not apply to a specific Sample, their values can be modified for a single Sample instance.

The Files in the Sample are identified by their names and loaded.

Each numerical value (e.g. ash) or array value (e.g. the time vector) from each File is stored as a replicate using the Measure class, which provides access to each replicate as well as to the average and standard deviation of the value.

The mass loss profile of each replicate is projected onto a common temperature vector, thus avoiding asynchronies and artifact peaks in the averaged values due to instrumental micro-delays. The original temperature, time, and mass loss vectors are stored for each File.

Single-sample Analyses methods are provided to perform common TGA data analysis at the Sample level:

  • Proximate Analysis: Determines the moisture, volatile matter, and ash content from TGA data.

  • Oxidation Analysis: Analyzes the oxidation behavior of materials.

  • Solid-Distillation Analysis: Studies the thermal decomposition and distillation characteristics of solids.

  • Peak Deconvolution Analysis: Resolves overlapping thermal decomposition events.

The Sample class can generate multi-replicate reports and multi-replicate plots for TG and DTG curves and for the results of any of the Single-sample Analyses.
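
A minimal sketch of the Sample-level workflow is shown below. The import path, the Sample constructor arguments (project, name, filenames), and the analysis and plotting method names are assumptions used for illustration; check the documentation for the exact API.

# Sketch only: import path and method names are assumptions, not the verified API.
from tga_data_analysis.tga import Project, Sample

# A Project pointing at the folder that contains the replicate Files cell_1, cell_2, cell_3
proj = Project(folder_path="path/to/project_folder")

# A Sample groups the replicate Files by name (hypothetical constructor arguments)
cell = Sample(project=proj, name="cellulose", filenames=["cell_1", "cell_2", "cell_3"])

# Single-sample analyses and multi-replicate outputs (hypothetical method names)
cell.proximate_analysis()   # moisture, volatile matter, and ash
cell.oxidation_analysis()   # ignition and burnout behavior from the DTG curve
cell.plot_tg_dtg()          # multi-replicate TG and DTG plot
report = cell.report()      # multi-replicate report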

Project

The folder path indicates where the Files are located and where the output folder will be created.

The Project-Parameters are valid for every Sample unless overridden at Sample initialization.

Samples can be added with the add_sample method or by passing the Project to a new Sample instance at initialization.

The Project can generate reports and plots using the following methods (a usage sketch follows the list):

  • multireport: Generate a multi-sample report based on the specified report type and style.

  • plot_multi_tg: Plot multiple thermogravimetric (TG) curves for the given samples.

  • plot_multi_dtg: Plot multiple derivative thermogravimetric (DTG) curves for the given samples.

  • plot_multi_soliddist: Plot multiple solid distillation curves for the given samples.

  • plot_multireport: Plot the results of the multi-sample report.
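
A minimal sketch of the Project-level workflow, assuming the import path and the constructor arguments shown (folder_path, name, filenames); the multireport and plotting methods are those listed above, but their exact parameters are omitted here.

# Sketch only: import path and constructor keyword names are assumptions.
from tga_data_analysis.tga import Project, Sample

proj = Project(folder_path="path/to/project_folder")

# Samples are registered on the Project at initialization (add_sample is the alternative route)
sample_a = Sample(project=proj, name="A", filenames=["A_1", "A_2"])
sample_b = Sample(project=proj, name="B", filenames=["B_1", "B_2"])

# Multi-sample outputs listed above (exact keyword arguments not shown)
proj.multireport()      # multi-sample report
proj.plot_multi_tg()    # overlay TG curves of all samples
proj.plot_multi_dtg()   # overlay DTG curves of all samples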

Multi-sample Analyses

For analyses that require data from multiple samples (e.g. KAS kinetics), a multi-sample class that collects several Sample objects is defined (e.g. KasSample).

Multi-sample classes provide methods to perform the dedicated analysis and plot the results. The available analyses are listed below (a usage sketch follows the list):

  • KAS Kinetic Analysis: Applies the Kissinger-Akahira-Sunose method to determine kinetic parameters.
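
A minimal sketch of the KAS workflow. KAS is an isoconversional method, so the grouped Samples should be the same material measured at different heating rates. The import path, the KasSample constructor arguments, and the method names used below are assumptions for illustration; see the docs for the actual API.

# Sketch only: import path, constructor arguments, and method names are assumptions.
from tga_data_analysis.tga import Project, Sample
from tga_data_analysis.kas_kinetics import KasSample

proj = Project(folder_path="path/to/project_folder")

# The same material measured at three heating rates (required by isoconversional methods)
s05 = Sample(project=proj, name="mat_5Kmin", filenames=["mat_5_1", "mat_5_2"])
s10 = Sample(project=proj, name="mat_10Kmin", filenames=["mat_10_1", "mat_10_2"])
s20 = Sample(project=proj, name="mat_20Kmin", filenames=["mat_20_1", "mat_20_2"])

kas = KasSample(proj, samples=[s05, s10, s20], name="mat")   # constructor signature is an assumption
kas.kas_analysis()      # activation energy as a function of conversion (hypothetical name)
kas.plot_isolines()     # KAS isoconversional lines (hypothetical name)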

Project-Sample-Parameters

Parameters specified at the Project level become the default for all Samples and therefore for all Files. They can be overridden for each individual Sample instance. The most important ones are described here; see the docs for the rest.

  • load_skiprows: An int that indicates the number of rows to skip when loading the file. The first valid row should be the one that contains the column names ("time", "temperature", "tg"; these are just examples).

  • column_name_mapping: A dictionary that specifies how to rename the columns in the File to the standard names the software relies on. These names are t_min, T_C, m_p, and m_mg for time, temperature, mass percentage, and mass in mg, respectively. At least the first three must be present (if m_mg is missing, it is assumed to be equal to m_p).

  • time_moist: The time in minutes where the mass loss should be considered moisture.

  • time_vm: The time in minutes where the mass loss should be considered volatile matter.

  • temp_initial_celsius: The initial temperature at which every File starts, to ensure uniformity.

  • temp_lim_dtg_celsius: The temperature limits for DTG analysis, in Celsius. It should exclude moisture and fixed carbon segments.

  • temp_unit: The temperature unit the project converts everything to, not the unit used in the Files.

  • dtg_basis and resolution_sec_deg_dtg: These parameters are no longer available and raise an exception if specified. The DTG curve is now computed only as dTG/dtime (temperature is used for plotting), and replicates are no longer interpolated, so the resolution reflects the data resolution of the instrument.

  • dtg_window_filter: The window size for the Savitzky-Golay filter used to smooth the DTG curve.

  • temp_i_temp_b_threshold: The fractional threshold used for the detection of Ti (ignition temperature) and Tb (burnout temperature) in the DTG analysis.

Example

If the files start with 10 method rows before the actual data and the columns are named "time/minutes", "temp/C", and "m/%", then the Project-Parameters should be:

load_skiprows=10
column_name_mapping={"time/minutes": "t_min", "temp/C": "T_C", "m/%": "m_p"}
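
Passing these values at Project construction might look as follows; the import path and the use of the Project-Parameters as constructor keyword arguments are assumptions (see the docs for the exact signature).

# Sketch only: the constructor signature is an assumption; parameter names come from the list above.
from tga_data_analysis.tga import Project

proj = Project(
    folder_path="path/to/project_folder",
    load_skiprows=10,
    column_name_mapping={"time/minutes": "t_min", "temp/C": "T_C", "m/%": "m_p"},
    temp_unit="C",    # unit everything is converted to (value format is an assumption)
    time_moist=38,    # minutes; mass loss up to here counts as moisture (illustrative value)
    time_vm=147,      # minutes; mass loss up to here counts as volatile matter (illustrative value)
)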

Documentation

Check out the documentation.

Installation

You can install the package from PyPI:

pip install tga_data_analysis

Examples

Each example is available as a folder in the examples folder and contains the code and the necessary input data. To run examples:

  1. Install tga_data_analysis in your Python environment
  2. Download the folder that contains the example
  3. Run the code
  4. If you run the scripts as Jupyter Notebooks, replace the relative path at the beginning of each example with the absolute path to the folder where the code is located (a sketch of this replacement follows the list).
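
A minimal sketch of that replacement, assuming the example stores the folder location in a variable (the name folder_path is illustrative):

# Illustrative only: point the example at the downloaded folder using an absolute path
from pathlib import Path

folder_path = Path("/absolute/path/to/downloaded/example_folder")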

Nomenclature

  • ar: as received
  • db: dry basis
  • daf: dry, ash-free
  • vm: volatile matter
  • fc: fixed carbon

Plotting with myfigure

Plots rely on the myfigure package, which simplifies scientific plotting in data analysis packages. Check out its documentation and GitHub.
