This repository includes Jupyter notebooks showcasing the integration of the MX PyTorch Emulation Library for quantizing ML models.
Each notebook in this repository demonstrates different aspects of training and evaluating neural networks with MX-compatible formats. Below is a brief overview:
- Implements a `SimpleCNN` for CIFAR-10 classification using standard FP32 and dynamic precision switching between FP32, FP4, and FP8 based on runtime performance.
- Includes advanced model management features for precision switching during runtime anomalies (see the sketch below).
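The switching policy can be sketched as follows. `PrecisionManager`, its thresholds, and the EMA heuristic are illustrative assumptions, not the notebook's actual code; only the element-format names follow the MX library's naming conventions:

```python
class PrecisionManager:
    """Hypothetical policy: choose an element format from a running
    loss statistic. Thresholds and heuristic are illustrative only."""

    def __init__(self):
        self.ema_loss = None  # exponential moving average of the training loss

    def select_format(self, loss: float) -> str:
        if self.ema_loss is None:
            self.ema_loss = loss
        if loss > 2.0 * self.ema_loss:
            # Runtime anomaly (loss spike): fall back to full precision.
            fmt = "fp32"
        elif loss > self.ema_loss:
            fmt = "fp8_e4m3"   # moderate headroom: 8-bit MX format
        else:
            fmt = "fp4_e2m1"   # stable training: aggressive 4-bit MX format
        self.ema_loss = 0.9 * self.ema_loss + 0.1 * loss
        return fmt
```

The returned format string would then feed into the MX specs used to rebuild the quantized ops.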
- Provides a baseline model using PyTorch's sequential API for MNIST classification.
- Simple architecture and training loop for performance comparison (a minimal sketch follows).
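A baseline of this kind is only a few lines with the sequential API; the layer widths here are generic placeholders, not the notebook's exact architecture:

```python
import torch.nn as nn

# Minimal MNIST classifier via nn.Sequential; layer widths are illustrative.
baseline = nn.Sequential(
    nn.Flatten(),        # 28x28 image -> 784-dim vector
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # 10 digit classes
)
```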
- Trains a `SimpleCNN` model strictly using FP32 precision to serve as a performance benchmark (training-loop sketch below).
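The FP32 benchmark amounts to a standard PyTorch loop; the optimizer choice and hyperparameters below are placeholders, not the notebook's settings:

```python
import torch
import torch.nn as nn

def train_fp32(model, loader, epochs=1, lr=1e-3, device="cpu"):
    """Plain FP32 training loop used as the reference point for the
    quantized runs. Hyperparameters are illustrative."""
    model = model.to(device).float()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```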
- Integrates the MX PyTorch Emulation Library to train `SimpleCNN` using MX formats (see the configuration sketch below).
- Demonstrates custom CUDA extensions for performance optimization.
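Integration typically follows the library's documented pattern of filling in an `MxSpecs` dict and injecting MX-emulated versions of the PyTorch ops. The format and block-size values below are examples, and the import paths should be verified against the Microxcaling README:

```python
# Import paths assumed from the Microxcaling README; verify against your install.
from mx import MxSpecs, mx_mapping

# Configure MX quantization (values are illustrative).
mx_specs = MxSpecs()
mx_specs['w_elem_format'] = 'fp8_e4m3'  # weight element format
mx_specs['a_elem_format'] = 'fp8_e4m3'  # activation element format
mx_specs['block_size'] = 32             # elements sharing one scale
mx_specs['bfloat'] = 16                 # bfloat16 for non-MX operations
mx_specs['custom_cuda'] = True          # use the custom CUDA extensions

# Replace torch ops with MX-emulated versions for subsequent model code.
mx_mapping.inject_pyt_ops(mx_specs)
```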
- Utility notebook for tensor quantization using the MX library.
- Demonstrates the process of converting tensors to MX-compatible and bfloat formats, as sketched below.
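The conversion step looks roughly like this. The bfloat16 cast is plain PyTorch; the MX entry point shown commented out is an assumed name and should be taken from the library's documentation:

```python
import torch

x = torch.randn(4, 32)

# bfloat conversion is a plain dtype cast in PyTorch.
x_bf16 = x.to(torch.bfloat16)

# MX-format conversion goes through the Microxcaling library; the call
# below is an assumption about its API, kept commented for that reason.
# from mx.mx_ops import quantize_mx_op
# x_mx = quantize_mx_op(x, mx_specs, elem_format='fp8_e4m3', axes=[-1])
```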
This notebook explores the application of Block Floating Point (BFP) quantization to ResNet18.
- Implementation of `BFPLinear`, a custom linear layer that applies BFP quantization to its weights and biases (see the sketch after this list).
- A `BFPModelWrapper` class that enables dynamic switching between the full-precision and quantized states of a model, suiting scenarios where precision trade-offs are managed dynamically based on runtime performance metrics.
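A minimal sketch of the `BFPLinear` idea, assuming a simple shared-exponent scheme; the block size, mantissa width, and padding/rounding details are illustrative, not the notebook's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bfp_quantize(x: torch.Tensor, block_size: int = 32,
                 mantissa_bits: int = 7) -> torch.Tensor:
    """Round each block of `block_size` values to a shared power-of-two
    scale with signed `mantissa_bits`-bit mantissas. Illustrative only."""
    flat = x.reshape(-1)
    pad = (-flat.numel()) % block_size
    if pad:
        flat = F.pad(flat, (0, pad))  # pad so values split evenly into blocks
    blocks = flat.reshape(-1, block_size)
    # Shared exponent taken from the largest magnitude in each block.
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-30)
    shared_exp = torch.floor(torch.log2(max_abs))
    scale = torch.exp2(shared_exp - mantissa_bits + 1)
    limit = 2 ** mantissa_bits - 1
    mant = torch.round(blocks / scale).clamp(-limit, limit)
    out = (mant * scale).reshape(-1)[: x.numel()]
    return out.reshape(x.shape)

class BFPLinear(nn.Linear):
    """Linear layer that quantizes weights and bias to BFP on each forward."""
    def forward(self, inp: torch.Tensor) -> torch.Tensor:
        w = bfp_quantize(self.weight)
        b = bfp_quantize(self.bias) if self.bias is not None else None
        return F.linear(inp, w, b)
```

A wrapper in the spirit of `BFPModelWrapper` could then swap `nn.Linear` modules for `BFPLinear` (and back) to toggle between the quantized and full-precision states.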
To set up the repository for running these notebooks:
- Follow the installation instructions for the Microxcaling library.
- Install the Python dependencies listed in `requirements.txt`.
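A typical setup might look like the following; the clone URL is the Microxcaling project's public repository, and the exact steps may differ from this repository's instructions:

```bash
git clone https://github.com/microsoft/microxcaling.git
pip install -r microxcaling/requirements.txt  # MX library dependencies
pip install -r requirements.txt               # this repository's dependencies
```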