

Clone Distiller: clone the Distiller code repository from GitHub. These instructions will help get Distiller up and running on your local machine.

Distiller's feature set includes:
- Automatic mechanism to transform existing models to quantized versions, with customizable bit-width configuration for different layers. No need to re-write the model for different quantization methods.
- Post-training quantization of trained full-precision models, dynamic and static (statistics-based); see the first sketch after this list.
- Support for quantization-aware training in the loop.
- Training with knowledge distillation, in conjunction with the other available pruning / regularization / quantization methods.
- Group Lasso and group variance regularization.
- Export statistics summaries using Pandas dataframes, which makes it easy to slice, query, display and graph the data.
- A set of Jupyter notebooks to plan experiments and analyze compression results. The graphs and visualizations you see on this page originate from the included notebooks: one compares visual aspects of dense and sparse AlexNet models, another creates performance indicator graphs from model data, and another lets you examine the data from some of the networks we analyzed.
- Sample implementations of published research papers, using library-provided building blocks. See the research paper discussions in our model-zoo.
- Logging to the console, a text file and a TensorBoard-formatted file.
- Export to ONNX (export of quantized models pending ONNX standardization); see the second sketch after this list.
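To make the post-training quantization item above concrete, here is a minimal sketch of quantizing a trained full-precision model without rewriting it. It assumes Distiller's range-linear post-training quantizer; the keyword-argument names (`bits_activations`, `bits_parameters`) and the `prepare_model()` call are taken as assumptions and may differ between Distiller versions, so verify them against the version you install.

```python
# Sketch: static post-training quantization with Distiller's range-linear quantizer.
# Assumption: the constructor kwargs and prepare_model() argument shown here match
# the installed Distiller version; check its docs if they do not.
import torch
import torchvision.models as models
from distiller.quantization import PostTrainLinearQuantizer

model = models.resnet18(pretrained=True).eval()

quantizer = PostTrainLinearQuantizer(model,
                                     bits_activations=8,   # assumed kwarg name
                                     bits_parameters=8)    # assumed kwarg name

# Replace supported modules with quantized wrappers; some versions accept a
# dummy input for tracing, others take no argument.
quantizer.prepare_model(torch.randn(1, 3, 224, 224))

# The quantized model is evaluated exactly like the original one.
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
```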

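For the ONNX export item, plain PyTorch tooling is sufficient for full-precision models. The snippet below is only a sketch using `torch.onnx.export` directly; Distiller may provide its own wrappers, and export of quantized models was still pending ONNX standardization.

```python
# Sketch: exporting a trained full-precision model to ONNX with standard PyTorch.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # input shape expected by the model

torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])
```
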
Lab master distilling full
- Easily control what is performed each training step, e.g. from greedy layer-by-layer pruning to full model pruning; the training-loop sketch after this list shows where that control hooks in.
- One-shot and iterative pruning (and fine-tuning) are supported.
- Flexible scheduling of pruning, regularization, and learning rate decay (compression scheduling).
- Model thinning (AKA "network garbage removal") to permanently remove pruned neurons and connections.
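The per-step control and compression scheduling described above are driven by callbacks placed around an ordinary training loop. The sketch below follows the callback pattern from Distiller's scheduling documentation; the toy model, data loader and the way the scheduler is populated with policies (normally from a YAML schedule file) are illustrative assumptions, not a definitive recipe.

```python
# Sketch: wrapping a standard PyTorch training loop with Distiller's
# CompressionScheduler callbacks, which drive pruning, regularization
# and LR-decay policies at epoch and mini-batch boundaries.
import torch
import torch.nn as nn
import distiller

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # toy model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# An empty scheduler; in practice its policies are loaded from a YAML schedule
# file (constructor details may vary across Distiller versions).
scheduler = distiller.CompressionScheduler(model)

def train_epoch(train_loader, epoch):
    steps_per_epoch = len(train_loader)
    scheduler.on_epoch_begin(epoch)
    for step, (inputs, labels) in enumerate(train_loader):
        scheduler.on_minibatch_begin(epoch, step, steps_per_epoch)
        loss = criterion(model(inputs), labels)
        # Policies (e.g. regularizers) may contribute extra loss terms here;
        # Distiller's sample application uses the value returned by this call.
        scheduler.before_backward_pass(epoch, step, steps_per_epoch, loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.on_minibatch_end(epoch, step, steps_per_epoch)
    scheduler.on_epoch_end(epoch)
```
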
Lab master distilling update
