Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Updated Apr 24, 2023 - Jupyter Notebook
Implementation of "Dynamic Model Pruning with Feedback" in PyTorch
Learn how to use L0 regularization
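L0 regularization is usually made differentiable through the hard-concrete relaxation of Louizos et al., where each weight (or group) is multiplied by a stochastic gate and the penalty is the expected number of open gates. A minimal NumPy sketch, assuming the standard default constants; all names and values here are illustrative, not taken from the repository above:

```python
import numpy as np

def hard_concrete_sample(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample a hard-concrete gate z in [0, 1].

    log_alpha is the learnable location parameter; beta, gamma, zeta are
    the temperature and stretch constants (illustrative defaults).
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    # Stretched sigmoid of a logistic sample, then hard-clipped to [0, 1]
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Differentiable L0 penalty: probability that a gate is non-zero."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))
```

During training the penalty `expected_l0(log_alpha).sum()` is added to the loss, driving gates (and the weights behind them) toward exact zero.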
NNS: Neural network surgery | academic assignment
This repository contains the official implementation of "iShrink: Making 1B Models Even Smaller and Faster". iShrink is a structured pruning approach that effectively compresses 1B-parameter language models while maintaining their performance and improving efficiency.
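Structured pruning, as described in the blurb above, removes whole rows or channels rather than individual weights, so the compressed model stays dense and fast. A minimal magnitude-based sketch in NumPy (not iShrink's actual method; the function and parameter names are illustrative):

```python
import numpy as np

def structured_prune(weight, amount=0.5):
    """Zero out the fraction `amount` of rows with the smallest L2 norm.

    `weight` is a 2-D array whose rows correspond to output
    neurons/channels; pruning whole rows keeps the layer dense.
    """
    norms = np.linalg.norm(weight, axis=1)          # one score per row
    n_prune = int(amount * weight.shape[0])
    pruned = weight.copy()
    if n_prune > 0:
        idx = np.argsort(norms)[:n_prune]           # weakest rows
        pruned[idx] = 0.0
    return pruned

w = np.array([[0.1, 0.1], [1.0, 1.0], [0.01, 0.02], [2.0, 2.0]])
pw = structured_prune(w, amount=0.5)
# rows 0 and 2 (smallest L2 norms) are zeroed
```

In PyTorch the same idea is available as `torch.nn.utils.prune.ln_structured`; after pruning, the zeroed rows can be physically removed to shrink the layer.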