
Commit b43a7cc

Update README.md
1 parent de12da2 commit b43a7cc

File tree

1 file changed: +9 −1 lines


README.md

Lines changed: 9 additions & 1 deletion
@@ -9,10 +9,16 @@ To run the code, [HGQ2](https://github.com/calad0i/HGQ2) is also needed.
 PQuant replaces the layers and activations it finds with a Compressed (in the case of layers) or Quantized (in the case of activations) variant. These automatically handle the quantization of the weights, biases and activations, and the pruning of the weights.
 Both PyTorch and TensorFlow models are supported.
 
-Layers that can be compressed: Conv2D and Linear layers, Tanh and ReLU activations for both TensorFlow and PyTorch. For PyTorch, also Conv1D.
+Layers that can be compressed:
+PQConv*D: Convolutional layers
+PQAvgPool*D: Average pooling layers
+PQBatchNorm*D: BatchNorm layers
+PQDense: Linear layer
+PQActivation: Activation layers (ReLU, Tanh)
 
 The various pruning methods have different training steps, such as a pre-training step and fine-tuning step. PQuant provides a training function, where the user provides the functions to train and validate an epoch, and PQuant handles the training while triggering the different training steps.
 
+
 ![alt text](docs/source/_static/overview_pquant.png)
 
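The replacement mechanism described in the diff above can be sketched in framework-agnostic terms: walk the model's layers and swap each supported type for its PQ counterpart. All names below (the stand-in layer classes, `REPLACEMENTS`, `replace_layers`) are hypothetical illustrations of the idea, not PQuant's actual API.

```python
# Hypothetical sketch of type-driven layer replacement; not PQuant's real API.

class Conv2D:            # stand-in for a framework Conv2D layer
    pass

class ReLU:              # stand-in for a framework ReLU activation
    pass

class PQConv2D(Conv2D):  # "Compressed" variant: would handle weight quantization/pruning
    def __init__(self, wrapped):
        self.wrapped = wrapped

class PQActivation(ReLU):  # "Quantized" variant: would handle activation quantization
    def __init__(self, wrapped):
        self.wrapped = wrapped

# Map each supported layer type to its PQ replacement.
REPLACEMENTS = {Conv2D: PQConv2D, ReLU: PQActivation}

def replace_layers(layers):
    """Return a new layer list with supported types swapped for PQ variants."""
    out = []
    for layer in layers:
        pq_cls = REPLACEMENTS.get(type(layer))
        out.append(pq_cls(layer) if pq_cls is not None else layer)
    return out

model = [Conv2D(), ReLU(), object()]   # last layer has no PQ counterpart
compressed = replace_layers(model)
print([type(l).__name__ for l in compressed])  # ['PQConv2D', 'PQActivation', 'object']
```

Unsupported layers pass through unchanged, which matches the behavior described above: only the layers PQuant finds a variant for are replaced.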

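The training function described above, where the user supplies per-epoch train and validate callbacks while PQuant triggers phase transitions such as pre-training to fine-tuning, could look roughly like the following. The function signature, parameter names, and the `on_phase_change` hook are assumptions for illustration, not PQuant's documented interface.

```python
# Hypothetical sketch of a phase-aware training driver; not PQuant's real API.

def run_training(train_epoch, valid_epoch, pretrain_epochs, total_epochs, on_phase_change):
    """Call the user's epoch functions, switching phases at the configured epoch."""
    history = []
    for epoch in range(total_epochs):
        if epoch == pretrain_epochs:
            on_phase_change("fine-tuning")   # e.g. freeze pruning masks here
        train_epoch(epoch)
        history.append(valid_epoch(epoch))
    return history

# Minimal user-supplied callbacks (real ones would run actual training/validation).
phases = []
train_epoch = lambda e: None
valid_epoch = lambda e: 1.0 / (e + 1)      # pretend validation loss
on_phase_change = lambda name: phases.append(name)

losses = run_training(train_epoch, valid_epoch, pretrain_epochs=2, total_epochs=4,
                      on_phase_change=on_phase_change)
print(phases)   # ['fine-tuning']
print(len(losses))  # 4
```

The point of this split is that the library owns the schedule (when pruning starts, when masks freeze) while the user owns the per-epoch work.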
@@ -24,6 +30,8 @@ Example notebook can be found [here](https://github.com/nroope/PQuant/tree/main/
 3. Loading a default pruning configuration of a pruning method.
 4. Using the configuration, the model, and the training and validation functions, call the training function of PQuant to train and compress the model.
 5. Creating a custom quantization and pruning configuration for a given model (disable pruning for some layers, different quantization bitwidths for different layers).
+6. Direct layer usage and layer replacement approaches.
+7. Usage of the fine-tuning platform.
 
 ### Pruning methods
 A description of the pruning methods and their hyperparameters can be found [here](docs/pruning_methods.md).
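Step 5 above (a custom configuration that disables pruning for some layers and varies bitwidths per layer) amounts to merging per-layer overrides into a default configuration. The dictionary schema and names below are invented for illustration; PQuant's real configuration format may differ.

```python
# Hypothetical sketch of per-layer configuration overrides; not PQuant's real schema.

default_config = {
    "pruning": {"enabled": True, "method": "magnitude"},
    "quantization": {"bits": 8},
}

# Per-layer overrides: disable pruning for one layer, use 4 bits for another.
layer_overrides = {
    "conv1": {"pruning": {"enabled": False}},
    "head":  {"quantization": {"bits": 4}},
}

def config_for(layer_name):
    """Merge the default config with any override for this layer, section by section."""
    merged = {section: dict(values) for section, values in default_config.items()}
    for section, values in layer_overrides.get(layer_name, {}).items():
        merged[section].update(values)
    return merged

print(config_for("conv1")["pruning"]["enabled"])    # False
print(config_for("head")["quantization"]["bits"])   # 4
print(config_for("conv2")["quantization"]["bits"])  # 8 (falls back to the default)
```

Layers without an override simply inherit the defaults, so a custom configuration only needs to name the exceptions.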
