To run the code, [HGQ2](https://github.com/calad0i/HGQ2) is also needed.
PQuant replaces the layers and activations it finds with a Compressed (for layers) or Quantized (for activations) variant. These variants automatically handle the quantization of the weights, biases, and activations, and the pruning of the weights.

Both PyTorch and TensorFlow models are supported.
Layers that can be compressed:

- PQConv*D: Convolutional layers
- PQAvgPool*D: Average pooling layers
- PQBatchNorm*D: BatchNorm layers
- PQDense: Linear layer
- PQActivation: Activation layers (ReLU, Tanh)
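The replacement step can be pictured as a simple type-based swap. The sketch below is a plain-Python illustration only; the class names mirror the list above, but the mapping and the `compress_layers` function are invented, not PQuant's actual API:

```python
# Hypothetical sketch of type-based layer replacement, NOT PQuant's real API.

class Conv2D: ...
class Dense: ...

class PQConv2D:
    def __init__(self, wrapped): self.wrapped = wrapped

class PQDense:
    def __init__(self, wrapped): self.wrapped = wrapped

# Map original layer types to their compressed/quantized counterparts.
REPLACEMENTS = {Conv2D: PQConv2D, Dense: PQDense}

def compress_layers(layers):
    """Swap each supported layer for its PQ variant; leave others untouched."""
    return [REPLACEMENTS[type(l)](l) if type(l) in REPLACEMENTS else l
            for l in layers]

model = [Conv2D(), Dense(), object()]
compressed = compress_layers(model)
print([type(l).__name__ for l in compressed])  # ['PQConv2D', 'PQDense', 'object']
```

Unsupported layers pass through unchanged, which is why mixing compressible and ordinary layers in one model is unproblematic.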
The various pruning methods have different training steps, such as a pre-training step and a fine-tuning step. PQuant provides a training function: the user supplies functions that train and validate one epoch, and PQuant runs the overall training loop, triggering the different training steps as needed.
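The division of labor can be sketched as a callback pattern. Everything below is a hypothetical illustration — the function names, signatures, and phase schedule are invented for this sketch and are not PQuant's actual API:

```python
# Hypothetical sketch of phase-aware training with user-supplied callbacks.
# None of these names come from PQuant's real API.

def run_training(model, train_epoch, valid_epoch, schedule):
    """Run training in named phases, delegating each epoch to user callbacks."""
    history = []
    for phase, n_epochs in schedule:                # e.g. pre-training, fine-tuning
        for _ in range(n_epochs):
            train_loss = train_epoch(model, phase)  # user-provided
            val_loss = valid_epoch(model, phase)    # user-provided
            history.append((phase, train_loss, val_loss))
    return history

# Stand-in callbacks: a real user would run one epoch of their framework here.
train_epoch = lambda model, phase: 1.0
valid_epoch = lambda model, phase: 0.9

history = run_training(None, train_epoch, valid_epoch,
                       schedule=[("pretrain", 2), ("train", 3), ("finetune", 1)])
print(len(history))  # 6 epochs across all phases
```

The point of the pattern is that the user only writes the per-epoch logic; the library decides when each phase starts and ends.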
Example notebook can be found [here](https://github.com/nroope/PQuant/tree/main/). It covers, among other things:

3. Loading a default pruning configuration of a pruning method.
4. Calling PQuant's training function with the configuration, the model, and the training and validation functions to train and compress the model.
5. Creating a custom quantization and pruning configuration for a given model (e.g. disabling pruning for some layers, or using different quantization bitwidths for different layers).
6. Direct layer usage and layer replacement approaches.
7. Using the fine-tuning platform.
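A per-layer configuration of the kind described in step 5 might be resolved as in the sketch below. The keys, structure, and `layer_config` helper are all invented for illustration; consult PQuant's documentation for the real configuration schema:

```python
# Hypothetical per-layer configuration, NOT PQuant's real schema.
config = {
    "default": {"prune": True, "bitwidth": 8},
    "layers": {
        "conv1": {"bitwidth": 4},        # lower precision for this layer
        "classifier": {"prune": False},  # keep the final layer unpruned
    },
}

def layer_config(name):
    """Resolve a layer's settings: defaults overridden by per-layer entries."""
    return {**config["default"], **config["layers"].get(name, {})}

print(layer_config("conv1"))       # {'prune': True, 'bitwidth': 4}
print(layer_config("classifier"))  # {'prune': False, 'bitwidth': 8}
```

The defaults-plus-overrides shape keeps the configuration short: only layers that deviate from the default need an entry.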
### Pruning methods
A description of the pruning methods and their hyperparameters can be found [here](docs/pruning_methods.md).