Commit c42efb3

Update README.md
1 parent b43a7cc commit c42efb3

File tree

1 file changed: +7 additions, −6 deletions


README.md

Lines changed: 7 additions & 6 deletions
@@ -9,12 +9,13 @@ To run the code, [HGQ2](https://github.com/calad0i/HGQ2) is also needed.
 PQuant replaces the layers and activations it finds with a Compressed (in the case of layers) or Quantized (in the case of activations) variant. These automatically handle the quantization of the weights, biases and activations, and the pruning of the weights.
 Both PyTorch and TensorFlow models are supported.
 
-Layers that can be compressed:
-PQConv*D: Convolutional layers
-PQAvgPool*D: Average pooling layers
-PQBatchNorm*D: BatchNorm layers
-PQDense: Linear layer
-PQActivation: Activation layers (ReLU, Tanh)
+### Layers that can be compressed
+
+* **PQConv*D**: Convolutional layers
+* **PQAvgPool*D**: Average pooling layers
+* **PQBatchNorm*D**: BatchNorm layers
+* **PQDense**: Linear layer
+* **PQActivation**: Activation layers (ReLU, Tanh)
 
 The various pruning methods have different training steps, such as a pre-training step and fine-tuning step. PQuant provides a training function, where the user provides the functions to train and validate an epoch, and PQuant handles the training while triggering the different training steps.
 
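The final paragraph of the diff describes a callback-driven training function: the user supplies per-epoch train and validate functions, and the library drives them through the pruning phases. The following is a minimal sketch of that control flow only; the names `train_with_pruning`, `train_epoch`, and `validate_epoch` are illustrative assumptions, not PQuant's actual API.

```python
# Hypothetical sketch of a phased, callback-driven training loop like the
# one the README describes. The phase names and function signatures are
# assumptions for illustration; consult PQuant itself for the real API.

def train_with_pruning(train_epoch, validate_epoch,
                       pretrain_epochs=1, prune_epochs=2, finetune_epochs=1):
    """Run each phase in order, invoking the user's callbacks every epoch."""
    schedule = [
        ("pretrain", pretrain_epochs),
        ("prune", prune_epochs),
        ("finetune", finetune_epochs),
    ]
    history = []
    for phase, n_epochs in schedule:
        for epoch in range(n_epochs):
            loss = train_epoch(phase, epoch)       # user-supplied training step
            metric = validate_epoch(phase, epoch)  # user-supplied validation step
            history.append((phase, epoch, loss, metric))
    return history

# Minimal usage with stub callbacks standing in for real train/validate code:
log = train_with_pruning(
    train_epoch=lambda phase, epoch: 1.0 / (epoch + 1),
    validate_epoch=lambda phase, epoch: 0.5,
)
print([phase for phase, *_ in log])
# → ['pretrain', 'prune', 'prune', 'finetune']
```

The point of this structure is that pruning-specific scheduling (when to pre-train, when to prune, when to fine-tune) lives in the library, while the user only writes the model-specific epoch logic.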
