Commit e1fd104

add hpo_params to all configs, make hpo return best_trials in case of multiobjective optimization

1 parent 7a63505

10 files changed: 70 additions & 5 deletions

src/pquant/configs/config_ap.yaml

Lines changed: 10 additions & 1 deletion

@@ -1,5 +1,5 @@
 pruning_parameters:
-  disable_pruning_for_layers: [] # Disable pruning for these layers, even if enable_pruning is true
+  disable_pruning_for_layers: []
   enable_pruning: true
   pruning_method: activation_pruning
   threshold: 0.2
@@ -46,3 +46,12 @@ training_parameters:
   rewind: never
   rounds: 1
   save_weights_epoch: -1
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}
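The same `hpo_parameters` block is added to every config with empty `hyperparameter_search` dictionaries. The commit does not show the schema for populated entries, so purely as a hypothetical illustration (the `min`/`max` keys and the chosen parameter names are assumptions, not taken from this commit), a filled-in search space might look like:

```yaml
# Hypothetical example only -- the entry schema inside numerical/categorical
# is an assumption; this commit only adds the empty dictionaries.
hpo_parameters:
  experiment_name: experiment_name
  model_name: jet_tagger
  num_trials: 50
  sampler:
    type: RandomSampler
  hyperparameter_search:
    numerical:
      threshold: {min: 0.05, max: 0.5}
    categorical:
      pruning_method: [activation_pruning, wanda]
```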

src/pquant/configs/config_autosparse.yaml

Lines changed: 9 additions & 0 deletions

@@ -49,3 +49,12 @@ training_parameters:
   rewind: never
   rounds: 1
   save_weights_epoch: -1.0
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}

src/pquant/configs/config_cs.yaml

Lines changed: 9 additions & 0 deletions

@@ -45,3 +45,12 @@ training_parameters:
   rewind: post-training-stage
   rounds: 3
   save_weights_epoch: 2
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}

src/pquant/configs/config_fitcompress.yaml

Lines changed: 9 additions & 0 deletions

@@ -44,3 +44,12 @@ training_parameters:
   rewind: never
   rounds: 1
   save_weights_epoch: -1
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}

src/pquant/configs/config_mdmm.yaml

Lines changed: 9 additions & 0 deletions

@@ -55,6 +55,15 @@ fitcompress_parameters:
   greedy_astar : true
   approximate : true
   f_lambda : 1
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}
 
 # Note:
 # use_grad: true is having some bug... flip gradient impl not working as intended

src/pquant/configs/config_pdp.yaml

Lines changed: 9 additions & 0 deletions

@@ -46,3 +46,12 @@ training_parameters:
   rewind: never
   rounds: 1
   save_weights_epoch: -1
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}

src/pquant/configs/config_wanda.yaml

Lines changed: 9 additions & 0 deletions

@@ -49,3 +49,12 @@ training_parameters:
   rewind: never
   rounds: 1
   save_weights_epoch: -1
+hpo_parameters:
+  experiment_name: experiment_name
+  model_name: jet_tagger
+  num_trials: 1
+  sampler:
+    type: RandomSampler
+  hyperparameter_search:
+    numerical: {}
+    categorical: {}

src/pquant/core/hyperparameter_optimization.py

Lines changed: 4 additions & 2 deletions

@@ -343,8 +343,10 @@ def run_optimization(self, model, **kwargs):
             n_trials=num_trials,
             n_jobs=1,
         )
-
-        return study.best_params
+        if len(self.objectives.keys()) == 1:
+            return study.best_params
+        else:
+            return study.best_trials
 
 
 def ap_config():
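The new branch exists because a study with more than one objective has no single best trial, only a Pareto front of mutually non-dominated trials, which is what `study.best_trials` exposes. A minimal, library-free sketch of the idea (the `pareto_front` helper is hypothetical, not part of pquant):

```python
# Why multi-objective HPO returns a list of trials rather than a single
# best_params dict: with several objectives there is generally no single
# winner, only a Pareto front of mutually non-dominated trials.

def pareto_front(trials):
    """Return the trials not dominated by any other trial.

    Trial a dominates trial b if a is no worse in every objective and
    strictly better in at least one (all objectives minimized here).
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(
            x < y for x, y in zip(a, b)
        )

    return [t for t in trials
            if not any(dominates(o, t) for o in trials if o is not t)]

# Each tuple is (objective_1, objective_2), e.g. (loss, ebops).
trials = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(pareto_front(trials))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Callers of `run_optimization` therefore need to handle both return types: a flat params dict in the single-objective case, a list of trials otherwise.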

src/pquant/core/keras/layers.py

Lines changed: 1 addition & 1 deletion

@@ -2251,7 +2251,7 @@ def post_training_prune(model, config, calibration_data):
     return apply_final_compression(model, config)
 
 
-def get_ebops(model):
+def get_ebops(model, **kwargs):
     ebops = 0
     for m in model.layers:
         if isinstance(m, (PQWeightBiasBase)):

src/pquant/core/torch/layers.py

Lines changed: 1 addition & 1 deletion

@@ -1733,7 +1733,7 @@ def post_training_prune(model, config, calibration_data):
     return remove_compression_layers(model, config)
 
 
-def get_ebops(model):
+def get_ebops(model, **kwargs):
     ebops = 0
     for m in model.modules():
         if isinstance(m, (PQWeightBiasBase)):
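Both backends gain `**kwargs` on `get_ebops`. The diff does not show the caller, but a plausible motivation (an assumption, not stated in the commit) is to let every metric be invoked through one uniform signature, e.g. from an HPO objective that passes the same arguments to each metric. A sketch with hypothetical stand-ins (`get_accuracy`, the `metrics` dict, and the list-based "model" are not pquant APIs):

```python
# Sketch of a uniform metric interface enabled by accepting **kwargs.
# All names besides get_ebops are hypothetical stand-ins.

def get_ebops(model, **kwargs):
    # Extra keyword arguments are accepted and ignored, so this can be
    # called with the same arguments as any other metric.
    return sum(getattr(layer, "ebops", 0) for layer in model)

def get_accuracy(model, data=None, **kwargs):
    return 0.0  # placeholder metric

metrics = {"ebops": get_ebops, "accuracy": get_accuracy}
model = []  # stand-in for a real model
# Every metric is called identically; get_ebops simply ignores `data`.
results = {name: fn(model, data=None) for name, fn in metrics.items()}
print(results)
```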
