Refactor lm_head losses #425
Open
oleksost wants to merge 28 commits into main from train_only_layer_losses
Open
Commits (28)
c335f6e  train with only layer distillation losses (oleksost)
e06a4b2  unscaled loss llogging + training with distillation loss factor = 0 (oleksost)
179ae25  make logging more explicit (oleksost)
af456f0  Merge remote-tracking branch 'origin/main' into train_only_layer_losses (oleksost)
9968aac  clean + tests (oleksost)
945c5a7  nvm (oleksost)
4b6e3d7  forward KL (oleksost)
c5fefa0  test forward kl (oleksost)
4119596  wip: report unscaled + kl loss (oleksost)
b55a0a4  loss config (oleksost)
097baeb  wip (oleksost)
d773d98  tests (oleksost)
35400c1  Merge remote-tracking branch 'origin/main' into train_only_layer_losses (oleksost)
282925c  test (oleksost)
0f73ea2  tests (oleksost)
04a0193  Merge branch 'main' into train_only_layer_losses (oleksost)
fa85c41  wip (oleksost)
feb416e  Merge branch 'train_only_layer_losses' of https://github.com/ServiceN… (oleksost)
31cfb84  wip (oleksost)
24fe67b  no grad if factor 0 (oleksost)
00f6118  Merge remote-tracking branch 'origin/main' into train_only_layer_losses (oleksost)
0cadf98  Merge branch 'main' into train_only_layer_losses (oleksost)
0e562e9  addressed comments (oleksost)
2a474e2  Merge branch 'train_only_layer_losses' of https://github.com/ServiceN… (oleksost)
52c1c11  addressed comments (oleksost)
406d0a2  Removed Targets class (oleksost)
f25380a  fixes (oleksost)
8adb7dd  imports (oleksost)
Diff view
@@ -5,11 +5,11 @@
from fast_llm.engine.config_utils.parameter import OptionalParameterConfig, ParameterConfig, combine_lr_scales
from fast_llm.engine.config_utils.tensor_dim import TensorDim
from fast_llm.engine.distributed.config import DistributedConfig
from fast_llm.functional.config import CrossEntropyImpl, DistillationLossImpl
from fast_llm.layers.block.config import BlockConfig, BlockKwargs, BlockSequenceConfig
from fast_llm.layers.block.config import BlockConfig, BlockSequenceConfig
from fast_llm.layers.common.normalization.config import NormalizationConfig
from fast_llm.layers.common.peft.config import PeftConfig
from fast_llm.layers.decoder.config import DecoderBlockConfig
from fast_llm.layers.language_model.lm_head_losses import LanguageModelLossConfig
from fast_llm.utils import Assert

if typing.TYPE_CHECKING:
@@ -19,21 +19,6 @@
from fast_llm.layers.language_model.multi_token_prediction import MultiTokenPrediction


class LanguageModelKwargs(BlockKwargs):
    token_ids = "token_ids"
    position_ids = "position_ids"
    token_map = "token_map"
    sample_map = "sample_map"
    embedding_map = "embedding_map"
    # TODO: These are generic
    labels = "labels"
    phase = "phase"
    chosen_spans = "chosen_spans"
    rejected_spans = "rejected_spans"
    loss_mask = "loss_mask"
    mask_inputs = "mask_inputs"


@config_class()
class LanguageModelEmbeddingsConfig(BlockConfig):
    _abstract = False
@@ -135,44 +120,22 @@ class LanguageModelHeadConfig(LanguageModelHeadBaseConfig):
        desc="Configuration for the final normalization layer.",
        hint=FieldHint.architecture,
    )
    losses: dict[str, LanguageModelLossConfig] = Field(
        default_factory=dict,
        desc="A dictionary of loss names and their configurations.",
        hint=FieldHint.core,
    )
    # TODO: Cleanup
    output_weight: ParameterConfig = Field(
        desc="Configuration for the LM output layer (weight). Ignored for tied embeddings",
        hint=FieldHint.architecture,
    )
    cross_entropy_implementation: CrossEntropyImpl = Field(
        default=CrossEntropyImpl.auto,
        desc="Implementation for the cross-entropy computation.",
        hint=FieldHint.performance,
    )
    distillation_loss_implementation: DistillationLossImpl = Field(
        default=DistillationLossImpl.cross_entropy,
        desc="Implementation for the distillation cross-entropy computation.",
        hint=FieldHint.performance,
    )
    cross_entropy_splits: int | None = Field(
        default=None,
        desc="Split the logit and cross-entropy computation into this many fragment, to reduce memory usage.",
        hint=FieldHint.feature,
        valid=skip_valid_if_none(check_field(Assert.gt, 0)),
    )
    logit_z_loss: float = Field(
        default=0.0,
        desc="Regularize the logits with Z-loss.",
        doc="We recommend 1e-4 for stability, as used for training PaLM.",
        hint=FieldHint.feature,
        valid=check_field(Assert.geq, 0),
    )
    language_model_loss_factor: float = Field(
        default=None,
        desc="Factor to scale the language modeling loss by when using distillation.",
        hint=FieldHint.feature,
    )
    distillation_loss_factor: float = Field(
        default=1.0,
        desc="Factor to scale the distillation loss by when using distillation.",
        hint=FieldHint.feature,
    )
    logits_scale_factor: float = Field(
        default=1.0,
        desc="Multiply output logits by scale factor.",
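For orientation: per-head scalar knobs such as language_model_loss_factor and distillation_loss_factor give way to per-loss entries in the new losses field. Below is a rough sketch of an equivalent head configuration under the new scheme; only the losses/type/weight shape and the "cross_entropy" type come from this diff, while the "distillation_cross_entropy" type name and the "teacher" reference-model name are placeholders and may not match the names registered in lm_head_losses.py.

    # Illustrative only: the shape of "losses" follows this diff; the distillation
    # type name and teacher model name below are placeholders, not the PR's real keys.
    head_config = {
        "losses": {
            # Plays the role of the removed language_model_loss_factor.
            "lm_loss": {"type": "cross_entropy", "weight": 1.0},
            # Plays the role of the removed distillation_loss_factor; _validate
            # requires a distillation_model whenever a distillation-type loss is set.
            "distillation_loss": {"type": "distillation_cross_entropy", "weight": 0.5},
        },
        "distillation_model": "teacher",
    }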
@@ -181,10 +144,10 @@ class LanguageModelHeadConfig(LanguageModelHeadBaseConfig):
        hint=FieldHint.feature,
        valid=check_field(Assert.geq, 0),
    )
    teacher_softmax_temperature: float = Field(
        default=1.0,
        desc="Divides distillation target logits by this factor.",
        doc="Divides distillation target logits by this factor.",
    logit_z_loss: float = Field(
        default=0.0,
        desc="Regularize the logits with Z-loss.",
        doc="We recommend 1e-4 for stability, as used for training PaLM.",
        hint=FieldHint.feature,
        valid=check_field(Assert.geq, 0),
    )
@@ -193,11 +156,6 @@ class LanguageModelHeadConfig(LanguageModelHeadBaseConfig):
        desc="Name of the reference model to use for dpo.",
        hint=FieldHint.feature,
    )
    dpo_beta: float | None = Field(
        default=1.0,
        desc="Beta value for DPO loss.",
        hint=FieldHint.feature,
    )
    distillation_model: str | None = Field(
        default=None,
        desc="Name of the reference model to use for knowledge distillation."
@@ -237,11 +195,19 @@ def layer_class(self) -> "type[LanguageModelHead]":

    def _validate(self) -> None:
        with self._set_implicit_default():
            if self.language_model_loss_factor is None:
                if self.distillation_model is None:
                    self.language_model_loss_factor = 1.0
                else:
                    self.language_model_loss_factor = 0.0
            if not self.losses:
            if "losses" not in self._explicit_fields:
                self.losses = {
                    "lm_loss": LanguageModelLossConfig._from_dict(
                        {
                            "type": "cross_entropy",
                            "weight": 1.0,
                        }
                    )
                }
        for loss_config in self.losses.values():
            if "distillation" in loss_config.type:
                assert self.distillation_model is not None, "Distillation loss requires a distillation model."
Collaborator: Shouldn't the distillation model go with the loss?

Contributor (author): Hm, this raises an error when there is no distillation model, which is correct, no?
        super()._validate()
        assert self.dpo_reference_model is None or self.distillation_model is None  # currently don't support both
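Several commits in this branch ("forward KL", "test forward kl", "loss config") add a forward-KL option among the head losses. As a point of reference, forward KL for distillation is KL(teacher || student) computed on softmaxed logits, with the teacher logits optionally divided by teacher_softmax_temperature. The snippet below is a generic PyTorch illustration of that formula, not the fused/parallel implementation in Fast-LLM; the function name and signature are invented for the example.

    import torch
    import torch.nn.functional as F

    def forward_kl_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # Forward KL, i.e. KL(teacher || student), averaged over tokens.
        vocab_size = student_logits.size(-1)
        student_log_probs = F.log_softmax(student_logits.reshape(-1, vocab_size), dim=-1)
        # Dividing the teacher logits by the temperature mirrors the
        # teacher_softmax_temperature field shown in the diff above.
        teacher_log_probs = F.log_softmax(teacher_logits.reshape(-1, vocab_size) / temperature, dim=-1)
        return F.kl_div(student_log_probs, teacher_log_probs, reduction="batchmean", log_target=True)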
Reviewer: These removals are likely to cause backward-compatibility issues when loading existing models. Please make sure it doesn't disrupt ongoing work, and if needed add backward compatibility in _validate.

Author: I tested training with checkpoints created on the main branch in both distributed and apriel2 format. Training starts with no issues.
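One common way to address the backward-compatibility concern above is to translate the removed fields into the new losses dictionary while loading old configs, either in _validate or in the checkpoint-config conversion path. The helper below is purely illustrative: the function name and the "distillation_cross_entropy" type key are placeholders, and this is not code from the PR.

    def _upgrade_legacy_loss_fields(config: dict) -> dict:
        # Illustrative sketch: fold the old scalar loss factors into the new
        # per-loss configuration so configs written before this PR keep loading.
        losses = config.setdefault("losses", {})
        lm_factor = config.pop("language_model_loss_factor", None)
        if lm_factor is not None:
            losses.setdefault("lm_loss", {"type": "cross_entropy", "weight": lm_factor})
        distillation_factor = config.pop("distillation_loss_factor", None)
        if distillation_factor is not None:
            losses.setdefault("distillation_loss", {"type": "distillation_cross_entropy", "weight": distillation_factor})
        return config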