
feat: Add comprehensive fine-tuning framework with adapter layers#2674

Open
safayavatsal wants to merge 1 commit into openai:main from safayavatsal:feature/fine-tuning-framework

Conversation

@safayavatsal

  • Implement WhisperAdapter class for efficient fine-tuning
  • Add AdaptedWhisperModel with selective parameter freezing
  • Create FineTuningDataset for data preparation
  • Include WhisperFineTuner main training class
  • Support adapter saving/loading functionality
  • Address the fine-tuning requests from GitHub Discussions #64 ("Finetuning/Training code?") and #759 ("Fine-tuning Whisper")
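The PR body does not include the implementation, so here is a minimal sketch of what a bottleneck adapter layer like the proposed `WhisperAdapter` typically looks like (class name taken from the list above; all shapes, initializations, and method names are assumptions, not the PR's actual code):

```python
import numpy as np

class WhisperAdapter:
    """Sketch of a bottleneck adapter (details assumed, not from the PR).

    The hidden state is projected down to a small bottleneck dimension,
    passed through a nonlinearity, projected back up, and added to the
    input via a residual connection. Only ~2 * d_model * bottleneck
    parameters are trained per adapted layer.
    """

    def __init__(self, d_model: int, bottleneck: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (d_model, bottleneck))
        # Zero-init the up-projection so the adapter starts as an identity
        # and fine-tuning departs smoothly from the pretrained model.
        self.up = np.zeros((bottleneck, d_model))

    def __call__(self, hidden: np.ndarray) -> np.ndarray:
        squeezed = np.maximum(hidden @ self.down, 0.0)  # down-project + ReLU
        return hidden + squeezed @ self.up              # up-project + residual
```

With the zero-initialized up-projection, the adapted model's initial outputs match the frozen base model exactly, which is a common design choice for stable adapter training.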

Features:

  • Parameter-efficient fine-tuning using adapter layers
  • Flexible target module selection
  • Integrated training pipeline with validation
  • Compatible with all Whisper model sizes
  • Memory-efficient training approach
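The "flexible target module selection" and "selective parameter freezing" features suggest name-based filtering of which parameters stay trainable. A hedged sketch of that pattern (the function and the parameter names are illustrative assumptions, not the PR's API):

```python
def select_trainable(param_names, target_modules=("adapter",)):
    """Split parameter names into trainable and frozen sets.

    A parameter stays trainable only if its dotted name contains one of
    the target-module substrings; everything else is frozen. (Names here
    are hypothetical examples, not taken from the PR.)
    """
    trainable, frozen = [], []
    for name in param_names:
        if any(target in name for target in target_modules):
            trainable.append(name)
        else:
            frozen.append(name)
    return trainable, frozen
```

In a PyTorch model this split would drive `param.requires_grad`, e.g. `param.requires_grad = any(t in name for t in target_modules)` inside a loop over `model.named_parameters()`, which is what makes the approach memory-efficient: optimizer state is only kept for the small adapter subset.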

