Releases: moayadeldin/deeptune

DeepTune v1.1.0 RELEASE

06 Jan 15:30
b7926d0

Updates:

🚀 New Models added:

  • SigLIP was reintegrated into DeepTune and is now also supported through the single command-line call.
  • TabPFN for tabular data was added to DeepTune, with full support for the training, evaluation, and embedding-extraction pipeline, for both training and fine-tuning.

🐞 Bug Fixes:

  • Fixed a model initialization error when calling BERT with PEFT.
  • Fixed the previously static --freeze-backbone option in the DeepTune single call.

DeepTune v1.0.0 RELEASE

05 Jan 23:10

• ONE single command automates raw dataset handling, preprocessing, and the full training–validation–evaluation pipeline, while generating knowledge-representative embeddings from tuned or trained models for downstream tasks (e.g., statistical ML algorithms). Each functionality can also be run separately. More details are provided in the documentation.
• Transfer learning with multiple tuning methods and architecture modifications for state-of-the-art pretrained models on image and text datasets, plus end-to-end training for tabular and time-series datasets.
• Support for 6 image models with 20+ variants: ResNet (18, 34, 50, 101), DenseNet (121, 169), EfficientNet (B0–B7), VGG (11, 13, 16, 19), and Vision Transformers (ViT-B-16, ViT-B-32, ViT-L-16); 2 text models: Multilingual BERT and GPT-2; GANDALF for tabular deep learning; and DeepAR for time-series modelling.
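As a rough illustration of the single-command workflow described above, an invocation might look like the following sketch. This is a hypothetical fragment: the command name, `--model`, `--dataset`, and `--output-dir` flags are assumptions for illustration only (`--freeze-backbone` is the real option mentioned in the v1.1.0 fixes); consult the documentation for the actual CLI.

```shell
# Hypothetical single call (flag names other than --freeze-backbone are assumed):
# preprocess the raw dataset, run train/validation/evaluation, and
# write embeddings from the tuned model for downstream ML, all in one step.
deeptune \
  --model resnet50 \
  --dataset ./data/my_images \
  --freeze-backbone \
  --output-dir ./runs/resnet50
```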