
# Learning Objectives

Negro Michela edited this page Nov 21, 2025 · 4 revisions

> [!TIP]
> This course covered several machine learning and deep learning techniques used to accomplish specific tasks. At the same time, it introduced basic overarching concepts that are common to all machine learning and deep learning applications. These concepts are the minimum knowledge someone is expected to have after completing this course.

In other words, you can say you have successfully completed this course if you can answer the following questions:

  1. **Generalization Techniques.** List the main techniques used to improve generalization in machine learning. For each method, briefly explain how it works and why it helps reduce overfitting.

  2. **Weight-Update Timing and Mechanics.** Describe when weight updates occur in a typical training loop (e.g., every iteration, every epoch, or only at the end). Explain how the update step is computed and applied in gradient-based learning.

  3. **Forward and Backward Propagation.** Explain the forward propagation process and the backward propagation process in neural networks. Include the role of the loss function and gradients.

  4. **Optimization Algorithms.** Name at least two optimization algorithms used in machine learning (e.g., SGD, Adam, RMSProp). Compare their update rules and discuss key differences in behavior or performance.

  5. **Bias–Variance Trade-off and Under/Overfitting.** Explain the bias–variance trade-off and describe how it manifests in machine learning models. How can we detect underfitting and overfitting? What strategies can be used when the dataset is too small to create separate training and validation sets?

  6. **GD vs. SGD vs. Mini-batch SGD.** Describe the differences between Gradient Descent (GD), Stochastic Gradient Descent (SGD), and Mini-batch SGD. Why is mini-batch training generally preferred? Discuss whether more advanced optimizers exist and how they improve upon vanilla SGD.

  7. **Hyperparameters in Deep Learning.** List at least two common hyperparameters used across deep-learning algorithms (e.g., learning rate, batch size, number of epochs). What strategies can be used to choose good values for these hyperparameters?

  8. **Dataset Splitting and Validation Strategies.** Describe the ideal way to split a dataset into training, validation, and test sets. What techniques can be used when the dataset is small and a standard split is not feasible? Why should the test set ideally be used only once?

  9. **Dimensionality Reduction Techniques.** Discuss the main dimensionality-reduction methods covered in class (e.g., PCA, t-SNE, UMAP). What are the limitations of these methods in terms of interpretability and trustworthiness?

  10. **IAI, XAI, and Scientific Best Practices.** Explain the concepts of Interpretable AI (IAI) and Explainable AI (XAI) and why they are important in scientific and physics applications of machine learning. Describe best practices for writing a scientific paper that uses ML methods, ensuring reproducibility and avoiding “black-box” claims without proper justification.
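As a worked reference for questions 2, 3, and 6, here is a minimal NumPy sketch (an illustration, not taken from the course material) of mini-batch SGD on a toy linear-regression problem. It shows the forward pass, the manually derived gradients of the backward pass, and the weight update applied once per mini-batch iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: y = 3x + 1 + noise
X = rng.normal(size=(256,))
y = 3.0 * X + 1.0 + 0.1 * rng.normal(size=256)

w, b = 0.0, 0.0           # model parameters
lr, batch_size = 0.1, 32  # hyperparameters (cf. question 7)

for epoch in range(50):
    idx = rng.permutation(len(X))  # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        # Forward pass: prediction and residual for the MSE loss
        err = (w * xb + b) - yb
        # Backward pass: gradients of the MSE loss w.r.t. w and b
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)
        # Update step: applied once per mini-batch iteration
        w -= lr * grad_w
        b -= lr * grad_b

print(w, b)  # should recover values close to 3.0 and 1.0
```

Setting `batch_size = len(X)` turns this loop into full-batch GD, and `batch_size = 1` into pure SGD, which is exactly the distinction question 6 asks about.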
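For question 8 (and the small-dataset part of question 5), a minimal sketch of a standard train/validation/test split and of k-fold cross-validation, using only NumPy index manipulation; the dataset size, the 70/15/15 ratios, and `k = 5` are illustrative assumptions, not prescriptions from the course:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                       # illustrative small dataset
indices = rng.permutation(n)  # shuffle before splitting

# Standard split: 70% train / 15% validation / 15% test
train_idx = indices[:70]
val_idx = indices[70:85]
test_idx = indices[85:]

# k-fold cross-validation for when a fixed validation split wastes
# too much data: each sample is used for validation exactly once.
k = 5
folds = np.array_split(indices, k)
for i in range(k):
    val_fold = folds[i]
    train_folds = np.concatenate([folds[j] for j in range(k) if j != i])
    # train on train_folds, evaluate on val_fold, average the k scores
```

Note that `test_idx` plays no role in the cross-validation loop: the test set is held out and, ideally, touched only once at the very end.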
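For question 9, a minimal sketch of PCA computed via the SVD of the centered data matrix (one standard formulation, assumed here rather than taken from the course slides). Unlike t-SNE or UMAP, the projection is linear, so each component can be read off directly as a weighted combination of the input features:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points in 3-D that mostly vary along the direction (3, 2, 1)
t = rng.normal(size=(200, 1))
X = t * np.array([3.0, 2.0, 1.0]) + 0.05 * rng.normal(size=(200, 3))

Xc = X - X.mean(axis=0)           # PCA requires centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance fraction per component

X2 = Xc @ Vt[:2].T                # project onto the top 2 components
print(X2.shape)                   # (200, 2)
```

Because the data are nearly one-dimensional, `explained[0]` comes out close to 1, which is the kind of diagnostic PCA offers that the nonlinear methods in question 9 do not.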
