
Interpretable Dimensionality Reduction: Mathematical Foundations for Machine Learning

Authors:

  • Hana Passen
  • Roberto Barroso Luque

Problem Description:

Dimensionality reduction through the singular value decomposition (SVD), principal components analysis, and related techniques is a computationally efficient way to project high-dimensional data onto a lower-dimensional subspace. When used in classification problems, these methods allow us to shrink the feature space by orders of magnitude, eliminating redundant features, without necessarily losing predictive power. However, this methodology poses interpretability challenges. While the relative importance of the singular values can still be understood via their magnitudes, it is difficult to explain what common-sense meaning a large singular value carries in terms of the original dataset. In this project, we explore whether we can derive interpretable results from dimensionality reduction methods based on the singular values of a dataset.
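As a minimal sketch of the setup described above (not the project's actual code), the following assumes a centered data matrix and uses NumPy's SVD to project onto the top-k components, reading off the relative importance of each component from the squared singular values:

```python
import numpy as np

# Hypothetical example: reduce a 50-dimensional dataset to 10 dimensions
# via truncated SVD and inspect the relative weight of each component.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))      # 200 samples, 50 features (synthetic)
X = X - X.mean(axis=0)              # center so the SVD aligns with PCA

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Relative importance of each component: squared singular values
# normalized to sum to 1 (the explained-variance ratio).
explained = s**2 / np.sum(s**2)

k = 10                              # number of components to keep
X_reduced = U[:, :k] * s[:k]        # projection onto the k-dim subspace

print(X_reduced.shape)              # (200, 10)
print(explained[:k].sum())          # fraction of variance retained
```

The interpretability question the paper raises is visible here: `explained` ranks the components, but each column of `X_reduced` is a mixture of all 50 original features (weighted by the rows of `Vt`), so a large singular value does not map to any single original feature.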

Final paper: https://github.com/RobertoBarrosoLuque/InterpretableDimRed/blob/main/InterpretableDimRed/Paper/neurips_2020.pdf