Interpretability bugs and improvements required #817

@jhnwu3

Description

Updating this is crucial to helping users understand how to define their own models so that they are compatible with the interpretability module.

  • standardize the forward_from_embedding method in the base model
  • compute the loss conditionally, so interpretability methods don't need to fake a label tensor
  • masking has already proven problematic: some methods zero out values during perturbation, which causes the x.sum(dim=...) == 0 padding check to fail
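
The second and third bullets can be sketched as follows. This is a minimal NumPy illustration, not PyHealth code: the `forward` signature, array shapes, and mask names are hypothetical, and real PyHealth models would use PyTorch tensors (`dim=` instead of `axis=`).

```python
import numpy as np

# --- Conditional loss ----------------------------------------------------
# Hypothetical model head: when labels is None (as in an interpretability
# call), skip the loss entirely instead of requiring a fake label tensor.
def forward(emb, labels=None):
    logits = emb.sum(axis=(-2, -1))        # stand-in for a real model head
    if labels is None:
        return logits, None                # attribution path: no loss
    loss = float(((logits - labels) ** 2).mean())
    return logits, loss

# --- Why x.sum(...) == 0 masking breaks under perturbation ---------------
# Hypothetical padded batch: 1 sequence, 4 time steps, embedding dim 3;
# steps 2-3 are padding.
x = np.array([
    [[0.5, -0.2, 0.1],
     [0.3,  0.4, -0.1],
     [0.0,  0.0,  0.0],   # padding
     [0.0,  0.0,  0.0]],  # padding
])

# Inferring the mask from all-zero rows works on the unperturbed input:
inferred_mask = x.sum(axis=-1) != 0        # [[True, True, False, False]]

# An occlusion-style method zeros a *real* step to measure its importance:
x_perturbed = x.copy()
x_perturbed[:, 1, :] = 0.0

# The same inference now silently masks out the perturbed step:
bad_mask = x_perturbed.sum(axis=-1) != 0   # [[True, False, False, False]]

# Fix: pass an explicit mask alongside the embeddings, so perturbations
# cannot change which positions the model treats as valid.
explicit_mask = np.array([[True, True, False, False]])
```

The explicit mask is the safer design: it decouples "which positions are padding" from "what values the positions hold", so zero-valued perturbations no longer shrink the effective sequence.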

Labels

component: model, core
