Conversation

@ghadiaravi13

Similar to state-of-the-art models, e.g. standard Transformer models, which return the attention weights so they can be used for further interpretation. Some libraries support graphical visualization of attention, which requires passing in the attention values. This PR therefore returns attention values on demand from ESIM, for anyone who wants better interpretability.
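The "return attention on demand" pattern described above can be sketched generically. This is an illustrative NumPy example of scaled dot-product attention with an optional `return_weights` flag, not the actual ESIM code or API; the function name and flag are hypothetical:

```python
import numpy as np

def attention(q, k, v, return_weights=False):
    """Scaled dot-product attention.

    When return_weights is True, also return the attention matrix
    so callers can visualize or interpret it (hypothetical flag,
    analogous to what this PR proposes for ESIM).
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v
    return (out, weights) if return_weights else out

q = np.random.randn(4, 8)   # 4 query positions, dim 8
k = np.random.randn(6, 8)   # 6 key positions
v = np.random.randn(6, 8)
out, w = attention(q, k, v, return_weights=True)
print(out.shape)  # (4, 8)
print(w.shape)    # (4, 6); each row sums to 1
```

The `(4, 6)` weight matrix is exactly what visualization libraries consume, e.g. as a heatmap of which keys each query attends to.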

@melvyncolasse2025-lang

ghadiaravi13:master
