layer normalization #5

@dor-amran

Description

In the paper and in the video https://www.youtube.com/watch?v=F7wd4wQyPd8, you mention using layer normalization:

To prevent this, FF normalizes the length of the hidden vector before using it as input to the next layer (Ba et al., 2016b; Carandini and Heeger, 2013). This removes all of the information that was used to determine the goodness in the first hidden layer and forces the next hidden layer to use information in the relative activities of the neurons in the first hidden layer. These relative activities are unaffected by the layer normalization.

However, I did not notice layer norm in your implementation. I see that you used per-layer training, where each layer does not use the previous layer's outputs, which probably achieves a similar outcome. Why did you decide to use that instead of layer normalization? Or was the reason for not using layer norm that you only have one hidden layer?
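
For reference, this is how I understand the normalization the quoted passage describes; a minimal PyTorch sketch (the function name, layer sizes, and epsilon are my own, not taken from your code):

```python
import torch

def normalize_length(h, eps=1e-8):
    """Divide each hidden vector by its L2 norm.

    This removes the overall activity level (the "goodness" is the sum of
    squared activities) and keeps only the relative activities of the
    neurons, which is what the next layer then receives.
    """
    return h / (h.norm(p=2, dim=1, keepdim=True) + eps)

# Hypothetical two-layer forward pass:
x = torch.randn(4, 784)                 # a small batch of inputs
layer1 = torch.nn.Linear(784, 500)
layer2 = torch.nn.Linear(500, 500)

h1 = torch.relu(layer1(x))
goodness1 = h1.pow(2).sum(dim=1)        # per-example goodness for layer 1's objective
h2 = torch.relu(layer2(normalize_length(h1)))  # length stripped before layer 2
```

If I read the paper correctly, without `normalize_length` the second layer could judge goodness trivially from the first layer's vector length, so the normalization is what forces it to learn something new.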
