
Spatial Transformer example - locnet activation functions #46

Open
@9thDimension

See cell In [5] of https://github.com/EderSantana/seya/blob/master/examples/Spatial%20Transformer%20Networks.ipynb, where the localisation network is defined.

Is there a reason why the Convolution2D layers have no activations, while the final layer (the one responsible for regressing the affine transformation parameters) does have a 'relu' activation? I may be wrong, but I thought it was typical for the final layer of a neural-network regression to have a linear activation. In particular, a 'relu' on the output clamps every negative affine parameter to zero, so the locnet could never regress a transform with negative components (e.g. flips or leftward/upward translations).

I asked some others about this, but nobody could explain why the activations are laid out this way, and they suggested I raise it here -- so hopefully the author can comment on these design choices.
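
For reference, here is a minimal sketch (not the notebook's exact code) of how I would have expected the locnet to be laid out, assuming the old Keras 1.x Sequential API that seya builds on, Theano-style channels-first image ordering, and placeholder layer sizes:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

# Initialize the final layer to regress the identity transform
# [[1, 0, 0], [0, 1, 0]]: zero weights plus identity bias, as the STN
# paper (Jaderberg et al., 2015) recommends.
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1.0
b[1, 1] = 1.0
W = np.zeros((50, 6), dtype='float32')

locnet = Sequential()
locnet.add(MaxPooling2D(pool_size=(2, 2), input_shape=(1, 60, 60)))
locnet.add(Convolution2D(20, 5, 5, activation='relu'))  # ReLU on the conv layers
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Convolution2D(20, 5, 5, activation='relu'))
locnet.add(Flatten())
locnet.add(Dense(50, activation='relu'))
# Linear output: the six affine parameters stay unconstrained in sign.
locnet.add(Dense(6, activation='linear', weights=[W, b.flatten()]))
```

With the zero weights and identity bias, the sampling grid starts out untransformed, which the paper notes helps training, and the linear output leaves all six parameters free to take negative values.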
