https://xinshaoamoswang.github.io/paperlists/2020-02-16-arXiv/#foundation-of-deep-learning
Instance Cross Entropy for Deep Metric Learning and its application in SimCLR: A Simple Framework for Contrastive Learning of Visual Representations

I am very glad to highlight that our proposed ICE is simple and effective, which has also been demonstrated by the recent work SimCLR (A Simple Framework for Contrastive Learning of Visual Representations) in the context of self-supervised learning.

Its loss, NT-Xent (the normalized temperature-scaled cross-entropy loss), is a fantastic application of our recently proposed Instance Cross Entropy for Deep Metric Learning in the self-supervised setting; a sketch of NT-Xent follows the tags below. I am very excited about this.
- #InstanceCrossEntropy #TemperatureScaling #RepresentationLearning
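A minimal PyTorch sketch of NT-Xent, as my own illustration rather than the SimCLR reference code: it assumes embeddings `z` of shape (2N, d) in which rows 2k and 2k+1 are the two augmented views of image k, and the helper name `nt_xent` is hypothetical. Every other sample in the batch serves as a negative, so no explicit negative mining is needed; this instance-level contrast is exactly what ICE formulates for deep metric learning.

```python
import torch
import torch.nn.functional as F

def nt_xent(z, temperature=0.5):
    """NT-Xent: normalized temperature-scaled cross-entropy loss.

    z: (2N, d) embeddings; rows 2k and 2k+1 are assumed to be the
    two augmented views of image k (this pairing convention is mine).
    """
    n = z.size(0)
    z = F.normalize(z, dim=1)                 # unit norm -> cosine similarity
    logits = z @ z.t() / temperature          # (2N, 2N) scaled similarities
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, float('-inf'))  # drop self-similarity
    targets = torch.arange(n, device=z.device) ^ 1    # positive of 2k is 2k+1
    return F.cross_entropy(logits, targets)
```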
Robustness From CVPR 2019: https://xinshaoamoswang.github.io/paperlists/2019-12-29-CVPR/#robustness
DML From CVPR 2019: https://xinshaoamoswang.github.io/paperlists/2019-12-29-CVPR/#deep-metric-learning
Label Noise & Importance Weighting From ICML 2019: https://xinshaoamoswang.github.io/paperlists/2019-12-29-ICML/
Emphasis Regularisation by Gradient Rescaling for Training Deep Neural Networks with Noisy Labels (arXiv 2019)
Rethinking data fitting and generalisation: MAE has weak training-data fitting ability. Please consider how simple our solution is; it is backed up by our fundamental gradient analysis.
- Paper: https://arxiv.org/pdf/1905.11233.pdf
- Comments, sharing, discussion: https://www.researchgate.net/publication/333418661_Emphasis_Regularisation_by_Gradient_Rescaling_for_Training_Deep_Neural_Networks_with_Noisy_Labels/comments
Improving MAE against CCE under Label Noise (arXiv 2019)
Rethinking data fitting and generalisation, on the same theme: MAE has weak training-data fitting ability, and our simple fix is backed up by the same fundamental analysis (sketched right after these links).
- Paper: https://arxiv.org/pdf/1903.12141.pdf
- Comments, sharing, discussion: https://www.researchgate.net/publication/332070641_Improving_MAE_against_CCE_under_Label_Noise
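To make the weak-fitting claim concrete, here is a small NumPy illustration of the standard gradient analysis underlying both papers (my restatement of textbook derivatives, not code from either paper): under softmax, the gradient magnitude w.r.t. the true-class logit is 1 - p for CCE and 2p(1 - p) for MAE, where p is the predicted probability of the labelled class.

```python
import numpy as np

# Gradient magnitude w.r.t. the true-class logit z_y under softmax,
# as a function of p = predicted probability of the labelled class:
#   CCE: L = -log p      =>  |dL/dz_y| = 1 - p
#   MAE: L = 2 * (1 - p) =>  |dL/dz_y| = 2 * p * (1 - p)
p = np.array([0.01, 0.1, 0.3, 0.6, 0.9])
print("p      :", p)
print("CCE wt :", 1 - p)            # largest on hard examples (small p)
print("MAE wt :", 2 * p * (1 - p))  # the extra factor p suppresses them
# Hard or noisily labelled examples have small p, so MAE assigns them
# almost no gradient: robust to label noise, but weak at data fitting.
```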
- Paper: http://arxiv.org/abs/1903.03238
- Slide: https://drive.google.com/file/d/1nSXCe-7t_EkNwjFuXTnmzzoFr-6jFKVW/view
- Poster: https://drive.google.com/file/d/1vSp3mDRJKdQFNUH12ehuDDyqQfjXFnWM/view
- Paper: https://arxiv.org/abs/1811.01459
- Slide: https://drive.google.com/file/d/1Z44yvdrnrjIeH8x2A4e9-r275y25piKo/view?usp=sharing
- Poster: https://drive.google.com/file/d/1PpCpD9HLtYJQK2tGtsgIhlr1HZ3IF8zF/view?usp=sharing

Sampling Matters in Deep Embedding Learning (ICCV 2017)

Paper: http://openaccess.thecvf.com/content_ICCV_2017/papers/Wu_Sampling_Matters_in_ICCV_2017_paper.pdf

Code (MXNet + Python): https://github.com/apache/incubator-mxnet/tree/master/example/gluon/embedding_learning

Pipeline: net.features -> Dense 128 -> L2 Norm -> Distance Weighted Sampling -> Margin Loss (the sampling step is sketched below)
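A hedged PyTorch paraphrase of the Distance Weighted Sampling step (the function name and defaults are my own; the reference implementation is the MXNet example linked above): on the unit (n-1)-sphere, pairwise distances are distributed as q(d) ∝ d^(n-2) (1 - d^2/4)^((n-3)/2), and negatives are drawn with probability ∝ 1/q(d) so that all distance ranges contribute rather than only the easiest or hardest negatives.

```python
import torch

def distance_weighted_negatives(dist, dim=128, cutoff=0.5, nz_cutoff=1.4):
    """Pick one negative index per anchor with probability ∝ 1/q(d).

    dist: (B, B) pairwise L2 distances between L2-normalised embeddings.
    dim:  embedding dimension n in q(d) ∝ d^(n-2) * (1 - d^2/4)^((n-3)/2).
    Assumes each anchor has some candidate with dist < nz_cutoff; the
    reference code also masks same-class pairs, omitted in this sketch.
    """
    d = dist.clamp(min=cutoff, max=nz_cutoff)   # keep the log terms finite
    log_w = (2.0 - dim) * d.log() \
            - 0.5 * (dim - 3) * (1 - 0.25 * d.pow(2)).log()  # log 1/q(d)
    w = torch.exp(log_w - log_w.max())          # numerically stabilised
    w = w * (dist < nz_cutoff).float()          # only loss-active negatives
    w.fill_diagonal_(0)                         # never sample the anchor
    return torch.multinomial(w, 1).squeeze(1)   # one negative per row
```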

Automatically learning the boundary beta in the Margin Loss does not help.

The margin alpha in the Margin Loss is a sensitive hyperparameter (see the sketch below).
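For reference, the Margin Loss itself is compact enough to sketch (same hypothetical PyTorch setting as above, with illustrative defaults): l = max(0, alpha + y * (d - beta)), where y = +1 for positive pairs and -1 for negatives, beta is the boundary the paper makes learnable, and alpha is the margin.

```python
import torch

def margin_loss(d, y, beta=1.2, alpha=0.2):
    """Margin loss: relu(alpha + y * (d - beta)), averaged over pairs.

    d:     (P,) L2 distances of the sampled pairs
    y:     (P,) +1 for positive pairs, -1 for negative pairs
    beta:  positive/negative boundary (learnable in the paper, though
           the note above found that learning it does not help)
    alpha: the margin, noted above to be a sensitive hyperparameter
    """
    return torch.relu(alpha + y * (d - beta)).mean()
```

The paper normalises by the number of pairs with non-zero loss; a plain mean is used here for brevity.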