When: **every 3rd Monday of the month, 10pm**

If you’re interested in more details about the Journal Club, please subscribe to the mailing list at https://lists.fz-juelich.de/mailman/listinfo/julain_journal_club

## Monday 18 May 10-11:30am - Self-Supervised Visual Representation Learning
Venue: JSC meeting room 2, building 16.3, room 315

* T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, A Simple Framework for Contrastive Learning of Visual Representations <br>
http://arxiv.org/abs/2002.05709
* L. Jing and Y. Tian, Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey, CVPR 2019 <br>
http://arxiv.org/abs/1902.06162

The first paper presents a state-of-the-art approach for self-supervised learning of strong visual features based on contrastive learning. Random data augmentations are applied to images from the ImageNet dataset, and a model is trained so that differently augmented views of the same image produce similar representations. The second paper revisits several self-supervised training techniques for visual representation learning and offers a good overview of the different approaches.
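As a quick illustration of the contrastive objective used in the first paper, here is a minimal numpy sketch of the NT-Xent loss from SimCLR. Variable names and the naive (non-numerically-stabilized) log-sum-exp are our own simplifications, not the paper's reference code:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy) loss.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # index of the positive partner for each of the 2N embeddings
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # cross entropy of each row's softmax, evaluated at the positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * N), pos]).mean()
```

Each embedding is attracted to its augmented partner and repelled from the other 2N-2 embeddings in the batch, which is why SimCLR benefits from large batch sizes.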
## Monday 20 April 10-11:30am - Learning discrete representations from data

Venue: JSC meeting room 2, building 16.3, room 315

* A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural Discrete Representation Learning, NeurIPS 2017<br>
https://arxiv.org/abs/1711.00937
* A. Razavi, A. van den Oord, O. Vinyals, Generating Diverse High-Fidelity Images with VQ-VAE-2, NeurIPS 2019<br>
https://arxiv.org/abs/1906.00446

These two papers are about learning discrete representations from data, taking inspiration from vector quantization. Learning discrete representations with neural networks is challenging, but such representations can be helpful for tasks such as compression, planning, and reasoning, and are potentially more interpretable than continuous ones. Both papers use the learned discrete representations to build autoregressive generative models for images, sound, and video. The second paper (Generating Diverse High-Fidelity Images with VQ-VAE-2) is a follow-up to the first (Neural Discrete Representation Learning), scaling the models to bigger datasets and images (up to 1024x1024 resolution).
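The core of the VQ-VAE bottleneck is a nearest-neighbour lookup in a learned codebook. A minimal numpy sketch of that lookup (training details such as the straight-through gradient and the commitment loss are omitted):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous latent in z (N, D) to its nearest codebook entry.
    codebook: (K, D) array of learned code vectors.
    Returns the quantized vectors and the chosen discrete indices."""
    # squared Euclidean distance between every latent and every code: (N, K)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)        # nearest code per latent -> discrete symbol
    return codebook[idx], idx
```

The discrete indices are what the autoregressive prior in both papers is trained on; the decoder only ever sees the quantized vectors.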
## Canceled: Monday 16 March 10-11:30am - Full-Resolution Residual Networks for Semantic Segmentation

**Replacement date: Monday 30 March 10-11:30am**
full paper and supplementary material available at https://hubert0527.github.io/

* Intro Slides by Mickael Cormier available [[here](https://gitlab.version.fz-juelich.de/MLDL_FZJ/General_Wiki/blob/master/files/JournalClub/20191216_gan.pptx)]
## Archive paper proposals, not read yet
### Open

* 22.1.2020 Christian Schiffer<br>
**Full-resolution networks for semantic segmentation**<br>
Most common architectures for semantic segmentation, consisting of an encoder and a decoder part (e.g. U-Net), heavily reduce the spatial dimensions of input images and may lose important details or fail to localize precisely. The proposed papers present full-resolution networks, which try to preserve high-resolution features throughout the network and improve localization accuracy.<br>
* Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes (CVPR'17 Oral)<br>
T. Pohlen, A. Hermans, M. Mathias, and B. Leibe<br>
Paper: http://arxiv.org/abs/1611.08323<br>
Code: https://github.com/TobyPDE/FRRN
* Deep High-Resolution Representation Learning for Visual Recognition, Wang et al. 2019<br>
Paper: https://arxiv.org/abs/1908.07919<br>
Code: https://paperswithcode.com/paper/deep-high-resolution-representation-learning-2
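The two-stream idea behind these full-resolution networks can be shown at shape level. Below is a toy numpy sketch (not the papers' architecture): identity "processing", average pooling, and nearest-neighbour upsampling stand in for the real convolutional blocks, but the key point survives: a full-resolution stream is kept alongside the pooled stream and updated by residuals, so fine spatial detail is never thrown away.

```python
import numpy as np

def pool2(x):
    """2x average pooling of a (H, W) feature map."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def unpool2(x):
    """Nearest-neighbour 2x upsampling (inverse resolution change of pool2)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def frru_step(full_res, pooled):
    """One simplified full-resolution residual unit: the pooled stream
    receives information from the full-resolution stream, and a residual
    is added back at full resolution."""
    pooled = pooled + pool2(full_res)      # pooled stream sees full-res info
    full_res = full_res + unpool2(pooled)  # residual onto the full-res stream
    return full_res, pooled
```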
* ~~17.12.2019 Christian Schiffer~~<br>
==> updated by 22.1.2020
* ~~Deep High-Resolution Representation Learning for Visual Recognition <br>
https://arxiv.org/abs/1908.07919, Code: https://paperswithcode.com/paper/deep-high-resolution-representation-learning-2<br>
Human Pose Estimation + Semantic Segmentation + Object detection; SOTA Cityscapes Val, #3 best model PASCAL Context~~
* ~~Multi-Scale Dense Networks for Resource Efficient Image Classification<br>
https://arxiv.org/abs/1703.09844~~
* 17.12.2019 Hanno Scharr
* A General and Adaptive Robust Loss Function<br>
Jonathan T. Barron, Google Research<br>
http://openaccess.thecvf.com/content_CVPR_2019/papers/Barron_A_General_and_Adaptive_Robust_Loss_Function_CVPR_2019_paper.pdf
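Barron's loss is a single closed-form expression with a shape parameter α that interpolates between familiar robust losses (α=2: L2, α=0: Cauchy, α=-2: Geman-McClure). A minimal numpy sketch, with the α=0 and α=2 limits handled as special cases:

```python
import numpy as np

def robust_loss(x, alpha, c=1.0):
    """General robust loss rho(x, alpha, c) of Barron (CVPR 2019).
    alpha controls robustness; c is the scale of the quadratic bowl near 0."""
    xc2 = (x / c) ** 2
    if alpha == 2.0:                 # limit: ordinary L2 loss
        return 0.5 * xc2
    if alpha == 0.0:                 # limit: Cauchy / Lorentzian loss
        return np.log1p(0.5 * xc2)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((xc2 / b + 1.0) ** (alpha / 2.0) - 1.0)
```

Because α appears as an ordinary continuous parameter, it can itself be optimized during training, which is the "adaptive" part of the paper.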
A training schedule using filter pruning and orthogonal reinitialization
---

last change: 5.4.2020 sw