[Home] [Activities]
A campus-wide Journal Club covering topics in machine and deep learning. Everybody is cordially invited to join!
We discuss recent papers from the field, e.g., new network architectures and their applications. For each meeting we pick two papers; everybody reads them and prepares their questions and comments. In the Journal Club there will be a short intro to the papers, followed by a joint discussion. Each participant may propose papers to read next; the final selection is made through a vote.
When: every 3rd Monday of the month, 10pm
Where: Virtual Meetings using BigBlueButton
Please confirm your participation in the Journal Club at https://terminplaner4.dfn.de/HRLschH1216FC6zr (and delete your entry again if it turns out you cannot make it).
If you’re interested in more details about the Journal Club, please subscribe to the mailing list at https://lists.fz-juelich.de/mailman/listinfo/julain_journal_club
Next Meeting
19 October 2020 Self-Supervised Visual Representation Learning
Virtual Meeting using BigBlueButton
- A Simple Framework for Contrastive Learning of Visual Representations
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, 2020
http://arxiv.org/abs/2002.05709
15 pages, incl. appendix
- Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey
L. Jing and Y. Tian, CVPR2019
http://arxiv.org/abs/1902.06162
21 pages
The first paper presents a state-of-the-art approach for self-supervised learning of strong visual features based on contrastive learning. Random data augmentation is applied to images from the ImageNet dataset, and a model is trained to match differently augmented views of the same image. The second paper revisits several self-supervised training techniques for visual representation learning and offers a nice overview of different approaches.
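To make the contrastive idea concrete, here is a minimal NumPy sketch of the normalized temperature-scaled cross-entropy (NT-Xent) objective used in the first paper; function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss sketch (SimCLR-style).
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = (z @ z.T) / temperature                      # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive partner of sample i is i+n (and i-n for the second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # cross-entropy of the positive against all other samples
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

The loss is low when the two views of each image are embedded close together and all other pairs are pushed apart.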
Schedule for upcoming Meetings
tbd
Past Meetings
21 September 2020 - Attention Networks
Virtual meeting
- Attention Is All You Need
Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser, Polosukhin, NIPS 2017
https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
9 pages
- Self-Attention Generative Adversarial Networks
Zhang, Goodfellow, Metaxas, Odena, ICML 2019
https://arxiv.org/abs/1805.08318
8 pages
17 August 2020 - Canceled
20 July 2020 - Summer break
15 June 2020 - Generative Adversarial Networks (and VAE)
Virtual meeting
- Original GAN paper: Generative Adversarial Networks
Goodfellow, Bengio et al., NIPS 2014
https://arxiv.org/abs/1406.2661
https://papers.nips.cc/paper/5423-generative-adversarial-nets.html
8 pages
- On relating GANs and VAEs: On Unifying Deep Generative Models
Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing, ICLR 2018
https://arxiv.org/abs/1706.00550
https://openreview.net/forum?id=rylSzl-R-
16 pages, incl. appendix
Discussion notes: https://gitlab.version.fz-juelich.de/codiMD/f2QaNtzhSrmwg0qmKPG2PA#
Monday 18 May - Variational Autoencoders
- Original VAE paper: Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling, ICLR 2014
https://arxiv.org/abs/1312.6114
https://openreview.net/forum?id=33X9fd2-9FyZd
14 pages, incl. appendix
- Recent VAE review / tutorial: An Introduction to Variational Autoencoders
Diederik P. Kingma, Max Welling, 2019, Foundations and Trends in Machine Learning, 12, 307-392, doi:10.1561/2200000056
https://arxiv.org/abs/1906.02691
86 pages
Discussion notes: https://gitlab.version.fz-juelich.de/codiMD/9pd1RHfHTTqAB7XqgX-O2A
Monday 20 April - Learning discrete representations from data
- A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural Discrete Representation Learning, NeurIPS 2017 https://arxiv.org/abs/1711.00937
- A. Razavi, A. van den Oord, O. Vinyals, Generating Diverse High-Fidelity Images with VQ-VAE-2, NeurIPS 2019 https://arxiv.org/abs/1906.00446
These two papers are about learning discrete representations from data, taking inspiration from vector quantization. Learning discrete representations with neural networks is challenging, but such representations can be helpful for tasks such as compression, planning, and reasoning, and are potentially more interpretable than continuous ones. Both papers use the learned discrete representations to build autoregressive generative models on images, sound, and video. The second paper (Generating Diverse High-Fidelity Images with VQ-VAE-2) is essentially a sequel to the first (Neural Discrete Representation Learning), scaling the models to bigger datasets and images (up to 1024x1024 resolution).
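The core quantization step is easy to sketch: each encoder output is replaced by its nearest codebook vector, and the resulting index is the discrete code. A minimal NumPy illustration (names are our own, not from the papers' code):

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Nearest-neighbour codebook lookup as in VQ-VAE.
    z_e: (N, D) encoder outputs; codebook: (K, D) embedding vectors.
    Returns the discrete codes and the quantized vectors."""
    # squared distances between every encoder output and every codebook entry
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    codes = d.argmin(axis=1)   # discrete representation: integers in [0, K)
    z_q = codebook[codes]      # quantized continuous vectors
    # During training the argmin is non-differentiable; the papers use a
    # straight-through estimator: z_q = z_e + stop_gradient(z_q - z_e)
    return codes, z_q
```

An autoregressive prior (PixelCNN in the papers) is then trained over the integer codes rather than over raw pixels.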
Monday 30 March - Full-Resolution Residual Networks for Semantic Segmentation
Replacement date for the meeting canceled on Monday 16 March
- Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes (CVPR'17 Oral)
T. Pohlen, A. Hermans, M. Mathias, and B. Leibe
Paper: http://arxiv.org/abs/1611.08323
Code: https://github.com/TobyPDE/FRRN
- Deep High-Resolution Representation Learning for Visual Recognition
Wang et al., 2019
Paper: https://arxiv.org/abs/1908.07919
Code: https://paperswithcode.com/paper/deep-high-resolution-representation-learning-2
Most common architectures for semantic segmentation, consisting of an encoder and a decoder part (e.g., U-Net), heavily reduce the spatial dimension of the input images and may lose important details or fail to localize precisely. The proposed papers present full-resolution networks, which try to preserve high-resolution features throughout the network and thus improve localization accuracy.
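The two-stream idea behind these architectures can be sketched very compactly: one stream stays at input resolution for localization while a pooled stream gathers context, and the streams exchange information. The following NumPy toy (our own simplification, not the papers' actual layers) shows one such exchange:

```python
import numpy as np

def pool2x(x):
    """2x2 average pooling on an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def two_stream_step(full_res, pooled):
    """One exchange between a full-resolution (residual) stream and a
    downsampled (pooling) stream, in the spirit of FRRN: the pooled
    stream gathers context, the full-resolution stream keeps localization."""
    pooled = pooled + pool2x(full_res)        # pass details down
    full_res = full_res + upsample2x(pooled)  # pass context back up
    return full_res, pooled
```

In the real networks, learned convolutions replace the bare pooling/upsampling, and many such exchanges are stacked at several scales.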
Monday 17 February - Speech Recognition
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
Amodei et al., 2015
https://arxiv.org/abs/1512.02595
Not the most recent paper on speech recognition, but a breakthrough for language modelling and thus worth reading. It discusses several topics relevant for time-series analysis and also highlights good use of HPC. We could continue with the latest papers on this topic at the next Journal Club.
- And, if needed, as a background paper on deep recurrent networks: Speech Recognition with Deep Recurrent Neural Networks
Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton, 2013
https://arxiv.org/abs/1303.5778
Monday 20 January
- Multi-Context Recurrent Neural Networks for Time Series Applications https://publications.waset.org/3524/pdf
- Global Sparse Momentum SGD for Pruning Very Deep Neural Networks https://arxiv.org/pdf/1909.12778v3.pdf
Monday 16 December
Our first Journal Club will cover two papers from ICCV 2019 about GANs.
- SinGAN: Learning a Generative Model from a Single Natural Image (Best Paper Award) http://openaccess.thecvf.com/content_ICCV_2019/papers/Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf
- COCO-GAN: Generation by Parts via Conditional Coordinating
Full paper and supplementary material available at https://hubert0527.github.io/COCO-GAN/
- Intro Slides by Mickael Cormier available [here]
Archive: paper proposals not read yet
- 17.12.2019 Hanno Scharr
- A General and Adaptive Robust Loss Function
Jonathan T. Barron, Google Research
http://openaccess.thecvf.com/content_CVPR_2019/papers/Barron_A_General_and_Adaptive_Robust_Loss_Function_CVPR_2019_paper.pdf
Simple but effective for improving accuracy in regression tasks
- RePr: Improved Training of Convolutional Filters
Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang, Brandeis and Microsoft
https://arxiv.org/abs/1811.07275
A training schedule using filter pruning and orthogonal reinitialization
last change: 18.5.2020 sw