__Date:__ 8 May 2019, 14:00 - 16:30
|
|
|
|
|
The SimLab Neuroscience serves as a bridge between neuroscience and high performance computing (HPC) by providing high-level, community-oriented support and performing methodological research. The SimLab is an interdisciplinary team of scientists and engineers with complementary backgrounds and skills, dedicated to supporting neuroscientists in using HPC and data resources for their research. With constant in-house development of techniques for simulation and data analysis, including machine learning approaches, the team brings the latest technology to collaborative projects with national and international partners.
|
|
|
|
|
|
|
|
|
### 14:10 - 14:30 <br>Andreas Herten: ML/DL on Supercomputers – An Introduction to the JURON Machine and Getting Started with Deep Learning on Supercomputers
|
|
|
|
|
|
In addition to JUWELS and JURECA, the JSC operates a smaller supercomputer called JURON.
|
|
|
Started as a prototype for the Human Brain Project, it is now administered for public use by staff from the lab. With 18 nodes, JURON is smaller than the other systems installed at JSC, but it features unique technology well suited to Deep Learning applications. In this talk we'll introduce some of JSC's supercomputers (especially JURON) and then present our tutorial on getting started with ML/DL on supercomputers.
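To give a flavour of what "deep learning on a supercomputer" means in practice, here is a minimal, self-contained sketch of data-parallel training, the pattern typically used to scale training across nodes. Everything here is a toy stand-in (a one-parameter model, simulated workers); on a real system a framework such as Horovod or MPI would handle the communication.

```python
# Conceptual sketch of data-parallel training: each simulated "worker"
# computes a gradient on its shard of the batch, the gradients are
# averaged (the all-reduce step), and the shared weight is updated.

def gradient(w, shard):
    # Toy loss: mean squared error of the 1-parameter model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    shards = [batch[i::n_workers] for i in range(n_workers)]  # split the batch
    grads = [gradient(w, s) for s in shards]                  # one per worker
    avg = sum(grads) / n_workers                              # "all-reduce"
    return w - lr * avg                                       # synchronized update

# Fit y = 3x from noise-free samples.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, data, n_workers=4)
# w converges to 3.0
```

Because the shards are equally sized, the averaged gradient equals the full-batch gradient, so the parallel update matches serial training exactly; this is the property that makes the pattern attractive on multi-node machines.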
|
|
|
|
|
|
|
|
|
### 14:30 - 14:50 <br>Kai Krajsek: The Helmholtz Analytics Toolkit (HeAT) - A Scientific Big Data Library for HPC
|
|
|
|
|
|
This talk presents HeAT, an in-house developed scientific big data library supporting transparent computation on HPC systems. HeAT builds on top of PyTorch, which already provides many of the required features: automatic differentiation, CPU and GPU support, linear algebra operations, basic MPI functionality, and an imperative programming paradigm that allows the fast prototyping essential in scientific research. These features are generalized to a distributed tensor with a NumPy-like interface, making it possible to port existing NumPy algorithms to HPC systems almost effortlessly.
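The central idea, a tensor split along one dimension across MPI processes while operations keep a global, NumPy-like view, can be illustrated in a few lines. This is a pure-Python sketch, not HeAT's actual API; the real library distributes PyTorch tensors via MPI.

```python
# Conceptual sketch of a "split" tensor: data is chunked row-wise across
# simulated ranks; reductions combine rank-local results, while
# element-wise operations need no communication at all.

class SplitTensor:
    """A 1-D 'tensor' chunked across n_procs simulated ranks."""

    def __init__(self, data, n_procs):
        step = -(-len(data) // n_procs)  # ceil division: chunk size per rank
        self.chunks = [data[i:i + step] for i in range(0, len(data), step)]

    def sum(self):
        # Each rank sums its local chunk; an all-reduce combines the results.
        return sum(sum(c) for c in self.chunks)

    def map(self, fn):
        # Element-wise ops stay rank-local: no communication needed.
        out = SplitTensor.__new__(SplitTensor)
        out.chunks = [[fn(v) for v in c] for c in self.chunks]
        return out

t = SplitTensor(list(range(10)), n_procs=4)
total = t.sum()                          # 0 + 1 + ... + 9 = 45
doubled = t.map(lambda v: 2 * v).sum()   # 90
```

In HeAT itself the same computation looks roughly like `ht.arange(10, split=0).sum()`, where `split=0` requests distribution along the first axis.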
|
|
|
|
|
|
|
|
|
|
|
|
### 14:50 - 15:10 <br>Break
|
|
|
|
|
|
|
|
|
|
|
|
### 15:10 - 15:30 <br>Fahad Khalid: Deep networks for brain tissue identification in ultra high-resolution images
|
|
|
|
|
|
The Three-dimensional Polarized Light Imaging (3D-PLI) technology is used to capture high-resolution images of thinly sliced segments of post-mortem brains. These images are then stacked to reconstruct the brain in 3D, which enables the tracking of individual nerve fibers through the entire brain. We are investigating the application of deep Convolutional Neural Networks (CNNs) for the precise demarcation of the highly irregular border between the brain tissue and the background in each image. In this talk, we’ll present the challenges this problem poses and the solutions explored so far.
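One common way to apply a CNN to this kind of problem is patch-based segmentation: classify each pixel from the small patch centred on it, producing a binary mask whose boundary is the tissue/background border. The sketch below shows the pattern; the "classifier" is a hypothetical stand-in (a simple mean-intensity threshold), not the CNN from the talk.

```python
# Patch-based tissue/background segmentation sketch: one classification
# per pixel, each based on the pixel's local neighbourhood.

def patch(img, r, c, k=1):
    # k=1 gives a 3x3 neighbourhood, clipped at the image border.
    rows = range(max(0, r - k), min(len(img), r + k + 1))
    return [v for rr in rows for v in img[rr][max(0, c - k):c + k + 1]]

def classify(p, threshold=0.5):
    return 1 if sum(p) / len(p) > threshold else 0  # 1 = tissue, 0 = background

def segment(img):
    return [[classify(patch(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

# Tiny synthetic "slice": bright tissue on the left, dark background right.
img = [[0.9, 0.9, 0.1, 0.0],
       [0.8, 0.9, 0.2, 0.1],
       [0.9, 0.8, 0.1, 0.0]]
mask = segment(img)  # columns 0-1 classified as tissue, 2-3 as background
```

A CNN replaces the threshold with a learned decision function, which is what makes it possible to follow a highly irregular border that no global threshold captures.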
|
|
|
|
|
|
|
|
|
### 15:30 - 15:50 <br>Fahad Khalid: Comparing internal dynamics of artificial recurrent neural networks with biologically plausible models of neural circuits
|
|
|
|
|
|
For a given cognitive task, what are the differences and similarities between the solutions employed by the brain and the models engineered using deep artificial neural networks? By carrying out direct and systematic comparisons between biologically inspired spiking neural network (SNN) models and state-of-the-art artificial neural networks (ANNs), we hope to gain insight into the nature and types of solutions that different systems find for the same problem domains, and to use this insight to improve the current understanding of fundamental principles of neural computation and cognitive processing. In this talk, we’ll present our approach to this research question, with a focus on artificial recurrent neural networks for symbolic sequence processing.
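The object of comparison on the artificial side is the internal state trajectory of a recurrent network as it consumes a symbol sequence. A minimal sketch of such a network, with hand-picked toy weights rather than trained ones, looks like this:

```python
import math

# Minimal vanilla recurrent network over a symbolic sequence:
#   h_t = tanh(W * x_t + U * h_{t-1})
# The recorded state trajectory is the kind of internal dynamics that
# can be analysed and compared against spiking-network activity.

def run_rnn(sequence, vocab, W=1.5, U=0.8):
    h = 0.0
    trajectory = []            # internal dynamics, one state per symbol
    for symbol in sequence:
        x = vocab[symbol]      # scalar encoding of the symbol (toy choice)
        h = math.tanh(W * x + U * h)
        trajectory.append(h)
    return trajectory

vocab = {"A": 1.0, "B": -1.0}
traj = run_rnn("AABBA", vocab)  # bounded hidden states, one per input symbol
```

In the actual comparisons the trajectories come from trained ANNs and from biologically plausible SNN simulations, and the analysis asks whether the two systems traverse similar state-space structures while solving the same sequence task.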
|
|
|
|
|
|
|
|
|
### 15:50 - 16:10 <br>Sandra Diaz: L2L on High Performance Computing (HPC) - JUPeX
|
|
|
|
|
|
The effective use of supercomputers to understand the brain is one of the key endeavors in the Human Brain Project. To ease the optimization of brain models, the Graz University of Technology and the Jülich Supercomputing Centre are developing JUPeX, based on a machine-learning technique known as Learning to Learn (L2L). The software helps scientists explore, optimize and better understand the models they work with every day by enabling parameter space exploration and optimization on HPC. Although it is currently used mostly for neuroscience, the software and methodology are applicable to all scientific domains. JUPeX is integrated into the L2L framework, which can be downloaded here: https://github.com/IGITUGraz/L2L.
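The exploration pattern behind this can be sketched in a few lines: an outer-loop optimizer proposes parameter sets, an inner loop evaluates each candidate as an independent simulation (on HPC, one job per candidate), and the fitness results drive the next generation. The fitness function and the simple hill-climbing rule below are toy stand-ins, not the L2L framework's actual optimizers.

```python
import random

# Outer-loop parameter-space exploration sketch: propose, evaluate in
# parallel, keep the fittest, repeat.

def fitness(params):
    # Toy inner loop: pretend the 'model' performs best at (2.0, -1.0).
    x, y = params
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def optimize(generations=60, pop_size=20, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best = (rng.uniform(-5, 5), rng.uniform(-5, 5))
    for _ in range(generations):
        # Outer loop: propose a population around the current best ...
        pop = [(best[0] + rng.gauss(0, sigma), best[1] + rng.gauss(0, sigma))
               for _ in range(pop_size)]
        # ... evaluate every candidate (each would be one HPC job) ...
        # ... and keep the fittest individual for the next generation.
        best = max(pop + [best], key=fitness)
    return best

best = optimize()  # converges towards (2.0, -1.0)
```

The appeal on HPC is that each generation's evaluations are embarrassingly parallel, so a supercomputer can sweep large parameter spaces that would be infeasible serially.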
|
|
|
|
|
|
|
|
|
### 16:10 - 16:30 <br>Kai Krajsek: Inverse Modeling of Brain Microstructures by Deep Learning
|
|
|
|
|
|
Microstructures in the brain can be estimated by inverting models that relate the brain microstructure to a measurable MRI signal. Established inversion methods based on variational optimization or probabilistic estimation theory require a closed-form forward model. This requirement is not met by modern fine-grained models that simulate the DTI signals of brain microstructures via stochastic processes. This talk discusses alternative inversion methods based on Deep Learning.
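The key trick of such learned inversion is that the forward model only ever needs to be *sampled*, never inverted analytically: sample parameters, simulate signals, and fit a regressor from signal back to parameter. In this sketch the forward model and the 1-nearest-neighbour "regressor" are toy stand-ins for the stochastic simulator and the deep network.

```python
import random

# Learned inverse modeling sketch: build (signal, parameter) pairs by
# sampling the forward model, then regress parameter from signal.

def forward(theta):
    # Toy forward model: monotone and nonlinear. No closed-form inverse
    # is used anywhere below -- we only sample the model.
    return theta ** 3 + 0.5 * theta

def train_inverse(n=2000, seed=1):
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        theta = rng.uniform(-2.0, 2.0)         # sample microstructure params
        pairs.append((forward(theta), theta))  # simulate the signal
    def invert(signal):                        # 1-NN regression: signal -> theta
        return min(pairs, key=lambda p: abs(p[0] - signal))[1]
    return invert

invert = train_inverse()
theta_hat = invert(forward(1.3))  # recovers theta close to 1.3 from its signal
```

Replacing the nearest-neighbour lookup with a deep network gives a smooth, fast inverse mapping that generalizes between training samples, which is exactly what the variational and probabilistic approaches cannot provide without a closed-form forward model.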
|
|
|
|