Agenda

Review – Deep Learning World Berlin 2018

November 12, 2018 – Estrel Hotel Berlin


Monday, 12th November 2018

8:00 am
Registration
9:00 am
Room: Salon Paris
Martin Szugat
Founder & Managing Director
Datentreiber GmbH
9:15 am
Room: Salon Paris
Keynote:
The keynote description will be available shortly.
Speaker
Prof. Dr. Sven Crone
Lecturer, CEO and Founder
iqast
9:55 am
Short break
10:00 am
Room: Salon Paris
Session:

A cognitive sensor is meant to perceive the way humans do, using capabilities such as hearing, seeing or feeling. In this scenario we used audio classification to detect machine defects: a standard microphone served as the sensor, and we trained a model to recognize the defects. We adapted the image classification process to audio processing, and the results were quite impressive.

This talk will provide insights into the approach, how we trained the DNN, and the results.
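For readers who want a concrete picture of that adaptation, here is a minimal sketch of the audio-to-image step. The synthetic signal, parameters, and the invented high-frequency "defect" are illustrative assumptions, not the speakers' actual pipeline:

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_image(wave, fs=16000, nperseg=256):
    """Turn a 1-D audio signal into a 2-D log-spectrogram 'image'
    that a standard image-classification CNN could consume."""
    freqs, times, sxx = spectrogram(wave, fs=fs, nperseg=nperseg)
    return np.log1p(sxx)  # log scaling, commonly used for audio

fs = 16000
t = np.arange(fs) / fs                                  # one second of audio
healthy = np.sin(2 * np.pi * 440 * t)                   # toy "healthy machine" tone
defect = healthy + 0.5 * np.sin(2 * np.pi * 6000 * t)   # invented high-frequency rattle

img_healthy = audio_to_image(healthy, fs)
img_defect = audio_to_image(defect, fs)

# The defect shows up as extra energy in the high-frequency rows of the
# "image": exactly the kind of pattern an image CNN can learn to spot.
half = img_healthy.shape[0] // 2
print(img_defect[half:].mean() > img_healthy[half:].mean())  # True
```

Once the audio is an "image", the rest of the workflow (augmentation, CNN training, evaluation) proceeds as in ordinary image classification.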

Speakers
Johannes Kortmann
Project Manager
Bilfinger Digital Next
Dr. Marcel Tilly
Professor
Rosenheim University of Applied Sciences
10:30 am
Coffee Break
11:00 am
Room: Salon Paris
Session:

In 2015, CNNs surpassed human-level performance for the first time in the yearly ImageNet image classification challenge. This marked a turning point in object recognition within visual data. Since then, further major advances have been made on both the algorithmic and the hardware side, and it has never been easier to set up and run a CNN. dida Datenschmiede has made use of these technological advancements in various projects. The presentation will provide you with insights into how object recognition projects are planned, which services are used to (technically) set up a project, what the major obstacles and solutions are, and what practical applications look like for clients.

Speaker
Philipp Jackmuth
MD
dida Datenschmiede
11:45 am
Short break
11:50 am
Room: Salon Paris
Session:

Alongside supervised and unsupervised learning, reinforcement learning is a major machine learning technology with huge potential across a broad field of applications such as robotics, autonomous driving, gaming and general control. This talk describes its major concepts, algorithms and software environments and gives a detailed overview of its capabilities. It addresses people already familiar with other AI/ML technologies and therefore assumes a good technical background.
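To make one of the core algorithms concrete, here is a minimal tabular Q-learning sketch on an invented five-state corridor; the environment, reward and hyperparameters are illustrative assumptions, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2          # corridor 0..4; actions: 0 = left, 1 = right
goal = n_states - 1                 # reward 1.0 for reaching the right end
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))

for _ in range(500):                # episodes
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: bootstrap from the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy walks right in every non-terminal state.
print(np.argmax(Q[:goal], axis=1))  # [1 1 1 1]
```

The same update rule scales from this toy corridor to the deep variants (DQN and successors) used in robotics and gaming, where a neural network replaces the Q-table.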

Speaker
Norbert Kraft
Research & Technology
Nokia Bell Labs
12:30 pm
Lunch Break
1:30 pm
Room: Salon Paris
Session:

The best mathematical representations of dynamical systems are state space models. For the approximation of such systems with recurrent neural networks we have a universal approximation theorem, similar to the universal approximation theorem for feedforward networks. To identify a model in this framework we can rely on data and on additional a priori insights into the class of dynamical systems. After some comments on the learning of recurrent networks we will study open (small) systems. By construction an open system is small, because there exists an environment which influences the system. In contrast, we will study closed (large) dynamical systems; in principle these are world models, because there are no influences from an outside world. Another class are dynamical systems on manifolds, which allow the description of high-dimensional systems if the real dynamics stay on a low-dimensional manifold of the description space. If we apply these insights to forecasting, we also have to say something about uncertainty. We will end with a discussion of the differences between causality, determinism and uncertainty. The talk shows the relevance of these theories in real-world examples such as demand, load and commodity price forecasting. For a more detailed treatment, see the full-day workshop on Nov 15.
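A minimal sketch of the state space view, with a tanh recurrence standing in for the learned recurrent network; the dimensions and (random) weights are illustrative, since in practice A, B and C would be identified from data:

```python
import numpy as np

rng = np.random.default_rng(1)

# State space model:  s_{t+1} = tanh(A s_t + B u_t),   y_t = C s_t
# The tanh recurrence is the recurrent-network approximation of the
# (unknown) state transition.
dim_s, dim_u, dim_y = 4, 2, 1
A = rng.normal(scale=0.5, size=(dim_s, dim_s))
B = rng.normal(scale=0.5, size=(dim_s, dim_u))
C = rng.normal(scale=0.5, size=(dim_y, dim_s))

def unroll(u_seq, s0=None):
    """Run the system: inputs u_t drive the hidden state s_t."""
    s = np.zeros(dim_s) if s0 is None else s0
    outputs = []
    for u in u_seq:
        s = np.tanh(A @ s + B @ u)
        outputs.append(C @ s)
    return np.array(outputs)

# Open (small) system: an external environment u_t influences the state.
u_seq = rng.normal(size=(10, dim_u))
y = unroll(u_seq)
print(y.shape)  # (10, 1)

# Closed (large) system: no external input, the dynamics are a
# self-contained "world model" evolving from an initial state.
y_closed = unroll(np.zeros((10, dim_u)), s0=rng.normal(size=dim_s))
```

The open/closed distinction from the abstract corresponds directly to whether the input term B u_t is present or zeroed out.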

Speaker
Dr. Hans-Georg Zimmermann
Senior Research Scientist
Fraunhofer Gesellschaft
2:15 pm
Short break
2:20 pm
Room: Salon Paris
Session:

Transfer learning is a deep learning technique that uses pre-trained networks as starting points for training domain-specific classifiers. This allows powerful baseline deep learning models to be built virtually out of the box for almost any domain, from medical images such as X-rays to industrial optical images or satellite imagery. It can be further generalized to non-image datasets such as IoT sensor data by treating them as multichannel 1-D images. We use GPU-enabled Deep Learning Virtual Machines available on the Microsoft Azure AI platform to show how engineers can leverage open-source deep learning frameworks like Keras to build end-to-end intelligent signal classification solutions.
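The talk's pipeline uses Keras on Azure; as a framework-free illustration of the core idea, freezing a feature extractor and training only a small classifier head, here is a NumPy sketch in which a fixed random projection stands in for the pre-trained network. All data and parameters here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "signal" data: 200 samples, 20 raw features.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)), rng.normal(1.0, 1.0, (100, 20))])
y = np.array([0.0] * 100 + [1.0] * 100)

# Stand-in for a pre-trained network: a frozen (here random) feature
# extractor. Transfer learning keeps these weights fixed and trains
# only a small head on top of the extracted features.
W_frozen = rng.normal(size=(20, 50))
features = np.maximum(X @ W_frozen, 0.0)            # frozen ReLU features
features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

# Train only the head: logistic regression by gradient descent.
w, b = np.zeros(50), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    w -= 0.1 * features.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(accuracy)
```

In a real Keras workflow the frozen random projection would be replaced by a pre-trained backbone (e.g. an ImageNet model with its layers set non-trainable), but the division of labor is the same: fixed features, trainable head.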

Speaker
Dr. George Iordanescu
Algorithm Specialist
Microsoft
3:00 pm
Coffee Break
3:30 pm
Room: Salon Paris
Session:

Machine learning thrives on large, well-organized and labeled training data sets. Big Data, the large data sets collected in the real world, often is not. Such data sets require unsupervised learning approaches that help us discover the inherent structure in the data and visualize it. I'll discuss a statistical learning approach based on mixture models and naive Bayes classifiers to find clusters in binary feature vectors. By arranging the classifiers topologically, one can impose a spatial structure and visualize large data sets in a way similar to self-organizing maps. Such maps can help us understand the messy real-world data appearing in many Big Data analyses.
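The clustering approach can be sketched as expectation-maximization for a mixture of Bernoulli (naive Bayes) components over binary feature vectors. The data and settings below are synthetic illustrations of the technique, not the speaker's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: two latent groups with opposite feature profiles.
p1 = np.array([0.9] * 5 + [0.1] * 5)
p2 = np.array([0.1] * 5 + [0.9] * 5)
X = np.vstack([rng.random((100, 10)) < p1,
               rng.random((100, 10)) < p2]).astype(float)
n, d = X.shape
K = 2

pi = np.full(K, 1.0 / K)                    # mixing weights
mu = rng.uniform(0.3, 0.7, size=(K, d))     # per-cluster Bernoulli means

for _ in range(50):
    # E-step: responsibilities from each component's log-likelihood
    # (naive Bayes: features independent given the cluster).
    log_lik = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and means from the soft assignments.
    nk = resp.sum(axis=0)
    pi = nk / n
    mu = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)

labels = resp.argmax(axis=1)
# The two generating groups are recovered (up to label permutation).
target = np.array([0] * 100 + [1] * 100)
acc = max(np.mean(labels == target), np.mean(labels == 1 - target))
```

Laying many such components out on a grid and updating neighbors together is what yields the self-organizing-map-style visualizations mentioned in the abstract.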

Speaker
Dr. Christoph Best
Senior Data Scientist
Google
4:15 pm
Short break
4:20 pm
Room: Salon Paris
Closing Keynote:

The renowned Berlin painter Roman Lipski has been working for two years with his Artificial Muse A.I.R., which inspires and augments him in his artistic work and pushes him to new frontiers. Now we present the latest generation of the muse, which is based on generative networks and allows an intuitive and fluid interaction between artist and algorithm. Using Conditional Generative Adversarial Networks (cGANs for short) at its core, A.I.R. translates sketches directly into new inspirations, facets and images. While the algorithm itself is mathematically complex and not easily accessible to human understanding, Lipski's new approach to the muse exemplifies how curious discovery and experimentation can lead to intuitive understanding and ultimately trust: in a new generation of tools, and in artificial intelligence per se. Explainability by interaction, trust by time. In our talk we will take a deep dive into the technical layer and share the lessons we learned at “the in-betweens” of Roman and his Artificial Muse, of human and artificial intelligence. And this is just the beginning...

Speakers
Klaas Willhelm Bollhöfer
Founder & AI Thinker
Birds on Mars
Sebastian Zimmermann
Developer
Birds on Mars
5:15 pm
End of Deep Learning World Berlin 2018