Review – Agenda 2019
Deep Learning World Munich 2019
May 6-7, 2019 – Holiday Inn Munich – City Centre
Deep Learning World - Munich - Day 1 - Monday, 6th May 2019
When attempting to solve AI-based problems we often face barriers to entry: data scarcity, data sparsity, class imbalance, and on-premise data storage requirements are just a few. Recent advancements in deep learning have brought new solutions for practical challenges involving noisy data, data augmentation and generation, visualization, and acceleration. In this talk, David Austin will give a broad overview of recent advancements that can help bring AI-based solutions to market.
Processing 3D images has many use cases: improving autonomous driving, enabling digital conversions of old factory buildings, and supporting augmented-reality solutions for medical surgery, among others. The size of the point cloud, the number of points, its sparse and irregular structure, and the adverse impact of light reflections and (partial) occlusions make point clouds difficult for engineers to process. 3D point cloud processing is increasingly used to solve Industry 4.0 use cases, helping architects, builders and product managers. I will share some of the innovations driving progress in 3D point cloud processing, as well as the practical implementation issues we faced while developing deep learning models to make sense of 3D point clouds.
Model selection using hyperparameter tuning is a necessary technique for obtaining good machine learning and deep learning models. It is especially challenging in deep learning, where the amount of data being preprocessed and the computing-resource footprint are usually much larger. The good news is that the technique is what we call 'embarrassingly parallel', allowing large computing clusters to be used efficiently. This talk will show you how to run your model selection efficiently on commonly available cluster resources with Apache Spark.
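The embarrassingly parallel pattern behind this can be sketched on a single machine with Python's standard library; here ThreadPoolExecutor stands in for a Spark cluster, and the objective function is a hypothetical stand-in for training a model and returning its validation score (on Spark, the same map would run via sc.parallelize(grid).map(evaluate)).

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(params):
    # Hypothetical objective: stands in for training a model with
    # these hyperparameters and returning a validation score.
    lr, batch = params
    return (params, -(lr - 0.01) ** 2 - (batch - 64) ** 2 / 1e4)

# Hyperparameter grid: every configuration is independent of the others.
grid = list(product([0.001, 0.01, 0.1], [32, 64, 128]))

# Because each evaluation is independent ("embarrassingly parallel"),
# the grid can simply be fanned out to parallel workers.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(evaluate, grid))

best_params, best_score = max(results, key=lambda r: r[1])
```

The parameter names and the toy objective are illustrative; the point is only that a plain parallel map over the grid is all the coordination model selection needs.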
Natural language is gaining more and more relevance as an interface between humans and machines. An important challenge for any kind of dialog agent or chatbot is including external knowledge in the conversation with the user. Such systems therefore need to be able to interact with resources like relational databases or NoSQL stores. However, the complexity of natural language makes it hard to capture diverse utterances with a set of pre-defined rules. Instead, we present an approach that leverages deep learning to learn how to query an Elasticsearch index given natural language questions.
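To make the learning target concrete: rather than hand-writing rules, the model is trained to map a question to a structured Elasticsearch query. The index, field names and question below are illustrative assumptions, not the speakers' actual schema; the dict mirrors the Elasticsearch query DSL (a JSON document).

```python
# An illustrative natural-language question a user might ask:
question = "Which employees in Munich joined after 2017?"

# The kind of structured query a trained model could emit for it,
# expressed in the Elasticsearch query DSL (field names hypothetical):
predicted_query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"city": "munich"}},             # "in Munich"
                {"range": {"join_year": {"gt": 2017}}},   # "after 2017"
            ]
        }
    }
}
```

Framing the output as this structured document is what lets a neural model replace brittle rule sets: diverse phrasings of the same question all map to the same query.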
Invoicing globally still calls for paper-based workflows. Berggren, a leading European IPR agency, handles thousands of documents monthly from roughly 200 countries. Patenting invoices, for example, arrive as scanned printouts or attachments, and formats vary widely. Traditionally, NLP techniques such as bag-of-words or named entity recognition are attempted here. Instead, our end-to-end ML solution is semantic: it extracts line items from invoice images and matches them against known reference numbers. How best to leverage deep learning in document processing? Since each organisation's process is unique, extreme data efficiency must drive the choice of ML technique. The finished automation yields a 70% efficiency gain, markedly improves accountants' job satisfaction, and opens new development avenues. Thanks to the API solution model, integration took five days.
In 2018 we were asked to develop software able to judge the legal validity of certain paragraphs within a contract. After exposing lawyers to an early prototype, we realised that we needed to give detailed explanations of how the neural net derives its decisions. As a consequence, we decided to split the problem into different modules, which greatly improved transparency. Now that the software is in production, we would like to share our learnings and also discuss the quality of the predictions.
From the start Deep Learning World has been the place to discuss and share our common problems. These are your people – they understand your situation. Often rated the best session of all, sharing your problems with like-minded professionals is your path to answers and a stronger professional network.
There will be two discussion rounds of 30 minutes each. Choose which topic you want to discuss first and then switch to the second one.
- Beyond Image Recognition: What are the Deep Learning Use Cases with Real Business Impact? (with Prof. Dr. Sven Crone)
- How to Overcome the Real Challenges of Deep Learning: Your Boss' Skepticism, Unclear Targets, Bullshit Data and More. (with Gloria Macia)
Most CCTV video cameras exist as a sort of time machine for insurance purposes. Deep neural networks make it easy to convert video into 'data' which can then be used to trigger real-time anomaly alerts and optimize complex business processes. Deep learning can also be used in academic research to speed up labelling of video recorded from the point of view of animals wearing GoPros. Streaming video to the 'cloud' is not practical for many applications, so we will discuss deploying models at the edge, federated learning and differential privacy. This talk will present some theory of deep neural networks for video applications as well as academic research and several applied real-world industrial examples. We will also briefly discuss Sebenz.ai, a mobile game that creates jobs for people in Africa, who earn money by labelling training data used to train the deep learning models presented in the talk.
Deep Learning World - Munich - Day 2 - Tuesday, 7th May 2019
Image recognition is an essential part of autonomous driving technology. Cars have to recognise a multitude of items in a front camera scene. Deep learning networks are the state-of-the-art modelling approach for making decisions based on the camera image feed. Unfortunately, research has shown that universal perturbations on this image feed can be designed so as to corrupt the networks’ decisions. This fact has strong implications for the security and safety of autonomous cars today. This deep dive explains how perturbations work and whether and how they can be detected in the data. The talk includes a live demonstration where a perturbation is constructed from data and applied to a self-driving car’s street scene.
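The core mechanism can be illustrated on a toy linear classifier. This sketch shows the image-specific variant (a sign-of-gradient, FGSM-style step); universal perturbations extend the same idea by aggregating such steps over many images. All numbers are illustrative.

```python
import numpy as np

# Toy linear "network": score = w . x; score > 0 means, say, "speed limit".
w = np.array([0.5, -1.0, 2.0, -0.5])   # model weights
x = np.array([1.0, -1.0, 1.0, 0.0])    # an input the model classifies correctly

score = w @ x                          # 0.5 + 1.0 + 2.0 = 3.5 > 0

# Adversarial step: move each pixel a small, uniformly bounded amount
# against the gradient of the score (for a linear model, the gradient is w).
eps = 1.0
x_adv = x - eps * np.sign(w)

adv_score = w @ x_adv                  # 3.5 - eps * sum(|w|) < 0: decision flips
```

The perturbation is bounded per pixel yet flips the decision because it aligns every component with the gradient; deep networks are locally close enough to linear for the same attack to transfer.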
With the growing volume of products and information, and a significant rise in the number of users, it becomes increasingly important for companies to search, map and provide users with the relevant pieces of information or products according to their preferences and tastes. Let's talk about deep learning approaches in recommender systems, which are gaining more and more popularity, their advantages and disadvantages, and the specific scenarios in which they are most effective.
Finding anomalies is only one part of a comprehensive repair/maintenance solution. Not all anomalies are problems and not all problems need to be fixed. Understanding the context behind a sensor reading — where anomalies come from, what they mean, and what needs to be done about them — requires extracting intent data from other sources, be they historical service orders, OEM manuals, or human heuristics. This gives symptom and resolution information that guides the process from complaint to correction. This presentation covers our AI models that enable intent discovery in one of the noisiest domains: automotive. Learn how we made sense of data coming from 70,000+ different vehicle makes and models.
Recommendation systems (RecSys) are now fully integrated into users' daily experience, helping them discover content on digital platforms. Jonathan Greve and Sébastien Foucaud will compare the RecSys implemented at XING and heycar, based on classical machine learning and ensembles for the former and deep neural networks and embeddings for the latter. In particular, they will highlight the differences in business impact and propose future developments joining these approaches.
Embeddings have become a powerful tool for representing discrete entities as continuous vectors. Recent advances have extended the original text embedding framework to accommodate new types of data. Of particular interest is a novel algorithm called “node2vec” that learns embeddings for nodes in a network. This is an especially pertinent use case for our team, since WeWork’s member community can conveniently be expressed in graphical form. In this talk, we’ll discuss how we use “node2vec” to create rich feature representations of WeWork communities, and then build recommendation services that are powered by these trained models.
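The node2vec-specific step is generating biased random walks over the graph; each walk is then treated like a sentence and fed to a word2vec-style skip-gram model to learn one vector per node. A minimal sketch of the walk generation, on a tiny toy graph (node names and parameter values are illustrative, not WeWork's data):

```python
import random

# Toy adjacency list; in practice this would be the member-community graph.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "carol", "dave"],
    "carol": ["alice", "bob"],
    "dave":  ["bob"],
}

def node2vec_walk(graph, start, length, p=1.0, q=0.5, rng=random):
    """One biased random walk (node2vec's sampling strategy).

    p controls the tendency to return to the previous node,
    q the tendency to explore further away (BFS- vs. DFS-like bias).
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbours = graph[cur]
        if len(walk) == 1:                       # first step: uniform choice
            walk.append(rng.choice(neighbours))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbours:
            if nxt == prev:                      # step back to previous node
                weights.append(1.0 / p)
            elif nxt in graph[prev]:             # stays close to previous node
                weights.append(1.0)
            else:                                # moves further away
                weights.append(1.0 / q)
        walk.append(rng.choices(neighbours, weights=weights)[0])
    return walk

random.seed(0)
# Corpus of walks: each walk plays the role of a "sentence" for skip-gram.
walks = [node2vec_walk(graph, n, 5) for n in graph for _ in range(3)]
```

Feeding these walks to any skip-gram implementation (e.g. gensim's Word2Vec) yields the node embeddings that the recommendation services are then built on.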
Human languages are complex, diverse and riddled with exceptions – translating between different languages is therefore a highly challenging technical problem. Deep learning approaches have proved powerful in modelling the intricacies of language, and have surpassed all statistics-based methods for automated translation. This session begins with an introduction to the problem of machine translation and discusses the two dominant neural architectures for solving it – recurrent neural networks and transformers. A practical overview of the workflow involved in training, optimising and adapting a competitive neural machine translation system is provided. Attendees will gain an understanding of the internal workings and capabilities of state-of-the-art systems for automatic translation, as well as an appreciation of the key challenges and open problems in the field.
Although machine vision techniques have been in use for defect detection in manufacturing processes for the last few decades, recent advances in machine learning (ML) algorithms combined with powerful computational hardware have opened up new possibilities in this field. The main advantage of these techniques is their ability to generalize without the need for extensive programming. We conducted a visual inspection (VI) contest at Schaeffler on an assembly line for bearing parts. The evaluation, lasting several weeks, rated solutions created by external vendors as well as those developed in-house. The results show that ML approaches are applicable for real-time VI in production, and include lessons learned on how to approach visual inspection in this context.
Healthcare is emerging as a prominent area for deep learning applications, which promise to improve the quality of life of millions of patients worldwide. In such a regulated industry, however, innovators aiming to seize this chance face one major issue: achieving regulatory compliance. Through a real case study, this talk will guide the audience through the current American and European regulatory frameworks for medical devices and provide a step-by-step guide to market for deep learning applications, highlighting the main challenges and pitfalls to avoid as well as the key issues a company needs to consider to succeed in this endeavor.
A machine learning solution is only as good as the end-user deems it to be. More often than not, we do not think through how results are communicated or measured. If we want users to trust and correctly interpret AI models, we need to make our models transparent and understandable. In this case study we will discuss the platform we developed for deep learning on medical images. Two example projects, “Cell detection in bone marrow” and “Analysis of colon tissue”, will be discussed to illustrate how UX affects end-users' perception of AI.