SGAICO Annual Meeting and Workshop: Deep Learning and Beyond


Date(s) - 16/11/2016
12:00 - 18:00

Hochschule Luzern - Informatik
Room: S12.002

The annual meeting of SGAICO will feature invited talks by Andreas Krause (ETH Zürich) and by our two new corporate members, Balzano and Quantinum, as well as a poster session for in-depth discussion of projects conducted by our members.

Registration until November 6, 2016.

This year we will explore trends in Deep Learning, Cognitive Services and AI application frameworks.



11:00 – 12:00 Meeting of the SGAICO Steering Board – Open to all interested members and guests

12:00 – 13:00 Registration, Coffee and Snacks

13:00 – 14:00 Andreas Krause, ETH Zürich: Learning to Optimize with Confidence
With the success of machine learning, we increasingly see learning algorithms make decisions in the real world. Often, however, this is in stark contrast to the classical train-test paradigm, since the learning algorithm affects the very data it must operate on. A central challenge is to trade off exploration (collecting data for the sake of fitting better models) against exploitation (using the estimates to make decisions). In many applications, such as in robotics, exploration is a potentially dangerous proposition, as it requires experimenting with actions that have unknown consequences. In this talk, I will formalize the problem of safe exploration as one of optimizing an unknown function subject to unknown constraints. Both objective and constraints are revealed through noisy experiments, and safety requires that no infeasible action is chosen at any point. I will present an approach that uses Bayesian inference over the objective and constraints, which, under some regularity conditions, is guaranteed to converge to a natural notion of reachable optimum. I will also show experiments on safe automatic parameter tuning of a robotic platform.
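As a rough illustration of the idea (a sketch, not the speaker's actual algorithm), the following maintains Gaussian-process posteriors over an unknown objective and an unknown constraint, and only evaluates points whose pessimistic constraint estimate is non-negative. The kernel, the confidence parameter `beta`, and the toy functions are all illustrative assumptions:

```python
import numpy as np

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, X_star, noise=1e-4):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf(X, X) + noise * np.eye(len(X))
    K_star = rbf(X, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_star.T @ alpha
    var = 1.0 - np.sum(K_star * np.linalg.solve(K, K_star), axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

f = lambda x: np.sin(3.0 * x)      # unknown objective (toy)
g = lambda x: 1.0 - x ** 2         # unknown constraint; safe iff g(x) >= 0

grid = np.linspace(-2.0, 2.0, 201)
X = np.array([0.0])                # start from a known safe seed point
y_f, y_g = f(X), g(X)
beta = 2.0                         # confidence-interval width

for _ in range(10):
    m_f, s_f = gp_posterior(X, y_f, grid)
    m_g, s_g = gp_posterior(X, y_g, grid)
    safe = m_g - beta * s_g >= 0.0          # pessimistic safety check
    ucb = np.where(safe, m_f + beta * s_f, -np.inf)
    x_next = grid[np.argmax(ucb)]           # most promising safe point
    X = np.append(X, x_next)
    y_f = np.append(y_f, f(x_next))
    y_g = np.append(y_g, g(x_next))

print(X.round(2))
```

Because the safe set only grows where the constraint posterior is confidently non-negative, the sampled points stay inside the true feasible region while the optimizer works its way toward the reachable optimum.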

14:00 – 14:45 René Balzano, Balzano Informatik GmbH: Get rid of your VC, or how Microsoft commoditizes Deep Learning
Just mention healthcare and artificial intelligence, and you can pick from a list of venture capitalists who want to fund you. Luckily, they are no longer needed, at least not while engineering a new solution. Twenty years ago we caused massive hardware investments for our customers who wanted to use our automated timetabling solution, then based on genetic algorithms implemented in SQL. Even worse were the thousands of hours of programming we had to pay a highly skilled staff of 15 engineers for.
Today we’re a team of 5, launching four ML products of high complexity at the same time with minimal investment. The convolutional deep neural network is the new kid on the block for addressing all sorts of pressing global issues, and it is quickly becoming a commodity through the initiatives of Google, Amazon, NVIDIA and Microsoft. Microsoft’s traditional platform and cross-premises approach for its now broadly cloud-centric, but still locally deployable, tools and services allows for an abundance of solution scenarios that implement CDNNs and machine learning in general, at minimal cost. During this presentation I’ll illustrate Microsoft’s machine learning platforms, services and tools (Azure ML, Cognitive API, CNTK, R Server, GPU VMs and the lot), how we leverage them for commercial products and for research projects, and where our customers stand on those solutions.

14:45 – 15:00 Coffee Break

15:00 – 15:45 Reto Trinkler, Quantinum AG: Build your own Cognitive Computing System with Open Source Software
Quantinum AG was founded in 2013 with the mission to make collaboration of man and machine easy and smart. Since then, we have created an integrated, scalable and extendable software platform that combines graph technology, natural language processing and machine learning along with easy-to-use interaction tools. In this session, we will go “under the hood” of the platform, looking at how it is built, which open source components are used, and how they work together. We will also talk about success factors as well as pitfalls and lessons learned during development and market launch. Last but not least, we will present some industry solutions that have been realized on top of the Quantinum platform.

15:45 – 16:00 News from SI and next plans for SGAICO

16:00 – 17:30 Poster Session and Discussion of projects among members

Please submit the title and a short abstract for your poster when you register, by November 6, 2016.


List of currently accepted posters


Jean-Daniel Dessimoz: Cognition Squeezed Between Nature and Values

Cognition. Cognition typically operates in the realm of logic and allows us, to some extent, to think, reason, communicate and control the world. Cognition is very powerful, capable of dealing with virtually anything via some kind of representation, yet it also has very significant limits, notably in two domains that at the same time critically support, and definitely limit, cognition: nature and values.

Nature. Cognition vanishes without real-world support. And even when that support is available, strong constraints remain: infinite complexity, an existence confined exclusively to the present, and often a dire fragility.

Values. Cognition is essentially driven by values. Constantly, in parallel with ongoing cognitive processes, some monitoring of circumstances and assessment of the laws of values is mandatory in order to keep the physical, cognitive engine operational and to focus current modeling, and thereby cognition, on appropriate priority goals, in constant synchrony with real-world opportunities and threats.

Ref.: J.-D. Dessimoz, “Cognition, cognitics, and team action—Overview, foundations, and five theses for a better world”, Robotics and Autonomous Systems, Elsevier, Volume 85, November 2016, Pages 73–82.


Sabine Schilling: What’s ugly? A shape-based aneurysm rupture risk prediction pipeline

Clinicians associate aneurysm shape irregularity (or “ugliness”) with an increased risk of rupture, but no consensus exists on which shape features most reliably indicate an unstable aneurysm prone to rupture. To tackle this problem, we developed a machine learning pipeline based on three-dimensional (3D) clinical imaging data. We trained the classifier on 3D shape representations such as Zernike moment invariants (ZMI), or simpler geometry indices like the non-sphericity index, combined with clinical labels such as aneurysm type (bifurcation vs. side wall) or rupture status (ruptured vs. unruptured).
The processing pipeline was applied to synthetic data and clinical datasets of 413 aneurysms registered in the AneurysmDataBase (SwissNeuroFoundation) and the AneuriskWeb database.
Classification based on ZMI alone allowed us to distinguish reliably between bifurcation and side-wall aneurysms, whereas rupture status prediction showed less robust classification results with both 3D ZMI and the much simpler non-sphericity index.
It remains to be investigated whether further stratification of the aneurysms in terms of location, size and clinical factors will increase the robustness of the applied classification methods.
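For illustration, the non-sphericity index mentioned above can be sketched via the classical sphericity ratio (the surface area of a volume-equivalent sphere divided by the actual surface area). The exact definition used in the poster's pipeline may differ, so treat this as an assumption:

```python
import math

def sphericity(volume, surface_area):
    """Surface area of a sphere with the same volume, divided by the
    actual surface area. Equals 1 for a perfect sphere."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

def non_sphericity_index(volume, surface_area):
    """0 for a sphere; grows as the shape becomes more irregular."""
    return 1.0 - sphericity(volume, surface_area)

# sanity check: a sphere of radius 2 has index ~0
r = 2.0
v_sphere = 4 / 3 * math.pi * r ** 3
a_sphere = 4 * math.pi * r ** 2
nsi_sphere = non_sphericity_index(v_sphere, a_sphere)

# an elongated 1 x 1 x 8 box is clearly "non-spherical"
v_box = 1 * 1 * 8
a_box = 2 * (1 * 1 + 1 * 8 + 1 * 8)
nsi_box = non_sphericity_index(v_box, a_box)

print(nsi_sphere)  # ~0.0
print(nsi_box)
```

In practice the volume and surface area would come from the triangulated 3D aneurysm surface rather than closed-form geometry.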


Thilo Stadelmann: Speaker Identification and Clustering using Convolutional Neural Networks

Deep learning, especially in the form of convolutional neural networks (CNNs), has triggered substantial improvements in computer vision and related fields in recent years. This progress is attributed to the shift from designing features and subsequent individual sub-systems towards learning features and recognition systems end to end from nearly unprocessed data. For speaker clustering, however, it is still common to use handcrafted processing chains such as MFCC features and GMM-based models. In this paper, we use simple spectrograms as input to a CNN and study the optimal design of those networks for speaker identification and clustering. Furthermore, we elaborate on the question of how to transfer a network, trained for speaker identification, to speaker clustering. We demonstrate our approach on the well-known TIMIT dataset, achieving results comparable with the state of the art, without the need for handcrafted features.
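As a sketch of the kind of input such a network consumes, the following computes a log-power spectrogram from a raw waveform with plain NumPy; the frame length and hop size are illustrative choices, not the authors' settings:

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame the signal, apply a Hann window, and take the log power
    of the FFT magnitudes - a simple CNN-ready 2-D representation."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-10)      # shape: (time, frequency)

# one second of a synthetic 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (124, 129)
```

The resulting 2-D array can be fed to a CNN exactly like a grayscale image, which is what makes spectrograms a natural replacement for handcrafted MFCC pipelines.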

Stefan Schnürle: Detection and Quantification of Hand Eczema by Means of Support Vector Machines

Hand eczema is one of the most frequent dermatoses, with grave consequences for patients as well as for society as a whole, potentially leading to impairment or inability to work in many professions. Computer-aided detection of hand eczema could support patients in their decision whether to visit a dermatologist.
We devised an image processing method for hand eczema segmentation based on Support Vector Machines and conducted several experiments with different feature sets. Instead of focusing on high accuracy like most existing state-of-the-art approaches, we selected the F1 score as our primary measure. This decision had several implications for the design of our segmentation method, since all popular implementations of Support Vector Machines optimize for accuracy. Finally, we evaluated our system and achieved an F1 score of 58.6% for the front sides of hands and 43.8% for the back sides, which outperforms several state-of-the-art methods that were tested on our gold-standard data set as well.
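The accuracy-versus-F1 trade-off described above can be illustrated with a toy imbalanced segmentation task; the numbers below are invented purely for illustration:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = eczema pixel)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 100 pixels, only 10 of which are eczema: the trivial "all healthy"
# classifier looks great on accuracy but useless on F1.
y_true = [1] * 10 + [0] * 90
y_all_negative = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_all_negative)) / 100
f1 = precision_recall_f1(y_true, y_all_negative)[2]
print(accuracy)  # 0.9
print(f1)        # 0.0
```

Because diseased skin typically covers only a small fraction of each image, optimizing accuracy rewards exactly this degenerate behavior, which is why F1 is the more honest target here.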


Flavio Trolese: Computer Aided Staging of Cancer

The diagnosis and treatment of (specific) cancers has been drastically improved by new imaging methods, which generate large numbers of images. This has come at a high cost: the time spent reading those images. Using image analysis techniques and deep learning, 4Quant has demonstrated the potential to radically reduce reading time without sacrificing quality. We make these benefits tangible through computer-aided staging products that we develop with radiologists and nuclear medicine specialists.


Christoph Zaugg: Peaks in load

The total load profile of the SBB electric railway power supply is recorded every second and exhibits large load peaks and steep gradients. Using a filter, we extract a deterministic component and define the stochastic component to be the total load minus the deterministic component. Finally, the stochastic component is modeled by a stochastic differential equation with Markov regime switching. The parameters are estimated with a machine learning tool and the resulting model is validated.
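A minimal sketch of the decomposition step, assuming a simple moving-average filter stands in for whatever filter was actually used, and with a synthetic load series in place of the SBB data:

```python
import numpy as np

def decompose(load, window=61):
    """Split a load series into a deterministic component (moving
    average) and a stochastic residual, as described above."""
    kernel = np.ones(window) / window
    deterministic = np.convolve(load, kernel, mode="same")
    stochastic = load - deterministic
    return deterministic, stochastic

# synthetic 1 Hz load: a slow sinusoid plus noise and one sharp peak
rng = np.random.default_rng(0)
t = np.arange(3600)
load = 100 + 20 * np.sin(2 * np.pi * t / 3600) + rng.normal(0, 3, t.size)
load[1800] += 50  # a sudden load peak

det, stoch = decompose(load)
print(stoch.std())
```

The residual series `stoch`, which retains the peaks and steep gradients, is what the poster then models with a regime-switching stochastic differential equation.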


Jana Koehler: Business Process Innovation with AI – Benefits and Operational Risks

Artificial Intelligence (AI) is gaining a strong momentum in business leading to novel business models and triggering business process innovation. The poster reviews key AI technologies such as machine learning, decision theory, and intelligent search and discusses their role in business process innovation. Besides discussing potential benefits, it also identifies sources of potential risks and discusses a blueprint for the quantification and control of AI-related operational risk.