PhD Thesis Defenses 2020

PhD thesis defenses are a public affair and open to anyone who is interested. Attending them is a great way to get to know the work being done by your peers in the various research groups. On this page you will find a list of upcoming and past defense talks.

Please go here for electronic access to most of the doctoral dissertations from Saarbrücken Computer Science going back to about 1990.


Michael GERKE

Modeling and Verifying the FlexRay Physical Layer Protocol with Reachability Checking of Timed Automata
(Advisor: Prof. Bernd Finkbeiner)
Monday, 21.12.2020, 14:15 h


The FlexRay automotive bus protocol is used in many modern cars to facilitate communication between the various electronic control units (ECUs) that control the car. FlexRay's physical layer protocol communicates a message from one ECU to other ECUs, each of which has its own local oscillator that is not synchronized to the other ECUs' local oscillators, even though they all run at roughly the same frequency. The physical layer protocol is thus the foundation that the upper protocol layers build upon, and it has to deal with non-synchronized bit transfer over the bus's potentially long wire harness, with variances in delay incurred while traversing the bus, with interference that might change voltage values on the bus and thus effectively flip bits, and with local oscillators that deviate slightly from their nominal frequency. Verifying the reliability and resilience of such a physical layer protocol against various potentially fault-inducing scenarios, under a variety of realistic assumptions on the performance of the hardware the protocol is deployed on, is nontrivial. To this end, the thesis presents timed-automata models of the physical layer protocol and its hardware environment, including error models. It also presents algorithmic approaches for fully symbolic model checking of timed automata, based on a combination of binary decision diagrams (BDDs) and difference bound matrices (DBMs), together with the data structures enabling these approaches. This makes it possible to handle the considerable complexity of model checking such an industrially used protocol. Furthermore, the thesis presents lessons learned on how to optimize such timed-automata models to enable model checking while retaining enough detail to provide relevant results concerning the reliability and resilience of the modeled protocol.
The results of the verification effort confirm the resilience of the FlexRay physical layer protocol against specific patterns of bit flips and against specific degradations in the hardware performance, exceeding the limits put forth by the protocol’s designers.
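As a rough illustration of one ingredient mentioned above: a difference bound matrix (DBM) represents a clock zone by upper bounds on clock differences, and Floyd-Warshall-style tightening yields a canonical form on which emptiness is easy to check. The sketch below is a minimal Python rendition with plain integer bounds (strict/non-strict flags and the BDD component are omitted); it is not the thesis's implementation.

```python
def canonicalize(m):
    """Tighten all bounds: m[i][j] is an upper bound on clock_i - clock_j,
    with clock 0 being the constant-zero reference clock."""
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m

def is_empty(m):
    """After canonicalization, a zone is empty iff some diagonal entry
    became negative (a negative-weight cycle in the constraint graph)."""
    return any(m[i][i] < 0 for i in range(len(m)))

# Zone for one clock x with 2 <= x <= 5, over clocks (0, x):
zone = canonicalize([[0, -2],   # 0 - x <= -2, i.e. x >= 2
                     [5,  0]])  # x - 0 <= 5,  i.e. x <= 5
```

An inconsistent zone such as x <= 1 combined with x >= 2 is detected because tightening drives a diagonal entry below zero.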

Tribhuvanesh OREKONDY

Understanding and Controlling Leakage in Machine Learning
(Advisor: Prof. Bernt Schiele)
Friday, 18.12.2020, 16:00 h


Machine learning models are being increasingly adopted in a variety of real-world scenarios. However, the privacy and confidentiality implications for users in such scenarios are not well understood. In this talk, I will present our work on better understanding and controlling leakage of information, in three parts. The first part of the talk is motivated by the fact that a massive amount of personal photos is disseminated by individuals, many of which unintentionally reveal a wide range of private information. To address this, I will present our work on visual privacy, which attempts to identify privacy leakage in visual data. The second part of my talk analyzes privacy leakage during training of ML models, such as in the case of federated learning, where individuals contribute towards the training of a model by sharing parameter updates. The third part of my talk focuses on leakage of information at inference time, presenting our work on model functionality stealing threats, where an adversary exploits black-box access to a victim ML model to inexpensively create a replica model. In each part, I also discuss our work to mitigate leakage of sensitive information to enable widespread adoption of ML techniques in real-world scenarios.


Towards Accurate Multi-Person Pose Estimation in the Wild
(Advisor: Prof. Bernt Schiele)
Wednesday, 16.12.2020, 16:00 h


Human pose estimation is the computer vision task of localising the major joints of a human skeleton in natural images, and it is one of the most important visual recognition tasks for scenes containing humans, with numerous applications in robotics, virtual and augmented reality, gaming and healthcare, among others. This thesis proposes methods for human pose estimation and articulated pose tracking in images and video sequences. Unlike most prior work, which focuses on pose estimation of single pre-localised humans, we address the case of multiple people in real-world images, which entails several challenges such as person-person overlaps in highly crowded scenes, an unknown number of people, or people entering and leaving video sequences. Our multi-person pose estimation algorithm is based on the bottom-up detection-by-grouping paradigm, through optimization of a multicut graph partitioning objective. Our approach is general and is also applicable to multi-target pose tracking in videos. In order to facilitate further research on this problem, we collect and annotate a large-scale dataset and a benchmark for articulated multi-person tracking. We also propose a method for estimating 3D body pose from on-body wearable cameras, using a pair of downward-facing, head-mounted cameras that capture the entire body. Our final contribution is a method for reconstructing 3D objects from weak supervision. Our approach represents objects as 3D point clouds and is able to learn them with 2D supervision only, without requiring camera pose information at training time. We design a differentiable renderer for point clouds as well as a novel loss formulation for dealing with camera pose ambiguity.

Daniel VAZ

Approximation Algorithms for Network Design and Cut Problems in Bounded-Treewidth Graphs
(Advisors: Prof. Kurt Mehlhorn and Prof. Parinya Chalermsook, now Aalto University, Finland)
Tuesday, 08.12.2020, 14:00 h


In this talk, we will present new approximation results for the group Steiner tree problem on bounded-treewidth graphs. In the group Steiner tree problem, the input is a graph together with sets of vertices called groups; the goal is to choose one representative vertex from each group and connect all the representatives with minimum cost. The problem is hard to approximate even when the input graph is a tree: it is unlikely that an approximation factor better than O(log^2 n) exists, and current techniques cannot achieve this ratio even on graphs with treewidth 2. We show an O(log^2 n)-approximation algorithm for bounded-treewidth graphs, which matches the known lower bound for trees, and improves upon previous techniques. These results imply new approximation algorithms for other related problems, such as a high-connectivity extension of group Steiner tree. We will also introduce the firefighter problem, which models the spread of fire in a graph, and briefly discuss our results for this problem on trees and bounded-treewidth graphs.
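To make the problem statement concrete, the following sketch brute-forces tiny instances where the input graph is itself a tree (hypothetical toy code, not from the thesis; the thesis is precisely about avoiding such exponential enumeration on bounded-treewidth graphs):

```python
from itertools import product

def group_steiner_cost(tree, groups):
    """tree: undirected weighted tree as {v: {u: weight}};
    groups: list of vertex sets. Brute-forces the minimum cost of a
    subtree connecting one representative vertex per group."""
    root = next(iter(tree))
    parent = {root: None}
    stack = [root]
    while stack:                       # DFS to get parent pointers
        v = stack.pop()
        for u in tree[v]:
            if u not in parent:
                parent[u] = v
                stack.append(u)

    def up_path(v):                    # edges from v up to the DFS root
        edges = set()
        while parent[v] is not None:
            edges.add(tuple(sorted((v, parent[v]))))
            v = parent[v]
        return edges

    best = float("inf")
    for reps in product(*groups):      # one representative per group
        base = up_path(reps[0])
        edges = set()
        for r in reps[1:]:
            # In a tree, the path between two vertices is the symmetric
            # difference of their root paths; the minimal connecting
            # subtree is the union of these paths from reps[0].
            edges |= base ^ up_path(r)
        best = min(best, sum(tree[a][b] for a, b in edges))
    return best
```

The number of representative combinations grows as the product of the group sizes, which is exactly why approximation algorithms are needed on larger instances.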

Franziska MÜLLER

Real-time 3D Hand Reconstruction in Challenging Scenes from a Single Color or Depth Camera
(Advisor: Prof. Christian Theobalt)
Wednesday, 02.12.2020, 16:00 h


Hands are one of the main enabling factors for performing complex tasks, and humans naturally use them for interactions with their environment. Reconstruction and digitization of 3D hand motion opens up many possibilities for important applications, for example in human–computer interaction, augmented or virtual reality (AR/VR), automatic sign-language translation, activity recognition, or teaching robots. Different approaches for 3D hand motion capture have been actively researched in the past. Earlier methods have clear drawbacks like the need to wear intrusive markers or gloves, or having to calibrate a stationary multi-camera setup. For interaction purposes, users need continuous control and immediate feedback. This means the algorithms have to run in real time and be robust in uncontrolled scenes. Thus, the majority of more recent methods use a single color or depth camera, which, however, makes the problem harder due to more ambiguities in the input. While recent research has shown promising results, current state-of-the-art methods still have strong limitations. Most approaches only track the motion of a single hand in isolation and do not take background clutter or interactions with arbitrary objects or the other hand into account. The few methods that can handle more general and natural scenarios run far from real time or use complex multi-camera setups, which makes them unusable for many of the aforementioned applications. This thesis pushes the state of the art for real-time 3D hand tracking and reconstruction in general scenes from a single RGB or depth camera. The presented approaches explore novel combinations of generative hand models and powerful cutting-edge machine learning techniques.
In particular, this thesis proposes a novel method for hand tracking in the presence of strong occlusions and clutter, the first method for full global 3D hand tracking from in-the-wild RGB video, and a method for simultaneous pose and dense shape reconstruction of two interacting hands that, for the first time, combines a set of desirable properties previously unseen in the literature.


Program Analysis for Anomaly Detection
(Advisor: Prof. Andreas Zeller)
Wednesday, 02.12.2020, 10:30 h


When interacting with mobile applications, users may not always get what they expect. For instance, when users download Android applications from a market, they do not know much about their actual behavior. A brief description, a set of screenshots, and a list of permissions, which give a high-level intuition of what applications might be doing, form user expectations. However, applications do not always meet them. For example, a gaming application could intentionally send SMS messages to a premium number in the background without the user's knowledge. A less harmful scenario would be a wrong message confirming a successful action that was never executed.
Whatever the behavior of a mobile application (app) might be, in order to test and fully understand it, there needs to be a technique that can analyse the user interface (UI) of the app and the code associated with it as a whole.
This thesis presents a static analysis framework called SAFAND that, given an Android app, performs the following analyses:
gathers information on how the application uses sensitive data;
identifies and analyses UI elements of the application;
binds UI elements with their corresponding behavior.
The thesis illustrates how results obtained from the framework can be used to identify problems ranging from small usability issues to malicious behavior of real-world applications.



Generating Renderers
(Advisor: Prof. Philipp Slusallek)
Friday, 27.11.2020, 15:30 h


Physically-based renderers are large pieces of software, and require a lot of computing power to produce pictures from complex 3D scenes. Part of this is due to the fact that in these renderers, optimizing performance is often difficult because of the large configurability in the type of scene that is given as an input. For instance, optimizing for SIMD execution is difficult when the intersection functions need to perform dynamic dispatches to allow different primitive types. Another issue is that these renderers are written using currently available techniques and tools for software development, which are severely lacking when it comes to expressing parallelism or exploiting low-level hardware features. The direct consequence of that problem is that renderer implementations often rely on non-portable compiler-specific intrinsics, making them difficult to port and adapt to new architectures and platforms. The traditional solution to these problems is to design domain-specific languages that allow the programmer to encode domain-specific knowledge into the code. With that knowledge, a domain-specific compiler can then optimize that code and choose the best strategy for execution on a given piece of hardware. However, writing domain-specific compilers is an arduous task, as it requires both an extensive knowledge of the domain, and a deep understanding of compiler construction.
In this talk, we will present another way to encode domain-specific knowledge, and describe a rendering framework based on it. The general idea is to write generic, high-level and declarative rendering code in a language supporting partial evaluation. Such code can then be optimized for a target machine and a particular use case by providing partial evaluation annotations. Those annotations instruct the compiler to specialize the entire code according to the available knowledge of the input: in the case of rendering, for instance, this means that shaders are specialized when their inputs are known, and in general, all redundant computations are eliminated. Additionally, partial evaluation makes it possible to use knowledge of the target architecture to adapt the execution strategies of various parts of the renderer. The results of this technique show that the generated renderers are faster and more portable than renderers written with state-of-the-art competing libraries, and that, in comparison, the presented work requires less implementation effort.
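The idea of specialization by partial evaluation can be mimicked in miniature: specialize an interpreter with respect to statically known inputs so that only the dynamic work remains. The Python sketch below is a toy stand-in (the expression format and names are invented; the thesis applies this at the compiler level to whole renderers):

```python
def compile_expr(expr, known):
    """Partially evaluate a tiny arithmetic expression tree.
    expr: ('const', c) | ('var', name) | ('add', lhs, rhs) | ('mul', lhs, rhs)
    known: variable values fixed at specialization time.
    Returns a residual function of the remaining (dynamic) variables."""
    op, *args = expr
    if op == 'const':
        c = args[0]
        return lambda env: c
    if op == 'var':
        name = args[0]
        if name in known:                  # static input: folded away now
            c = known[name]
            return lambda env: c
        return lambda env: env[name]       # dynamic input: looked up later
    f = compile_expr(args[0], known)
    g = compile_expr(args[1], known)
    if op == 'add':
        return lambda env: f(env) + g(env)
    return lambda env: f(env) * g(env)

# Specialize a * 3 + x with a known: only x remains dynamic.
shader = compile_expr(('add', ('mul', ('var', 'a'), ('const', 3)),
                       ('var', 'x')), {'a': 2})
```

A real partial evaluator would additionally fold constant subtrees and emit machine code for the residual program; the principle, however, is the same.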

Nataniel P. BORGES Junior

Learning the Language of Apps
(Advisor: Prof. Andreas Zeller)
Thursday, 26.11.2020, 14:00 h


To explore the functionality of an app, automated test generators systematically identify and interact with its user interface (UI) elements. A key challenge is to synthesize inputs which effectively and efficiently cover app behavior. To do so, a test generator has to choose not only which elements to interact with, but also which interactions to perform on each element and which input values to type. In summary, to better test apps, a test generator should know the app's language, that is, the language of its graphical interactions and the language of its textual inputs. In this work, we show how a test generator can learn the language of apps and how this knowledge is modeled to create tests. We demonstrate how to learn the language of the graphical input prior to testing by combining machine learning and static analysis, and how to refine this knowledge during testing using reinforcement learning. In our experiments, statically learned models resulted in 50% fewer ineffective actions and an average increase in test (code) coverage of 19%, while refining these models through reinforcement learning resulted in an additional test (code) coverage of up to 20%. We learn the language of textual inputs by identifying the semantics of input fields in the UI and querying the web for real-world values. In our experiments, real-world values increase test (code) coverage by about 10%. Finally, we show how to use context-free grammars to integrate both languages into a single representation (UI grammar), giving back control to the user. This representation can then be mined from existing tests, associated to the app source code, and used to produce new tests. 82% of the test cases produced by fuzzing our UI grammar can reach a UI element within the app, and 70% of them can reach a specific code location.


Philipp MÜLLER

Sensing, Interpreting, and Anticipating Human Social Behaviour in the Real World
(Advisor: Prof. Andreas Bulling, now Stuttgart)
Friday, 13.11.2020, 14:00 h


A prerequisite for the creation of social machines that are able to support humans in fields like education, psychotherapy, or human resources is the ability to automatically detect and analyse human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging.
This thesis moves closer to the vision of social machines in the real world, making fundamental contributions along the three dimensions of sensing, interpreting and anticipating nonverbal behaviour in social interactions. First, it advances the state of the art in human visual behaviour sensing, proposing a novel unsupervised method for eye contact detection in group interactions by exploiting the connection between gaze and speaking turns. Furthermore, the thesis makes use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, this thesis improves the interpretation of social signals by proposing datasets and methods for emotion recognition and low rapport detection in less constrained settings than previously studied. In addition, it investigates, for the first time, a cross-dataset evaluation setting for emergent leadership detection. Third, this thesis pioneers methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.

Aastha MEHTA

Ensuring Compliance with Data Privacy and Usage Policies in Online Services
(Advisors: Prof. Peter Druschel & Prof. Deepak Garg)
Tuesday, 03.11.2020, 14:30 h


Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with the policies is critical and often legally binding for service providers, but it is challenging as applications are prone to many disclosure threats. In this thesis, I present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively.
Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code. Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code.
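A minimal sketch of this enforcement style, with invented table names and policy syntax (real Qapla supports far richer policies and avoids naive string splicing):

```python
# Policies live with the database, not in the application: each table has
# a predicate that the adapter conjoins onto every query touching it.
POLICIES = {
    "employees": "dept = '{user_dept}' OR '{user_role}' = 'hr'",
}

def rewrite(sql, table, user):
    """Rewrite a query so rows the caller may not see are filtered out.
    NOTE: string splicing is injection-prone and used here only for
    illustration; a real adapter would build parameterized predicates."""
    policy = POLICIES[table].format(user_dept=user["dept"],
                                    user_role=user["role"])
    if " where " in sql.lower():
        return f"{sql} AND ({policy})"
    return f"{sql} WHERE ({policy})"

q = rewrite("SELECT name, salary FROM employees", "employees",
            {"dept": "sales", "role": "staff"})
```

Because the rewrite happens in the adapter, the application code never needs to know the policy, which is the separation of concerns the paragraph above describes.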
Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via its network traffic shape, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant's traffic shape but allows variations based on non-secret information, enabling secure and efficient use of network resources in the cloud.
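The tunnel idea can be caricatured in a few lines: if packet count and packet size follow a public profile, an observer of the padded stream learns nothing about the secret payload length. A hypothetical sketch, not Pacer's actual mechanism:

```python
def shape(payload, slots, size=1500):
    """Emit exactly `slots` fixed-size packets regardless of payload
    length, so an observer learns only the public (slots, size) profile.
    Empty slots become padding-only packets."""
    if len(payload) > slots * size:
        raise ValueError("payload exceeds the public traffic profile")
    packets = []
    for i in range(slots):
        chunk = payload[i * size:(i + 1) * size]
        packets.append(chunk.ljust(size, b"\0"))  # pad with null bytes
    return packets
```

The real system additionally paces the *timing* of transmissions and lets the profile depend on non-secret inputs, which this sketch omits.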
Both systems require modest development efforts, and incur moderate performance overheads, thus demonstrating their usability.


Frederic RABER

Supporting Lay Users in Privacy Decisions When Sharing Sensitive Data
(Advisor: Prof. Antonio Krüger)
Wednesday, 23.09.2020, 16:00 h


The first part of the thesis focuses on assisting users in choosing their privacy settings, by using machine learning to derive the optimal set of privacy settings for the user. In contrast to other work, our approach uses context factors as well as individual factors to provide a personalized set of privacy settings. The second part consists of a set of intelligent user interfaces to assist the users throughout the complete privacy journey: from defining friend groups that allow targeted information sharing; through user interfaces for selecting information recipients, finding possible errors or unusual settings, and refining them; up to mechanisms for gathering in-situ feedback on privacy incidents and investigating how to use it to improve a user's privacy in the future. Our studies have shown that such tailoring of the privacy settings significantly increases the correctness of the predicted privacy settings, whereas the user interfaces have been shown to significantly decrease the amount of unwanted disclosures.

Thorsten WILL

From condition-specific interactions towards the differential complexome of proteins
(Advisor: Prof. Volkhard Helms)
Tuesday, 22.09.2020, 10:00 h


While capturing the transcriptomic state of a cell is a comparably simple effort with modern sequencing techniques, mapping protein interactomes and complexomes in a sample-specific manner is currently not feasible on a large scale. To understand crucial biological processes, however, knowledge on the physical interplay between proteins can be more interesting than just their mere expression. In my thesis, I present four software tools that unlock the cellular wiring in a condition-specific manner and promise a deeper understanding of what happens upon cell fate transitions.
PPIXpress makes it possible to exploit the abundance of existing expression data to generate specific interactomes, which can even consider alternative splicing events when protein isoforms can be related to the presence of causative protein domain interactions in an underlying model. As an addition to this work, I developed the convenient differential analysis tool PPICompare to determine rewiring events and their causes within the inferred interaction networks between grouped samples.
Furthermore, I demonstrate a new implementation of the combinatorial protein complex prediction algorithm DACO that features a significantly reduced runtime. This improvement facilitates an application of the method for a large number of samples and the resulting sample-specific complexes can ultimately be assessed quantitatively with the novel differential protein complex analysis tool CompleXChange.

Dushyant MEHTA

Real-time 3D Human Body Pose Estimation from Monocular RGB Input
(Advisor: Prof. Christian Theobalt)
Monday, 14.09.2020, 14:00 h


Human motion capture finds extensive application in movies, games, sports and biomechanical analysis. However, existing motion capture systems require extensive on-body, and/or external instrumentation, which is not only cumbersome but expensive as well, putting it out of the reach of most creators. The ubiquity and ease of deployment of RGB cameras makes monocular RGB based human motion capture an extremely useful problem to solve, which would lower the barrier-to-entry for content creators to employ motion capture tools, and enable newer applications of human motion capture. This thesis demonstrates the first real-time monocular RGB based motion-capture solutions that work in general scene settings. They are based on developing neural network based approaches to address the ill-posed problem of estimating 3D human pose from a single RGB image, in combination with model based fitting. In particular, the contributions of this work make advances towards three key aspects of real-time monocular RGB based motion capture, namely speed, accuracy, and the ability to work for general scenes.

Alexander LÜCK

Stochastic spatial modelling of DNA methylation patterns and moment-based parameter estimation
(Advisor: Prof. Verena Wolf)
Friday, 11.09.2020, 14:30 h


In this thesis, we introduce and analyze spatial stochastic models for DNA methylation, an epigenetic mark with an important role in development. The underlying mechanisms controlling methylation are only partly understood. Several mechanistic models of enzyme activities responsible for methylation have been proposed. Here, we extend existing hidden Markov models (HMMs) for DNA methylation by describing the occurrence of spatial methylation patterns with stochastic automata networks. We perform numerical analysis of the HMMs applied to (non-)hairpin bisulfite sequencing KO data and accurately predict the wild-type data from these results. We find evidence that the activities of Dnmt3a/b responsible for de novo methylation depend on the left but not on the right CpG neighbors.


Synthesizing Stream Control
(Advisor: Prof. Bernd Finkbeiner)
Tuesday, 08.09.2020, 14:00 h


The management of reactive systems requires controllers, which coordinate time, data streams, and data transformations, all joined by the high-level perspective of their control flow. This flow is required to drive the system correctly and continuously, which turns the development into a challenge. The process is error-prone, time-consuming, unintuitive, and costly. An attractive alternative is to synthesize the system instead, where only the desired behavior needs to be specified. We introduce Temporal Stream Logic (TSL), a logic which exclusively argues about the system's control, while treating data and functional transformations as interchangeable black boxes. In TSL it is possible to specify control flow properties independently of the complexity of the handled data. Furthermore, with TSL a synthesis engine can check for realizability even without a concrete implementation of the data transformations at hand. We present a modular development framework that uses synthesis to identify the high-level control. If successful, the created control flow is extended with concrete data transformations in order to be compiled into a final executable.
Nevertheless, current synthesis approaches cannot immediately replace existing manual development workflows. The developer may still create incomplete or faulty specifications at first, which need to be refined after a subsequent inspection. In the worst case, requirements are contradictory or miss important assumptions, resulting in unrealizable specifications. In both scenarios, additional feedback from the synthesis engine is needed to debug the errors. To this end, we also explore two further improvements. On the one hand, we consider output-sensitive synthesis metrics. On the other hand, we analyse the extension with delay, the requirement of which is a frequent reason for unrealizability.


Privacy Enhancing Technologies: Protocol verification, implementation and specification
(Advisor: Prof. Michael Backes)
Monday, 07.09.2020, 11:00 h


In this thesis, we present novel methods for verifying, implementing, and specifying protocols. In particular, we focus on properties modeling data protection and the protection of privacy. In the first part of the thesis, the author introduces protocol verification and presents a model for verification that encompasses so-called zero-knowledge (ZK) proofs. These ZK proofs are a cryptographic primitive that is particularly suited for hiding information and hence serves the protection of privacy. The model presented here gives a list of criteria that allow verification results to be transferred from the model to the implementation, provided the implementation meets the criteria. In particular, the criteria are less demanding than those of previous work on ZK proofs. The second part of the thesis contributes to the area of protocol implementations. Here, ZK proofs are used in order to improve multi-party computations. The third and last part of the thesis explains a novel approach for specifying data protection policies. Instead of relying on policies, this approach relies on actual legislation. The advantage of relying on legislation is that it often introduces a fair balancing which is typically not contained in regulations or policies.



Sequence to Graph Alignment: Theory, Practice and Applications
(Advisor: Prof. Tobias Marschall, now Uni Düsseldorf)
Tuesday, 25.08.2020, 10:00 h


Genome graphs, also called sequence graphs, provide a way of explicitly representing genomic variation and are used in several applications in bioinformatics. Due to the growing importance of sequence graphs, methods for handling graph-based data structures are becoming more important. In this work I examine the generalization of sequence alignment to graphs. First, I present a theoretical basis for quick bit-parallel sequence-to-graph alignment. The bit-parallel method outperforms previous algorithms in runtime by a factor of 3-21x, depending on graph topology. Next, I present GraphAligner, a practical tool for aligning sequences to graphs. GraphAligner enables sequence-to-graph alignment to scale to mammalian-sized genomes and is an order of magnitude faster than previous graph alignment tools. Finally, I present two applications where GraphAligner is an essential part. First, AERON is a tool for quantifying RNA expression and detecting gene fusion events with long reads. Second, I present a hybrid graph-based genome assembly pipeline that uses novel methods to combine short and long read sequencing technologies.
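The bit-parallel principle is easiest to see in the classic sequence-to-sequence setting (Myers' 1999 algorithm), which this work generalizes to graphs. Below is a standard Python sketch in which an entire dynamic-programming column is updated with a handful of word operations; Python's arbitrary-precision integers play the role of machine words:

```python
def myers_distance(p, t):
    """Bit-parallel Levenshtein distance between pattern p (non-empty)
    and text t, after Myers (1999): each text character updates a whole
    DP column via bitwise operations on +1/-1 delta vectors."""
    m = len(p)
    mask = (1 << m) - 1
    high = 1 << (m - 1)
    peq = {}                            # per-character match masks
    for i, c in enumerate(p):
        peq[c] = peq.get(c, 0) | (1 << i)
    pv, mv, score = mask, 0, m          # vertical deltas, current distance
    for c in t:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = ((((eq & pv) + pv) & mask) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)   # horizontal +1 positions
        mh = pv & xh                    # horizontal -1 positions
        if ph & high:
            score += 1
        elif mh & high:
            score -= 1
        ph = ((ph << 1) | 1) & mask
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score
```

In hardware terms, one iteration costs a constant number of word operations instead of O(m) cell updates, which is where the speedup cited above originates; extending the carry trick to graph topologies is the thesis's contribution.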


Understanding and Evolving the Rust Programming Language
(Advisor: Prof. Derek Dreyer)
Friday, 21.08.2020, 11:00 h


Rust is a young systems programming language that aims to fill the gap between high-level languages with strong safety guarantees and low-level languages that give up on safety in exchange for more control. Rust promises to achieve memory and thread safety while at the same time giving the programmer control over data layout and memory management. This dissertation presents two projects establishing the first formal foundations for Rust, enabling us to better understand and evolve this language: RustBelt and Stacked Borrows. RustBelt is a formal model of Rust's type system, together with a soundness proof establishing memory and thread safety. The model is also able to verify some intricate APIs from the Rust standard library that internally encapsulate the use of unsafe language features. Stacked Borrows is a proposed extension of the Rust specification which enables the compiler to use Rust types to better analyze and optimize the code it is compiling. The adequacy of this specification is not only evaluated formally, but also by running real Rust code in an instrumented version of Rust's Miri interpreter that implements the Stacked Borrows semantics. RustBelt is built on top of Iris, a language-agnostic framework for building higher-order concurrent separation logics. This dissertation gives an introduction to Iris and explains how to build complex high-level reasoning principles from a few simple ingredients. In RustBelt, this technique is used to introduce the lifetime logic, a separation logic account of borrowing, which plays a key role in the Rust type system.


Alexander GRESS

Integration of protein three-dimensional structure into the workflow of interpretation of genetic variants
(Advisor: Prof. Olga Kalinina)
Friday, 24.07.2020, 12:00 h


In this thesis I present my work on the integration of protein 3D structure information into the methodological workflow of computational biology. We developed an algorithmic pipeline that is able to map protein sequences to protein structures, providing an additional source of information. We used this pipeline in order to analyze the effects of genetic variants from the perspective of protein 3D structures. We analyzed genetic variants associated with diseases and compared their structural arrangements to that of neutral variants. Additionally, we discussed how structural information can improve methods that aim to predict the consequences of genetic variants.


Computational models of gene expression regulation
(Advisor: Prof. Marcel Schulz, now Frankfurt)
Tuesday, 21.07.2020, 10:00 h


To date, many efforts have been put into elucidating the genetic or epigenetic defects that result in various diseases. Even though much has been achieved through these efforts, there are still many unknowns in gene regulation. In this thesis, I describe our work on several problems in understanding gene regulation. In the first project we proposed a new approach for predicting transcription factor binding across different tissues and cell lines. Next, we focused on a special group of genes, namely bidirectional genes. By incorporating appropriate regularization into our statistical model, we were able to achieve interpretable predictive models linking the histone modification data with gene expression. The two subsequent projects aimed at taking advantage of single-cell sequencing data. By investigating the single-cell expression patterns of bidirectional genes, we proposed a novel hypothetical model describing several transcription states existing in bidirectional genes. Finally, we developed a multi-task learning framework suitable for analyzing single-cell data, from which we were able to derive potential new discoveries by deciphering the complexities of the regulation mechanism.


Nephroblastoma in MRI Data
(Advisor: Prof. Joachim Weickert)
Thursday, 09.07.2020, 16:15 h


The main objective of this work is the mathematical analysis of nephroblastoma in MRI sequences. At the beginning we provide two different datasets for segmentation and classification. Based on the first dataset, we analyze the current clinical practice regarding therapy planning on the basis of annotations of a single radiologist. We can show with our benchmark that this approach is not optimal and that there may be significant differences between human annotators, even between radiologists. In addition, we demonstrate that the approximation of the tumor shape currently used is too coarse-grained and thus prone to errors. We address this problem and develop a method for interactive segmentation that allows an intuitive and accurate annotation of the tumor.
While the first part of this thesis is mainly concerned with the segmentation of Wilms' tumors, the second part deals with the reliability of diagnosis and the planning of the course of therapy. The second data set we compiled allows us to develop a method that dramatically improves the differential diagnosis between nephroblastoma and its precursor lesion nephroblastomatosis. Finally, we can show that even the standard MRI modality for Wilms' tumors is sufficient to estimate the developmental tendencies of nephroblastoma under chemotherapy.

Yongqin XIAN

Learning from Limited Labeled Data – Zero-Shot and Few-Shot Learning
(Advisor: Prof. Bernt Schiele)
Tuesday, 07.07.2020, 16:00 h


Human beings have the remarkable ability to recognize novel visual concepts after observing only few or zero examples of them. Deep learning, however, often requires a large amount of labeled data to achieve a good performance. Labeled instances are expensive, difficult and even infeasible to obtain because the distribution of training instances among labels naturally exhibits a long tail. Therefore, it is of great interest to investigate how to learn efficiently from limited labeled data. This thesis concerns an important subfield of learning from limited labeled data, namely, low-shot learning. The setting assumes the availability of many labeled examples from known classes and the goal is to learn novel classes from only a few (few-shot learning) or zero (zero-shot learning) training examples of them. To this end, we have developed a series of multi-modal learning approaches to facilitate the knowledge transfer from known classes to novel classes for a wide range of visual recognition tasks including image classification, semantic image segmentation and video action recognition. More specifically, this thesis mainly makes the following contributions. First, as there is no agreed upon zero-shot image classification benchmark, we define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets. Second, in order to tackle the labeled data scarcity, we propose feature generation frameworks that synthesize data in the visual feature space for novel classes. Third, we extend zero-shot learning and few-shot learning to the semantic segmentation task and propose a challenging benchmark for it. We show that incorporating semantic information into a semantic segmentation network is effective in segmenting novel classes. Finally, we develop better video representations for the few-shot video classification task and leverage weakly-labeled videos by an efficient retrieval method.
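At its core, attribute-based zero-shot classification assigns an unseen image to the class whose semantic embedding best matches the projected visual feature. Below is a minimal sketch of that idea, with a hypothetical, already-learned projection matrix W; the feature-generation frameworks of the thesis go well beyond this:

```python
def zero_shot_predict(feature, class_embeddings, W):
    """Classify a visual feature against classes never seen in training.

    W is an assumed, pre-trained matrix projecting visual features into the
    semantic space; class_embeddings maps class names to semantic vectors
    (e.g. attributes or word vectors).
    """
    projected = [sum(W[i][j] * feature[j] for j in range(len(feature)))
                 for i in range(len(W))]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # pick the class whose semantic embedding is most similar
    return max(class_embeddings, key=lambda c: dot(projected, class_embeddings[c]))
```

The appeal of the setting is that a novel class only needs a semantic embedding, not a single labeled image.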


Causal Inference on Discrete Data
(Advisor: Prof. Jilles Vreeken)
Monday, 06.07.2020, 14:00 h


Causal inference is one of the fundamental problems in science. A particularly interesting setting is to tell cause from effect between a pair of random variables X and Y given a sample from their joint distribution. For a long period of time, it was thought to be impossible to distinguish between causal structures X → Y and Y → X from observational data as the factorisation of the joint distribution is the same in both directions. In the past decade, researchers have made great strides in this direction by exploiting sophisticated properties of the joint distribution. Most of the existing methods, however, are for continuous real-valued data.
In the first part of the thesis, we consider bivariate causal inference on different discrete data settings—univariate i.i.d., univariate non-i.i.d., and multivariate i.i.d. pairs. To this end, we build upon the principle of algorithmic independence of conditionals (AIC), which states that the marginal distribution of the cause is algorithmically independent of the conditional distribution of the effect given the cause. However, as Kolmogorov complexity is not computable, we approximate the AIC from above through the statistically sound Minimum Description Length (MDL) principle. On univariate i.i.d. and non-i.i.d. pairs, where causal mechanisms are simple, we use refined MDL codes that are minimax optimal w.r.t. a model class. We resort to crude MDL codes on a pair of multivariate i.i.d. variables.
Although useful, saying that there exists a causal relationship from a set of variables towards a certain variable of interest does not always fully satisfy one’s curiosity; for a domain expert it is of particular interest to know those conditions that are most effective, such as the combinations of drugs and their dosages that are most effective towards recovery. Motivated by this problem, in the second part of this thesis, we consider discovering statistically reliable causal rules from observational data. Overall, extensive evaluations show that methods proposed in this thesis are highly accurate, and discover meaningful causations from real-world data.
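To give a flavour of MDL-based direction inference on discrete data, the toy sketch below compares crude two-part code lengths for both directions: the conditional code names a rule (the most frequent effect value per cause value) and then pays for exceptions, so simple functional mechanisms compress better in the causal direction. These are not the refined, minimax-optimal codes of the thesis, only an illustration of the principle:

```python
from collections import Counter
from math import log2

def marginal_bits(values):
    # data bits under the empirical distribution plus a crude parameter cost
    n = len(values)
    counts = Counter(values)
    data = -sum(c * log2(c / n) for c in counts.values())
    param = (len(counts) - 1) / 2 * log2(n)
    return data + param

def conditional_bits(xs, ys):
    # encode Y per value of X: name the rule's output, flag the exceptions,
    # then name each exception among the remaining values
    k_y = len(set(ys))
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(x, []).append(y)
    total = 0.0
    for g in groups.values():
        mode_count = max(Counter(g).values())
        dev = len(g) - mode_count
        total += log2(k_y) if k_y > 1 else 0          # which value the rule picks
        p = dev / len(g)
        if 0 < p < 1:                                  # exception flags
            total += len(g) * -(p * log2(p) + (1 - p) * log2(1 - p))
        if k_y > 1:
            total += dev * log2(k_y - 1) if k_y > 2 else 0
    return total

def infer_direction(xs, ys):
    x_to_y = marginal_bits(xs) + conditional_bits(xs, ys)
    y_to_x = marginal_bits(ys) + conditional_bits(ys, xs)
    if x_to_y < y_to_x:
        return "X->Y"
    if y_to_x < x_to_y:
        return "Y->X"
    return "undecided"
```

For a deterministic mechanism Y = f(X), coding Y given X needs almost no exception bits, so the causal direction yields the shorter description.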

Jan-Hendrik LANGE

Multicut Optimization Guarantees & Geometry of Lifted Multicuts
(Advisor: Prof. Björn Andres, now Dresden)
Friday, 03.07.2020, 16:30 h


The multicut problem, a graph partitioning model purely based on similarities and dissimilarities defined on the edges, has gained attention in recent years in areas such as network analysis and computer vision. Its key feature is that the number of components of the partition need not be specified, but is determined as part of the optimization process given the edge information. In this thesis talk we address two challenges associated with this NP-hard combinatorial optimization problem.
Firstly, the methods that are commonly employed to solve practical multicut problems are fast heuristics, due to the large size of the instances. While the provided solutions are of high quality empirically, they come without any (non-trivial) guarantees, neither in terms of the objective value nor in terms of the variable values. In the first part of the talk we develop efficient algorithms as well as theory that both explain and exploit the structural simplicity of practical instances in order to compute non-trivial guarantees. Our results include lower bounds and parts of optimal solutions that are obtained efficiently as well as theoretical insights into the tightness of linear programming relaxations.
Secondly, multicuts only encode the same-cluster relationship for pairs of nodes that are adjacent in the underlying graph. A generalization of the multicut problem that allows modeling such connectivity relations for arbitrary pairs of nodes is known as the lifted multicut problem. In the second part of the talk, we study the (convex hull of the) feasible set, the lifted multicut polytope. We investigate under which conditions the fundamental inequalities associated with lifted multicuts define facets of the lifted multicut polytope. For the special case of trees, we further draw a connection to pseudo-Boolean optimization and give a tighter description of the lifted multicut polytope, which is exact when the tree is a simple path. For the moral lineage tracing problem, which is a closely related clustering formulation for joint segmentation and tracking of biological cells in images, we present a tighter description of the associated polytope and an improved branch-and-cut method.
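Feasible multicuts have a simple combinatorial characterization that the linear programming relaxations build on: an edge labeling is a multicut if and only if no cycle contains exactly one cut edge, i.e. no cut edge joins two nodes that remain connected by uncut edges. A small sketch of this check using union-find:

```python
def is_multicut(nodes, edges, cut):
    """Check whether the edge set `cut` is a valid multicut of the graph.

    Valid iff every cut edge has its endpoints in different components of the
    subgraph formed by the uncut edges (no cycle with exactly one cut edge).
    """
    parent = {v: v for v in nodes}

    def find(v):
        # path-halving union-find
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        if (u, v) not in cut:       # contract all uncut edges
            parent[find(u)] = find(v)
    return all(find(u) != find(v) for u, v in cut)
```

Labelings violating this condition are exactly those that cutting-plane methods repair by separating cycle inequalities.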


Inferring phylogenetic trees under the general Markov model via a minimum spanning tree backbone
(Advisor: Prof. Thomas Lengauer)
Thursday, 02.07.2020, 10:00 h


Phylogenetic trees are models of the evolutionary history of species. Phylogenetic trees are typically leaf-labeled with all species placed on leaves, a model that may not be appropriate for rapidly evolving pathogens that may share ancestor-descendant relationships. In this thesis we model evolutionary relationships using so-called generally labeled phylogenetic trees that allow sampled species to be at interior nodes of the tree. The widely adopted strategy to infer phylogenetic trees involves computationally intensive optimization of continuous-time hidden Markov models (HMM) that make unrealistic assumptions about gene evolution. We present a method called SEM-GM that performs phylogeny inference under a more realistic discrete-time HMM on trees known as the general Markov model. Additionally, we present a minimum spanning tree (MST) framework called MST-backbone which improves the scalability of SEM-GM. MST-backbone (SEM-GM) scales linearly with the number of leaves. However, the location of the root as inferred under the GM model is not realistic on empirical data, suggesting that the GM model may be overtrained. MST-backbone was inspired by the topological relationship between MSTs and phylogenetic trees. We discovered that the topological relationship does not necessarily hold if there is no unique MST. We therefore propose so-called vertex-order-based MSTs (VMSTs) that guarantee a topological relationship with phylogenetic trees.
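The MST backbone itself can be computed with a textbook algorithm; below is a minimal Prim's sketch over a pairwise distance matrix. The thesis' contribution lies in how the backbone constrains phylogeny inference and in the vertex-order-based variant, neither of which is shown here:

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a full symmetric distance matrix.

    Returns the tree edges in the order they are added, starting from node 0.
    """
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges
```

For m taxa this naive version runs in O(m^3); with a priority queue it drops to O(m^2), which is what makes an MST a cheap scaffold for large instances.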


Andreas ABEL

Automatic Generation of Models of Microarchitecture
(Advisor: Prof. Jan Reineke)
Friday, 12.06.2020, 10:00 h


Detailed microarchitectural models are necessary to predict, explain, or optimize the performance of software running on modern microprocessors. Building such models often requires a significant manual effort, as the documentation provided by hardware manufacturers is typically not precise enough. The goal of this thesis is to develop techniques for generating microarchitectural models automatically.
In the first part, we focus on recent x86 microarchitectures. We implement a tool to accurately evaluate small microbenchmarks using hardware performance counters. We then describe techniques to automatically generate microbenchmarks for measuring the performance of individual instructions and for characterizing cache architectures. We apply our implementations to more than a dozen different microarchitectures. In the second part of the thesis, we study more general techniques to obtain models of hardware components. In particular, we propose the concept of gray-box learning, and we develop a learning algorithm for Mealy machines that exploits prior knowledge about the system to be learned. Finally, we show how this algorithm can be adapted to minimize incompletely specified Mealy machines, a well-known NP-complete problem. Our implementation outperforms existing exact minimization techniques by several orders of magnitude on a number of hard benchmarks; it is even competitive with state-of-the-art heuristic approaches.
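For complete (fully specified) Mealy machines, minimization is tractable with partition refinement, as the small sketch below shows: states are grouped by output behaviour, and blocks are split until no two states in a block disagree on outputs or successor blocks. The thesis' harder setting, minimizing incompletely specified machines, is NP-complete and needs substantially more machinery:

```python
def minimize_mealy(states, inputs, delta, out):
    """Return the coarsest partition of equivalent states.

    delta[s, a] is the successor state, out[s, a] the emitted output.
    Assumes a complete machine (every (state, input) pair defined).
    """
    def signature(s, blocks):
        block_of = {q: i for i, b in enumerate(blocks) for q in b}
        return tuple((out[s, a], block_of[delta[s, a]]) for a in inputs)

    blocks = [set(states)]
    while True:
        new_blocks = []
        for b in blocks:
            sigs = {}
            for s in b:          # split block by output/successor signature
                sigs.setdefault(signature(s, blocks), set()).add(s)
            new_blocks.extend(sigs.values())
        if len(new_blocks) == len(blocks):   # no block was split: fixed point
            return new_blocks
        blocks = new_blocks
```

Each block of the final partition becomes one state of the minimal machine.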


Multi-omics integrative analyses for decision support systems in personalized cancer treatment
(Advisor: Prof. Hans-Peter Lenhof)
Wednesday, 10.06.2020, 09:30 h


Cancer is a heterogeneous class of diseases caused by the complex interplay of (epi-)genetic aberrations, and this heterogeneity makes it difficult to treat. In this thesis, we present a tool suite of novel methods and computational tools for the genetic and molecular characterization of tumors to support decision making in personalized cancer treatments. The first tool included in this tool suite is GeneTrail2, a web service for the statistical analysis of molecular signatures. While GeneTrail2 focuses on the evaluation of aberrant biological pathways and signal cascades, RegulatorTrail identifies those transcriptional regulators that appear to have a high impact on these pathogenic processes. With DrugTargetInspector (DTI), we focus specifically on personalized medicine and translational research. DTI offers comprehensive functions for the evaluation of target molecules, the associated signaling cascades, and the corresponding drugs. Finally, we present ClinOmicsTrailbc, an analysis tool for stratification of breast cancer treatments. ClinOmicsTrailbc supports clinicians with a comprehensive evaluation of on- and off-label drugs as well as immune therapies. In summary, this work presents novel methods and computational tools for the integrative analysis of multi-omics data for translational research and clinical decision support, assisting researchers and clinicians in finding the best possible treatment options in a deeply personalized way.


Towards Efficient Analysis of Markov Automata
(Advisor: Prof. Holger Hermanns)
Thursday, 04.06.2020, 17:00 h


One of the most expressive formalisms to model concurrent systems is Markov automata. They serve as a semantics for many higher-level formalisms, such as generalised stochastic Petri nets and dynamic fault trees. Two of the most challenging problems for Markov automata to date are (i) the optimal time-bounded reachability probability and (ii) the optimal long-run average rewards. In this thesis, we aim at designing efficient and sound techniques to analyse them. We approach the problem of time-bounded reachability from two different angles. First, we study the properties of the optimal solution and exploit this knowledge to construct an efficient algorithm that approximates the optimal values up to a guaranteed error bound. This algorithm is exhaustive, i.e. it computes values for each state of the Markov automaton. This may be a limitation for very large or even infinite Markov automata. To address this issue we design a second algorithm that approximates the optimal solution by only working with part of the total state-space. For the problem of long-run average rewards there exists a polynomial algorithm based on linear programming. Instead of chasing a better theoretical complexity bound we search for a practical solution based on an iterative approach. We design a value iteration algorithm that in our empirical evaluation turns out to scale several orders of magnitude better than the linear programming based approach.
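As a baseline for the iterative approach, plain value iteration for maximal (time-unbounded) reachability probabilities in a Markov decision process looks as follows. Note that stopping when the update difference is small does not by itself give the guaranteed error bounds that sound algorithms, such as those developed in the thesis, provide:

```python
def max_reachability(states, goal, actions, eps=1e-8):
    """Approximate the maximal probability of reaching `goal` from each state.

    actions[s] is a list of distributions, each a dict successor -> probability;
    states with an empty action list are absorbing.
    """
    v = {s: 1.0 if s in goal else 0.0 for s in states}
    while True:
        new = {}
        for s in states:
            if s in goal or not actions[s]:
                new[s] = v[s]
            else:
                # Bellman update: best action w.r.t. current value estimate
                new[s] = max(sum(p * v[t] for t, p in dist.items())
                             for dist in actions[s])
        if max(abs(new[s] - v[s]) for s in states) < eps:
            return new
        v = new
```

On this toy scale the fixed point is reached exactly; on realistic models, interval-based variants are needed to certify the distance to the true optimum.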


Snapshot: Friend or Foe of Data Management — On Optimizing Transaction Processing in Database and Blockchain Systems
(Advisor: Prof. Jens Dittrich)
Wednesday, 03.06.2020, 14:00 h


Data management is a complicated task. Due to a wide range of data management tasks, businesses often need a sophisticated data management infrastructure with a plethora of distinct systems to fulfill their requirements. Moreover, since snapshots are an essential ingredient in solving many data management tasks such as checkpointing and recovery, they have been widely exploited in almost all major data management systems that have appeared in recent years. However, snapshots do not always guarantee exceptional performance. In this dissertation, we will see two different faces of the snapshot, one where it has a tremendous positive impact on the performance and usability of the system, and another where an incorrect usage of the snapshot might have a significant negative impact on the performance of the system. This dissertation consists of three loosely-coupled parts that represent three distinct projects that emerged during this doctoral research.
In the first part, we analyze the importance of utilizing snapshots in relational database systems. We identify the bottlenecks in state-of-the-art snapshotting algorithms, propose two snapshotting techniques, and optimize the multi-version concurrency control for handling hybrid workloads effectively. Our snapshotting algorithm is up to 100x faster and reduces the latency of analytical queries by up to 4x in comparison to the state-of-the-art techniques. In the second part, we recognize strict snapshotting used by Fabric as a critical bottleneck, and replace it with MVCC and propose some additional optimizations to improve the throughput of the permissioned-blockchain system by up to 12x under highly contended workloads. In the last part, we propose ChainifyDB, a platform that transforms an existing database infrastructure into a blockchain infrastructure. ChainifyDB achieves up to 6x higher throughput in comparison to another state-of-the-art permissioned blockchain system. Furthermore, its external concurrency control protocol outperforms the internal concurrency control protocol of PostgreSQL and MySQL, achieving up to 2.6x higher throughput in a blockchain setup in comparison to a standalone isolated setup. We also utilize snapshots in ChainifyDB to support recovery, which has been missing so far from the permissioned-blockchain world.
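The MVCC idea used in the second part can be sketched in a few lines: keep multiple timestamped versions per key, and let each reader see the state as of its snapshot timestamp instead of blocking writers. A toy version store (single-threaded, no aborts or garbage collection, purely illustrative):

```python
class MVCCStore:
    """Each key maps to a list of (commit_timestamp, value) versions.

    A reader sees the newest version committed at or before its snapshot
    timestamp, so long-running reads never block concurrent writes.
    """

    def __init__(self):
        self.versions = {}
        self.clock = 0

    def commit(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock            # commit timestamp, usable as a snapshot

    def read(self, key, snapshot_ts):
        visible = [v for ts, v in self.versions.get(key, [])
                   if ts <= snapshot_ts]
        return visible[-1] if visible else None
```

Replacing strict snapshotting with this kind of per-version visibility check is what lifts the serialization bottleneck under contended workloads.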


Alexander HEWER

Registration and Statistical Analysis of the Tongue Shape during Speech Production
(Advisor: Dr. Ingmar Steiner)
Wednesday, 27.05.2020, 16:15 h


The human tongue plays an important role in everyday life. In particular, it contributes to speech production by interacting with other articulators, like the palate and the teeth. Thus, it is of great interest to understand its shape and motion during this process. Magnetic resonance imaging (MRI) is nowadays regarded as the state-of-the-art modality for analyzing the vocal tract during speech production.
Results of such an analysis can be used to construct statistical tongue models that are able to synthesize realistic tongue motion and shape.
In my defense talk, I will present some highlights of my research that focused on deriving such models from static MRI data. For example, I will show how MRI scans can be denoised for further processing by using a diffusion based approach. We will also learn how image segmentation and template matching can help to estimate the tongue shape information from such data.
Finally, we will see how a statistical tongue model can be utilized to reconstruct the whole three-dimensional tongue shape from sparse motion capture data.
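The diffusion-based denoising mentioned above can be illustrated in one dimension: an explicit time-stepping scheme for the linear heat equation smooths the signal while, with reflecting boundaries, preserving its mean. A minimal sketch (the thesis applies more sophisticated diffusion filters to 3D MRI volumes):

```python
def diffuse(signal, tau=0.2, steps=10):
    """Explicit scheme for 1D linear diffusion with reflecting boundaries.

    tau is the time-step size; tau <= 0.5 is required for stability.
    """
    u = list(signal)
    for _ in range(steps):
        padded = [u[0]] + u + [u[-1]]          # reflecting boundary
        u = [padded[i] + tau * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
             for i in range(1, len(u) + 1)]
    return u
```

Each step replaces a sample by a weighted average of itself and its neighbours, which attenuates high-frequency noise first.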



A type-theory for higher-order amortized analysis
(Advisor: Prof. Deepak Garg)
Wednesday, 15.04.2020, 14:00 h


Amortized analysis is a popular algorithmic technique used to analyze the resource consumption of a program. It basically works by using a ghost state called the potential and showing that the available potential is sufficient to account for the resource consumption of the program.
In this thesis we present λ-amor, a type-theory for amortized resource analysis of higher-order functional programs. Verification in λ-amor is based on the use of a novel type theoretic construct to represent potential at the level of types and then building an affine type-theory around it. With λ-amor we show that type-theoretic amortized analysis can be performed using well understood concepts from sub-structural and modal type theories. Yet, it yields an extremely expressive framework which can be used for resource analysis of higher-order programs, both in a strict and lazy setting. We show embeddings of two very different styles (one based on effects and the other on coeffects) of type-theoretic resource analysis frameworks into λ-amor. We show that λ-amor is sound (using a logical relations model) and complete for cost analysis of PCF programs (using one of the embeddings).
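The potential method that λ-amor internalizes at the type level can be illustrated on a classic example, the binary counter: a single increment may flip many bits, but taking the number of 1-bits as the potential bounds the amortized cost of every increment by 2. A small sketch in plain code (λ-amor carries this reasoning in types instead):

```python
def increment(bits):
    """Increment a binary counter stored least-significant-bit first.

    Returns the updated bit list and the number of bit flips (actual cost).
    """
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # carry: clear trailing ones
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    flips += 1
    return bits, flips

def potential(bits):
    # the ghost state: number of 1-bits in the counter
    return sum(bits)
```

Clearing k ones costs k + 1 flips but also releases k units of potential and stores 1, so actual cost plus potential change is always at most 2.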
Next, we apply ideas from λ-amor to develop another type theory (called λ-cg) for a very different domain — Information Flow Control (IFC). λ-cg uses a similar type-theoretic construct (which λ-amor uses for the potential) to represent the confidentiality label (the ghost state for IFC).
Finally, we abstract away from the specific ghost states (potential and confidentiality label) and describe how to develop a type-theory for a general ghost state with a monoidal structure.


Cross-layer Latency-Aware and -Predictable Data Communication
(Advisor: Prof. Thorsten Herfet)
Tuesday, 07.04.2020, 14:00 h


Cyber-physical systems are making their way into more aspects of everyday life. These systems are increasingly distributed and hence require networked communication to coordinatively fulfil control tasks. Providing this in a robust and resilient manner demands latency-awareness and -predictability at all layers of the communication and computation stack. This thesis addresses how these two latency-related properties can be implemented at the transport layer to serve control applications in ways that traditional approaches such as TCP or RTP cannot. To this end, the Predictably Reliable Real-time Transport (PRRT) protocol is presented, including its unique features (e.g. partially reliable, ordered, in-time delivery, and latency-avoiding congestion control) and unconventional APIs. This protocol has been intensively evaluated using the X-Lap toolkit that has been specifically developed to support protocol designers in improving latency, timing, and energy characteristics of protocols in a cross-layer, intra-host fashion. PRRT effectively circumvents latency-inducing bufferbloat using X-Pace, an implementation of the cross-layer pacing approach presented in this thesis. This is shown using experimental evaluations on real Internet paths. Apart from PRRT, this thesis presents means to make TCP-based transport aware of individual link latencies and increases the predictability of the end-to-end delays using Transparent Transmission Segmentation.
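The intuition behind pacing is simple: rather than sending packets back to back and letting queues absorb the burst, each packet is delayed until the link has had time to carry its predecessor at the target rate. A toy schedule computation (X-Pace itself works cross-layer, inside the stack, and with measured rather than assumed rates):

```python
def send_schedule(packet_sizes, rate_bps, start=0.0):
    """Compute evenly paced send times for a list of packets.

    Each packet is scheduled one serialization time (size / rate) after
    its predecessor, so the sender never exceeds rate_bps on average.
    """
    times, t = [], start
    for size in packet_sizes:
        times.append(t)
        t += size * 8 / rate_bps     # bytes -> bits -> seconds at target rate
    return times
```

On a 12 Mbit/s path, three 1500-byte packets are spaced 1 ms apart instead of leaving as a single burst.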


Kathrin STARK

Mechanising Syntax with Binders in Coq
(Advisor: Prof. Gert Smolka)
Friday, 14.02.2020, 15:00 h, in building E1 7, room 0.01


Mechanising binders in general-purpose proof assistants such as Coq is cumbersome and difficult. Yet binders, substitutions, and instantiation of terms with substitutions are a critical ingredient of many programming languages. Any practicable mechanisation of the meta-theory of the latter hence requires a lean formalisation of the former. We investigate the topic from three angles:
First, we realise formal systems with binders based on both pure and scoped de Bruijn algebras together with basic syntactic rewriting lemmas and automation. We automate this process in a compiler called Autosubst; our final tool supports many-sorted, variadic, and modular syntax. Second, we justify our choice of realisation and mechanise a proof of convergence of the sigma-calculus, a calculus of explicit substitutions that is complete for equality of the de Bruijn algebra corresponding to the lambda-calculus.
Third, to demonstrate the practical usefulness of our approach, we provide concise, transparent, and accessible mechanised proofs for a variety of case studies refined to de Bruijn substitutions.
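For readers unfamiliar with de Bruijn syntax, the sketch below shows the two basic operations, shifting of free variables and capture-avoiding substitution, for the untyped lambda calculus. Autosubst generates and verifies exactly this kind of boilerplate (in Coq, with the sigma-calculus equations) automatically; terms here are encoded as nested tuples for illustration:

```python
def shift(t, d, cutoff=0):
    """Add d to every free variable (index >= cutoff) in term t."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Capture-avoiding substitution of s for variable j in t."""
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        # under a binder, the target index and the substituted term shift up
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """Reduce (lam body) arg to body[0 := arg]."""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)
```

Getting the shift bookkeeping right by hand is exactly the error-prone part that motivates mechanised support.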

Abhimitra MEKA

Live Inverse Rendering
(Advisor: Prof. Christian Theobalt)
Monday, 03.02.2020, 13:45 h, in building E1 4, room 0.19


Seamlessly integrating graphics into real environments requires the estimation of the fundamental light transport components of a scene – geometry, reflectance and illumination. While the theory and practice for estimating environmental geometry and self-localization on mobile devices have progressed rapidly, the task of estimating scene reflectance and illumination from monocular images or videos in real-time (termed live inverse rendering) is still at a nascent stage. The challenge is that of designing efficient representations and models for these appearance parameters and solving the resulting high-dimensional, non-linear and under-constrained system of equations at frame rate.
This thesis talk will comprehensively explore various representations, formulations, algorithms and systems for addressing these challenges in monocular inverse rendering. Starting with simple assumptions on the light transport model – of Lambertian surface reflectance and single light bounce scenario – the talk will expand in several directions by including 3D geometry, multiple light bounces, non-Lambertian isotropic surface reflectance and data-driven reflectance representation to address various facets of this problem. In the first part, the talk will explore the design of fast parallel non-linear GPU optimization schemes for solving both sparse and dense set of equations underlying the inverse rendering problem. In the next part, the current advances in machine learning methods will be applied to design novel formulations and loss-energies to give a significant push to the state-of-the-art of reflectance and illumination estimation. Several real-time applications of illumination-aware scene editing, including relighting and material-cloning, will also be shown to be possible for the first time by the new models proposed in this thesis.


Mathias FLEURY

Formalization of Logical Calculi in Isabelle/HOL
(Advisor: Prof. Christoph Weidenbach)
Tuesday, 28.01.2020, 10:00 h, in building E1 4, room 0.24


I develop a formal framework for propositional satisfiability with the conflict-driven clause learning (CDCL) procedure using the Isabelle/HOL proof assistant. The framework offers a convenient way to prove metatheorems and experiment with variants, including the Davis-Putnam-Logemann-Loveland procedure. The most noteworthy aspects of my work are the inclusion of rules for forget, restart and the refinement approach.
Through a chain of refinements, I connect the abstract CDCL calculus first to a more concrete calculus, then to a SAT solver expressed in a simple functional programming language, and finally to a SAT solver in an imperative language, with total correctness guarantees. The imperative version relies on the two-watched-literal data structure and other optimizations found in modern solvers.
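To make the refinement target concrete: the ancestor of CDCL is the DPLL procedure, unit propagation plus branching, which fits in a few lines. The toy sketch below has no clause learning, restarts or watched literals and is of course not the verified solver of the thesis:

```python
def dpll(clauses, assignment=None):
    """Tiny DPLL SAT solver.

    clauses: list of sets of non-zero ints, negative meaning negated variable.
    Returns a satisfying assignment {var: bool} or None if unsatisfiable.
    """
    assignment = dict(assignment or {})
    changed = True
    while changed:                       # unit propagation to fixpoint
        changed = False
        pending = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                 # clause already satisfied
            rest = {l for l in clause if abs(l) not in assignment}
            if not rest:
                return None              # conflict: clause falsified
            if len(rest) == 1:
                lit = next(iter(rest))   # unit clause forces a value
                assignment[abs(lit)] = lit > 0
                changed = True
            pending.append(clause)
        clauses = pending
    if not clauses:
        return assignment
    # branch on some unassigned variable
    var = next(abs(l) for clause in clauses for l in clause
               if abs(l) not in assignment)
    for value in (True, False):
        assignment[var] = value
        result = dpll(clauses, assignment)
        if result is not None:
            return result
        del assignment[var]
    return None
```

CDCL extends this loop with clause learning on conflicts and non-chronological backtracking, which is where the formalisation effort concentrates.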

Maximilian JOHN

Of Keyboards and Beyond – Optimization in Human-Computer-Interaction
(Advisor: Dr. Andreas Karrenbauer)
Wednesday, 15.01.2020, 16:30 h, in building E1 4, room 0.24


This talk covers optimization challenges in the context of human-computer interaction, specifically in UI design. In particular, we focus on the design of the new French keyboard layout, discuss the special challenges of this national-scale project and our algorithmic contributions. Exploiting the special structure of this design problem, we propose an optimization framework that efficiently computes keyboard layouts and provides very good optimality guarantees in the form of tight lower bounds. The optimized layout, which we showed to be nearly optimal, was the basis of the new French keyboard standard recently published in the National Assembly in Paris. Finally, we introduce a modeling language for mixed integer programs that especially focuses on the challenges and features that appear in participatory optimization problems similar to the French keyboard design process.
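When pairwise (bigram) effects are ignored, keyboard layout optimization reduces to an assignment problem: place frequent letters on low-effort keys. The brute-force sketch below certifies optimality on tiny instances; the real French-layout problem includes pairwise terms and needs the lower-bounding framework described above, and all names and numbers here are made up:

```python
from itertools import permutations

def best_layout(letters, keys, freq, effort):
    """Exhaustively assign letters to keys, minimising sum freq * effort.

    Only feasible for a handful of letters, but the result is a certified
    optimum against which heuristics can be compared.
    """
    best, best_cost = None, float("inf")
    for perm in permutations(keys, len(letters)):
        cost = sum(freq[l] * effort[k] for l, k in zip(letters, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(letters, perm)), cost
    return best, best_cost
```

With n letters this enumerates n! assignments, which is exactly why tight lower bounds, rather than enumeration, are needed at national scale.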


3D Exemplar-based Image Inpainting in Electron Microscopy
(Advisor: Prof. Philipp Slusallek)
Monday, 13.01.2020, 14:00 h, in building D3 4 (DFKI), VisCenter room


In electron microscopy (EM) a common problem is the non-availability of data, which causes artefacts in reconstructions. In this thesis the goal is to generate artificial data where missing in EM by using exemplar-based inpainting (EBI). We implement an accelerated 3D version tailored to applications in EM, which reduces reconstruction times from days to minutes.
We develop intelligent sampling strategies to find optimal data as input for reconstruction methods. Further, we investigate approaches to reduce electron dose and acquisition time. Sparse sampling followed by inpainting is the most promising approach. As common evaluation measures may lead to misinterpretation of results in EM and falsify a subsequent analysis, we propose to use application driven metrics and demonstrate this in a segmentation task.
A further application of our technique is the artificial generation of projections in tilt-based EM. EBI is used to generate missing projections, such that the full angular range is covered. Subsequent reconstructions are significantly enhanced in terms of resolution, which facilitates further analysis of samples.
In conclusion, EBI proves promising when used as an additional data generation step to tackle the non-availability of data in EM, which is evaluated in selected applications. Enhancing adaptive sampling methods and refining EBI, especially considering the mutual influence, promotes higher throughput in EM using less electron dose while not lessening quality.
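The essence of exemplar-based inpainting, filling a missing position with the value whose surrounding patch of known samples best matches a patch elsewhere, can be shown in one dimension. A toy sketch (the thesis implements an accelerated 3D version):

```python
def inpaint_1d(signal, missing, patch=1):
    """Fill missing positions from the best-matching exemplar patch.

    For each missing index, scan all candidate centres, compare the known
    neighbourhoods by sum of squared differences (SSD), and copy the value
    at the centre of the best match.
    """
    out = list(signal)
    for i in sorted(missing):
        best, best_ssd = None, float("inf")
        for j in range(patch, len(out) - patch):
            if j in missing or abs(j - i) <= patch:
                continue                      # skip unknown/overlapping patches
            ssd = sum((out[i + d] - out[j + d]) ** 2
                      for d in range(-patch, patch + 1)
                      if (i + d) not in missing and 0 <= i + d < len(out))
            if ssd < best_ssd:
                best, best_ssd = out[j], ssd
        out[i] = best
    return out
```

Copying whole exemplar values (rather than averaging) is what lets the method reproduce texture instead of blurring it, and it is also why the search dominates the runtime, the part accelerated in this work.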