PhD Thesis Defenses

PhD thesis defenses are a public affair and open to anyone who is interested. Attending them is a great way to get to know the work being done by your peers in the various research groups. On this page you will find a list of upcoming and past defense talks.

Please go here for electronic access to most of the doctoral dissertations from Saarbrücken Computer Science going back to about 1990.


2018

August

Seong Joon OH
Image Manipulation against Learned Models: Privacy and Security Implications
(Advisor: Prof. Bernt Schiele)

Monday, 06.08.2018, 16:00 h, in building E1 4, room 0.24

Abstract:

Machine learning is transforming the world. Its application areas span privacy sensitive and security critical tasks such as human identification and self-driving cars. These applications raise privacy and security related questions that are not fully understood or answered yet: Can automatic person recognisers identify people in photos even when their faces are blurred? How easy is it to find an adversarial input for a self-driving car that makes it drive off the road?
This thesis contributes one of the first steps towards a better understanding of such concerns in the presence of data manipulation. From the point of view of users' privacy, we show the inefficacy of common obfuscation methods like face blurring, and propose more advanced techniques based on head inpainting and adversarial examples. We discuss the duality of model security and user privacy problems and describe the implications of research in one area for the other. Finally, we study the knowledge aspect of the data manipulation problem: the more one knows about the target model, the more effective manipulations one can craft. We propose a game-theoretic framework to systematically represent partial knowledge of the target model and derive privacy and security guarantees. We also demonstrate that one can reveal architectural details and training hyperparameters of a model merely by querying it, leading to even more effective data manipulations against it.
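
As a loose illustration of the adversarial-example phenomenon mentioned above, here is a minimal numpy sketch of the classic gradient-sign idea on a hypothetical linear classifier; it is not the thesis's own attack, and the model, input, and parameters are made up:

```python
# Illustrative sketch only: the fast gradient-sign idea on a toy linear
# classifier, not the specific attacks studied in the thesis.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # hypothetical model parameters
x = rng.normal(size=8)                  # a benign input with label y = +1
y = 1.0

def margin(x):
    return y * (w @ x + b)              # > 0 means correctly classified

# For a linear model the gradient of a hinge-style loss w.r.t. x is -y*w,
# so an attacker lowers the margin by stepping along sign(-y*w).
eps = 0.25
x_adv = x + eps * np.sign(-y * w)       # bounded perturbation (L_inf <= eps)

print(f"clean margin: {margin(x):+.3f}, adversarial margin: {margin(x_adv):+.3f}")
```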

July

Pascal BERRANG
Quantifying and Mitigating Privacy Risks in Biomedical Data
(Advisor: Prof. Michael Backes)

Wednesday, 25.07.2018, 14:00 h, in building E9 1 (CISPA), room 001

Abstract:

The decreasing costs of molecular profiling have fueled the biomedical research community with a plethora of new types of biomedical data, allowing for a breakthrough towards more precise and personalized medicine. However, the release of these intrinsically highly sensitive, interdependent data poses a severe new privacy threat.
In this thesis, we provide means to quantify and protect the privacy of individuals' biomedical data. Besides the genome, we specifically focus on two of the most important epigenetic elements influencing human health: microRNAs and DNA methylation. We quantify the privacy for multiple realistic attack scenarios. Our results underline that the privacy risks inherent to biomedical data have to be taken seriously. Moreover, we present and evaluate solutions to preserve the privacy of individuals. Our mitigation techniques range from the differentially private release of epigenetic data, taking its utility into account, to cryptographic constructions for securely and privately evaluating a random forest on a patient's data.
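
For readers unfamiliar with differentially private release, below is a minimal sketch of the standard Laplace mechanism; the cohort size, epsilon, and the released statistic are hypothetical, and the thesis's utility-aware release schemes for epigenetic data are considerably more involved:

```python
# A minimal sketch of the Laplace mechanism for differentially private
# release of an aggregate statistic; illustrative only.
import numpy as np

def laplace_release(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-differential privacy."""
    scale = sensitivity / epsilon       # noise scale grows with sensitivity
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
mean_methylation = 0.37                 # hypothetical cohort-level average
# Sensitivity of a mean over n individuals with values in [0, 1] is 1/n.
n, epsilon = 500, 0.5
print(laplace_release(mean_methylation, sensitivity=1.0 / n, epsilon=epsilon, rng=rng))
```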

Ruben BECKER
On Flows, Paths, Roots, and Zeros
(Advisor: Prof. Kurt Mehlhorn)

Monday, 23.07.2018, 14:00 h, in building E1 7, room 001

Abstract:

This thesis has two parts. In the first, we give new results for various network flow problems. (1) We present a novel dual ascent algorithm for min-cost flow and show that an implementation of it is very efficient on certain instance classes. (2) We approach the problem of numerical stability of interior point network flow algorithms by giving a path following method that works solely with integer arithmetic and is thus guaranteed to be free of any numerical instabilities. (3) We present a gradient descent approach for the undirected transshipment problem and its special case, the single source shortest path problem (SSSP). For distributed computation models this yields the first SSSP algorithm with a near-optimal number of communication rounds.
The second part deals with fundamental topics from algebraic computation. (1) We give an algorithm for computing the complex roots of a complex polynomial. While achieving a bit complexity comparable to previous best results, our algorithm is simple and promises to be of practical impact. It uses a test for counting the roots of a polynomial in a region that is based on Pellet's theorem. (2) We extend this test to polynomial systems, i.e., we develop an algorithm that can certify the existence of a k-fold zero of a zero-dimensional polynomial system within a given region. For bivariate systems, we show experimentally that this approach yields significant improvements when used as an inclusion predicate in an elimination method.
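
The Pellet-style root-counting test mentioned above can be sketched in a few lines: if one coefficient term dominates the sum of all the others on a circle of radius r, the polynomial has exactly that many roots in the disk. This simplified check is illustrative; the predicate actually used in the thesis has further ingredients:

```python
# A sketch of a Pellet-style test: if |a_k| r^k exceeds the sum of all other
# coefficient terms at radius r, then p has exactly k roots (counted with
# multiplicity) in the disk of radius r.
def pellet_root_count(coeffs, r):
    """coeffs[j] is the coefficient of z^j; returns k if the test succeeds
    for some k, else None (the test is only a sufficient criterion)."""
    terms = [abs(c) * r**j for j, c in enumerate(coeffs)]
    total = sum(terms)
    for k, t in enumerate(terms):
        if t > total - t:           # |a_k| r^k > sum of all other terms
            return k
    return None

# p(z) = z^2 - 2 has roots +-sqrt(2), i.e. about +-1.414
print(pellet_root_count([-2.0, 0.0, 1.0], r=1.0))   # 0 roots in |z| < 1
print(pellet_root_count([-2.0, 0.0, 1.0], r=2.0))   # 2 roots in |z| < 2
```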

Bojan KODRIC
Incentives in Dynamic Markets
(Advisor: Prof. Martin Hoefer, now Uni Frankfurt)

Monday, 23.07.2018, 11:00 h, in building E1 7, room 001

Abstract:

In this thesis, we consider a variety of combinatorial optimization problems within a common theme of uncertainty and selfish behavior.
In our first scenario, the input is collected from selfish players. Here, we study extensions of the so-called smoothness framework for mechanisms, a very useful technique for bounding the inefficiency of equilibria, to the cases of varying mechanism availability and participation of risk-averse players. In both of these cases, our main results are general theorems for the class of (λ,μ)-smooth mechanisms. We show that these mechanisms guarantee at most a (small) constant factor performance loss in the extended settings.
In our second scenario, we do not have access to the exact numerical input. Within this context, we explore combinatorial extensions of the well-known secretary problem under the assumption that the incoming elements only reveal their ordinal position within the set of previously arrived elements. We first observe that many existing algorithms for special matroid structures maintain their competitive ratio in the ordinal model. In contrast, we provide a lower bound for algorithms that are oblivious to the matroid structure. Finally, we design new algorithms that obtain constant competitive ratios for a variety of combinatorial problems.
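
For context, the classical secretary rule illustrates why ordinal information alone can suffice: observe roughly the first n/e arrivals, then accept the first element beating everything seen so far. The baseline sketch below is illustrative and is not one of the thesis's matroid algorithms:

```python
# The classical secretary rule, which uses only ordinal comparisons.
import math, random

def secretary(stream):
    n = len(stream)
    cutoff = int(n / math.e)                     # observation phase length
    best_seen = max(stream[:cutoff], default=None)
    for x in stream[cutoff:]:
        if best_seen is None or x > best_seen:   # only comparisons are used
            return x
    return stream[-1]                            # forced to take the last one

random.seed(1)
trials, wins = 10000, 0
for _ in range(trials):
    stream = random.sample(range(100), 100)      # uniformly random arrival order
    wins += secretary(stream) == 99              # did we pick the maximum?
print(f"picked the best element in {wins / trials:.1%} of trials (about 1/e)")
```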

June

Manuel REINERT
Cryptographic Techniques for Privacy and Access Control in Cloud-Based Applications
(Advisor: Prof. Matteo Maffei, now Uni Wien)

Friday, 29.06.2018, 15:30 h, in building E9 1 (CISPA), room 001

Abstract:

Digitization is one of the key challenges for today's industries and society. It affects more and more business areas as well as user data and, in particular, sensitive information. Due to this sensitivity, it is important to treat personal information as securely and privately as possible, while still enabling cloud-based software to use that information when requested by the user.
In this thesis, we focus on the privacy-preserving outsourcing and sharing of data, the querying of outsourced protected data, and the usage of personal information as an access control mechanism for rating platforms, which should be protected from coercion attacks. In those three categories, we present cryptographic techniques and protocols that push the state of the art. In particular, we first present multi-client oblivious RAM (ORAM), which augments standard ORAM with selective data sharing through access control, confidentiality, and integrity. Second, we investigate recent work on frequency-hiding order-preserving encryption and show that the state of the art lacks rigorous treatment, allowing for simple attacks against the security of the existing scheme. As a remedy, we show how to fix the security definition and prove that the existing scheme, slightly adapted, fulfills it. Finally, we design and develop a coercion-resistant rating platform. Coercion-resistance has mainly been studied in the context of electronic voting, yet it also affects other areas of digital life, such as rating platforms.

Maryam NAZARIEH
Understanding Regulatory Mechanisms Underlying Stem Cells Helps to Identify Cancer Biomarkers
(Advisor: Prof. Volkhard Helms)

Thursday, 28.06.2018, 16:30 h, in building E2 1, room 001

Abstract:

In this thesis, we present novel algorithms to unravel regulatory mechanisms underlying stem cell differentiation and carcinogenesis. Inspired by the tightly interwoven topology of the regulators controlling the pluripotency network in mouse embryonic stem cells, we formulated the problem of identifying a set of master regulatory genes in a regulatory network by solving one of two combinatorial optimization problems, minimum dominating set or minimum connected dominating set, in weakly and strongly connected components. We then applied the developed methods to cancer regulatory networks to identify disease-associated genes and anti-cancer drug targets in breast cancer and hepatocellular carcinoma. As not all nodes in the solutions are critical, we developed a prioritization method that ranks a set of candidate genes related to a certain disease, based on a systematic analysis of the genes that are differentially expressed in tumor and normal conditions. Moreover, we demonstrated that the topological features in regulatory networks surrounding differentially expressed genes are highly consistent across the outputs of several tools for identifying differentially expressed genes. We also compared two randomization strategies for TF-miRNA co-regulatory networks to infer significant network motifs underlying cellular identity, and showed that the edge-type conserving method surpasses the non-conserving method in terms of biological relevance and centrality overlap.
We presented several web servers and software packages that were made publicly available at no cost. The Cytoscape plugin for minimum connected dominating set identifies a set of key regulatory genes in a user-provided regulatory network based on a heuristic approach. The ILP formulations of minimum dominating set and minimum connected dominating set return optimal solutions for the aforementioned problems. The web servers TFmiR and TFmiR2 construct disease-, tissue-, and process-specific networks for the sets of deregulated genes and miRNAs provided by a user. They highlight topological hotspots and offer detection of three- and four-node FFL motifs as a separate web service for mouse and human.
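
As a rough illustration of the dominating-set viewpoint, here is a generic greedy sketch on a hypothetical toy network; the thesis itself uses exact ILP formulations and a heuristic for the connected variant, so this is not the authors' method:

```python
# A greedy sketch for minimum dominating set on a regulatory network:
# repeatedly pick the node covering the most still-uncovered nodes.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        v = max(adj, key=lambda u: len((adj[u] | {u}) & uncovered))
        dom.add(v)
        uncovered -= adj[v] | {v}
    return dom

# Hypothetical toy network: two transcription factors cover all genes.
network = {
    "TF1": {"geneA", "geneB", "TF2"},
    "TF2": {"geneC", "TF1"},
    "geneA": {"TF1"}, "geneB": {"TF1"}, "geneC": {"TF2"},
}
print(greedy_dominating_set(network))   # -> {'TF1', 'TF2'}
```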

Alexander ANDREYCHENKO
Model Reconstruction for Moment-based Stochastic Chemical Kinetics
(Advisor: Prof. Verena Wolf)

Monday, 25.06.2018, 12:00 h, in building E1 7, room 0.01

Abstract:

Based on the theory of stochastic chemical kinetics, the inherent randomness and stochasticity of biochemical reaction networks can be accurately described by discrete-state continuous-time Markov chains, where each chemical reaction corresponds to a state transition of the process. However, the analysis of such processes is computationally expensive and requires sophisticated numerical methods. The main complication is the largeness of the state space, which often makes analysis techniques based on an exploration of the state space infeasible; integrating the moments of the underlying probability distribution has therefore become a very popular alternative. In this thesis we propose an analysis framework in which we integrate a number of moments of the process instead of the state probabilities. This results in a more time-efficient simulation of the time evolution of the process. In order to regain the state probabilities from the moment representation, we combine the moment-based simulation (MM) with a maximum entropy approach: the maximum entropy principle is applied to derive a distribution that best fits a given sequence of moments. We further extend this approach by incorporating conditional moments (MCM), which allows us not only to reconstruct the distribution of the species present in high amounts in the system, but also to approximate the probabilities of species with low molecular counts. For the given distribution reconstruction framework, we investigate the numerical accuracy and stability using case studies from systems biology, compare the two moment approximation methods (MM and MCM), examine whether the framework can be used for the reaction rate estimation problem, and describe possible future applications.
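
The maximum entropy step can be sketched as follows: among all distributions on a (here, statically truncated) grid matching the prescribed moments, the entropy maximiser has exponential-family form, and its multipliers can be found numerically. The grid, target moments, and solver choice below are illustrative assumptions, not the thesis's implementation:

```python
# Sketch: reconstruct a maximum entropy distribution p(x) ~ exp(-l1*x - l2*x^2)
# on a truncated grid from two prescribed moments.
import numpy as np
from scipy.optimize import fsolve

states = np.arange(0, 51)                # truncated range of molecule counts
target = np.array([10.0, 110.0])         # prescribed moments E[X], E[X^2]

def moment_gap(lam):
    expo = -(lam[0] * states + lam[1] * states**2)
    w = np.exp(expo - expo.max())        # numerically stabilised weights
    p = w / w.sum()
    return [p @ states - target[0], p @ states**2 - target[1]]

lam = fsolve(moment_gap, x0=[0.0, 0.0])  # solve for the Lagrange multipliers
expo = -(lam[0] * states + lam[1] * states**2)
p = np.exp(expo - expo.max()); p /= p.sum()
print(f"reconstructed mean = {p @ states:.2f}, E[X^2] = {p @ states**2:.2f}")
```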

Abhishek BICHHAWAT
Practical Dynamic Information Flow Control
(Advisor: Prof. Christian Hammer, now Uni Potsdam)

Friday, 22.06.2018, 16:30 h, in building E2 1, room 007

Abstract:

Computer systems and applications handle a plethora of private user information. They are often plagued by bugs, introduced with both malicious and non-malicious intent, that lead to information leaks. Information flow control has been studied extensively as an approach to mitigate such leaks, but has focused mainly on static analyses. However, some applications are developed using dynamic languages like JavaScript, which render static analysis techniques ineffective. Although there has been growing interest in developing dynamic analysis techniques, they are still not at the helm of information flow security in such settings, the prime reason being that these techniques either over-approximate or are too restrictive in most cases. This thesis focuses on improving the usability of dynamic information flow analyses. It begins by presenting a sound improvement and enhancement of the permissive-upgrade strategy. It then presents a sound and precise dynamic information flow analysis for handling complex features like exceptions in higher-order languages. To address the issue of permissiveness, this thesis develops a sound approach to bound leaks dynamically while allowing information release in accordance with a developer-specified budget. It then applies these techniques to a Web browser and explores a policy specification mechanism for Web applications.
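
To give a flavour of dynamic information flow tracking, below is a toy sketch of label propagation for explicit and implicit flows; real permissive-upgrade monitors, as refined in the thesis, additionally track partially leaked labels and a program-counter label through full language semantics:

```python
# A toy sketch of dynamic information flow tracking with security labels.
L, H = "low", "high"

def join(a, b):
    return H if H in (a, b) else L

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label

def add(x, y):
    # Explicit flow: the result is as secret as the most secret input.
    return Labeled(x.value + y.value, join(x.label, y.label))

def assign_under(pc, x):
    # Implicit flow: a write inside a secret branch raises the label.
    return Labeled(x.value, join(pc, x.label))

secret = Labeled(1, H)
public = Labeled(2, L)
print(add(secret, public).label)       # 'high': taint propagates
print(assign_under(H, public).label)   # 'high': upgraded under a secret pc
```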

Shilpa GARG
Computational Haplotyping: theory and practice
(Advisor: Prof. Tobias Marschall)

Thursday, 21.06.2018, 16:30 h, in building E2 1, room 007

Abstract:

Genomics has paved a new way to comprehend life and its evolution, and also to investigate causes of diseases and their treatment. One of the important problems in genomic analyses is haplotype assembly. Constructing complete and accurate haplotypes plays an essential role in understanding population genetics and how species evolve. In this thesis, we focus on computational approaches to haplotype assembly from third-generation sequencing technologies. This involves huge amounts of sequencing data, and such data contain errors due to the single-molecule sequencing protocols employed. Taking advantage of combinatorial formulations helps to correct for these errors and solve the haplotyping problem. Various computational techniques such as dynamic programming, parameterized algorithms, and graph algorithms are used to solve this problem.
This thesis presents several contributions to the area of haplotyping. First, a novel algorithm based on dynamic programming is proposed that provides approximation guarantees for phasing a single individual. Second, an integrative approach is introduced that combines multiple sequencing datasets to generate complete and accurate haplotypes; its effectiveness is demonstrated on a real human genome. Third, we provide a novel, efficient approach to phasing pedigrees and demonstrate its advantages in comparison to phasing a single individual. Fourth, we present a generalized graph-based framework for performing haplotype-aware de novo assembly. Specifically, this framework consists of a hybrid pipeline for generating accurate and complete haplotypes from data stemming from multiple sequencing technologies, one that provides accurate reads and another that provides long reads.
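
The combinatorial core of haplotype phasing can be illustrated by the minimum error correction (MEC) objective: split the reads into two haplotypes so that as few read positions as possible disagree with their haplotype's consensus. The brute-force sketch below only works for tiny, made-up inputs; the thesis uses dynamic programming and parameterized algorithms instead:

```python
# Brute-force MEC for a handful of reads; '-' marks an uncovered position.
from itertools import product

def mec_cost(reads, assignment):
    cost = 0
    for hap in (0, 1):
        group = [r for r, a in zip(reads, assignment) if a == hap]
        for col in range(len(reads[0])):
            votes = [r[col] for r in group if r[col] != "-"]
            if votes:  # count the minority alleles that must be corrected
                cost += min(votes.count("0"), votes.count("1"))
    return cost

reads = ["0011", "00-1", "1100", "11-0", "010-"]   # hypothetical toy reads
best = min(product((0, 1), repeat=len(reads)), key=lambda a: mec_cost(reads, a))
print(best, mec_cost(reads, best))
```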

Sandy HEYDRICH
A tale of two packing problems: Improved algorithms and tighter bounds for online bin packing and the geometric knapsack problem
(Advisor: Prof. Rob van Stee, now Uni Siegen)

Tuesday, 12.06.2018, 16:15 h, in building E1 4, room 024

Abstract:

In this thesis, we deal with two packing problems: online bin packing and the geometric knapsack problem. In online bin packing, the aim is to pack a given number of items of different sizes into a minimal number of containers. The items need to be packed one by one without knowing future items. For online bin packing in one dimension, we present a new family of algorithms that constitutes the first improvement over the previous best algorithm in almost 15 years. While the algorithmic ideas are intuitive, an elaborate analysis is required to prove the competitive ratio. We also give a lower bound for the competitive ratio of this family of algorithms. For online bin packing in higher dimensions, we discuss lower bounds for the competitive ratio and show that the ideas from the one-dimensional case cannot be easily transferred to obtain better two-dimensional algorithms. In the geometric knapsack problem, one aims to pack a maximum weight subset of given rectangles into one square container. For this problem, we consider offline approximation algorithms. For geometric knapsack with square items, we improve the running time of the best known PTAS and obtain an EPTAS. This shows that the large running times caused by some standard techniques for geometric packing problems are not always necessary and can be improved. Finally, we show how to use resource augmentation to compute optimal solutions in EPTAS-time, thereby improving upon the known PTAS for this case.
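
For orientation, the classical First Fit heuristic below shows what an online bin packing algorithm looks like: each arriving item goes into the first bin with room, and a new bin is opened only if none fits. The thesis's new algorithm family is considerably more subtle than this textbook sketch:

```python
# Classical First Fit for online bin packing; illustrative baseline only.
def first_fit(items, capacity=1.0):
    bins = []
    for size in items:                 # items arrive one by one
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])        # no open bin fits: open a new one
    return bins

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2]))
# -> [[0.5, 0.5], [0.7, 0.2], [0.4, 0.2]]  (3 bins)
```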

Georg NEIS
Compositional Compiler Correctness Via Parametric Simulations
(Advisor: Prof. Derek Dreyer)

Thursday, 07.06.2018, 14:00 h, in building E1 5, room 0.29

Abstract:

Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this thesis, we formalize such a notion of correctness based on parametric simulations, thus developing a novel approach to compositional compiler verification.

May

Andreas TEUCKE
An Approximation and Refinement Approach to First-Order Automated Reasoning
(Advisor: Prof. Christoph Weidenbach)

Wednesday, 16.05.2018, 14:00 h, in building E1 5, room 0.02

Abstract:

With the goal of lifting model-based guidance from the propositional setting to first-order logic, I have developed an approximation theorem proving approach based on counterexample-guided abstraction refinement. A given clause set is transformed into a simplified form where satisfiability is decidable. This approximation extends the signature and preserves unsatisfiability: if the simplified clause set is satisfiable, so is the original clause set. A resolution refutation generated by a decision procedure on the simplified clause set can then either be lifted to a refutation in the original clause set, or it guides a refinement excluding the previously found unliftable refutation. This way the approach is refutationally complete. I have implemented my approach based on the SPASS theorem prover. On certain satisfiable problems, the implementation shows the ability to beat established provers such as SPASS, iProver, and Vampire.

Georg TAMM
Architectures for Ubiquitous 3D on Heterogeneous Computing Platforms
(Advisor: Prof. Philipp Slusallek)

Wednesday, 09.05.2018, 14:00 h, in building D3 4,  VisCenter DFKI

Abstract:

Today, 3D graphics applications span a wide range of domains, including scientific visualization, 3D-enabled web pages, and entertainment. At the same time, the devices and platforms that run and display these applications are more heterogeneous than ever. Display environments range from mobile devices to desktop systems and ultimately to distributed displays. While the capabilities of the client devices may vary considerably, the visualization experiences running on them should be consistent. The field of application should dictate how and on what devices users access the application, not the technical requirements of realizing the 3D output.
The goal of this thesis is to examine the diverse challenges involved in providing consistent and scalable visualization experiences to heterogeneous computing platforms and display setups. We developed a comprehensive set of rendering architectures in the major domains of scientific and medical visualization, web-based 3D applications, and movie virtual production. To provide the required service quality, performance, and scalability for different client devices and displays, our architectures focus on the efficient utilization and combination of the available client, server, and network resources. We present innovative solutions that incorporate methods for hybrid and distributed rendering as well as means to stream rendering results. We establish the browser as a promising platform for accessible and portable visualization services. We collaborated with experts from the medical field and the movie industry to evaluate the usability of our technology in real-world scenarios.

Pascal MONTAG
Integrated Timing Verification for Distributed Embedded Real-Time Systems
(Advisor: Prof. Reinhard Wilhelm)

Tuesday, 08.05.2018, 14:00 h, in building E1 1,  room 407

Abstract:

The verification of timing properties with commercial tools based on static analysis has been available for safety-critical automotive systems for more than a decade. Although they offer many other advantages in addition to safety guarantees, these tools are rarely applied in the automotive industry. This thesis analyses different possible reasons for this lack of application and suggests solutions where possible. Amongst the results of this thesis are technical findings, like the further increase of automation (e.g. for loop-bound detection) or the automatic reduction of WCET overestimation for highly variant systems. With regard to the development process, technical as well as non-technical solutions are presented that target the actual users of the timing verification, which range from software developers to system architects. The central result of this thesis is the integration of existing and newly developed timing verification methods and tools into a holistic development process approach called Timing Oriented Application Development (TOAD).

April

Wenbin LI
From Perception over Anticipation to Manipulation
(Advisor: Dr. Mario Fritz)

Wednesday, 25.04.2018, 16:00 h, in building E1 4,  room 019

Abstract:

From self-driving cars to surgical robots, robotic systems have enjoyed significant growth over the past decade. With the rapid development in robotics, alongside the evolution of related fields such as computer vision and machine learning, integrating perception, anticipation and manipulation is key to the success of future robotic systems. In this talk, we summarize our efforts to explore different ways of achieving such integration in order to extend the capabilities of a robotic system to more challenging real-world tasks. On anticipation and perception, we address the recognition of ongoing activity from videos. In particular, we focus on long-duration and complex activities and hence propose a new challenging dataset to facilitate this work. On manipulation with perception, we propose an efficient framework for programming a robot to use human tools and evaluate it on a Baxter research robot. Finally, combining perception, anticipation and manipulation, we focus on a block stacking task. We first explore how to guide a robot to place a single block into a scene without collapsing the existing structure, and later introduce the target stacking task, where the agent stacks blocks to reproduce a tower shown in an image. We validate our model in both synthetic and real-world settings.

Kerstin NEININGER
Towards the understanding of transcriptional and translational regulatory complexity
(Advisor: Prof. Dr. Volkhard Helms)

Wednesday, 11.04.2018, 10:00 h, in building E2 1,  room 001

Abstract:

Given that every cell contains the same genome, the observed phenotypic diversity can only arise from highly regulated mechanisms beyond the encoded DNA sequence. We investigated several mechanisms of protein biosynthesis and analyzed DNA methylation patterns, alternative translation sites, and genomic mutations. As chromatin states are determined by epigenetic modifications and nucleosome occupancy, we conducted a structural superimposition approach between DNA methyltransferase 1 (DNMT1) and the nucleosome, which suggests that DNA methylation depends on the accessibility of DNMT1 to nucleosome-bound DNA. Considering translation, alternative non-AUG translation initiation has been observed. We developed reliable prediction models to detect these alternative start sites in a given mRNA sequence. Our tool PreTIS provides initiation confidences for all frame-independent non-cognate and AUG starts. Besides these innate factors, specific sequence variations can additionally affect a phenotype. We conducted a genome-wide analysis with millions of mutations and found an accumulation of SNPs next to transcription starts that could relate to a gene-specific regulatory signal. We also report similar conservation of canonical and alternative translation sites, highlighting the relevance of alternative mechanisms. Finally, our tool MutaNET automates variation analysis by scoring the impact of individual mutations on cell function while also integrating a gene regulatory network.

March

Arunav MISHRA
Leveraging Semantic Annotations for Event-focused Search & Summarization
(Advisor: Prof. Dr. Klaus Berberich, now HTW Saar)

Monday, 12.03.2018, 10:00 h, in building E1 4,  room 024

Abstract:

Today, in the Big Data era, overwhelming amounts of textual information across different sources, with a high degree of redundancy, have made it hard for a consumer to look back on past events. A plausible solution is to link semantically similar information contained across the different sources to enforce a structure, thereby providing multiple access paths to relevant information. Keeping this larger goal in view, this work uses Wikipedia and online news articles as two prominent yet disparate information sources to address the following three problems:
• We address a linking problem to connect Wikipedia excerpts to news articles by casting it as an IR task. Our novel approach integrates time, geolocations, and entities with text to identify relevant documents that can be linked to a given excerpt.
• We address an unsupervised extractive multi-document summarization task to generate a fixed-length event digest that facilitates efficient consumption of information contained within a large set of documents. Our novel approach proposes an ILP for global inference across text, time, geolocations, and entities associated with the event.
• To estimate temporal focus of short event descriptions, we present a semi-supervised approach that leverages redundancy within a longitudinal news collection to estimate accurate probabilistic time models.
Extensive experimental evaluations demonstrate the effectiveness and viability of our proposed approaches towards achieving the larger goal.

Patrick ERNST
Biomedical Knowledge Base Construction from Text and its Application in Knowledge-based Systems
(Advisor: Prof. Dr. Gerhard Weikum)

Tuesday, 06.03.2018, 10:00 h, in building E1 4,  room 024

Abstract:

While general-purpose Knowledge Bases (KBs) have gone a long way in compiling comprehensive knowledge about people, events, places, etc., domain-specific KBs, such as on health, are equally important, but are less explored. Consequently, a comprehensive and expressive health KB that spans all aspects of biomedical knowledge is still missing. The main goal of this thesis is to develop principled methods for building such a KB and enabling knowledge-centric applications. We address several challenges and make the following contributions:
– To construct a health KB, we devise a largely automated and scalable pattern-based knowledge extraction method covering a spectrum of different text genres and distilling a wide variety of facts from different biomedical areas.
– To consider higher-arity relations, crucial for proper knowledge representation in advanced domains such as health, we generalize the fact-pattern duality paradigm of previous methods. A key novelty is the integration of facts with missing arguments, achieved by extending our framework to partial patterns and facts and by reasoning over the composability of partial facts.
– To demonstrate the benefits of a health KB, we devise systems for entity-aware search and analytics and for entity-relationship-oriented exploration.
Extensive experiments and use-case studies demonstrate the viability of the proposed approaches.

February

Roland LEIßA
Language Support for Programming High-Performance Code
(Advisor: Prof. Dr. Sebastian Hack)

Thursday, 22.02.2018, 14:00 h, in building E1 7,  room 001

Abstract:

Nowadays, the computing landscape is becoming increasingly heterogeneous and this trend is currently showing no signs of turning around. In particular, hardware becomes more and more specialized and exhibits different forms of parallelism. For performance-critical codes it is indispensable to address hardware-specific peculiarities. Because of the halting problem, however, it is unrealistic to assume that a program implemented in a general-purpose programming language can be fully automatically compiled to such specialized hardware while still delivering peak performance. One form of parallelism is single instruction, multiple data (SIMD).
In this talk, we first introduce Sierra: an extension for C++ that facilitates portable and effective SIMD programming. Second, we present AnyDSL. This framework allows embedding a so-called domain-specific language (DSL) into a host language. On the one hand, a DSL offers the application developer a convenient interface; on the other hand, a DSL can perform domain-specific optimizations and effectively map DSL constructs to various architectures. In order to implement a DSL, one usually has to write or modify a compiler. With AnyDSL, though, the DSL constructs are directly implemented in the host language while a partial evaluator removes any abstractions that are required in the implementation of the DSL.

Mohammad MEHDI MONIRI
Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios
(Advisor: Prof. Dr. Wolfgang Wahlster)

Wednesday, 14.02.2018, 16:00 h, in building D3 2, Reuse seminar room -2.17

Abstract:

This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus-of-attention into multimodal systems to enrich the interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered in the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, like deictic reference in mobile scenarios, for non-intrusive user identification, or exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interactions with a rapidly changing environment from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.

Jonas OBERHAUSER
Justifying The Strong Memory Semantics of Concurrent High-Level Programming Languages for System Programming
(Advisor: Prof. Dr. Wolfgang Paul)

Wednesday, 07.02.2018, 15:00 h, in E1 7, room 001

Abstract:

We justify the strong memory semantics of high-level concurrent languages, such as C11 or Java, when compiled to x86-like machines. Languages such as C11 guarantee that programs that obey a specific software discipline behave as if they were executed on a simple coherent shared memory multi-processor. Real x86 machines, on the other hand, do not provide a coherent shared memory, and compilers add slow synchronizing instructions to provide the illusion of coherency. We show that one software discipline that closely matches software disciplines of languages such as C11 and Java – in a nutshell, that racing memory accesses are annotated as such by the programmer – and one particular way to add relatively few synchronization instructions – in a nutshell, between an annotated store and an annotated load in the same thread – suffice to create this illusion.
The software discipline has to be obeyed by the program in the semantics of the high-level language, therefore allowing the verification effort to take place completely in the strong memory semantics of the high-level language. We treat a machine model with operating system support, and accordingly our theorems can be used to verify interruptible multi-core operating systems, including those where users can run unverified programs that do not obey the software discipline.

David HAFNER
Variational Image Fusion
(Advisor: Prof. Dr. Joachim Weickert)

Monday, 05.02.2018, 10:15 h, in E1 1, room 407

Abstract:

The main goal of this work is the fusion of multiple images to a single composite that offers more information than the individual input images. We approach those fusion tasks within a variational framework.
First, we present iterative schemes that are well-suited for such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and well-parallelisable. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present the following two-step approach: First, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes. Second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields.
Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second order regulariser that adapts to the image structure in an anisotropic way.

January

Linar MIKEEV
Numerical Analysis of Stochastic Biochemical Reaction Networks
(Advisor: Prof. Dr. Verena Wolf)

Wednesday, 31.01.2018, 14:00 h, in E1 7, room 001

Abstract:

Numerical solution of the chemical master equation for stochastic reaction networks typically suffers from the state space explosion problem due to the curse of dimensionality, and from stiffness due to multiple time scales. The dimension of the state space equals the number of molecular species involved in the reaction network, and the size of the system of differential equations equals the number of states in the corresponding continuous-time Markov chain, which is usually enormous and often even infinite. Thus, numerical solution approaches must be able to handle huge, possibly infinite, and stiff systems of differential equations efficiently.
In this thesis, we present efficient techniques for the numerical analysis of biochemical reaction networks. We present an approximate numerical integration approach that combines a dynamical state space truncation procedure with efficient numerical integration schemes for systems of ordinary differential equations, including adaptive step size selection based on local error estimates. We combine our dynamical state space truncation with the method of conditional moments, and present the implementation details and numerical results. We also incorporate ideas from importance sampling simulations into a non-simulative numerical method that approximates transient rare event probabilities based on a dynamical truncation of the state space.
Finally, we present a maximum likelihood method for the estimation of the model parameters given noisy time series measurements of molecular counts. All approaches presented in this thesis are implemented as part of the tool STAR, which allows modelling and simulating biochemical reaction networks. The efficiency and accuracy are demonstrated by numerical examples.
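
As a minimal illustration of the kind of computation involved, the sketch below integrates a statically truncated chemical master equation for a simple birth-death process; the rates and truncation bound are made up, and the thesis's methods truncate the state space dynamically rather than statically:

```python
# Integrating a truncated chemical master equation for a birth-death process
# with production rate k and per-molecule degradation rate g.
import numpy as np
from scipy.integrate import solve_ivp

k, g, N = 10.0, 1.0, 60                       # rates and static truncation bound

def cme_rhs(t, p):
    dp = np.zeros_like(p)
    for n in range(N + 1):
        out = k * (n < N) + g * n             # total outflow rate of state n
        dp[n] -= out * p[n]
        if n > 0:
            dp[n] += k * p[n - 1]             # birth into state n
        if n < N:
            dp[n] += g * (n + 1) * p[n + 1]   # death into state n
    return dp

p0 = np.zeros(N + 1); p0[0] = 1.0             # start with zero molecules
sol = solve_ivp(cme_rhs, (0.0, 5.0), p0, method="BDF")  # stiff-friendly solver
p = sol.y[:, -1]
print(f"mean = {p @ np.arange(N + 1):.2f} (stationary mean is k/g = {k/g:.0f})")
```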

Ezgi CICEK
Relational Cost Analysis
(Advisor: Dr. Deepak Garg)

Monday, 22.01.2018, 16:00 h, in E1 5, room 029

Abstract:

Programming languages research has made great progress towards statically estimating the execution cost of a program. However, when one is interested in how the execution costs of two programs compare to each other (i.e., relational cost analysis), the use of unary techniques does not work well in many cases. In order to support a relational cost analysis, we must ultimately support reasoning about not only the executions of a single program, but also the executions of two programs, taking into account their similarities. This dissertation makes several contributions to the understanding and development of such a relational cost analysis. It shows how:
• Refinement types and effect systems can express functional and relational quantitative properties of pairs of programs, including the difference in execution costs.
• Relational cost analysis can be adapted to reason about dynamic stability, a measure of the update times of incremental programs as their inputs change.
• A sound and complete bidirectional type system can be developed (and implemented) for relational cost analysis.


2017

December

Vahid HASHEMI
Decision Algorithms for Modelling, Optimal Control and Verification of Probabilistic Systems
(Advisor: Prof. Holger Hermanns)

Thursday, 21.12.2017, 11:30 h, in E1 7, room 001

Abstract:

Markov Decision Processes (MDPs) constitute a mathematical framework for modelling systems featuring both probabilistic and nondeterministic behaviour. They are widely used to solve sequential decision making problems and applied successfully in operations research, artificial intelligence, and stochastic control theory, and have been extended conservatively to the model of probabilistic automata in the context of concurrent probabilistic systems. However, when modeling a physical system they suffer from several limitations. One of the most important is the inherent loss of precision that is introduced by measurement errors and discretization artifacts which necessarily happen due to incomplete knowledge about the system behavior. Interval Markov decision processes (IMDPs) generalize classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that reflects the absence of precise knowledge concerning transition probabilities.
In this dissertation, we focus on decision algorithms for modelling and performance evaluation of such probabilistic systems, leveraging techniques from mathematical optimization. From a modelling viewpoint, we address probabilistic bisimulations to reduce the size of the system models while preserving the logical properties they satisfy. We also discuss the key ingredients needed to construct systems by composing them out of smaller components running in parallel. Furthermore, we introduce a novel stochastic model, Uncertain weighted Markov Decision Processes (UwMDPs), to capture quantities like preferences or priorities in a nondeterministic scenario with uncertainties. This model is close to the model of IMDPs but more convenient to work with in the context of compositional minimization. From a performance evaluation perspective, we consider the problem of multi-objective robust strategy synthesis for IMDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of transition probability uncertainty. In this respect, we discuss the computational complexity of the problem and present a value iteration-based decision algorithm to approximate the Pareto set of achievable optimal points. Moreover, we consider the problem of computing maximal/minimal reward-bounded reachability probabilities on UwMDPs, for which we present an efficient algorithm running in pseudo-polynomial time. We demonstrate the practical effectiveness of our proposed approaches by applying them to a collection of real-world case studies using several prototypical tools.
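
To illustrate one ingredient, the inner step of robust value iteration for interval MDPs can be sketched as follows: nature chooses, within the intervals, the successor distribution that minimizes the expected value, which reduces to sorting successors by value and greedily pushing mass to the worst ones. The states and intervals below are hypothetical, and the thesis's multi-objective synthesis goes well beyond this step:

```python
# Worst-case expectation over an interval-valued successor distribution.
def worst_case_expectation(succ, values):
    """succ: list of (state, p_lo, p_hi); assumes the intervals are feasible."""
    order = sorted(succ, key=lambda s: values[s[0]])   # cheapest states first
    prob = {s: lo for s, lo, _ in succ}                # start at lower bounds
    slack = 1.0 - sum(prob.values())                   # mass left to place
    for s, lo, hi in order:
        bump = min(hi - lo, slack)                     # push mass to cheap states
        prob[s] += bump
        slack -= bump
    return sum(p * values[s] for s, p in prob.items())

values = {"s0": 0.0, "s1": 5.0, "s2": 10.0}
succ = [("s0", 0.1, 0.5), ("s1", 0.2, 0.6), ("s2", 0.2, 0.4)]
print(worst_case_expectation(succ, values))   # 3.5: nature favours cheap s0
```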

Anna KHOREVA
Learning to Segment in Images and Videos with Different Forms of Supervision
(Advisor: Prof. Bernt Schiele)

Wednesday, 20.12.2017, 17:00 h, in E1 4, room 0.29

Abstract:

First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate pixel-level approximate groundtruth from these weaker forms of annotations to train a network, which allows to achieve high-quality results comparable to the full supervision quality without any modifications of the network architecture or the training procedure.
Second, we address the problem of the excessive computational and memory costs inherent to solving video segmentation via graphs. We propose approaches to improve the runtime and memory efficiency as well as the output segmentation quality by learning from the available training data the best representation of the graph. In particular, we contribute with learning must-link constraints, the topology and edge weights of the graph as well as enhancing the graph nodes – superpixels – themselves.
Third, we tackle the task of pixel-level object tracking and address the problem of the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture which allows training with static images only and propose an elaborate data synthesis scheme which creates a large number of training examples close to the target domain from the given first frame mask. With the proposed techniques we show that densely annotated consequent video data is not necessary to achieve high-quality temporally coherent video segmentation results.

Daniel PORTA
KomBInoS – Model-Driven Development of Multimodal Dialogue Interfaces for Smart Services
(Advisor: Prof. Wolfgang Wahlster)

Friday, 15.12.2017, 16:30 h, in D3 2, Reuse seminar room

Abstract:

This work is located in the context of the three research areas Smart Service World, Model-Driven Software Development and Intelligent User Interfaces. The aim of the work was to develop a holistic approach for the efficient creation of multimodal dialogue interfaces for Smart Services. To achieve this goal, KomBInoS was developed as a comprehensive framework for the model-driven creation of such user interfaces.
The framework consists of: (1) a metamodel architecture that allows both model-driven development and the composition of multimodal dialogue interfaces for Smart Services, (2) a methodical approach consisting of coordinated model transformations, possible compositional steps and manual development activities, as well as (3) an integrated tool chain as an implementation of the method.
Furthermore, a cloud-enabled runtime environment was developed for mobile use of the user interfaces created this way. As proof of concept, eight sample applications and demonstrators from five research projects are presented. In addition to the Smart Service World, KomBInoS was and is also used in the field of Industry 4.0.

Malte SKORUPPA
Automated Security Analysis of Web Application Technologies
(Advisor: Prof. Michael Backes)

Thursday, 14.12.2017, 16:16 h, in E9 1, room 0.01

Abstract:

The Web today is a complex universe of pages and applications teeming with interactive content that we use for commercial and social purposes. Accordingly, the security of Web applications has become a concern of utmost importance. Devising automated methods to help developers to spot security flaws and thereby make the Web safer is a challenging but vital area of research.
In this thesis, we leverage static analysis methods to automatically discover vulnerabilities in programs written in JavaScript or PHP. While JavaScript is the number one language fueling the client-side logic of virtually every Web application, PHP is the most widespread language on the server side.
In the first part, we use a series of program transformations and information flow analysis to examine the JavaScript Helios voting client. Helios is a state-of-the-art voting system that has been exhaustively analyzed by the security community on a conceptual level and whose implementation is claimed to be highly secure. We expose two severe and previously undiscovered vulnerabilities.
In the second part, we present a framework allowing developers to analyze PHP code for vulnerabilities that can be freely modeled. To do so, we build so-called code property graphs for PHP and import them into a graph database. Vulnerabilities can then be modeled as appropriate database queries. We show how to model common vulnerabilities and evaluate our framework in a large-scale study, spotting hundreds of vulnerabilities.
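
The idea of modelling vulnerabilities as graph queries can be illustrated on a hand-built miniature graph: search for a data-flow path from an attacker-controlled source to a sensitive sink that never passes a sanitizer. The node names below are hypothetical, and the thesis expresses such patterns as queries over real PHP code property graphs stored in a graph database:

```python
# Toy "unsanitized flow" search over a hand-built code property graph.
from collections import deque

# Hypothetical miniature graph: node -> (kind, data-flow successors)
cpg = {
    "$_GET['id']":      ("source",    ["$id"]),
    "$id":              ("var",       ["mysql_query"]),
    "$name":            ("var",       ["htmlspecialchars"]),
    "htmlspecialchars": ("sanitizer", ["echo"]),
    "mysql_query":      ("sink",      []),
    "echo":             ("sink",      []),
}

def unsanitized_flows(graph):
    sources = [n for n, (kind, _) in graph.items() if kind == "source"]
    found = []
    for src in sources:
        queue = deque([(src, [src])])
        while queue:
            node, path = queue.popleft()
            kind, succs = graph[node]
            if kind == "sanitizer":
                continue                      # this path has been cleansed
            if kind == "sink":
                found.append(path)            # tainted data reaches a sink
                continue
            for nxt in succs:
                queue.append((nxt, path + [nxt]))
    return found

print(unsanitized_flows(cpg))   # flags $_GET['id'] -> $id -> mysql_query
```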

Aruni CHOUDHARY
Approximation Algorithms for Vietoris-Rips and Čech Filtrations
(Advisor: Prof. Kurt Mehlhorn)

Monday, 11.12.2017, 11:00 h, in E1 4, room 0.19

Abstract:

Persistent Homology is a prominent tool in the field of Topological Data Analysis that aims at understanding and summarizing the shape of data coming from metric spaces. It computes a topological summary that can identify the important features of the data, while being stable to imperfections in the data. Vietoris-Rips and Čech complexes are popular simplicial complexes that are used by Persistent Homology to compute the topological summary. Unfortunately, their computation becomes prohibitively expensive when dealing with high-dimensional metric spaces, as the size of these simplicial complexes blows up. In my thesis, we attack this problem from several different angles. We present several techniques to approximate the topological summary. Some of our techniques are tailored towards point clouds in Euclidean space and make use of high-dimensional lattice geometry. We present the first polynomial-sized approximation schemes which are independent of the ambient dimension. On the other hand, we also show that polynomial complexity is impossible when a high approximation quality is desired. In addition, we also develop results for metric spaces which have low intrinsic dimension.
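
For readers unfamiliar with these complexes, the sketch below builds a Vietoris-Rips complex at scale r in the naive way: an edge for every pair of points within distance r, and a triangle whenever all three of its edges are present. Its combinatorial blow-up on realistic inputs is exactly the size problem the thesis's approximation schemes attack; the point cloud here is made up:

```python
# Naive Vietoris-Rips construction up to dimension 2 (edges and triangles).
from itertools import combinations
import math

def rips(points, r):
    d = lambda p, q: math.dist(p, q)
    edges = [e for e in combinations(range(len(points)), 2)
             if d(points[e[0]], points[e[1]]) <= r]
    edge_set = set(edges)
    triangles = [t for t in combinations(range(len(points)), 3)
                 if all(pair in edge_set for pair in combinations(t, 2))]
    return edges, triangles

pts = [(0, 0), (1, 0), (0.5, 0.9), (3, 3)]
print(rips(pts, r=1.2))   # the far-away point (3, 3) joins no simplex
```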

Kai HUI
Automatic Methods for Low-Cost Evaluation and Position-Aware Models for Neural Information Retrieval
(Advisor: Prof. Klaus Berberich, now HTW)

Monday, 04.12.2017, 16:15 h, in E1 5, room 029

Abstract:

An information retrieval (IR) system assists people in consuming a huge amount of data, where the evaluation and the construction of such systems are important. However, there exist two difficulties: the overwhelmingly large number of query-document pairs to judge, making IR evaluation a manually laborious task; and the complicated patterns to model due to the non-symmetric, heterogeneous relationships between a query-document pair, where different interaction patterns such as term dependency and proximity have been demonstrated to be useful, yet are non-trivial for a single IR model to encode. In this thesis, we attempt to address both difficulties from the perspectives of IR evaluation and of the retrieval model respectively, by reducing the manual cost with automatic methods, by investigating the use of crowdsourcing in collecting preference judgments, and by proposing novel neural retrieval models. In particular, to address the large number of query-document pairs in IR evaluation, a low-cost selective labelling method is proposed to pick out a small subset of representative documents for manual judgments in favour of the follow-up prediction for the remaining query-document pairs; furthermore, a language-model based cascade measure framework is developed to evaluate the novelty and diversity, utilizing the content of the labeled documents to mitigate incomplete labels. In addition, we also attempt to make the preference judgments practically usable by empirically investigating different properties of the judgments when collected via crowdsourcing; and by proposing a novel judgment mechanism, making a compromise between the judgment quality and the number of judgments. Finally, to model different complicated patterns in a single retrieval model, inspired by the recent advances in deep learning, we develop novel neural IR models to incorporate different patterns of term dependency, query proximity, the density of relevance, and query coverage in a single model. We demonstrate their superior performances through evaluations on different datasets.

Dat Ba NGUYEN
Joint Models for Information and Knowledge Extraction
(Advisor: Prof. Gerhard Weikum)

Friday, 01.12.2017, 14:00 h, in E1 4, room 024

Abstract:

Information and knowledge extraction from natural language text is a key asset for question answering, semantic search, automatic summarization, and other machine reading applications. There are many sub-tasks involved such as named entity recognition, named entity disambiguation, co-reference resolution, relation extraction, event detection, discourse parsing, and others. Solving these tasks is challenging as natural language text is unstructured, noisy, and ambiguous. Key challenges, which focus on identifying and linking named entities, as well as discovering relations between them, include:
– High NERD Quality. Named entity recognition and disambiguation, NERD for short, are performed first in the extraction pipeline. Their results may affect other downstream tasks.
– Coverage vs. Quality of Relation Extraction. Model-based information extraction methods achieve high extraction quality at low coverage, whereas open information extraction methods capture relational phrases between entities. However, the latter suffer from non-canonicalized and noisy output. These limitations need to be overcome.
– On-the-fly Knowledge Acquisition. Real-world applications such as question answering, monitoring content streams, etc. demand on-the-fly knowledge acquisition. Building such an end-to-end system is challenging because it requires high throughput, high extraction quality, and high coverage.
This dissertation addresses the above challenges, developing new methods to advance the state of the art. The first contribution is a robust model for joint inference between entity recognition and disambiguation. The second contribution is a novel model for relation extraction and entity disambiguation on Wikipedia-style text. The third contribution is an end-to-end system for constructing query-driven, on-the-fly knowledge bases.

November

Alexander WIEDER
Blocking Analysis of Spin Locks under Partitioned Fixed-Priority Scheduling
(Advisor: Dr. Björn Brandenburg)

Wednesday, 29.11.2017, 16:00 h, in E1 5, room 029

Abstract:

Partitioned fixed-priority scheduling is widely used in embedded multicore real-time systems. In multicore systems, spin locks are one well-known technique used to synchronize conflicting accesses from different processor cores to shared resources (e.g., data structures). The use of spin locks can cause blocking. Accounting for blocking is a crucial part of static analysis techniques to establish correct temporal behavior.
In this thesis, we consider two aspects inherent to the partitioned fixed-priority scheduling of tasks sharing resources protected by spin locks: (1) the assignment of tasks to processor cores to ensure correct timing, and (2) the blocking analysis required to derive bounds on the blocking.
Heuristics commonly used for task assignment fail to produce assignments that ensure correct timing when shared resources protected by spin locks are used. We present an optimal approach that is guaranteed to find such an assignment if it exists (under the original MSRP analysis). Further, we present a well-performing and inexpensive heuristic.
For most spin lock types, no blocking analysis is available in prior work, which renders them unusable in real-time systems. We present a blocking analysis approach that supports eight different spin lock types and is less pessimistic than prior analyses, where available. Further, we show that allowing nested requests for FIFO- and priority-ordered locks renders the blocking analysis problem NP-hard.

Mainack MONDAL
Understanding & Controlling User Privacy in Social Media via Exposure
(Advisor: Prof. Krishna Gummadi)

Monday, 20.11.2017, 16:00 h, in E1 5, room 029

Abstract:

The recent popularity of Online Social Media sites (OSM) like Facebook and Twitter has led to a renewed discussion about user privacy. In fact, numerous recent news reports and research studies on user privacy stress the OSM users' urgent need for better privacy control mechanisms. Thus, today, a key research question is: how do we provide improved privacy protection to OSM users for their social content? In my thesis, we propose a systematic approach to address this question. We start with the access control model, the dominant privacy model in OSMs today. We show that, while useful, the access control model does not capture many theoretical and practical aspects of privacy. Thus, we propose a new model, which we term the exposure control model. We define exposure for a piece of content as the set of people who actually view the content. We demonstrate that our model is a significant improvement over access control in capturing users' privacy requirements. Next, we investigate the effectiveness of our model in protecting users' privacy in three real-world scenarios: (1) understanding and controlling exposure using social access control lists (SACLs), (2) controlling exposure by limiting large-scale social data aggregators, and (3) understanding and controlling longitudinal exposure in OSMs, i.e., how users control exposure of their old OSM content. We show that, in each of these cases, the exposure control-based approach helps us to design improved privacy control mechanisms.
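
The contrast between access control and exposure can be captured in a few lines: access control fixes who may view a post, while exposure is the set of people who actually view it. The audience and view log below are hypothetical:

```python
# Toy contrast between an access control list and the exposure of a post.
acl = {"alice", "bob", "carol", "dave"}          # who is allowed to view
view_log = ["bob", "carol", "bob", "eve"]        # hypothetical view events

exposure = set(view_log)                         # who actually viewed
print("within policy:", exposure <= acl)         # False: 'eve' slipped in
print("unexercised access:", acl - exposure)     # allowed but never viewed
print("exposure violations:", exposure - acl)    # viewed without permission
```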

Oliver NALBACH
Smarter Screen Space Shading
(Advisor: Prof. Hans-Peter Seidel)

Friday, 10.11.2017, 13:00 h, in E1 4, room 0.19

Abstract:

This dissertation introduces a range of new methods to produce images of virtual scenes in a matter of milliseconds. Imposing as few constraints as possible on the set of scenes that can be handled, e.g., regarding geometric changes over time or lighting conditions, precludes pre-computations and makes this a particularly difficult problem. We first present a general approach, called deep screen space, with which a variety of light transport aspects can be simulated within the aforementioned setting. This approach is then further extended to additionally handle scenes containing participating media like clouds. We also show how to improve the correctness of deep screen space and related algorithms by accounting for mutual visibility of points in a scene. After that, we take a completely different point of view on image generation, using a learning-based approach to approximate a rendering function. We show that neural networks can hallucinate shading effects which otherwise have to be computed using costly analytic computations. Finally, we contribute a holistic framework to deal with phosphorescent materials in computer graphics, covering all aspects from the acquisition of real materials, to easy editing, to image synthesis.

October

Kevin STREIT
Runtime-Adaptive Generalized Task Parallelism
(Advisor: Prof. Andreas Zeller)

Friday, 20.10.2017, 16:00 h, in E9 1, room 0.01

Abstract:

Multi-core systems are ubiquitous nowadays, and the number of cores is ever increasing. While, limited by physical constraints, the computational power of the individual cores has been stagnating or even declining for years, a solution to effectively utilize the computational power that comes with the additional cores is yet to be found. Existing approaches to automatic parallelization are often highly specialized to exploit the parallelism of specific program patterns, and thus parallelize only a small subset of programs. In addition, frequently used invasive runtime systems prohibit the combination of different approaches, which impedes the practicality of automatic parallelization.
In this thesis, we show that specializing to narrowly defined program patterns is not necessary to efficiently parallelize applications coming from different domains. We develop a generalizing approach to parallelization, which, driven by an underlying mathematical optimization problem, is able to make qualified parallelization decisions that take into account the involved runtime overhead. In combination with a specializing, adaptive runtime system, the approach is able to match and even exceed the performance results achieved by specialized approaches.
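
A toy sketch of such an overhead-aware decision (the model and all numbers are our own simplification, not the thesis' optimization problem): a region is only run in parallel if the modeled parallel time, including the runtime overhead of spawning tasks, beats the sequential time.

```python
# Toy overhead-aware parallelization decision (assumptions and numbers ours).
def should_parallelize(work, cores, overhead_per_task, tasks):
    t_seq = work
    t_par = work / min(cores, tasks) + overhead_per_task * tasks
    return t_par < t_seq

print(should_parallelize(work=1e5, cores=8, overhead_per_task=5e4, tasks=8))  # False: overhead dominates
print(should_parallelize(work=1e8, cores=8, overhead_per_task=5e4, tasks=8))  # True: enough work per task
```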

Natalia BOLDYREV
Alignment of Multi-Cultural Knowledge Repositories
(Advisor: Prof. Gerhard Weikum)

Monday, 16.10.2017, 15:00 h,  in E1 4, room 024

Abstract:

The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high affinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite of Web-scale knowledge repositories show high-quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration.

Cosmina CROITORU
Graph Models for Rational Social Interaction
(Advisor: Prof. Kurt Mehlhorn)

Wednesday, 11.10.2017, 11:00 h,  in E1 4, room 019

Abstract:

This thesis covers various areas of Social Reasoning, with Abstract Argumentation as its central hub, viewed as a graph-based modeling of the fundamental issues that arise in defeasible domains.
The first part of this thesis is devoted to combinatorial aspects of Dung's Argumentation Frameworks (AFs) related to computational issues. I prove that any AF can be syntactically augmented into a normal form preserving the semantic properties of the original arguments, by using a cubic-time rewriting technique. I introduce polyhedral labellings for an AF: a polytope whose integral points are exactly the incidence vectors of specific types of extensions. Also, a new notion of acceptability of arguments is considered, deliberative acceptability, and I analyze its computational complexity.
In the second part, I introduce a novel graph-based model for aggregating preferences. By using graph operations to describe properties of the aggregators, axiomatic characterizations of aggregators corresponding to the usual majority rule or the approval & disapproval rule are given. Integrating AF semantics into our model provides a novel qualitative approach to classical social choice: argumentative aggregation of individual preferences. Also, a functional framework abstracting many-to-many two-sided markets is considered: the study of the existence of a Stable Choice Matching in a Bipartite Choice System is reduced to the study of the existence of Stable Common Fixed Points of two choice functions. A generalization of the Gale-Shapley algorithm is designed and, in order to prove its correctness, a new characterization of path-independent choice functions is given.
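
For context, the classical Gale-Shapley algorithm that the thesis generalizes, in a minimal textbook Python form (the example instance is ours):

```python
# Classical Gale-Shapley stable matching (textbook version), shown for context;
# the thesis generalizes this setting to choice functions in bipartite choice systems.
def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dict name -> list of names in preference order."""
    free = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}   # next index each man proposes to
    engaged = {}                               # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to current partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)
    return engaged

print(gale_shapley({"a": ["x", "y"], "b": ["x", "y"]},
                   {"x": ["b", "a"], "y": ["a", "b"]}))  # {'x': 'b', 'y': 'a'}
```
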
Finally, in the third part, we extend Dung's AFs to Opposition Frameworks, reducing the gap between Structured and Abstract Argumentation. A guarded attack calculus is developed, giving proper generalizations of Dung's extensions.

September

Siyu TANG
People Detection and Tracking in Crowded Scenes
(Advisor: Prof. Bernt Schiele)

Friday, 29.09.2017, 15:00 h,  in E1 4, room 024

Abstract:

People are often a central element of visual scenes, particularly in real-world street scenes. Thus it has been a long-standing goal in Computer Vision to develop methods for analyzing humans in visual data. Due to the complexity of real-world scenes, visual understanding of people remains challenging for machine perception. In this defense, I will discuss a number of diverse tasks that aim to enable vision systems to analyze people in realistic images and videos. In particular, I will present several novel models and algorithms which push the boundaries of the state of the art and result in superior performance.

Stephan FRIEDRICHS
Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding
(Advisor: Dr. Christoph Lenzen)

Monday, 11.09.2017, 15:30 h,  in E1 4, room 024

Abstract:

We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets, and analyze which functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle.
The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work.
Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
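
To give a flavour of logical masking, a toy three-valued sketch (our simplification of the idea, with 'M' denoting a metastable value): a gate whose stable inputs already determine its output does not propagate metastability.

```python
# Toy three-valued AND gate: a stable 0 masks a metastable input.
def and3(a, b):
    if a == 0 or b == 0:
        return 0           # output is fixed by the stable input alone
    if a == "M" or b == "M":
        return "M"         # otherwise the upset propagates
    return 1

print(and3(0, "M"))  # 0 -- the metastable upset is contained
print(and3(1, "M"))  # M -- the upset propagates
```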

Amy SIU
Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text
(Advisor: Prof. Gerhard Weikum)

Monday, 04.09.2017, 16:00 h,  in E1 4, room 024

Abstract:

Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors since Web content focuses on consumer health.
The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems — scalability, long noun phrases, and out-of-knowledge base (OOKB) entities.
This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. The method first captures the entities expressed in a corpus as latent representations that comprise in-KB and OOKB entities alike before performing entity disambiguation.
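
As an illustration of dictionary-based entity recognition, a minimal greedy longest-match sketch (our simplification, not the thesis algorithm; the dictionary entries and identifiers are illustrative):

```python
# Greedy longest-match dictionary lookup: at each position, prefer the
# longest dictionary phrase starting there.
dictionary = {"type 2 diabetes": "ID_1", "diabetes": "ID_2", "insulin": "ID_3"}
max_len = max(len(k.split()) for k in dictionary)

def recognize(tokens):
    matches, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n]).lower()
            if phrase in dictionary:
                matches.append((phrase, dictionary[phrase]))
                i += n
                break
        else:
            i += 1
    return matches

print(recognize("Patients with Type 2 diabetes receive insulin".split()))
# [('type 2 diabetes', 'ID_1'), ('insulin', 'ID_3')]
```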

August

Mittul SINGH
Handling long-term dependencies and rare words in low-resource language modelling
(Advisor: Prof. Dietrich Klakow)

Thursday, 31.08.2017, 13:00 h,  in C7 4, first upper floor conference room

Abstract:

For low-resource NLP tasks like Keyword Search and domain adaptation with small amounts of in-domain data, having well-trained language models is essential. Two major challenges faced while building such language models are 1) handling long-term dependencies in the text, and 2) representing words which occur with a low frequency (rare words). To handle long-term dependencies in the text, we compare existing techniques and extend them for domain adaptation with small corpora in Speech Recognition, leading to improvements in word error rates. Further, we formulate a new language model architecture to capture long-term dependencies, helping us understand the extent to which enumeration of dependencies can compare to more popular neural network techniques for capturing such dependencies. Next, to handle rare words in the text, we propose an unsupervised technique of generating rare-word representations, which is more general and requires less mathematical engineering than comparable methods. Finally, embedding these representations in a language model shows significant improvements in rare-word perplexity over other such models.
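
For reference, perplexity, the evaluation measure mentioned above, in its standard form as a small worked example (the probabilities are ours):

```python
import math

# Standard perplexity: PP = exp(-(1/N) * sum(log p(w_i | history))).
probs = [0.2, 0.1, 0.05, 0.4]   # model probabilities of four test words
ppl = math.exp(-sum(math.log(p) for p in probs) / len(probs))
print(round(ppl, 2))  # 7.07 -- lower is better; rare words typically inflate this
```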

Ching Hoo TANG
Logics for Rule-Based Configuration Systems
(Advisor: Prof. Christoph Weidenbach)

Wednesday, 16.08.2017, 10:00 h,  in E1 5, room 002

Abstract:

Rule-based configuration systems, such as DOPLER at Siemens, are being successfully used in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis.
This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems.
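
As a toy illustration of the interplay of the two transition types (our simplification, not PIDL itself): after a user decision, rules fire until a fixpoint is reached.

```python
# Toy rule-based configuration: each rule is (condition, derived component).
rules = [
    (lambda s: "sports_package" in s, "alloy_wheels"),
    (lambda s: "alloy_wheels" in s, "locking_bolts"),
]

def rule_fixpoint(state):
    """Apply rule transitions until no rule derives anything new."""
    changed = True
    while changed:
        changed = False
        for cond, consequence in rules:
            if cond(state) and consequence not in state:
                state.add(consequence)
                changed = True
    return state

state = {"sports_package"}        # user transition: the user picks an option
print(rule_fixpoint(state))       # rule transitions run to fixpoint:
# {'sports_package', 'alloy_wheels', 'locking_bolts'}
```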

Daniel WAND
Superposition: Types and Induction
(Advisor: Prof. Christoph Weidenbach)

Friday, 04.08.2017, 10:00 h,  in E1 5, room 002

Abstract:

Proof assistants are becoming widespread for the formalization of theories both in computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs, which increase confidence in the correctness of complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs.
This thesis describes work on bridging the gap between higher-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes, together with the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction, and conjecture strengthening techniques which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.

July

Michael DIRNBERGER
Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum polycephalum
(Advisor: Prof. Kurt Mehlhorn)

Monday, 31.07.2017, 10:00 h,  in E1 4, room 0.19

Abstract:

In my talk I will discuss the slime mold Physarum polycephalum within the framework of Natural Computing. In particular, I will report on our efforts to better understand the highly dynamic networks of this organism and our attempts to model its distributed nature. In this context, I will present evidence that our promising model mimics key features of P. polycephalum such as flow reversals and synchronisation. Finally, I will discuss the state of our work and suggest directions for future research.

Clemens HAMMACHER
Efficient Runtime Systems for Speculative Parallelization
(Advisor: Prof. Andreas Zeller)

Thursday, 27.07.2017, 11:00 h,  in E9 1, room 0.01

Abstract:

Manual parallelization is time consuming and error-prone. Automatic parallelization, on the other hand, is often unable to extract substantial parallelism. Using speculation, however, most of the parallelism can be exploited even in complex programs. Speculatively parallelized programs always need a runtime system during execution in order to ensure the validity of the speculative assumptions, and to ensure the correct semantics even in the case of misspeculation. These runtime systems should influence the execution time of the parallel program as little as possible. In this thesis, we investigate to what extent state-of-the-art systems which track memory accesses explicitly in software fulfill this requirement. We describe and implement changes which improve their performance substantially. We also design two new systems utilizing virtual memory abstraction to track memory changes implicitly, thus causing less overhead during execution. One of the new systems is integrated into the Linux kernel as a kernel module, providing the best possible performance. Furthermore, it provides stronger soundness guarantees than any state-of-the-art system by also capturing system calls, hence including, for example, file I/O in speculative isolation. In a number of benchmarks we show the performance improvements of our virtual memory based systems over the state of the art. All our extensions and newly developed speculation systems are made available as open source.
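
A minimal sketch of the explicit software tracking such runtime systems perform (our simplification, not the thesis implementation): each speculative task logs its reads and writes, and a commit-time check detects misspeculation.

```python
class SpeculativeTask:
    """Toy software speculation: log reads/writes, validate at commit."""
    def __init__(self, shared):
        self.shared, self.reads, self.writes = shared, {}, {}

    def read(self, addr):
        self.reads.setdefault(addr, self.shared.get(addr))  # log first-read value
        return self.writes.get(addr, self.shared.get(addr))

    def write(self, addr, value):
        self.writes[addr] = value                           # buffered until commit

    def commit(self):
        # misspeculation if any logged read no longer matches shared state
        if any(self.shared.get(a) != v for a, v in self.reads.items()):
            return False                                    # rollback: discard writes
        self.shared.update(self.writes)
        return True
```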

Stefan NÜRNBERGER
Mitigating the imposition of malicious behavior on code
(Advisor: Prof. Michael Backes)

Thursday, 20.07.2017, 11:00 h,  in E9 1, room 0.01

Abstract:

If vulnerabilities in software allow the behaviour of a program to be altered, traditional virus scanners stand no chance, because no malicious program ever enters the system. Even worse, the same, potentially vulnerable software runs on millions of computers, smart phones, and tablets out there. One program flaw allows for malicious behaviour to be imposed on millions of instances at once.
This thesis presents four novel approaches that all mitigate or prevent this type of imposition of malicious behaviour on otherwise benign code. Since attacks have adapted during the writing of this thesis, the countermeasures presented are tailored towards different stages of the still ongoing cat-and-mouse game, and each technique reflects the status quo in defences at its time.

Christian EISENTRAUT
Principles of Markov Automata
(Advisor: Prof. Holger Hermanns)

Monday, 17.07.2017, 17:00 h,  in E1 7, room 001

Abstract:

A substantial number of today's engineering problems revolve around systems that are concurrent and stochastic by nature. Solution approaches attacking these problems often rely on the availability of formal mathematical models that reflect such systems as comprehensively as possible. In this thesis, we develop a compositional model, Markov automata, that integrates concurrency, and probabilistic and timed stochastic behaviour. This is achieved by blending two well-studied constituent models, probabilistic automata and interactive Markov chains. A range of strong and weak bisimilarity notions are introduced and evaluated as candidate relations for a natural behavioural equivalence between systems. Among them, weak distribution bisimilarity stands out as a natural notion that is more oblivious to the probabilistic branching structure than prior notions. We discuss compositionality, axiomatizations, decision and minimization algorithms, state-based characterizations and normal forms for weak distribution bisimilarity. In addition, we detail how Markov automata and weak distribution bisimilarity can be employed as a semantic basis for generalized stochastic Petri nets, in such a way that known shortcomings of their classical semantics are ironed out in their entirety.

Ashutosh MODI
Modeling Common Sense Knowledge via Scripts
(Advisor: Prof. Ivan Titov, now Amsterdam)

Friday, 07.07.2017, 12:15 h,  in C7 4, first floor conference room

Abstract:

It is generally believed that incorporation of common sense knowledge about the world would benefit natural language understanding systems. This dissertation considers modeling common sense knowledge associated with everyday activities using the concept of scripts. Scripts are sequences of events describing stereotypical human activities (i.e. scenarios), for example, cooking pasta, baking a cake, etc. Scripts have been shown to be useful in the area of natural language processing (NLP).
To model script knowledge, three main tasks need to be addressed: event ordering, event paraphrasing and event prediction. This dissertation introduces a novel probabilistic neural network based model for event ordering and event paraphrasing. The model outperforms currently prevalent graph based and count based methods. For addressing the event prediction task, we propose an extension of the event ordering model. We experimentally show the advantages of the event prediction model over existing count based methods.
A further result of this dissertation is the InScript corpus. InScript is a crowdsourced collection of stories, where each story is about one particular script scenario. The corpus is manually annotated with script-specific event types, participant types and coreference information. InScript provides a resource for analyzing the influence of script knowledge. Intuitively, script knowledge should influence language understanding. This dissertation contributes foundational work towards understanding the influence of script knowledge on language comprehension. In multiple experiments, we empirically establish that script knowledge plays a significant role in language comprehension.

Subhabrata MUKHERJEE
Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities
(Advisor: Prof. Gerhard Weikum)

Thursday, 06.07.2017, 16:00 h, in E1 5, Room 029.

Abstract:
One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. We propose probabilistic graphical models that can leverage the joint interplay between multiple factors — like user interactions, community dynamics, and textual content — to automatically assess the credibility of user-contributed online information and the expertise of users and their evolution, with user-interpretable explanations. We devise new models based on Conditional Random Fields that enable applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible news articles in news forums.
Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian Motion to trace the continuous evolution of user expertise and user language models over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This enables applications such as identifying useful product reviews, and detecting fake and anomalous reviews with limited information.

June

Adam GRYCNER
Constructing Lexicons of Relational Phrases
(Advisor: Prof. Gerhard Weikum)

Wednesday, 28.06.2017, 15:00 h, in E1 5 (MPI-SWS), Room 029.

Abstract:
Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.

 

Peng SUN
Bi-(N-) Cluster Editing and Its Biomedical Applications
(Advisor: Prof. Jan Baumbach)

Wednesday, 28.06.2017, 10:00 h, in E1 7, Room 001

Abstract:
The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today's systems biology. The traditional clustering approach, although widely used to divide the data into groups sharing common features, is less powerful in the analysis of heterogeneous data from n different sources (n>2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, by modeling the input as n-partite graphs and solving the clustering problem with various strategies.
In the first part of the thesis, the complexity and the fixed-parameter tractability of an extended bicluster editing model with relaxed constraints, the π-bicluster editing model, are investigated, and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with performance evaluations and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem.
To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from Genome-Wide Association Studies; (b) comparison between n-CluE and eight other biclustering tools on GEO Omnibus microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene and disease networks. The outstanding performance of n-CluE in the real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatics analyses.


Pablo GARRIDO
High-quality Face Capture, Animation and Editing from Monocular Video
(Advisor: Prof. Christian Theobalt)

Monday, 26.06.2017, 09:30 h, in E1 4 (MPI-INF), Room 019

Abstract:
Digitization of virtual faces in movies requires complex capture setups and extensive manual work to produce superb animations and video-realistic editing. This thesis pushes the boundaries of the digitization pipeline by proposing automatic algorithms for high-quality 3D face capture and animation, as well as photo-realistic face editing. These algorithms reconstruct and modify faces in 2D videos recorded in uncontrolled scenarios and illumination. In particular, advances in three main areas offer solutions for the lack of depth and overall uncertainty in video recordings. First, contributions in capture include model-based reconstruction of detailed, dynamic 3D geometry that exploits optical and shading cues, multilayer parametric reconstruction of accurate 3D models in unconstrained setups based on inverse rendering, and regression-based 3D lip shape enhancement from high-quality data. Second, advances in animation are video-based face reenactment based on robust appearance metrics and temporal clustering, performance-driven retargeting of detailed facial models in sync with audio, and the automatic creation of personalized controllable 3D rigs. Finally, advances in plausible photo-realistic editing are dense face albedo capture and mouth interior synthesis using image warping and 3D teeth proxies. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of photo-realistic 3D faces.

Mateusz MALINOWSKI
Towards Holistic Machines: From Visual Recognition To Question Answering About Real-World Images
(Advisor: Dr. Mario Fritz)

Tuesday, 20.06.2017, 17:00 h, in E1 4, Room 024

Abstract:
Computer Vision has undergone major changes over the past five years. With the advances in Deep Learning and the creation of large-volume datasets, progress has been particularly strong on the image classification task. Therefore, we investigate whether the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and on the foundations of a Visual Turing Test, where scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset, together with methods that address the problem: a symbolic-based and a neural-based visual question answering architecture. The symbolic-based method relies on a semantic parser, a database of visual facts, and a Bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases, and has become a standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embrace uncertainty in word meaning as well as various interpretations of the scene and the question.

 

Sebastian HOFFMANN
Competitive Image Compression with Linear PDEs
(Advisor: Prof. Joachim Weickert)

Friday, 16.06.2017, 16:15 h, in E1 7, Room 001

Abstract:
In recent years, image compression methods with partial differential equations (PDEs) have become a promising alternative to standard methods like JPEG or JPEG2000. They exploit the ability of PDEs to recover an image from only a small amount of stored pixels.
To do so, this information is diffused into unknown image regions. Most successful codecs rely on a sophisticated nonlinear diffusion process. In this thesis, we investigate the potential of linear PDEs for image compression. This follows up on recent promising compression results with the Laplace and biharmonic operators. As a central contribution, we thoroughly study the diffusion-based image reconstruction, and establish a connection to the concept of sparsity with the help of discrete Green's functions. Apart from gaining novel theoretical insights, this offers algorithmic benefits: we obtain a fast reconstruction procedure and improved data optimisation methods. The power of linear PDEs is demonstrated by proposing three different compression approaches. Besides a designated codec for depth maps, we present an algorithm for general images. The third approach aims at fast decoding. Experiments show that linear PDEs can compete with nonlinear PDEs in a compression context, and that codecs with linear PDEs are even able to outperform standard approaches in terms of both speed and quality.
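
As a flavour of the linear reconstruction step, a minimal homogeneous-diffusion (Laplace) inpainting sketch (our simplification, with periodic boundaries and Jacobi iterations for brevity): the stored pixels act as fixed data, and the unknown pixels converge to the harmonic interpolant.

```python
import numpy as np

img = np.zeros((64, 64))
known = np.zeros_like(img, dtype=bool)
known[::8, ::8] = True                        # the sparse set of stored pixels
img[known] = np.random.rand(known.sum())

u = img.copy()
for _ in range(2000):                         # Jacobi iterations on the Laplace equation
    avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
    u = np.where(known, img, avg)             # keep stored pixels fixed (Dirichlet data)
```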

 

Viktor MATVIENKO
Image Synthesis Methods for Texture-Based Visualization of Vector Fields
(Advisor: Prof. Jens Krüger, now Uni Duisburg-Essen)

Wed, 07.06.2017, 16:00 h, DFKI building, VisCenter -1.63

Visualization of vector fields is an important field of study with a wide range of scientific and engineering applications, from aircraft design to molecular chemistry. The dense, or texture-based, approach aims to show the complete vector field by means of image synthesis and, therefore, not to miss any important features of the data set.

The presented work describes novel models and techniques for texture-based vector field visualization. Application of the methods of discrete image analysis to popular visualization techniques such as Line Integral Convolution leads to new insights into the structure of these methods and their possible development. Based on a computational model for scoring visualization texture quality, important attributes of an efficient visualization are identified, including contrast and a specific spatial-frequency structure. Then, visualization techniques with increasing applicability are designed, aiming to optimize these quality features. Finally, a discrete probabilistic framework is suggested as a generalization of the presented visualization methods. Based on this theoretical foundation, a multi-scale three-dimensional flow visualization approach is presented. The demonstrated visualization results are evaluated using formal quality metrics combined with formal statistical methods and an expert survey.
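
For illustration, a minimal Line Integral Convolution sketch (our toy field and parameters, using Euler integration and nearest-neighbour lookup): white noise is averaged along short streamlines, so the output texture becomes correlated along the flow.

```python
import numpy as np

H, W, L = 128, 128, 15
noise = np.random.rand(H, W)
y, x = np.mgrid[0:H, 0:W]
vx, vy = np.cos(y / 20.0), np.sin(x / 20.0)    # an assumed toy vector field

out = np.zeros((H, W))
for sign in (+1, -1):                          # trace both flow directions
    px, py = x.astype(float), y.astype(float)
    for _ in range(L):
        i, j = py.astype(int) % H, px.astype(int) % W
        out += noise[i, j]                     # accumulate noise along the streamline
        px += sign * vx[i, j]
        py += sign * vy[i, j]
out /= 2 * L                                   # the LIC texture
```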

 

Piotr DANILEWSKI
ManyDSL – One Host for All Language Needs
(Advisor: Prof. Philipp Slusallek)

Tue, 06.06.2017, 9:15h, DFKI building, VisCenter -1.63

Languages shape thoughts. To evolve our thoughts we need to evolve the languages we speak in. But what tools are there to create and upgrade computer languages? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can “abuse” sufficiently flexible host languages to embed small domain-specific languages within them.

In this work we present an alternative: ManyDSL. The user specifies the syntax of a custom language in the form of an LL(1) grammar. The semantics, however, is specified in a single functional host language, DeepCPS. In DeepCPS it is not necessary to manually create an AST or any other intermediate structure. Instead, the semantic functions simply contain the actual code that the program should execute. DeepCPS features a novel, dynamic approach to staging. The staging allows for partial evaluation and for defining arbitrary phases of the compilation process. Language semantic functions can specify:

  • early compilation steps such as custom variable lookup rules and type checking
  • the executable code that is compiled
  • the domain-specific optimization given as varying function specialization strategies

all this given in a single, uniform functional form.

Moreover, fragments of ManyDSL languages can be put in functions as well and can be shared in the form of a library. The user may define and use many such languages within a single project. With ManyDSL we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.

 

Yuliya GRYADITSKAYA
High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing
(Advisor: Dr.-Ing. habil. Karol Myszkowski)

Fri, 02.06.2017, 10:00h, building E1 4, room 0.19

Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images give way to high dynamic range (HDR) images, which improve the color gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. At the same time, light field scene representation is also gaining importance, further propelled by the recent reappearance of virtual reality. Light field data allows changing a camera position, an aperture or a focal length in post-production. It facilitates object insertions and simplifies visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations do not allow capturing a wide dynamic range with a conventional camera. The “HDR mode”, often found on mobile devices, relies on a technique called image fusion and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. A solution for capturing HDR video on a conventional capturing device will be presented. Further, HDR content visualization tasks often require the input to be in absolute values, whether the target medium is an HDR or a standard/low dynamic range display. To this end, a calibration algorithm is proposed that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as the use of multidimensional scene representations becomes more common, a key challenge is the ability to edit or modify the appearance of the objects in the light field. A multidimensional filtering approach is described in which the specular highlights are filtered in the spatial and angular domains to achieve a desired increase of the material roughness.
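
As an illustration of exposure-bracket merging, a minimal sketch (a standard simple camera model and weighting, not the thesis pipeline): each bracketed shot is linearized by its exposure time and the shots are averaged with weights favouring well-exposed pixels.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of float arrays in [0, 1]; returns a relative radiance estimate."""
    acc, wsum = 0.0, 1e-8
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: low near under/over-exposure
        acc = acc + w * (img / t)           # linearized radiance = value / exposure time
        wsum = wsum + w
    return acc / wsum
```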

 

Martin WEIGEL
Interactive On-Skin Devices for Expressive Touch-based Interactions
(Advisor: Prof. Jürgen Steimle)

Thu, 01.06.2017, 10:00h, building E1 7, room 001

Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques.

We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities.

We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.

May

Fabian MÜLLER
Analyzing DNA Methylation Signatures of Cell Identity
(Advisor: Prof. Thomas Lengauer)

Wed, 31.05.2017, 16:00h, building E1 5, room 029

Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. These mechanisms are governed by epigenetic patterns, such as DNA methylation, which determine DNA packaging, spatial organization of the genome, interactions with regulatory enzymes as well as RNA expression and which ultimately reflect the state of each individual cell. In my dissertation, I have developed computational methods for the interpretation of epigenetic signatures inscribed in DNA methylation and implemented these methods in software packages for the comprehensive analysis of genome-wide datasets. Using these tools, we characterized epigenetic variability among mature blood cells and described DNA methylation dynamics during immune memory formation and during the reprogramming of leukemia cells to a pluripotent cell state. Furthermore, we profiled the methylomes of stem and progenitor cells of the human blood system and dissected changes during cell lineage commitment. We employed machine learning methods to derive statistical models that are capable of accurately inferring cell identity and identify methylation signatures that describe the relatedness between cell types from which we derived a data-driven reconstruction of human hematopoiesis. In summary, the methods and software tools developed in this work provide a framework for interpreting the epigenomic landscape spanned by DNA methylation and for understanding epigenetic regulation involved in cell differentiation and disease.

 

Maksim LAPIN
Image Classification with Limited Training Data and Class Ambiguity
(Advisor: Prof. Bernt Schiele)

Mon, 22.05.2017, 16:00h, building E1 4, room 024

Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications due to scarcity of the data itself or high costs associated with human annotation. Introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this talk, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity where clear distinction between the classes is no longer possible. Many real world images are naturally multilabel yet the existing annotation might only contain a single label. In this talk, we propose and analyze a number of loss functions that allow for a certain tolerance in top k predictions of a learner. Our results indicate consistent improvements over the standard loss functions that put more penalty on the first incorrect prediction compared to the proposed losses.
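
To make the top-k notion concrete, a small worked example (the standard definition; scores and labels are ours): a prediction counts as correct if the true label appears among the k highest-scoring classes.

```python
import numpy as np

scores = np.array([0.1, 0.5, 0.2, 0.15, 0.05])   # classifier scores for 5 classes
true_label = 2
topk = np.argsort(scores)[::-1][:3]              # top-3 predictions: [1, 2, 3]
print(true_label in topk)                        # True: top-3 correct, top-1 is not
```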

 

Anna ROHRBACH
Generating and Grounding of Natural Language Descriptions for Visual Data
(Advisor: Prof. Bernt Schiele)

Mon, 15.05.2017, 16:00h, building E1 4, room 024

Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems.

First, we develop recognition approaches to understand video of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach, which learns from videos and sentences to describe movie clips relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people.

To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.

 

Jan HOSANG
Analysis and Improvement of the Visual Object Detection Pipeline
(Advisor: Prof. Bernt Schiele)

Mon, 02.05.2017, 16:00h, building E1 4, room 024

Visual object detection has seen substantial improvements during the last years due to the possibilities enabled by deep learning. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline: (1) We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. (2) Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. (3) A comparison between human performance and state-of-the-art pedestrian detectors shows that pedestrian detectors still have a long way to go before reaching human level performance and diagnoses failure modes of several top performing detectors. (4) We analyse detection proposals as a preprocessing step for object detectors. By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. (5) We analyse a common postprocessing step in virtually all object detectors: non-maximum suppression (NMS). We present two learnable approaches that overcome the limitations of the most common approach to NMS. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing.
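
For reference, the classic greedy NMS procedure that the thesis analyses, in a minimal Python form (the standard textbook algorithm, not the learnable variants proposed):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the best box, drop overlapping ones, repeat on the rest."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < thresh])
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```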

April

Mikhail KOZHEVNIKOV
Cross-lingual Transfer of Semantic Role Labeling Models
(Advisor: Prof. Ivan Titov, now Amsterdam)

Mon, 24.04.2017, 14:00h, building C7 4, room 1.17

Semantic role labeling is an important step in natural language understanding, offering a formal representation of events and their participants as described in natural language, without requiring the event or the participants to be grounded. Extensive annotation efforts have enabled statistical models capable of accurately analyzing new text in several major languages. Unfortunately, manual annotation for this task is complex and requires training and calibration even for professional linguists, which makes the creation of manually annotated resources for new languages very costly. The process can be facilitated by leveraging existing resources for other languages using techniques such as cross-lingual transfer and annotation projection.

This work addresses the problem of improving semantic role labeling models or creating new ones using cross-lingual transfer methods. We investigate different approaches to adapt to the availability and nature of the existing target-language resources. Specifically, cross-lingual bootstrapping is considered for the case where some annotated data is available for the target language, but using an annotation scheme different from that of the source language. In the more common setup, where no annotated data is available for the target language, we investigate the use of direct model transfer, which requires no sentence-level parallel resources. Finally, for cases where the parallel resources are of limited size or poor quality, we propose a novel method, referred to as feature representation projection, combining the strengths of direct transfer and annotation projection.

March

Kristian SONS
XML3D: Interactive 3D Graphics for the Web
(Advisor: Prof. Philipp Slusallek)

Thu, 30.03.2017, 15:00h, building D3 4, room -1.63 (VisCenter)

The web provides the basis for worldwide distribution of digital information but also established itself as a ubiquitous application platform. The idea of integrating interactive 3D graphics into the web has a long history, but eventually both fields developed largely independent of each other.

In this thesis we develop the XML3D architecture, our approach to integrating a declarative 3D scene description into existing web technologies without sacrificing the required flexibility or tying the description to a specific rendering algorithm. XML3D extends HTML5 and leverages related web technologies including Cascading Style Sheets (CSS) and the Document Object Model (DOM). On top of this seamless integration, we present novel concepts for a lean abstract model that provides the essential functionality to describe 3D scenes, a more general description of dynamic effects based on declarative dataflow graphs, more general yet programmable material descriptions, assets which remain configurable during instantiation from a 3D scene, and a container format for efficient delivery of binary 3D content to the client.

 

Sourav DUTTA
Efficient knowledge management for named entities from text
(Advisor: Prof. Gerhard Weikum)

Thu, 09.03.2017, 16:00h, building E1 4, room 0.24

The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge, in this regard, entails the need for precise identification and disambiguation of entities across documents for the extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories involves not only the quality and accuracy of the stored information, but also storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. This dissertation presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and their disambiguation to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring quality of the extracted information. Finally, an encoding algorithm, using frequent term detection and improved data locality, to represent entities for enhanced knowledge base storage and query performance is presented.

February

Erdal KUZEY
Populating knowledge bases with temporal information
(Advisor: Prof. Gerhard Weikum)

Tue, 28.02.2017, 15:00h, building E1 4, room 0.24

Recent progress in information extraction has enabled the automatic construction of large knowledge bases. Knowledge bases contain millions of entities (e.g. persons, organizations, events, etc.), their semantic classes, and facts about them. Knowledge bases have become a great asset for semantic search, entity linking, deep analytics, and question answering. However, a common limitation of current knowledge bases is the poor coverage of temporal knowledge. First, knowledge bases have so far focused on popular events and ignored long-tail events such as political scandals, local festivals, or protests. Second, they do not cover the textual phrases denoting events and temporal facts at all.

The goal of this dissertation, thus, is to automatically populate knowledge bases with this kind of temporal knowledge. The dissertation makes the following contributions to address the aforementioned limitations. The first contribution is a method for extracting events from news articles. The method reconciles the extracted events into canonicalized representations and organizes them into fine-grained semantic classes. The second contribution is a method for mining the textual phrases denoting the events and facts. The method infers the temporal scopes of these phrases and maps them to a knowledge base. Our experimental evaluations demonstrate that our methods yield high quality output compared to state-of-the-art approaches, and can indeed populate knowledge bases with temporal knowledge.

 

Artem ALEKHIN
Provably Sound Semantics Stack for Multi-Core System Programming with Kernel Threads
(Advisor: Prof. Wolfgang Paul)

Fri, 24.02.2017, 15:15h, building E1 7, room 0.01

Operating systems and hypervisors (e.g., Microsoft Hyper-V) for multi-core processor architectures are usually implemented in high-level stack-based programming languages integrated with mechanisms for multi-threaded task execution as well as access to low-level hardware features. Guaranteeing the functional correctness of such systems is considered a challenge in the field of formal verification, because it requires a sound concurrent computational model comprising the programming semantics and the steps of specific hardware components visible to the system programmer.

In this doctoral thesis we address the aforementioned issue and present a provably sound concurrent model of kernel threads executing C code mixed with assembly, and basic thread operations (creation, switch, exit, etc.) needed for thread management in OS and hypervisor kernels running on industrial-like multi-core machines. To justify the model, we establish a semantics stack at whose bottom the multi-core instruction set architecture, performing arbitrarily interleaved steps, executes the binary code of the guests/processes being virtualized and the compiled source code of the kernel linked with a library implementing the threads. After extending an existing general theory for concurrent system simulation, and by utilising the order reduction of steps under certain safety conditions, we apply sequential compiler correctness to the concurrent mixed programming semantics, connect the adjacent layers of the model stack, show the transfer of the required properties between them, and provide a paper-and-pencil proof of the correctness of the kernel threads implementation with lock-protected operations and an efficient thread switch based on stack substitution.

 

Nils FLEISCHHACKER
Minimal Assumptions in Cryptography
(Advisor: Prof. Dominique Schröder, now Erlangen)

Thu, 09.02.2017, 16:00h, building E9 1, room 0.01

Virtually all of modern cryptography relies on unproven assumptions. This is necessary, as the existence of cryptography would have wide-ranging implications. In particular, it would imply that P ≠ NP, which is not known to be true. Nevertheless, there clearly is a risk that the assumptions may be wrong. Therefore, an important field of research explores which assumptions are strictly necessary under different circumstances.

This thesis contributes to this field by establishing lower bounds on the minimal assumptions in three different areas of cryptography. We establish that assuming the existence of physically unclonable functions (PUFs), a specific kind of secure hardware, is not by itself sufficient to allow for secure two-party computation protocols without trusted setup. Specifically, we prove that unconditionally secure oblivious transfer can in general not be constructed from PUFs. Secondly, we establish a bound on the potential tightness of security proofs for Schnorr signatures. Essentially, no security proof based on virtually arbitrary non-interactive assumptions defined over an abstract group can be significantly tighter than the known, forking-lemma-based proof. Thirdly, for very weak forms of program obfuscation, namely approximate indistinguishability obfuscation, we prove that they cannot exist with statistical security, and computational assumptions are therefore necessary. This result holds unless the polynomial hierarchy collapses or one-way functions do not exist.

 

Sairam GURAJADA
Distributed Querying of Large Labeled Graphs
(Advisor: Prof. Gerhard Weikum)

Mon, 06.02.2017, 15:00h, building E1 5, room 0.29

A "labeled graph", where vertices and edges are labeled, is an important adaptation of a "graph" with many practical applications. An enormous research effort has been invested into the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models by proposing distributed solutions that process them in an efficient and scalable manner.

• Set Reachability: A generalization of the basic notion of graph reachability, set reachability deals with finding all reachable pairs between a given source set and a given target set (see the sketch after this list). We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query.

• Basic Graph Patterns (BGP): BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous executions, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner.

• Generalized Graph Patterns (GGP): These queries combine the semantics of pattern matching (BGP) and navigational queries. We present a distributed solution with a bimodal indexing layout that individually supports efficient and scalable processing of BGP queries and navigational queries.
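
To pin down the set reachability problem itself, a minimal single-machine sketch (the distributed solution in the thesis is far more involved): BFS from each source, reporting pairs whose endpoint lies in the target set.

```python
from collections import deque

def set_reachability(graph, S, T):
    """graph: dict node -> list of successors; returns all (s, t) with t reachable from s."""
    pairs = set()
    for s in S:
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u in T:
                pairs.add((s, u))
            for v in graph.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return pairs

g = {1: [2], 2: [3], 4: [3]}
print(set_reachability(g, S={1, 4}, T={3}))   # {(1, 3), (4, 3)}
```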

We propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed), that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.

January

Javor KALOJANOV
r-Symmetry for Triangle Meshes: Detection and Applications
(Advisor: Prof. Philipp Slusallek)

Tue, 31.01.2017, 12:15h, building D3 2, DFKI VisCenter

In this thesis, we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.

We investigate a certain type of local similarities between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, but can have a completely different global structure. This allows using r-microtiling for inverse modeling of shape variations, and we develop a method for shape decomposition into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. Furthermore, we show how the geometric redundancies encoded in the microtiles can be used for triangle mesh compression.

 

Xiaokun WU
Structure-aware content creation – Detection, retargeting and deformation
(Advisor: Prof. Hans-Peter Seidel)

Fri, 20.01.2017, 14:15h, building E1 4, room 019

Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging connections between the real and virtual world, which prompts a huge demand for massive three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem, and a long-standing challenge in computer graphics and related fields.

In this talk, we propose several techniques for lightening up the content creation process, which have the common theme of being structure-aware, i.e. maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms to make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns.

We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore this space in real time, which significantly simplifies the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application based on the verified sparsity. Finally, we took a step forward in solving the approximate three-dimensional partial symmetry detection problem, using a novel co-occurrence analysis method, which can serve as the foundation for high-level applications.
