PhD Thesis Defenses 2017

PhD thesis defenses are a public affair and open to anyone who is interested. Attending them is a great way to learn about the work being done by your peers in the various research groups. On this page you will find a list of upcoming and past defense talks.

Please go here for electronic access to most of the doctoral dissertations from Saarbrücken Computer Science going back to about 1990.



Decision Algorithms for Modelling, Optimal Control and Verification of Probabilistic Systems
(Advisor: Prof. Holger Hermanns)

Thursday, 21.12.2017, 11:30 h, in E1 7, room 001


Markov Decision Processes (MDPs) constitute a mathematical framework for modelling systems featuring both probabilistic and nondeterministic behaviour. They are widely used to solve sequential decision making problems, are applied successfully in operations research, artificial intelligence, and stochastic control theory, and have been extended conservatively to the model of probabilistic automata in the context of concurrent probabilistic systems. However, when modelling a physical system they suffer from several limitations. One of the most important is the inherent loss of precision introduced by measurement errors and discretization artifacts, which necessarily happen due to incomplete knowledge about the system behaviour. Interval Markov decision processes (IMDPs) generalize classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that reflects the absence of precise knowledge concerning transition probabilities.
In this dissertation, we focus on decision algorithms for modelling and performance evaluation of such probabilistic systems, leveraging techniques from mathematical optimization. From a modelling viewpoint, we address probabilistic bisimulations to reduce the size of the system models while preserving the logical properties they satisfy. We also discuss the key ingredients needed to construct systems by composing them out of smaller components running in parallel. Furthermore, we introduce a novel stochastic model, Uncertain weighted Markov Decision Processes (UwMDPs), to capture quantities like preferences or priorities in a nondeterministic scenario with uncertainties. This model is close to the model of IMDPs but more convenient to work with in the context of compositional minimization. From a performance evaluation perspective, we consider the problem of multi-objective robust strategy synthesis for IMDPs, where the aim is to find a robust strategy that guarantees the satisfaction of multiple properties at the same time in the face of transition probability uncertainty. In this respect, we discuss the computational complexity of the problem and present a value iteration-based decision algorithm to approximate the Pareto set of achievable optimal points. Moreover, we consider the problem of computing maximal/minimal reward-bounded reachability probabilities on UwMDPs, for which we present an efficient algorithm running in pseudo-polynomial time. We demonstrate the practical effectiveness of our proposed approaches by applying them to a collection of real-world case studies using several prototypical tools.
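To give a flavour of such decision algorithms (a simplified sketch, not the thesis' actual implementation), the following performs robust value iteration for maximal reachability probabilities on a tiny IMDP: nature resolves the interval-valued transition probabilities adversarially, and the greedy inner step pours probability mass onto the worst successors first. All names and the example model are assumptions of this sketch.

```python
def worst_case_expectation(intervals, values):
    """Nature minimizes sum_t p[t] * values[t] subject to
    lower[t] <= p[t] <= upper[t] and sum_t p[t] == 1:
    start from the lower bounds, then pour the remaining mass
    onto the successors with the smallest values first."""
    p = {t: lo for t, (lo, hi) in intervals.items()}
    remaining = 1.0 - sum(p.values())
    for t in sorted(intervals, key=lambda t: values[t]):
        lo, hi = intervals[t]
        extra = min(hi - lo, remaining)
        p[t] += extra
        remaining -= extra
    return sum(p[t] * values[t] for t in intervals)

def robust_reachability(states, actions, goal, eps=1e-8):
    """Value iteration for maximal reachability probabilities that are
    robust against any resolution of the intervals.
    actions: state -> list of interval distributions,
    each mapping successor -> (lower, upper)."""
    v = {s: 1.0 if s in goal else 0.0 for s in states}
    while True:
        nv = {}
        for s in states:
            if s in goal:
                nv[s] = 1.0
            elif not actions[s]:
                nv[s] = 0.0
            else:
                nv[s] = max(worst_case_expectation(d, v) for d in actions[s])
        if max(abs(nv[s] - v[s]) for s in states) < eps:
            return nv
        v = nv
```

For instance, a state whose only action reaches the goal with probability in [0.4, 0.6] (and a trap state otherwise) has robust value 0.4, since the adversary pushes as much mass as possible away from the goal.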

Learning to Segment in Images and Videos with Different Forms of Supervision
(Advisor: Prof. Bernt Schiele)

Wednesday, 20.12.2017, 17:00 h, in E1 4, room 0.29


First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate pixel-level approximate groundtruth from these weaker forms of annotation to train a network, which allows us to achieve high-quality results comparable to full-supervision quality without any modifications of the network architecture or the training procedure.
Second, we address the problem of the excessive computational and memory costs inherent to solving video segmentation via graphs. We propose approaches to improve the runtime and memory efficiency as well as the output segmentation quality by learning from the available training data the best representation of the graph. In particular, we contribute with learning must-link constraints, the topology and edge weights of the graph as well as enhancing the graph nodes – superpixels – themselves.
Third, we tackle the task of pixel-level object tracking and address the problem of the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture which allows training with static images only and propose an elaborate data synthesis scheme which creates a large number of training examples close to the target domain from the given first frame mask. With the proposed techniques we show that densely annotated consecutive video data is not necessary to achieve high-quality temporally coherent video segmentation results.

Daniel PORTA
KomBInoS – Modellgetriebene Entwicklung von multimodalen Dialogschnittstellen für Smart Services (Model-Driven Development of Multimodal Dialogue Interfaces for Smart Services)
(Advisor: Prof. Wolfgang Wahlster)

Friday, 15.12.2017, 16:30 h, in D3 2, Reuse seminar room


This work is located in the context of the three research areas Smart Service World, Model-Driven Software Development and Intelligent User Interfaces. The aim of the work was to develop a holistic approach for the efficient creation of multimodal dialogue interfaces for Smart Services. To achieve this goal, KomBInoS was developed as a comprehensive framework for the model-driven creation of such user interfaces.
The framework consists of: (1) a metamodel architecture that allows both model-driven development and the composition of multimodal dialogue interfaces for Smart Services, (2) a methodical approach consisting of coordinated model transformations, possible compositional steps and manual development activities, as well as (3) an integrated tool chain as an implementation of the method.
Furthermore, a cloud-enabled runtime environment was developed for mobile use of the user interfaces created in this way. As proof of concept, eight sample applications and demonstrators from five research projects will be presented. In addition to the Smart Service World, KomBInoS was and is also used in the field of Industry 4.0.

Automated Security Analysis of Web Application Technologies
(Advisor: Prof. Michael Backes)

Thursday, 14.12.2017, 16:16 h, in E9 1, room 0.01


The Web today is a complex universe of pages and applications teeming with interactive content that we use for commercial and social purposes. Accordingly, the security of Web applications has become a concern of utmost importance. Devising automated methods to help developers to spot security flaws and thereby make the Web safer is a challenging but vital area of research.
In this thesis, we leverage static analysis methods to automatically discover vulnerabilities in programs written in JavaScript or PHP. While JavaScript is the number one language fueling the client-side logic of virtually every Web application, PHP is the most widespread language on the server side.
In the first part, we use a series of program transformations and information flow analysis to examine the JavaScript Helios voting client. Helios is a state-of-the-art voting system that has been exhaustively analyzed by the security community on a conceptual level and whose implementation is claimed to be highly secure. We expose two severe and so far undiscovered vulnerabilities.
In the second part, we present a framework allowing developers to analyze PHP code for vulnerabilities that can be freely modeled. To do so, we build so-called code property graphs for PHP and import them into a graph database. Vulnerabilities can then be modeled as appropriate database queries. We show how to model common vulnerabilities and evaluate our framework in a large-scale study, spotting hundreds of vulnerabilities.
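As a stylized stand-in for the idea (the actual framework stores code property graphs in a graph database and expresses vulnerabilities as database queries), the following plain-Python sketch models a tiny data-flow slice of a code property graph and "queries" it for source-to-sink flows that bypass a sanitizer. The node set, labels, and PHP snippets are invented for illustration.

```python
from collections import deque

# A toy code property graph: nodes are statements, edges are data flows.
# Node kinds mark attacker-controlled sources, sensitive sinks, sanitizers.
NODES = {
    1: {"code": "$id = $_GET['id'];",         "kind": "source"},
    2: {"code": "$safe = intval($id);",       "kind": "sanitizer"},
    3: {"code": "$q = \"... WHERE id=$id\";", "kind": "plain"},
    4: {"code": "mysql_query($q);",           "kind": "sink"},
    5: {"code": "mysql_query($safe);",        "kind": "sink"},
}
DATA_FLOW = {1: [2, 3], 2: [5], 3: [4]}

def tainted_sinks(nodes, flow):
    """Return (source, sink) pairs connected by data-flow paths that
    never pass through a sanitizer -- the query pattern behind an
    SQL-injection-style vulnerability."""
    hits = []
    sources = [n for n, a in nodes.items() if a["kind"] == "source"]
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:
            n = queue.popleft()
            if nodes[n]["kind"] == "sink":
                hits.append((src, n))
                continue
            for m in flow.get(n, []):
                if m not in seen and nodes[m]["kind"] != "sanitizer":
                    seen.add(m)
                    queue.append(m)
    return hits
```

Here the flow through `intval` is cut off at the sanitizer, so only the unsanitized path into `mysql_query($q)` is reported.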

Approximation Algorithms for Vietoris-Rips and Cech Filtrations
(Advisor: Prof. Kurt Mehlhorn)

Monday, 11.12.2017, 11:00 h, in E1 4, room 0.19


Persistent Homology is a prominent tool in the field of Topological Data Analysis that aims at understanding and summarizing the shape of data coming from metric spaces. It computes a topological summary that can identify the important features of data while being stable to imperfections in the data. Vietoris-Rips and Cech complexes are popular simplicial complexes that are used by Persistent Homology to compute the topological summary. Unfortunately, computing them becomes prohibitively expensive when dealing with high-dimensional metric spaces, as the size of these simplicial complexes blows up. In this thesis, we attack this problem from several different angles and present several techniques to approximate the topological summary. Some of our techniques are tailored towards point clouds in Euclidean space and make use of high-dimensional lattice geometry. We present the first polynomial-sized approximation schemes which are independent of the ambient dimension. On the other hand, we also show that polynomial complexity is impossible when a high approximation quality is desired. In addition, we develop results for metric spaces which have low intrinsic dimension.
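The size blow-up is easy to quantify: in the worst case (at a large enough scale parameter) every subset of the points becomes a simplex of the Vietoris-Rips complex. A small counting sketch, not taken from the thesis:

```python
from math import comb

def rips_size(n, max_dim):
    """Worst-case number of simplices in a Vietoris-Rips complex on
    n points up to dimension max_dim: every (k+1)-subset of points
    can become a k-simplex once the scale exceeds its diameter."""
    return sum(comb(n, k + 1) for k in range(max_dim + 1))
```

Already for 100 points and simplices up to dimension 2 this is 166,750 simplices, and the count grows exponentially as the dimension cap is raised.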

Automatic Methods for Low-Cost Evaluation and Position-Aware Models for Neural Information Retrieval
(Advisor: Prof. Klaus Berberich, now HTW)

Monday, 04.12.2017, 16:15 h, in E1 5, room 029


An information retrieval (IR) system assists people in consuming huge amounts of data, making both the evaluation and the construction of such systems important. However, there exist two difficulties: the overwhelmingly large number of query-document pairs to judge, which makes IR evaluation a manually laborious task; and the complicated patterns to model due to the non-symmetric, heterogeneous relationships within a query-document pair, where different interaction patterns such as term dependency and proximity have been demonstrated to be useful, yet are non-trivial for a single IR model to encode. In this thesis, we attempt to address both difficulties from the perspectives of IR evaluation and of the retrieval model: by reducing the manual cost with automatic methods, by investigating the use of crowdsourcing in collecting preference judgments, and by proposing novel neural retrieval models.
In particular, to address the large number of query-document pairs in IR evaluation, a low-cost selective labelling method is proposed to pick out a small subset of representative documents for manual judgment, in favour of the follow-up prediction for the remaining query-document pairs; furthermore, a language-model based cascade measure framework is developed to evaluate novelty and diversity, utilizing the content of the labelled documents to mitigate incomplete labels. In addition, we attempt to make preference judgments practically usable by empirically investigating different properties of the judgments when collected via crowdsourcing, and by proposing a novel judgment mechanism that strikes a compromise between the judgment quality and the number of judgments.
Finally, to model the various complicated patterns in a single retrieval model, and inspired by recent advances in deep learning, we develop novel neural IR models that incorporate patterns of term dependency, query proximity, density of relevance, and query coverage in a single model. We demonstrate their superior performance through evaluations on different datasets.

Joint Models for Information and Knowledge Extraction
(Advisor: Prof. Gerhard Weikum)

Friday, 01.12.2017, 14:00 h, in E1 4, room 024


Information and knowledge extraction from natural language text is a key asset for question answering, semantic search, automatic summarization, and other machine reading applications. There are many sub-tasks involved such as named entity recognition, named entity disambiguation, co-reference resolution, relation extraction, event detection, discourse parsing, and others. Solving these tasks is challenging as natural language text is unstructured, noisy, and ambiguous. Key challenges, which focus on identifying and linking named entities, as well as discovering relations between them, include:
– High NERD Quality. Named entity recognition and disambiguation, NERD for short, are performed first in the extraction pipeline, so their results may affect all downstream tasks.
– Coverage vs. Quality of Relation Extraction. Model-based information extraction methods achieve high extraction quality but low coverage, whereas open information extraction methods capture relational phrases between entities but suffer from non-canonicalized and noisy output. These limitations need to be overcome.
– On-the-fly Knowledge Acquisition. Real-world applications such as question answering and the monitoring of content streams demand on-the-fly knowledge acquisition. Building such an end-to-end system is challenging because it requires high throughput, high extraction quality, and high coverage.
This dissertation addresses the above challenges, developing new methods to advance the state of the art. The first contribution is a robust model for joint inference between entity recognition and disambiguation. The second contribution is a novel model for relation extraction and entity disambiguation on Wikipedia-style text. The third contribution is an end-to-end system for constructing query-driven, on-the-fly knowledge bases.


Alexander WIEDER
Blocking Analysis of Spin Locks under Partitioned Fixed-Priority Scheduling
(Advisor: Dr. Björn Brandenburg)

Wednesday, 29.11.2017, 16:00 h, in E1 5, room 029


Partitioned fixed-priority scheduling is widely used in embedded multicore real-time systems. In multicore systems, spin locks are one well-known technique used to synchronize conflicting accesses from different processor cores to shared resources (e.g., data structures). The use of spin locks can cause blocking. Accounting for blocking is a crucial part of static analysis techniques to establish correct temporal behavior.
In this thesis, we consider two aspects inherent to the partitioned fixed-priority scheduling of tasks sharing resources protected by spin locks: (1) the assignment of tasks to processor cores to ensure correct timing, and (2) the blocking analysis required to derive bounds on the blocking.
Heuristics commonly used for task assignment fail to produce assignments that ensure correct timing when shared resources protected by spin locks are used. We present an optimal approach that is guaranteed to find such an assignment if it exists (under the original MSRP analysis). Further, we present a well-performing and inexpensive heuristic.
For most spin lock types, no blocking analysis is available in prior work, which renders them unusable in real-time systems. We present a blocking analysis approach that supports eight different types and is less pessimistic than prior analyses, where available. Further, we show that allowing nested requests for FIFO- and priority-ordered locks renders the blocking analysis problem NP-hard.
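To give the flavour of such a bound (a heavily simplified sketch in the spirit of the classic MSRP-style argument, not the thesis' analysis), consider FIFO-ordered spin locks where each task issues at most one request per resource: while a core spins for a resource, each remote core can be ahead of it in the FIFO queue with at most one request, so the spin time is bounded by one maximal remote critical section per remote core and resource. Task, core, and resource names below are invented.

```python
def fifo_spin_bound(requests, core_of, cs_len):
    """Simplified spin-time bound under FIFO-ordered spin locks,
    assuming one request per task and resource.
    requests: task -> set of resources it locks
    core_of:  task -> core id
    cs_len:   (task, resource) -> critical-section length"""
    bounds = {}
    cores = set(core_of.values())
    for task, resources in requests.items():
        total = 0
        for q in resources:
            # One maximal remote critical section per remote core.
            for c in cores - {core_of[task]}:
                remote = [cs_len[(t, q)] for t in requests
                          if core_of[t] == c and q in requests[t]]
                total += max(remote, default=0)
        bounds[task] = total
    return bounds
```

For example, a task on core 0 sharing one lock with two tasks on core 1 (critical sections of lengths 3 and 5) spins for at most 5 time units, not 3 + 5, which is exactly where FIFO ordering buys precision over naive accounting.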

Mainack MONDAL
Understanding & Controlling User Privacy in Social Media via Exposure
(Advisor: Prof. Krishna Gummadi)

Monday, 20.11.2017, 16:00 h, in E1 5, room 029


The recent popularity of Online Social Media sites (OSMs) like Facebook and Twitter has led to a renewed discussion about user privacy. In fact, numerous recent news reports and research studies on user privacy stress OSM users' urgent need for better privacy control mechanisms. Thus, today, a key research question is: how do we provide improved privacy protection to OSM users for their social content? In this thesis, we propose a systematic approach to address this question. We start with the access control model, the dominant privacy model in OSMs today. We show that, while useful, the access control model does not capture many theoretical and practical aspects of privacy. Thus, we propose a new model, which we term the exposure control model. We define the exposure of a piece of content as the set of people who actually view the content. We demonstrate that our model is a significant improvement over access control in capturing users' privacy requirements. Next, we investigate the effectiveness of our model in protecting users' privacy in three real-world scenarios: (1) understanding and controlling exposure using social access control lists (SACLs), (2) controlling exposure by limiting large-scale social data aggregators, and (3) understanding and controlling longitudinal exposure in OSMs, i.e., how users control the exposure of their old OSM content. We show that, in each of these cases, the exposure control-based approach helps us design improved privacy control mechanisms.

Smarter Screen Space Shading
(Advisor: Prof. Hans-Peter Seidel)

Friday, 10.11.2017, 13:00 h, in E1 4, room 0.19


This dissertation introduces a range of new methods to produce images of virtual scenes in a matter of milliseconds. Imposing as few constraints as possible on the set of scenes that can be handled, e.g., regarding geometric changes over time or lighting conditions, precludes precomputations and makes this a particularly difficult problem. We first present a general approach, called deep screen space, using which a variety of light transport aspects can be simulated within the aforementioned setting. This approach is then further extended to additionally handle scenes containing participating media like clouds. We also show how to improve the correctness of deep screen space and related algorithms by accounting for mutual visibility of points in a scene. After that, we take a completely different point of view on image generation using a learning-based approach to approximate a rendering function. We show that neural networks can hallucinate shading effects which otherwise have to be computed using costly analytic computations. Finally, we contribute a holistic framework to deal with phosphorescent materials in computer graphics, covering all aspects from acquisition of real materials, to easy editing, to image synthesis.


Runtime-Adaptive Generalized Task Parallelism
(Advisor: Prof. Andreas Zeller)

Friday, 20.10.2017, 16:00 h, in E9 1, room 0.01


Multicore systems are ubiquitous nowadays and their number is ever increasing. And while, limited by physical constraints, the computational power of the individual cores has been stagnating or even declining for years, a solution to effectively utilize the computational power that comes with the additional cores is yet to be found. Existing approaches to automatic parallelization are often highly specialized to exploit the parallelism of specific program patterns, and thus parallelize only a small subset of programs. In addition, frequently used invasive runtime systems prohibit the combination of different approaches, which impedes the practicality of automatic parallelization.
In this thesis, we show that specializing to narrowly defined program patterns is not necessary to efficiently parallelize applications from different domains. We develop a generalizing approach to parallelization which, driven by an underlying mathematical optimization problem, is able to make qualified parallelization decisions that take into account the involved runtime overhead. In combination with a specializing, adaptive runtime system, the approach is able to match and even exceed the performance results achieved by specialized approaches.

Alignment of Multi-Cultural Knowledge Repositories
(Advisor: Prof. Gerhard Weikum)

Monday, 16.10.2017, 15:00 h,  in E1 4, room 024


The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high affinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite of Web-scale knowledge repositories show high-quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration.

Graph Models for Rational Social Interaction
(Advisor: Prof. Kurt Mehlhorn)

Wednesday, 11.10.2017, 11:00 h,  in E1 4, room 019


This thesis covers various areas in Social Reasoning having as central hub Abstract Argumentation, viewed as a graph-based modeling of the fundamental issues that arise in defeasible domains.
The first part of this thesis is devoted to combinatorial aspects of Dung's Argumentation Frameworks (AFs) related to computational issues. I prove that any AF can be syntactically augmented into a normal form preserving the semantic properties of the original arguments, using a cubic-time rewriting technique. I introduce polyhedral labellings for AFs: an AF's polyhedral labelling is a polytope whose integral points are exactly the incidence vectors of specific types of extensions. Also, a new notion of acceptability of arguments is considered – deliberative acceptability – and I analyze its computational complexity.
In the second part, I introduce a novel graph-based model for aggregating preferences. By using graph operations to describe properties of the aggregators, axiomatic characterizations are given of aggregators corresponding to the usual majority rule or the approval & disapproval rule. Integrating AF semantics into our model provides a novel qualitative approach to classical social choice: argumentative aggregation of individual preferences. Also, a functional framework abstracting many-to-many two-sided markets is considered: the study of the existence of a Stable Choice Matching in a Bipartite Choice System is reduced to the study of the existence of Stable Common Fixed Points of two choice functions. A generalization of the Gale-Shapley algorithm is designed and, in order to prove its correctness, a new characterization of path-independent choice functions is given.
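For readers unfamiliar with the algorithm being generalized, here is a minimal sketch of the classical Gale-Shapley deferred-acceptance procedure (the textbook one-to-one variant; the thesis works with choice functions and many-to-many markets, which this sketch does not attempt to capture):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Classic one-to-one deferred acceptance: proposers propose in
    preference order; each reviewer tentatively holds the best
    proposal seen so far. Returns a stable matching
    proposer -> reviewer (both sides ranked by preference lists)."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    engaged_to = {}                      # reviewer -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        held = engaged_to.get(r)
        if held is None:
            engaged_to[r] = p            # r accepts tentatively
        elif rank[r][p] < rank[r][held]:
            engaged_to[r] = p            # r trades up, held is free again
            free.append(held)
        else:
            free.append(p)               # r rejects p
    return {p: r for r, p in engaged_to.items()}
```

The generalization in the thesis replaces the reviewers' preference lists with choice functions, which is why a characterization of path-independent choice functions is needed for the correctness proof.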
Finally, in the third part, I extend Dung's AFs to Opposition Frameworks, reducing the gap between Structured and Abstract Argumentation. A guarded attack calculus is developed, giving proper generalizations of Dung's extensions.


People Detection and Tracking in Crowded Scenes
(Advisor: Prof. Bernt Schiele)

Friday, 29.09.2017, 15:00 h,  in E1 4, room 024


People are often a central element of visual scenes, particularly in real-world street scenes. Thus it has been a long-standing goal in Computer Vision to develop methods for analyzing humans in visual data. Due to the complexity of real-world scenes, visual understanding of people remains challenging for machine perception. In this defense, I will discuss a number of diverse tasks that aim to enable vision systems to analyze people in realistic images and videos. In particular, I will present several novel models and algorithms which push the boundaries of the state of the art and achieve superior performance.

Metastability-Containing Circuits, Parallel Distance Problems, and Terrain Guarding
(Advisor: Dr. Christoph Lenzen)

Monday, 11.09.2017, 15:30 h,  in E1 4, room 024


We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets, and analyze which functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle.
The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work.
Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
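To illustrate the algebraic abstraction behind the second topic (a sketch only; the parameterization shown here is a common textbook view, not the thesis' actual formalism), ordinary Moore-Bellman-Ford can be written so that the (min, +) semiring structure is factored out into parameters:

```python
def algebraic_bellman_ford(n, edges, source, plus=min,
                           times=lambda a, b: a + b,
                           zero=float('inf'), one=0):
    """Moore-Bellman-Ford with the semiring factored out:
    'plus' aggregates alternative routes, 'times' extends a route
    by an edge, 'zero'/'one' are the semiring units. With the
    defaults this is ordinary single-source shortest paths."""
    dist = [zero] * n
    dist[source] = one
    for _ in range(n - 1):               # n-1 rounds of relaxation
        for u, v, w in edges:
            dist[v] = plus(dist[v], times(dist[u], w))
    return dist
```

Swapping in other operations (e.g., max/min for bottleneck capacities) reuses the same relaxation skeleton, which is what makes the algebraic view attractive for a parallel treatment of many distance problems at once.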

Knowledge-driven Entity Recognition and Disambiguation in Biomedical Text
(Advisor: Prof. Gerhard Weikum)

Monday, 04.09.2017, 16:00 h,  in E1 4, room 024


Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors since Web content focuses on consumer health.
The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems — scalability, long noun phrases, and out-of-knowledge-base (OOKB) entities.
This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. The method first captures the entities expressed in a corpus as latent representations, comprising in-KB and OOKB entities alike, before performing entity disambiguation.
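As a toy illustration of dictionary-based entity recognition (a greedy longest-match sketch; the thesis' method is engineered for throughput against the full UMLS and is not reproduced here), the entity names and example sentence below are invented:

```python
def build_dictionary(names):
    """Map lowercase entity names to canonical names; real systems
    use tries or carefully tuned hashing for throughput."""
    return {name.lower(): name for name in names}

def longest_match_lookup(tokens, dictionary, max_len=5):
    """Greedy longest-match recognition over a token stream: at each
    position try the longest candidate span first, emit the match as
    (start, end, name), and jump past it."""
    i, matches = 0, []
    while i < len(tokens):
        for j in range(min(len(tokens), i + max_len), i, -1):
            span = " ".join(tokens[i:j]).lower()
            if span in dictionary:
                matches.append((i, j, dictionary[span]))
                i = j
                break
        else:
            i += 1          # no span starting here matched
    return matches
```

Greedy longest match trades some recall (overlapping or nested mentions are lost) for a single linear pass, which is the kind of precision/recall-versus-throughput balance the first contribution is about.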


Mittul SINGH
Handling long-term dependencies and rare words in low-resource language Modelling
(Advisor: Prof. Dietrich Klakow)

Thursday, 31.08.2017, 13:00 h,  in C7 4, first upper floor conference room


For low-resource NLP tasks like Keyword Search, and for domain adaptation with small amounts of in-domain data, having well-trained language models is essential. Two major challenges faced while building language models for such tasks are (1) how to handle long-term dependencies in the text, and (2) how to represent words which occur with low frequency (rare words). To handle long-term dependencies in the text, we compare existing techniques and extend them for domain adaptation over small corpora in Speech Recognition, leading to improvements in word error rates. Further, we formulate a new language model architecture to capture long-term dependencies, helping us understand the extent to which the enumeration of dependencies can compare to more popular neural network techniques for capturing such dependencies. Next, to handle rare words in the text, we propose an unsupervised technique for generating rare-word representations, which is more general and requires less mathematical engineering than comparable methods. Finally, embedding these representations in a language model shows significant improvements in rare-word perplexity over other such models.

Ching Hoo TANG
Logics for Rule-Based Configuration Systems
(Advisor: Prof. Christoph Weidenbach)

Wednesday, 16.08.2017, 10:00 h,  in E1 5, room 002


Rule-based configuration systems, such as DOPLER at Siemens, are being used successfully in industry. These systems make complex domain knowledge available to users and let them derive valid, customized products out of large sets of components. However, maintenance of such systems remains a challenge. Formal models are a prerequisite for the use of automated methods of analysis.
This thesis deals with the formalization of rule-based configuration. We develop two logics whose transition semantics are suited for expressing the way systems like DOPLER operate. This is due to the existence of two types of transitions, namely user and rule transitions, and a fixpoint mechanism that determines their dynamic relationship. The first logic, PIDL, models propositional systems, while the second logic, PIDL+, additionally considers arithmetic constraints. They allow the formulation and automated verification of relevant properties of rule-based configuration systems.
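The interplay between user transitions and rule transitions with a fixpoint mechanism can be illustrated by a small forward-chaining sketch (this is not PIDL itself; the product options and rules below are invented for illustration):

```python
def rule_closure(facts, rules):
    """Rule transitions saturated to a fixpoint: repeatedly fire
    every rule whose premises hold until no rule adds a new fact.
    rules: list of (set_of_premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def configure(user_choices, rules):
    """Alternate user transitions (adding one choice at a time)
    with rule saturation, as in a configuration session."""
    facts = set()
    for choice in user_choices:
        facts.add(choice)                # user transition
        facts = rule_closure(facts, rules)  # rule transitions
    return facts
```

After each user choice the rules are saturated before the next choice is made, which is the dynamic relationship between the two transition types that the logics are designed to capture.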

Daniel WAND
Superposition: Types and Induction
(Advisor: Prof. Christoph Weidenbach)

Friday, 04.08.2017, 10:00 h,  in E1 5, room 002


Proof assistants are becoming widespread for the formalization of theories in both computer science and mathematics. They provide rich logics with powerful type systems and machine-checked proofs, which increase the confidence in the correctness of complicated and detailed proofs. However, they incur a significant overhead compared to pen-and-paper proofs.
This thesis describes work on bridging the gap between higher-order proof assistants and first-order automated theorem provers by extending the capabilities of the automated theorem provers to provide features usually found in proof assistants. My first contribution is the development and implementation of a first-order superposition calculus with a polymorphic type system that supports type classes, together with the accompanying refutational completeness proof for that calculus. The inclusion of the type system into the superposition calculus and solvers completely removes the type-encoding overhead when encoding problems from many proof assistants. My second contribution is the development of SupInd, an extension of the typed superposition calculus that supports data types and structural induction over those data types. It includes heuristics that guide the induction, and conjecture strengthening techniques which can be applied independently of the underlying calculus. I have implemented the contributions in a tool called Pirate. The evaluations of both contributions show promising results.


Preliminaries for Distributed Natural Computing Inspired by the Slime Mold Physarum Polycephalum
(Advisor: Prof. Kurt Mehlhorn)

Monday, 31.07.2017, 10:00 h,  in E1 4, room 0.19


In my talk, I will discuss the slime mold Physarum polycephalum within the framework of Natural Computing. In particular, I will report on our efforts to better understand the highly dynamic networks of this organism and our attempts to model its distributed nature. In this context, I will present evidence that our promising model mimics key features of P. polycephalum such as flow reversals and synchronisation. Finally, I will discuss the state of our work and suggest directions for future research.

Efficient Runtime Systems for Speculative Parallelization
(Advisor: Prof. Andreas Zeller)

Thursday, 27.07.2017, 11:00 h,  in E9 1, room 0.01


Manual parallelization is time-consuming and error-prone. Automatic parallelization, on the other hand, is often unable to extract substantial parallelism. Using speculation, however, most of the parallelism can be exploited even in complex programs. Speculatively parallelized programs always need a runtime system during execution in order to check the validity of the speculative assumptions and to ensure correct semantics even in the case of misspeculation. These runtime systems should influence the execution time of the parallel program as little as possible. In this thesis, we investigate to what extent state-of-the-art systems that track memory accesses explicitly in software fulfill this requirement. We describe and implement changes which improve their performance substantially. We also design two new systems that utilize the virtual memory abstraction to track memory changes implicitly, thus causing less overhead during execution. One of the new systems is integrated into the Linux kernel as a kernel module, providing the best possible performance. Furthermore, it provides stronger soundness guarantees than any state-of-the-art system by also capturing system calls, hence including, for example, file I/O in speculative isolation. In a number of benchmarks we show the performance improvements of our virtual-memory-based systems over the state of the art. All our extensions and newly developed speculation systems are made available as open source.
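As a rough illustration of the explicit software tracking discussed in the abstract, the following hedged sketch (all names and structure are hypothetical, not taken from the thesis) records per-task read/write sets and detects misspeculation at commit time:

```python
# Minimal sketch of a software speculation runtime that tracks memory
# accesses explicitly (illustrative only; not the thesis implementation).

class SpeculationRuntime:
    def __init__(self):
        self.memory = {}    # address -> value
        self.version = {}   # address -> version counter

    def begin(self):
        # each speculative task gets its own read and write set
        return {"reads": {}, "writes": {}}

    def read(self, tx, addr):
        if addr in tx["writes"]:
            return tx["writes"][addr]
        # record the version seen, for validation at commit
        tx["reads"][addr] = self.version.get(addr, 0)
        return self.memory.get(addr)

    def write(self, tx, addr, value):
        tx["writes"][addr] = value  # buffered until commit

    def commit(self, tx):
        # misspeculation check: did any location we read change meanwhile?
        for addr, seen in tx["reads"].items():
            if self.version.get(addr, 0) != seen:
                return False  # abort; the caller must re-execute
        for addr, value in tx["writes"].items():
            self.memory[addr] = value
            self.version[addr] = self.version.get(addr, 0) + 1
        return True
```

The virtual-memory-based systems developed in the thesis avoid exactly this per-access bookkeeping by letting page-level protection detect conflicting accesses implicitly.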

Mitigating the imposition of malicious behavior on code
(Advisor: Prof. Michael Backes)

Thursday, 20.07.2017, 11:00 h,  in E9 1, room 0.01


If vulnerabilities in software allow the behaviour of a program to be altered, traditional virus scanners stand no chance, because no malicious program ever enters the system. Even worse, the same, potentially vulnerable software runs on millions of computers, smart phones, and tablets out there. A single program flaw allows malicious behaviour to be imposed on millions of instances at once.
This thesis presents four novel approaches that all mitigate or prevent this type of imposition of malicious behaviour on otherwise benign code. Since attacks have adapted during the writing of this thesis, the presented countermeasures are tailored towards different stages of the still ongoing cat-and-mouse game, and each technique reflects the status quo in defences at that time.

Principles of Markov Automata
(Advisor: Prof. Holger Hermanns)

Monday, 17.07.2017, 17:00 h,  in E1 7, room 001


A substantial number of today’s engineering problems revolve around systems that are concurrent and stochastic by nature. Solution approaches attacking these problems often rely on the availability of formal mathematical models that reflect such systems as comprehensively as possible. In this thesis, we develop a compositional model, Markov automata, that integrates concurrency with probabilistic and timed stochastic behaviour. This is achieved by blending two well-studied constituent models: probabilistic automata and interactive Markov chains. A range of strong and weak bisimilarity notions are introduced and evaluated as candidate relations for a natural behavioural equivalence between systems. Among them, weak distribution bisimilarity stands out as a natural notion that is more oblivious to the probabilistic branching structure than prior notions. We discuss compositionality, axiomatizations, decision and minimization algorithms, state-based characterizations and normal forms for weak distribution bisimilarity. In addition, we detail how Markov automata and weak distribution bisimilarity can be employed as a semantic basis for generalized stochastic Petri nets, in such a way that known shortcomings of their classical semantics are ironed out in their entirety.

Ashutosh MODI
Modeling Common Sense Knowledge via Scripts
(Advisor: Prof. Ivan Titov, now Amsterdam)

Friday, 07.07.2017, 12:15 h,  in C7 4, first floor conference room


It is generally believed that incorporation of common sense knowledge about the world would benefit natural language understanding systems. This dissertation considers modeling common sense knowledge associated with everyday activities using the concept of scripts. Scripts are sequences of events describing stereotypical human activities (i.e. scenarios), for example, cooking pasta, baking a cake, etc. Scripts have been shown to be useful in the area of natural language processing (NLP).
To model script knowledge, three main tasks need to be addressed: event ordering, event paraphrasing and event prediction. This dissertation introduces a novel probabilistic neural-network-based model for event ordering and event paraphrasing. The model outperforms currently prevalent graph-based and count-based methods. For addressing the event prediction task, we propose an extension of the event ordering model. We experimentally show the advantages of the event prediction model over existing count-based methods.
A further result of this dissertation is the InScript corpus. InScript is a crowdsourced collection of stories, where each story is about one particular script scenario. The corpus is manually annotated with script-specific event types, participant types and coreference information. InScript provides a resource for analyzing the influence of script knowledge. Intuitively, script knowledge should influence language understanding. This dissertation contributes foundational work towards understanding the influence of script knowledge on language comprehension. Through multiple experiments, we are able to empirically establish that script knowledge plays a significant role in language comprehension.

Subhabrata MUKHERJEE
Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities
(Advisor: Prof. Gerhard Weikum)

Thursday, 06.07.2017, 16:00 h, in E1 5, Room 029.

One of the major hurdles preventing the full exploitation of information from online communities is the widespread concern regarding the quality and credibility of user-contributed content. We propose probabilistic graphical models that can leverage the joint interplay between multiple factors — like user interactions, community dynamics, and textual content — to automatically assess the credibility of user-contributed online information and the expertise of users and their evolution, with user-interpretable explanations. We devise new models based on Conditional Random Fields that enable applications such as extracting reliable side-effects of drugs from user-contributed posts in health forums, and identifying credible news articles in news forums.
Online communities are dynamic, as users join and leave, adapt to evolving trends, and mature over time. To capture these dynamics, we propose generative models based on Hidden Markov Models, Latent Dirichlet Allocation, and Brownian motion to trace the continuous evolution of user expertise and their language model over time. This allows us to identify expert users and credible content jointly over time, improving state-of-the-art recommender systems by explicitly considering the maturity of users. This enables applications such as identifying useful product reviews, and detecting fake and anomalous reviews with limited information.


Adam Grycner 
Constructing Lexicons of Relational Phrases
(Advisor: Prof. Gerhard Weikum)

Wednesday, 28.06.2017, 15:00 h, in E1 5 (MPI-SWS), Room 029.

Knowledge Bases are one of the key components of Natural Language Understanding systems. For example, DBpedia, YAGO, and Wikidata capture and organize knowledge about named entities and relations between them, which is often crucial for tasks like Question Answering and Named Entity Disambiguation. While Knowledge Bases have good coverage of prominent entities, they are often limited with respect to relations. The goal of this thesis is to bridge this gap and automatically create lexicons of textual representations of relations, namely relational phrases. The lexicons should contain information about paraphrases, hierarchy, as well as semantic types of arguments of relational phrases. The thesis makes three main contributions. The first contribution addresses disambiguating relational phrases by aligning them with the WordNet dictionary. Moreover, the alignment allows imposing the WordNet hierarchy on the relational phrases. The second contribution proposes a method for graph construction of relations using Probabilistic Graphical Models. In addition, we apply this model to relation paraphrasing. The third contribution presents a method for constructing a lexicon of relational paraphrases with fine-grained semantic typing of arguments. This method is based on information from a multilingual parallel corpus.


Peng Sun
Bi-(N-) Cluster Editing and Its Biomedical Applications
(Advisor: Prof. Jan Baumbach)

Wednesday, 28.06.2017, 10:00 h, in E1 7, Room 001

The extremely fast advances in wet-lab techniques lead to an exponential growth of heterogeneous and unstructured biological data, posing a great challenge to data integration in today’s systems biology. The traditional clustering approach, although widely used to divide data into groups sharing common features, is less powerful for the analysis of heterogeneous data from n different sources (n > 2). The co-clustering approach has been widely used for combined analyses of multiple networks to address the challenge of heterogeneity. In this thesis, novel methods for the co-clustering of large-scale heterogeneous data sets are presented in the software package n-CluE: one exact algorithm and two heuristic algorithms based on the model of bi-/n-cluster editing, which models the input as n-partite graphs and solves the clustering problem with various strategies.
In the first part of the thesis, the complexity and fixed-parameter tractability of the extended bicluster editing model with relaxed constraints, the π-bicluster editing model, are investigated, and its NP-hardness is proven. Based on the results of this analysis, three strategies within the n-CluE software package are then established and discussed, together with evaluations of their performance and systematic comparisons against other algorithms of the same type for solving the bi-/n-cluster editing problem.
To demonstrate the practical impact, three real-world analyses using n-CluE are performed, including (a) prediction of novel genotype-phenotype associations by clustering data from genome-wide association studies; (b) comparison between n-CluE and eight other biclustering tools on Gene Expression Omnibus (GEO) microarray data sets; (c) drug repositioning predictions by co-clustering on drug, gene and disease networks. The outstanding performance of n-CluE in these real-world applications shows its strength and flexibility in integrating heterogeneous data and extracting biologically relevant information in bioinformatic analyses.

High-quality Face Capture, Animation and Editing from Monocular Video
(Advisor: Prof. Christian Theobalt)

Monday, 26.06.2017, 09:30 h, in E1 4 (MPI-INF), Room 019

Digitization of virtual faces in movies requires complex capture setups and extensive manual work to produce superb animations and video-realistic editing. This thesis pushes the boundaries of the digitization pipeline by proposing automatic algorithms for high-quality 3D face capture and animation, as well as photo-realistic face editing. These algorithms reconstruct and modify faces in 2D videos recorded in uncontrolled scenarios and illumination. In particular, advances in three main areas offer solutions for the lack of depth and overall uncertainty in video recordings. First, contributions in capture include model-based reconstruction of detailed, dynamic 3D geometry that exploits optical and shading cues, multilayer parametric reconstruction of accurate 3D models in unconstrained setups based on inverse rendering, and regression-based 3D lip shape enhancement from high-quality data. Second, advances in animation are video-based face reenactment based on robust appearance metrics and temporal clustering, performance-driven retargeting of detailed facial models in sync with audio, and the automatic creation of personalized controllable 3D rigs. Finally, advances in plausible photo-realistic editing are dense face albedo capture and mouth interior synthesis using image warping and 3D teeth proxies. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of photo-realistic 3D faces.

Towards Holistic Machines: From Visual Recognition To Question Answering About Real-World Images
(Advisor: Dr. Mario Fritz)

Tuesday, 20.06.2017, 17:00 h, in E1 4, Room 024

Computer Vision has undergone major changes over the last five years. With the advances in Deep Learning and the creation of large-volume datasets, progress has been particularly strong on the image classification task. Therefore, we investigate whether the performance of such architectures generalizes to more complex tasks that require a more holistic approach to scene comprehension. The presented work focuses on learning spatial and multi-modal representations, and on the foundations of a Visual Turing Test, where scene understanding is tested by a series of questions about its content. In our studies, we propose DAQUAR, the first ‘question answering about real-world images’ dataset, together with methods that address the problem: a symbolic-based and a neural-based visual question answering architecture. The symbolic-based method relies on a semantic parser, a database of visual facts, and a Bayesian formulation that accounts for various interpretations of the visual scene. The neural-based method is an end-to-end architecture composed of a question encoder, image encoder, multimodal embedding, and answer decoder. This architecture has proven to be effective in capturing language-based biases, and has become a standard component of other visual question answering architectures. Along with the methods, we also investigate various evaluation metrics that embrace uncertainty in word meanings and the various interpretations of the scene and the question.


Sebastian HOFFMANN
Competitive Image Compression with Linear PDEs
(Advisor: Prof. Joachim Weickert)

Friday, 16.06.2017, 16:15 h, in E1 7, Room 001

In recent years, image compression methods with partial differential equations (PDEs) have become a promising alternative to standard methods like JPEG or JPEG2000. They exploit the ability of PDEs to recover an image from only a small amount of stored pixels.
To do so, this information is diffused into the unknown image regions. Most successful codecs rely on a sophisticated nonlinear diffusion process. In this thesis, we investigate the potential of linear PDEs for image compression. This follows up on recent promising compression results with the Laplace and biharmonic operators. As a central contribution, we thoroughly study diffusion-based image reconstruction and establish a connection to the concept of sparsity with the help of discrete Green’s functions. Apart from gaining novel theoretical insights, this offers algorithmic benefits: we obtain a fast reconstruction procedure and improved data optimisation methods. The power of linear PDEs is demonstrated by proposing three different compression approaches. Besides a dedicated codec for depth maps, we present an algorithm for general images. The third approach aims at fast decoding. Experiments show that linear PDEs can compete with nonlinear PDEs in a compression context, and that codecs with linear PDEs are even able to outperform standard approaches in terms of both speed and quality.
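To give a flavour of the reconstruction principle, the following minimal sketch (an assumed illustration, not the thesis code) recovers an image from a few stored pixels via homogeneous diffusion with the Laplace operator, i.e. by iterating the discrete Laplace equation with the stored pixels as fixed Dirichlet data:

```python
# Diffusion-based image reconstruction from sparse stored pixels
# (Gauss-Seidel iteration on the discrete Laplace equation; illustrative).

def laplace_inpaint(known, width, height, iters=500):
    """known: dict mapping (x, y) -> stored grey value."""
    img = [[known.get((x, y), 0.0) for x in range(width)] for y in range(height)]
    for _ in range(iters):
        for y in range(height):
            for x in range(width):
                if (x, y) in known:
                    continue  # stored pixels stay fixed (Dirichlet data)
                # discrete Laplace equation: each unknown pixel becomes
                # the mean of its in-bounds neighbours
                nb = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                vals = [img[j][i] for i, j in nb
                        if 0 <= i < width and 0 <= j < height]
                img[y][x] = sum(vals) / len(vals)
    return img
```

A codec in this spirit stores only the `known` pixels; the decoder diffuses them back into a full image.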


Image Synthesis Methods for Texture-Based Visualization of Vector Fields
(Advisor: Prof. Jens Krüger, now Uni Duisburg-Essen)

Wed, 07.06.2017, 16:00 h, DFKI building, VisCenter -1.63

Visualization of vector fields is an important field of study with a wide range of scientific and engineering applications, from aircraft design to molecular chemistry. The dense, or texture-based, approach aims to show the complete vector field by means of image synthesis and, therefore, not to miss any important features of the data set.

The presented work describes novel models and techniques for texture-based vector field visualization. Applying methods of discrete image analysis to popular visualization techniques such as Line Integral Convolution leads to new insights into the structure of these methods and their possible development. Based on a computational model for evaluating visualization texture quality, important attributes of an effective visualization are identified, including contrast and a specific spatial frequency structure. Then, visualization techniques of increasing applicability are designed that aim to optimize these quality features. Finally, a discrete probabilistic framework is suggested as a generalization of the presented visualization methods. Based on this theoretical foundation, a multi-scale three-dimensional flow visualization approach is presented. The demonstrated visualization results are evaluated using formal quality metrics combined with formal statistical methods and an expert survey.
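For readers unfamiliar with Line Integral Convolution, the sketch below shows its basic principle (a hedged, minimal illustration; parameters, sampling and filtering are simplified assumptions, not the methods of the thesis): each output pixel averages an input noise texture along the streamline passing through it, so intensities become correlated along the flow.

```python
# Minimal Line Integral Convolution (LIC) sketch: nearest-neighbour
# sampling, Euler streamline integration, box filter (illustrative only).

def lic(noise, field, length=10, step=0.5):
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):            # trace both flow directions
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(px), int(py)
                    if not (0 <= i < w and 0 <= j < h):
                        break                   # streamline left the domain
                    total += noise[j][i]        # sample the noise texture
                    count += 1
                    vx, vy = field(px, py)      # vector at current position
                    px += sign * step * vx
                    py += sign * step * vy
            out[y][x] = total / count if count else 0.0
    return out
```

With a constant horizontal field, each output row is simply the average of the noise along that row, which is the expected smearing along the flow.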


ManyDSL – One Host for All Language Needs
(Advisor: Prof. Philipp Slusallek)

Tue, 06.06.2017, 9:15h, DFKI building, VisCenter -1.63

Languages shape thoughts. To evolve our thoughts we need to evolve the languages we speak. But what tools are there to create and upgrade computer languages? Nowadays two main approaches exist. Dedicated language tools and parser generators allow defining new standalone languages from scratch. Alternatively, one can “abuse” sufficiently flexible host languages to embed small domain-specific languages within them.

In this work we present an alternative: ManyDSL. The user specifies the syntax of a custom language in the form of an LL(1) grammar. The semantics, however, is specified in a single functional host language, DeepCPS. In DeepCPS it is not necessary to manually create an AST or any other intermediate structure. Instead, the semantic functions simply contain the actual code that the program should execute. DeepCPS features a novel, dynamic approach to staging. The staging allows for partial evaluation and for defining arbitrary phases of the compilation process. Language semantic functions can specify:

  • early compilation steps such as custom variable lookup rules and type checking
  • the executable code that is compiled
  • the domain-specific optimization given as varying function specialization strategies

all this given in a single, uniform functional form.

Moreover, fragments of ManyDSL languages can be put in functions as well and shared in the form of a library. The user may define and use many such languages within a single project. With ManyDSL we want to encourage developers to design and use languages that best suit their needs. We believe that over time, with the help of grammar libraries, creating new languages will become accessible to every programmer.
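As background on the parsing discipline involved: an LL(1) grammar can be recognized by a small table-driven parser with one token of lookahead. The sketch below is a generic illustration with a hypothetical toy grammar, not ManyDSL code:

```python
# Table-driven LL(1) recognizer for the hypothetical grammar
#   E  -> n E'
#   E' -> + n E' | epsilon
# Terminals: "n" (number), "+", and "$" (end of input).

TABLE = {
    ("E", "n"): ["n", "E'"],
    ("E'", "+"): ["+", "n", "E'"],
    ("E'", "$"): [],                 # epsilon at end of input
}

def parse(tokens):
    tokens = tokens + ["$"]
    stack = ["$", "E"]               # start symbol on top
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]           # single token of lookahead
        if top in ("n", "+", "$"):   # terminal: must match the input
            if top != look:
                return False
            pos += 1
        else:                        # nonterminal: expand via the table
            rule = TABLE.get((top, look))
            if rule is None:
                return False
            stack.extend(reversed(rule))
    return pos == len(tokens)
```

In a system like ManyDSL, the table expansion step would additionally invoke the user's semantic functions in the host language instead of merely recognizing the input.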


High Dynamic Range Imaging: Problems of Video Exposure Bracketing, Luminance Calibration and Gloss Editing
(Advisor: Dr.-Ing. habil. Karol Myszkowski)

Fri, 02.06.2017, 10:00h, building E1 4, room 0.19

Two-dimensional, conventional images are gradually losing their hegemony, leaving room for novel formats. Among these, 8-bit images give way to high dynamic range (HDR) images, which improve the color gamut and the visibility of details in dark and bright areas, leading to a more immersive viewing experience. At the same time, the light field scene representation is also gaining importance, further propelled by the recent reappearance of virtual reality. Light field data allows changing the camera position, aperture or focal length in post-production. It facilitates object insertions and simplifies the visual effects workflow by integrating the 3D nature of visual effects with the 3D nature of light fields. Content generation is one of the stumbling blocks in these realms. Sensor limitations do not allow capturing a wide dynamic range with a conventional camera. The “HDR mode”, often found on mobile devices, relies on a technique called image fusion and partially overcomes the limited range of a sensor. HDR video, at the same time, remains a challenging problem. A solution for capturing HDR video on a conventional capturing device will be presented. Further, the HDR content visualization task often requires an input in absolute values, whether the target medium is an HDR or a standard/low dynamic range display. To this end, a calibration algorithm is proposed that can be applied to existing imagery and does not require any additional measurement hardware. Finally, as the use of multidimensional scene representations becomes more common, a key challenge is the ability to edit or modify the appearance of objects in the light field. A multidimensional filtering approach is described in which specular highlights are filtered in the spatial and angular domains to achieve a desired increase in material roughness.
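The image fusion idea behind the mentioned “HDR mode” can be sketched per pixel as follows. This is a schematic, assumed formulation: the hat weighting and log-radiance averaging are textbook choices for merging bracketed exposures, not necessarily the method of the thesis:

```python
# Merge bracketed exposures of one pixel into a radiance estimate,
# trusting well-exposed (mid-tone) measurements the most (illustrative).
import math

def merge_exposures(values, exposure_times):
    """values: pixel intensities in [0, 1] from differently exposed shots."""
    def weight(v):
        # hat function: weight 1 at mid-grey, near 0 at under/over-exposure
        return max(1e-6, 1.0 - abs(2.0 * v - 1.0))
    num = den = 0.0
    for v, t in zip(values, exposure_times):
        w = weight(v)
        num += w * math.log(max(v, 1e-6) / t)  # per-shot log-radiance estimate
        den += w
    return math.exp(num / den)                 # weighted fused radiance
```

Two consistent measurements of the same scene radiance (e.g. half the intensity at half the exposure time) fuse back to that radiance.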


Interactive On-Skin Devices for Expressive Touch-based Interactions
(Advisor: Prof. Jürgen Steimle)

Thu, 01.06.2017, 10:00h, building E1 7, room 001

Skin has been proposed as a large, always-available, and easy-to-access input surface for mobile computing. However, it is fundamentally different from prior rigid devices: skin is elastic, highly curved, and provides tactile sensation. This thesis advances the understanding of skin as an input surface and contributes novel skin-worn devices and their interaction techniques.

We present the findings from an elicitation study on how and where people interact on their skin. The findings show that participants use various body locations for on-skin interaction. Moreover, they show that skin allows for expressive interaction using multi-touch input and skin-specific modalities.

We contribute three skin-worn device classes and their interaction techniques to enable expressive on-skin interactions: iSkin investigates multi-touch and pressure input on various body locations. SkinMarks supports touch, squeeze, and bend sensing with co-located visual output. The devices' conformality to skin enables interaction on highly challenging body locations. Finally, ExpressSkin investigates expressive interaction techniques using fluid combinations of high-resolution pressure, shear, and squeeze input. Taken together, this thesis contributes towards expressive on-skin interaction with multi-touch and skin-specific input modalities on various body locations.


Analyzing DNA Methylation Signatures of Cell Identity
(Advisor: Prof. Thomas Lengauer)

Wed, 31.05.2017, 16:00h, building E1 5, room 029

Although virtually all cells in an organism share the same genome, regulatory mechanisms give rise to hundreds of different, highly specialized cell types. These mechanisms are governed by epigenetic patterns, such as DNA methylation, which determine DNA packaging, the spatial organization of the genome, interactions with regulatory enzymes as well as RNA expression, and which ultimately reflect the state of each individual cell. In my dissertation, I have developed computational methods for the interpretation of epigenetic signatures inscribed in DNA methylation and implemented these methods in software packages for the comprehensive analysis of genome-wide datasets. Using these tools, we characterized epigenetic variability among mature blood cells and described DNA methylation dynamics during immune memory formation and during the reprogramming of leukemia cells to a pluripotent cell state. Furthermore, we profiled the methylomes of stem and progenitor cells of the human blood system and dissected changes during cell lineage commitment. We employed machine learning methods to derive statistical models that accurately infer cell identity and to identify methylation signatures that describe the relatedness between cell types, from which we derived a data-driven reconstruction of human hematopoiesis. In summary, the methods and software tools developed in this work provide a framework for interpreting the epigenomic landscape spanned by DNA methylation and for understanding the epigenetic regulation involved in cell differentiation and disease.


Maksim LAPIN
Image Classification with Limited Training Data and Class Ambiguity
(Advisor: Prof. Bernt Schiele)

Mon, 22.05.2017, 16:00h, building E1 4, room 024

Modern image classification methods are based on supervised learning algorithms that require labeled training data. However, only a limited amount of annotated data may be available in certain applications, due to scarcity of the data itself or the high costs associated with human annotation. The introduction of additional information and structural constraints can help improve the performance of a learning algorithm. In this talk, we study the framework of learning using privileged information and demonstrate its relation to learning with instance weights. We also consider multitask feature learning and develop an efficient dual optimization scheme that is particularly well suited to problems with high-dimensional image descriptors. Scaling annotation to a large number of image categories leads to the problem of class ambiguity, where a clear distinction between the classes is no longer possible. Many real-world images are naturally multilabel, yet the existing annotation might only contain a single label. In this talk, we propose and analyze a number of loss functions that allow for a certain tolerance in the top-k predictions of a learner. Our results indicate consistent improvements over the standard loss functions, which penalize the first incorrect prediction more heavily than the proposed losses do.
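The notion of tolerance in top-k predictions can be made concrete with the following hedged sketch. These are simplified illustrations of the general idea, not the exact losses proposed in the talk: the top-k error ignores up to k-1 higher-scoring wrong classes, and a top-k hinge variant penalizes a sample only when the true score falls below the k-th strongest competitor by the margin:

```python
# Illustrative top-k evaluation and a simplified top-k hinge loss.

def topk_error(scores, true_label, k):
    # 0 if the true class is among the k highest-scoring classes, else 1
    ranked = sorted(range(len(scores)), key=lambda c: -scores[c])
    return 0.0 if true_label in ranked[:k] else 1.0

def topk_hinge(scores, true_label, k, margin=1.0):
    # compare the true score against the k-th largest competing score
    others = [s for c, s in enumerate(scores) if c != true_label]
    kth = sorted(others, reverse=True)[k - 1]
    return max(0.0, margin + kth - scores[true_label])
```

For k = 1 this reduces to the usual multiclass hinge loss; larger k makes the loss more forgiving of a few stronger competitors, matching the class-ambiguity setting.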


Generating and Grounding of Natural Language Descriptions for Visual Data
(Advisor: Prof. Bernt Schiele)

Mon, 15.05.2017, 16:00h, building E1 4, room 024

Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems.

First, we develop recognition approaches to understand videos of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at a variable level of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach which learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people.

To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.


Analysis and Improvement of the Visual Object Detection Pipeline
(Advisor: Prof. Bernt Schiele)

Mon, 02.05.2017, 16:00h, building E1 4, room 024

Visual object detection has seen substantial improvements during the last years due to the possibilities enabled by deep learning. In this thesis, we analyse and improve different aspects of the commonly used detection pipeline: (1) We analyse ten years of research on pedestrian detection and find that improvement of feature representations was the driving factor. (2) Motivated by this finding, we adapt an end-to-end learned detector architecture from general object detection to pedestrian detection. (3) A comparison between human performance and state-of-the-art pedestrian detectors shows that pedestrian detectors still have a long way to go before reaching human level performance and diagnoses failure modes of several top performing detectors. (4) We analyse detection proposals as a preprocessing step for object detectors. By examining the relationship between localisation of proposals and final object detection performance, we define and experimentally verify a metric that can be used as a proxy for detector performance. (5) We analyse a common postprocessing step in virtually all object detectors: non-maximum suppression (NMS). We present two learnable approaches that overcome the limitations of the most common approach to NMS. The introduced paradigm paves the way to true end-to-end learning of object detectors without any post-processing.
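The non-maximum suppression step analysed in (5) is conventionally the greedy rule sketched below (a generic textbook formulation, shown for illustration; the thesis replaces this hand-crafted post-processing with learnable alternatives):

```python
# Classic greedy non-maximum suppression (NMS) over axis-aligned boxes.

def iou(a, b):
    # boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)          # keep the highest-scoring detection
        keep.append(best)
        # suppress all remaining detections overlapping it too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```

Because the overlap threshold is fixed and the rule is purely local, this step cannot adapt to crowded scenes, which is one limitation the learnable approaches in the thesis address.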


Cross-lingual Transfer of Semantic Role Labeling Models
(Advisor: Prof. Ivan Titov, now Amsterdam)

Mon, 24.04.2017, 14:00h, building C7 4, room 1.17

Semantic role labeling is an important step in natural language understanding, offering a formal representation of events and their participants as described in natural language, without requiring the event or the participants to be grounded. Extensive annotation efforts have enabled statistical models capable of accurately analyzing new text in several major languages. Unfortunately, manual annotation for this task is complex and requires training and calibration even for professional linguists, which makes the creation of manually annotated resources for new languages very costly. The process can be facilitated by leveraging existing resources for other languages using techniques such as cross-lingual transfer and annotation projection.

This work addresses the problem of improving semantic role labeling models or creating new ones using cross-lingual transfer methods. We investigate different approaches to adapt to the availability and nature of the existing target-language resources. Specifically, cross-lingual bootstrapping is considered for the case where some annotated data is available for the target language, but using an annotation scheme different from that of the source language. In the more common setup, where no annotated data is available for the target language, we investigate the use of direct model transfer, which requires no sentence-level parallel resources. Finally, for cases where the parallel resources are of limited size or poor quality, we propose a novel method, referred to as feature representation projection, combining the strengths of direct transfer and annotation projection.


Kristian SONS
XML3D: Interactive 3D Graphics for the Web
(Advisor: Prof. Philipp Slusallek)

Thu, 30.03.2017, 15:00h, building D3 4, room -1.63 (VisCenter)

The web provides the basis for worldwide distribution of digital information but has also established itself as a ubiquitous application platform. The idea of integrating interactive 3D graphics into the web has a long history, but eventually both fields developed largely independently of each other.

In this thesis we develop the XML3D architecture, our approach to integrating a declarative 3D scene description into existing web technologies without sacrificing required flexibility or tying the description to a specific rendering algorithm. XML3D extends HTML5 and leverages related web technologies, including Cascading Style Sheets (CSS) and the Document Object Model (DOM). On top of this seamless integration, we present novel concepts for a lean abstract model that provides the essential functionality to describe 3D scenes, a more general description of dynamic effects based on declarative dataflow graphs, more general yet programmable material descriptions, assets that remain configurable during instantiation into a 3D scene, and a container format for efficient delivery of binary 3D content to the client.


Sourav DUTTA
Efficient knowledge management for named entities from text
(Advisor: Prof. Gerhard Weikum)

Thu, 09.03.2017, 16:00h, building E1 4, room 0.24

The evolution of search from keywords to entities has necessitated the efficient harvesting and management of entity-centric information for constructing knowledge bases catering to various applications such as semantic search, question answering, and information retrieval. The vast amounts of natural language texts available across diverse domains on the Web provide rich sources for discovering facts about named entities such as people, places, and organizations. A key challenge in this regard is the precise identification and disambiguation of entities across documents for the extraction of attributes/relations and their proper representation in knowledge bases. Additionally, the applicability of such repositories depends not only on the quality and accuracy of the stored information, but also on storage management and query processing efficiency. This dissertation aims to tackle the above problems by presenting efficient approaches for entity-centric knowledge acquisition from texts and its representation in knowledge repositories. It presents a robust approach for identifying text phrases pertaining to the same named entity across huge corpora, and for disambiguating them to canonical entities present in a knowledge base, by using enriched semantic contexts and link validation encapsulated in a hierarchical clustering framework. This work further presents language and consistency features for classification models to compute the credibility of obtained textual facts, ensuring the quality of the extracted information. Finally, it presents an encoding algorithm, based on frequent term detection and improved data locality, that represents entities for enhanced knowledge base storage and query performance.
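The clustering idea described above can be illustrated with a minimal sketch. This is not the dissertation's actual algorithm (which uses enriched semantic contexts and link validation); it only shows, under simplified assumptions, how mentions might be grouped by the similarity of their context words. All names, thresholds, and the Jaccard measure here are illustrative placeholders.

```python
# Illustrative sketch: grouping textual mentions that likely refer to the
# same entity by greedily clustering their context words. Threshold and
# similarity measure are placeholders, not the dissertation's method.

def jaccard(a, b):
    """Jaccard similarity between two sets of context words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_mentions(mentions, threshold=0.4):
    """Greedy single-link clustering: merge each mention into the first
    cluster whose accumulated context is similar enough."""
    clusters = []  # each cluster: (merged context set, list of mention ids)
    for mid, context in mentions:
        ctx = set(context)
        for rep, members in clusters:
            if jaccard(ctx, rep) >= threshold:
                rep |= ctx          # grow the cluster's context in place
                members.append(mid)
                break
        else:
            clusters.append((ctx, [mid]))
    return [members for _, members in clusters]

# Toy mentions: m1 and m3 share enough context to be merged; m2 does not.
mentions = [
    ("m1", ["apple", "iphone", "cupertino"]),
    ("m2", ["apple", "fruit", "orchard"]),
    ("m3", ["iphone", "cupertino", "tim", "apple"]),
]
print(cluster_mentions(mentions))  # → [['m1', 'm3'], ['m2']]
```

A real system would of course replace bag-of-words overlap with richer semantic context and validate merges against the knowledge base.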


Populating knowledge bases with temporal information
(Advisor: Prof. Gerhard Weikum)

Tues, 28.02.2017, 15:00h, building E1 4, room 0.24

Recent progress in information extraction has enabled the automatic construction of large knowledge bases. Knowledge bases contain millions of entities (e.g. persons, organizations, events, etc.), their semantic classes, and facts about them. Knowledge bases have become a great asset for semantic search, entity linking, deep analytics, and question answering. However, a common limitation of current knowledge bases is their poor coverage of temporal knowledge. First, knowledge bases have so far focused on popular events and ignored long-tail events such as political scandals, local festivals, or protests. Second, they do not cover the textual phrases denoting events and temporal facts at all.

The goal of this dissertation, thus, is to automatically populate knowledge bases with this kind of temporal knowledge. The dissertation makes the following contributions to address the aforementioned limitations. The first contribution is a method for extracting events from news articles. The method reconciles the extracted events into canonicalized representations and organizes them into fine-grained semantic classes. The second contribution is a method for mining the textual phrases denoting the events and facts. The method infers the temporal scopes of these phrases and maps them to a knowledge base. Our experimental evaluations demonstrate that our methods yield high quality output compared to state-of-the-art approaches, and can indeed populate knowledge bases with temporal knowledge.


Provably Sound Semantics Stack for Multi-Core System Programming with Kernel Threads
(Advisor: Prof. Wolfgang Paul)

Fri, 24.02.2017, 15:15h, building E1 7, room 0.01

Operating systems and hypervisors (e.g., Microsoft Hyper-V) for multi-core processor architectures are usually implemented in high-level stack-based programming languages integrated with mechanisms for multi-threaded task execution as well as access to low-level hardware features. Guaranteeing the functional correctness of such systems is considered a challenge in the field of formal verification, because it requires a sound concurrent computational model comprising the programming semantics and the steps of specific hardware components visible to the system programmer.

In this doctoral thesis we address the aforementioned issue and present a provably sound concurrent model of kernel threads executing C code mixed with assembly, together with the basic thread operations (i.e., creation, switch, exit, etc.) needed for thread management in OS and hypervisor kernels running on industrial-like multi-core machines. To justify the model, we establish a semantics stack: at its bottom, the multi-core instruction set architecture, performing arbitrarily interleaved steps, executes the binary code of the virtualized guests/processes and the compiled source code of the kernel linked with a library implementing the threads. After extending an existing general theory for concurrent system simulation and utilising the order reduction of steps under certain safety conditions, we apply sequential compiler correctness to the concurrent mixed programming semantics, connect the adjacent layers of the model stack, show that the required properties transfer between them, and provide a paper-and-pencil proof of correctness for the kernel threads implementation with lock-protected operations and an efficient thread switch based on stack substitution.


Minimal Assumptions in Cryptography
(Advisor: Prof. Dominique Schröder, now Erlangen)

Thurs, 09.02.2017, 16:00h, building E9 1, room 0.01

Virtually all of modern cryptography relies on unproven assumptions. This is necessary, as the existence of secure cryptography would have wide-ranging implications: in particular, it would imply that P ≠ NP, which is not known to be true. Nevertheless, there clearly is a risk that the assumptions may be wrong. Therefore, an important field of research explores which assumptions are strictly necessary under different circumstances.

This thesis contributes to this field by establishing lower bounds on the minimal assumptions in three different areas of cryptography. First, we establish that assuming the existence of physically uncloneable functions (PUFs), a specific kind of secure hardware, is not by itself sufficient to allow for secure two-party computation protocols without trusted setup. Specifically, we prove that unconditionally secure oblivious transfer cannot, in general, be constructed from PUFs. Second, we establish a bound on the potential tightness of security proofs for Schnorr signatures: essentially, no security proof based on virtually arbitrary non-interactive assumptions defined over an abstract group can be significantly tighter than the known proof based on the forking lemma. Third, for very weak forms of program obfuscation, namely approximate indistinguishability obfuscation, we prove that it cannot exist with statistical security, so computational assumptions are necessary. This result holds unless the polynomial hierarchy collapses or one-way functions do not exist.


Distributed Querying of Large Labeled Graphs
(Advisor: Prof. Gerhard Weikum)

Mon, 06.02.2017, 15:00h, building E1 5, room 0.29

A "labeled graph", where vertices and edges carry labels, is an important adaptation of the basic graph model with many practical applications. An enormous research effort has been invested into the task of managing and querying graphs, yet many challenges remain unsolved. In this thesis, we advance the state of the art for the following query models by proposing distributed solutions that process them in an efficient and scalable manner.

• Set Reachability: A generalization of the basic notion of graph reachability, set reachability deals with finding all reachable pairs between given source and target sets. We present a non-iterative distributed solution that takes only a single round of communication for any set reachability query.

• Basic Graph Patterns (BGP): BGP queries are a common mode of querying knowledge graphs, biological datasets, etc. We present a novel distributed architecture that relies on the concepts of asynchronous executions, join-ahead pruning, and a multi-threaded query processing framework to process BGP queries in an efficient and scalable manner.

• Generalized Graph Patterns (GGP): These queries combine the semantics of pattern matching (BGP) and navigational queries. We present a distributed solution with a bimodal indexing layout that individually supports efficient and scalable processing of BGP queries and navigational queries.

We propose a prototype distributed engine, coined “TriAD” (Triple Asynchronous and Distributed) that supports all the aforementioned query models. We also provide a detailed empirical evaluation of TriAD in comparison to several state-of-the-art systems over multiple real-world and synthetic datasets.
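The set reachability problem from the first bullet can be stated concisely in code. The following is only a single-machine sketch of the problem definition using plain BFS; the thesis's contribution, a distributed algorithm with a single communication round, is a far more involved solution to the same problem.

```python
# Sketch of the set reachability problem: given source and target sets,
# find every pair (s, t) with t reachable from s. Plain per-source BFS;
# this is the problem statement, not the thesis's distributed algorithm.
from collections import deque

def set_reachability(edges, sources, targets):
    """Return all pairs (s, t) with s in sources, t in targets, and a
    directed path (possibly of length zero) from s to t."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    pairs = set()
    for s in sources:
        seen, queue = {s}, deque([s])   # BFS from each source
        while queue:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        pairs |= {(s, t) for t in targets if t in seen}
    return pairs

edges = [("a", "b"), ("b", "c"), ("d", "c")]
print(sorted(set_reachability(edges, {"a", "d"}, {"c"})))
# → [('a', 'c'), ('d', 'c')]
```

Naively this costs one full traversal per source; the appeal of the distributed solution above is that it answers any such query in a single round of communication.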


r-Symmetry for Triangle Meshes: Detection and Applications
(Advisor: Prof. Philipp Slusallek)

Tues, 31.01.2017, 12:15h, building D3 2, DFKI VisCenter

In this thesis, we attempt to give insights into how rigid partial symmetries can be efficiently computed and used in the context of inverse modeling of shape families, shape understanding, and compression.

We investigate a certain type of local similarity between geometric shapes. We analyze the surface of a shape and find all points that are contained inside identical, spherical neighborhoods of a radius r. This allows us to decompose surfaces into canonical sets of building blocks, which we call microtiles. We show that the microtiles of a given object can be used to describe a complete family of related shapes. Each of these shapes is locally similar to the original, but can have a completely different global structure. This enables the use of r-microtiles for the inverse modeling of shape variations, and we develop a method for decomposing shapes into rigid, 3D-manufacturable building blocks that can be used to physically assemble shape collections. Furthermore, we show how the geometric redundancies encoded in the microtiles can be used for triangle mesh compression.
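The grouping of points by identical neighborhoods can be made concrete with a toy discrete analogue. The sketch below works on a 2D pixel grid and factors out only translations; the thesis operates on continuous spherical neighborhoods of 3D surfaces and factors out rigid motions, so this is an illustration of the idea, not the actual method.

```python
# Discrete 2D analogue of r-symmetry: two cells fall into the same
# "microtile" class if their radius-r neighborhoods, translated to the
# origin, are identical. (The thesis handles continuous 3D surfaces and
# full rigid motions; this toy version handles a grid and translations.)

def microtile_classes(occupied, r):
    """Group occupied grid cells by their origin-centered radius-r
    neighborhood pattern."""
    cells = set(occupied)
    classes = {}
    for (x, y) in cells:
        pattern = frozenset(
            (dx, dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if dx * dx + dy * dy <= r * r and (x + dx, y + dy) in cells
        )
        classes.setdefault(pattern, []).append((x, y))
    return list(classes.values())

# A 1x4 bar of cells: the two interior cells share one neighborhood
# pattern, while each end cell forms its own class (mirroring is not
# factored out here).
bar = [(0, 0), (1, 0), (2, 0), (3, 0)]
for group in microtile_classes(bar, 1):
    print(sorted(group))
```

On the bar this yields three classes, with the two interior cells grouped together, which mirrors how repeated local structure on a surface collapses into a small canonical set of microtiles.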


Xiaokun WU
Structure-aware content creation – Detection, retargeting and deformation
(Advisor: Prof. Hans-Peter Seidel)

Fri, 20.01.2017, 14:15h, building E1 4, room 019

Nowadays, access to digital information has become ubiquitous, while three-dimensional visual representation is becoming indispensable to knowledge understanding and information retrieval. Three-dimensional digitization plays a natural role in bridging the real and virtual worlds, which prompts a huge demand for massive amounts of three-dimensional digital content. But reducing the effort required for three-dimensional modeling has been a practical problem and a long-standing challenge in computer graphics and related fields.

In this talk, we propose several techniques for simplifying the content creation process, which share the common theme of being structure-aware, i.e. maintaining global relations among the parts of a shape. We are especially interested in formulating our algorithms such that they make use of symmetry structures, because their concise yet highly abstract principles are universally applicable to most regular patterns.

We introduce our work from three different aspects in this thesis. First, we characterized spaces of symmetry-preserving deformations and developed a method to explore these spaces in real time, which significantly simplifies the generation of symmetry-preserving shape variants. Second, we empirically studied three-dimensional offset statistics and developed a fully automatic retargeting application based on the verified sparsity. Finally, we took a step forward in solving the approximate three-dimensional partial symmetry detection problem using a novel co-occurrence analysis method, which could serve as a foundation for high-level applications.
