PhD Thesis Defenses 2011

PhD thesis defenses are a public affair and open to anyone who is interested. Attending them is a great way to get to know the work being done by your peers in the various research groups. On this page you will find a list of upcoming and past defense talks.

Please go here for electronic access to most of the doctoral dissertations from Saarbrücken Computer Science going back to about 1990.

 

December

Michael FELD
A Speaker Classification Framework for Non-intrusive User Modeling: Speech-based Personalization of In-Car Services
(Advisor: Prof. Wolfgang Wahlster)
Fri, 23 Dec 2011, 14:00h, building D3 2, DFKI, Reuse seminar room

Speaker Classification, i.e. the automatic detection of certain characteristics of a person based on his or her voice, has a variety of applications in modern computer technology and artificial intelligence: As a non-intrusive source for user modeling, it can be employed for personalization of human-machine interfaces in numerous domains.

This dissertation presents a principled approach to the design of a novel Speaker Classification system for automatic age and gender recognition that meets these demands. Based on literature studies, methods and concepts dealing with the underlying pattern recognition task are developed. The final system consists of an incremental GMM-SVM supervector architecture with several optimizations. An extensive data-driven experiment series explores the parameter space and serves as an evaluation of the component. Further experiments investigate the language-independence of the approach.

As an essential part of this thesis, a framework is developed that implements all tasks associated with the design and evaluation of Speaker Classification in an integrated development environment that is able to generate efficient runtime modules for multiple platforms. Applications from the automotive field and other domains demonstrate the practical benefit of the technology for personalization, e.g. by increasing local danger warning lead time for elderly drivers.
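
The GMM-SVM supervector idea named above is a standard speaker classification technique: a universal background model (UBM) is MAP-adapted to each utterance, and the stacked adapted means serve as a fixed-length feature vector for an SVM. The following is a minimal sketch of that generic pipeline using scikit-learn; the relevance factor, feature dimensions, and all names are illustrative assumptions, not the thesis's incremental architecture.

```python
# Minimal sketch of a GMM-SVM supervector pipeline (illustrative only,
# not the thesis's incremental architecture). Assumes scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train_ubm(all_frames, n_components=8, seed=0):
    """Fit a universal background model (UBM) on pooled acoustic frames."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    ubm.fit(all_frames)
    return ubm

def supervector(ubm, frames, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and stack them."""
    resp = ubm.predict_proba(frames)            # (T, C) posteriors
    n_c = resp.sum(axis=0)                      # soft counts per component
    f_c = resp.T @ frames                       # (C, D) weighted frame sums
    alpha = (n_c / (n_c + relevance))[:, None]  # adaptation weights
    ex = f_c / np.maximum(n_c[:, None], 1e-9)   # per-component frame means
    adapted = alpha * ex + (1.0 - alpha) * ubm.means_
    return adapted.ravel()                      # concatenate into a supervector

# Toy usage: utterances are (T, D) frame matrices; labels stand in for
# age/gender classes.
rng = np.random.default_rng(0)
utts = [rng.normal(size=(100, 13)) + i % 2 for i in range(20)]
labels = [i % 2 for i in range(20)]
ubm = train_ubm(np.vstack(utts))
X = np.array([supervector(ubm, u) for u in utts])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:4]))
```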

 

Thomas KLEINBAUER
Generating Automated Meeting Summaries
(Advisor: Prof. Wolfgang Wahlster)
Fri, 23 Dec 2011, 10:00h, building D3 2, DFKI, Reuse seminar room

The thesis at hand introduces a novel approach for the generation of abstractive summaries of meetings. While the automatic generation of document summaries has been studied for some decades now, the novelty of this thesis is mainly the application to the meeting domain (instead of text documents) as well as the use of a lexicalized representation formalism on the basis of Frame Semantics. This allows us to generate summaries abstractively (instead of extractively). The thesis begins with an overall motivation of the research domain and a description of the central research questions. After that, the notion of a “summary” is discussed in general, and different dimensions of summaries are compared, before we give a broad overview of related work in the field of automatic summarization.

Then, we introduce the necessary theories for this approach and the data sets used. Following that, we discuss the architecture of the MEESU system which has been developed in the course of this work, as well as the theory and implementation of the contained components. The system has been evaluated using a novel extrinsic evaluation approach which is detailed next. The thesis concludes with a summary and a discussion of possible points for future work.

 

Andreas Markus BRANDMAIER
Permutation Distribution Clustering and Structural Equation Model Trees
(Advisor: Prof. Antonio Krüger)
Wed, 21 Dec 2011, 15:00h, in building E1 1, R 4.07

In this talk, I present two novel methodologies for the exploratory analysis of psychological data sets intended to support researchers in informed theory development. Both methods rely on trees to uncover hierarchical structure in the data and are grounded in statistical and information-theoretic selection criteria. Permutation Distribution Clustering (PDC) is a novel approach for clustering time series data without prior hypotheses. It facilitates the discovery of subgroups that differ with respect to their permutation distribution. Importantly, PDC is efficient to compute and invariant to monotonic transformations, and is therefore useful for the generation of hypotheses when no prior models of the data are available. Structural Equation Model Trees (SEM Trees) answer the need for model modification in order to refine descriptions of observed phenomena. SEM Trees constitute a method of model-based recursive partitioning based on Structural Equation Models (SEM). By harnessing the full potential of SEM, they represent a general data analysis technique that can be used for a large range of multivariate models that are frequently employed in psychological research. Together, these techniques can be used to inform the generation and modification of hypotheses for later testing.
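
The permutation distribution of a time series is the distribution of ordinal patterns over short sliding windows; because it depends only on the ordering of values, it is invariant under monotonic transformations. Below is a minimal sketch under assumed choices (embedding dimension 3, squared Hellinger distance; the published method may use a different divergence), followed by hierarchical clustering of the resulting dissimilarities.

```python
# Sketch of permutation-distribution dissimilarity between time series
# (illustrative; embedding dimension and divergence are assumptions).
import itertools
import numpy as np
from scipy.cluster.hierarchy import linkage

def permutation_distribution(x, m=3):
    """Relative frequencies of ordinal patterns of length m."""
    patterns = list(itertools.permutations(range(m)))
    index = {p: i for i, p in enumerate(patterns)}
    counts = np.zeros(len(patterns))
    for t in range(len(x) - m + 1):
        counts[index[tuple(np.argsort(x[t:t + m]))]] += 1
    return counts / counts.sum()

def pdc_distance(x, y, m=3):
    """Squared Hellinger distance between two permutation distributions."""
    p, q = permutation_distribution(x, m), permutation_distribution(y, m)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

# Toy usage: noisy sine waves separate cleanly from white noise.
rng = np.random.default_rng(1)
series = [np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
          for _ in range(3)] + [rng.normal(size=500) for _ in range(3)]
d = [pdc_distance(series[i], series[j])
     for i in range(6) for j in range(i + 1, 6)]
print(linkage(d, method="average"))  # condensed distance vector in, dendrogram out
```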

 

Marc SCHRÖDER
The SEMAINE API – A component integration framework for a naturally interacting and emotionally competent Embodied Conversational Agent
(Advisor: Prof. Hans Uszkoreit)
Wed, 21 Dec 2011, 13:00h, Building C7 4, R 1.17

The present thesis addresses the topic area of Embodied Conversational Agents (ECAs) with capabilities for natural interaction with a human user and emotional competence with respect to the perception and generation of emotional expressivity. The focus is on the technological underpinnings that facilitate the implementation of a real-time system with these capabilities, built from re-usable components. The thesis comprises three main contributions. First, it describes a new component integration framework, the SEMAINE API, which makes it easy to build emotion-oriented systems from components which interact with one another using standard and pre-standard XML representations. Second, it presents a prepare-and-trigger system architecture which substantially speeds up the time to animation for system utterances that can be pre-planned. Third, it reports on the W3C Emotion Markup Language, an upcoming web standard for representing emotions in technological systems. We assess critical aspects of system performance, showing that the framework provides a good basis for implementing real-time interactive ECA systems, and illustrate by means of three examples that the SEMAINE API makes it easy to build new emotion-oriented systems from new and existing components.
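
For a flavor of the XML representations involved, the sketch below builds a minimal EmotionML-style annotation with Python's standard library. The namespace and vocabulary URI follow the W3C drafts as I recall them and should be treated as assumptions; consult the specification for the normative details.

```python
# Sketch: constructing a minimal EmotionML annotation (assumed namespace
# and vocabulary URI; see the W3C Emotion Markup Language spec).
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)

emotionml = ET.Element(f"{{{NS}}}emotionml", version="1.0")
emotion = ET.SubElement(
    emotionml, f"{{{NS}}}emotion",
    {"category-set": "http://www.w3.org/TR/emotion-voc/xml#big6"})
ET.SubElement(emotion, f"{{{NS}}}category", name="happiness")
ET.SubElement(emotion, f"{{{NS}}}intensity", value="0.7")

print(ET.tostring(emotionml, encoding="unicode"))
```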

 

Sascha PARDUHN
Shape Analysis for Algorithm Animation: Strengths and Weaknesses
(Advisors: Prof. Raimund Seidel and Prof. Reinhard Wilhelm)
Wed, 21 Dec 2011, 11:00h, in Building E1 3, HS 02

We present a non-traditional approach to algorithm visualization that is based on shape analysis, a static program analysis that has so far mostly been used to verify program properties. Our approach differs from traditional visualizations: instead of simulating a concrete program execution, we use the invariants produced by shape analysis to look at all possible program executions at the same time. This allows us to show meta-level properties like correctness very easily, but at the price of a higher level of abstraction than most traditional visualizations. For example, when sorting, we do not model individual data values but instead maintain a relation that tells us whether one data value is smaller than another.

Shape analysis is not a new technique, but its application to algorithm visualization is: the analysis output can quite naturally be interpreted as a set of graphs; however, suitable presentation of the shape analysis output requires preparation of the data. This includes methods to reduce the size of the analysis output and methods to make it easier to understand. The inherent abstraction also raises new problems. We discuss these problems and how they influence the layout of our visualization. We also give guidelines for creating analyses specifically with visualization in mind and provide a detailed description of an example analysis that was created according to those guidelines. Lastly, we present a short overview of the features of a visualization tool that we implemented as a testbed for many of the methods mentioned in the thesis.

 

Art TEVS
Deformable Shape Matching
(Advisor: Prof. Hans-Peter Seidel)
Wed, 21 Dec 2011, 10:00h, in building E1 4, R 019

Deformable shape matching has become an important building block in academia as well as in industry. Given two three-dimensional shapes A and B, the deformation function f aligning A with B has to be found. The function is discretized by a set of corresponding point pairs. Unfortunately, the computational cost of a brute-force search for correspondences is exponential. Additionally, to be of any practical use, the algorithm has to be able to deal with data coming directly from 3D scanner devices, which suffers from acquisition problems such as noise, holes, and missing topology information.

This dissertation presents novel solutions for solving shape matching:
First, an algorithm estimating correspondences using a randomized search strategy is shown. Additionally, a planning step dramatically reducing the matching costs is incorporated. Using ideas from both of these contributions, a method for matching multiple shapes at once is shown. The method facilitates the reconstruction of shape and motion from noisy data acquired with dynamic 3D scanners. Considering shape matching from another perspective, a solution based on Markov Random Fields (MRFs) is shown. Formulated as an MRF, partial as well as full matches of a shape can be found. Here, belief propagation is utilized for inference computation in the MRF. Finally, an approach significantly reducing the space-time complexity of belief propagation for a wide spectrum of computer vision tasks is presented.
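
As a toy illustration of belief propagation for MRF inference, the sketch below runs max-product message passing on a small chain, where it is exact and equivalent to Viterbi decoding. The shape-matching MRFs of the thesis are of course far richer; node count, label count, and potentials here are arbitrary assumptions.

```python
# Sketch: max-product belief propagation on a small chain MRF
# (a toy stand-in for the shape-matching MRFs discussed above).
import numpy as np

def chain_map(unary, pairwise):
    """MAP labeling of a chain MRF via forward max-product + backtracking.
    unary: (N, L) log-potentials; pairwise: (L, L) log-potentials."""
    n, L = unary.shape
    msg = np.zeros((n, L))          # best log-score of a labeling ending at node i
    back = np.zeros((n, L), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        scores = msg[i - 1][:, None] + pairwise   # (prev label, current label)
        back[i] = scores.argmax(axis=0)
        msg[i] = unary[i] + scores.max(axis=0)
    labels = [int(msg[-1].argmax())]
    for i in range(n - 1, 0, -1):                 # backtrack the argmaxes
        labels.append(int(back[i][labels[-1]]))
    return labels[::-1]

# Toy usage: 4 nodes, 3 labels, smoothness prior favoring equal labels.
rng = np.random.default_rng(2)
unary = rng.normal(size=(4, 3))
pairwise = np.where(np.eye(3, dtype=bool), 0.0, -1.0)
print(chain_map(unary, pairwise))
```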

 

Madhusudan MANJUNATH
A Riemann-Roch Theory for Sublattices of the Root Lattice A_n, Graph Automorphisms and Counting Cycles in Graphs
(Advisor: Prof. Kurt Mehlhorn)
Tues, 20 Dec 2011, 14:15h, in Building E1 4, MPI, R023

This thesis consists of two independent parts. In the first part of the thesis, we develop a Riemann-Roch theory for sublattices of the root lattice $A_n$, extending the work of Baker and Norine (Advances in Mathematics, 215(2): 766-788, 2007), and study questions that arise from this theory. Our theory is based on the study of critical points of a certain simplicial distance function on a lattice and establishes connections between the Riemann-Roch theory and the Voronoi diagrams of lattices under certain simplicial distance functions. In particular, we provide a new geometric approach for the study of the Laplacian of graphs. As a consequence, we obtain a geometric proof of the Riemann-Roch theorem for graphs and generalise the result to other sublattices of $A_n$. Furthermore, we use the geometric approach to study the problem of computing the rank of a divisor on a finite multigraph $G$ to obtain an algorithm that runs in polynomial time for a fixed number of vertices, in particular with running time $2^{O(n \log n)} \text{poly}(\text{size}(G))$ where $n$ is the number of vertices of $G$. Motivated by this theory, we study a dimensionality reduction approach to the graph automorphism problem, and we also obtain an algorithm for the related problem of counting automorphisms of graphs that is based on exponential sums.
In the second part of the thesis, we develop an approach, based on complex-valued hash functions, to count cycles in graphs in the data streaming model. Our algorithm is based on the idea of computing instances of complex-valued random variables over the given stream and improves drastically upon the naïve sampling algorithm.
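
To make the graph-theoretic setting of the first part concrete: in the Baker-Norine theory, a divisor assigns an integer to each vertex, and one can decide whether it is equivalent to an effective (everywhere nonnegative) divisor by computing its q-reduced form with Dhar's burning algorithm. The sketch below assumes values are already nonnegative outside the sink q; graph encoding and names are illustrative.

```python
# Sketch: Dhar's burning algorithm and q-reduction for divisors on a
# multigraph (the Baker-Norine setting the first part builds on).
# Assumes D[v] >= 0 for all v != q; G[v][u] = edge multiplicity.
def unburnt_set(G, D, q):
    burnt = {q}
    grew = True
    while grew:
        grew = False
        for v in G:
            if v in burnt:
                continue
            heat = sum(m for u, m in G[v].items() if u in burnt)
            if heat > D[v]:          # v cannot absorb the fire: it burns
                burnt.add(v)
                grew = True
    return [v for v in G if v not in burnt]

def q_reduce(G, D, q):
    D = dict(D)
    while True:
        A = unburnt_set(G, D, q)
        if not A:                    # everything burnt: D is q-reduced
            return D
        for v in A:                  # fire the unburnt set A
            for u, m in G[v].items():
                if u not in A:
                    D[v] -= m
                    D[u] += m

# Toy usage: triangle graph; D is equivalent to an effective divisor
# iff its q-reduced form is nonnegative at q.
G = {"a": {"b": 1, "c": 1}, "b": {"a": 1, "c": 1}, "c": {"a": 1, "b": 1}}
D = {"a": -1, "b": 1, "c": 1}        # negative only at the sink q = "a"
Dq = q_reduce(G, D, "a")
print(Dq, "effective class:", Dq["a"] >= 0)
```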

 

Daniel GRUND
Static Cache Analysis for Real-time Systems: LRU, FIFO, PLRU
(Advisor: Prof. Reinhard Wilhelm)
Tues, 20 Dec 2011, 13:00h, in building E1 4 (MPI), R 019

Static cache analysis is an indispensable part of static timing analysis, which is employed to verify the timing behavior of programs in safety-critical systems.
We extend the applicability of cache analysis to caches with FIFO and PLRU replacement policies. These are more often employed than the LRU policy, on which previous work has focused, yet they are disproportionately more difficult to analyze. We identify a generic framework for cache analysis that couples several cache analyses in order to increase their precision by cooperation, though without abandoning separation of concerns. For the FIFO policy we contribute two must-analyses and one may-analysis. One must-analysis exploits information about cache misses to subsequently be able to predict cache hits. The other two analyses are based on the novel principle of static phase detection: if memory access sequences can be partitioned into phases—subsequences with the same or similar accesses—one can predict hits or misses, respectively. Instantiating the framework with the three analyses results in an analysis that tightly approximates the FIFO behavior and clearly outperforms previous approaches.
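
For context, the classical must-analysis for LRU, the baseline policy mentioned above, tracks an upper bound on the age of each block known to be cached; a hit is guaranteed whenever that bound is below the associativity. The sketch below follows the well-known analysis rather than the thesis's FIFO/PLRU machinery; the single-set encoding is an assumption for brevity.

```python
# Sketch: classical must-analysis for one LRU cache set (the baseline
# the thesis extends to FIFO and PLRU). Abstract state: upper bound on
# the age of each block known to be cached; ages range over 0..K-1.
K = 4  # associativity

def update(state, b):
    """Abstract transformer for an access to block b."""
    old = state.get(b, K)            # K means "possibly not cached"
    new = {}
    for blk, age in state.items():
        if blk != b and age < old:   # blocks younger than b age by one
            age += 1
        if blk != b and age < K:     # blocks reaching age K may be evicted
            new[blk] = age
    new[b] = 0                       # b is now the most recently used
    return new

def join(s1, s2):
    """Must-join: keep blocks known in both states, with the larger age bound."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

# Toy usage: after accessing a, b, a, block a must be cached with age 0.
s = {}
for b in ["a", "b", "a"]:
    s = update(s, b)
print(s)                  # {'b': 1, 'a': 0}
print(join(s, {"a": 2}))  # {'a': 2}: a survives the join, b does not
```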

For the PLRU policy we contribute a strongest complete abstraction, which distinguishes cache states if and only if they exhibit different replacement behavior. Based thereon, we present an incomplete analysis that is able to exclude spurious logarithmic-time eviction of cache contents.

 

Carola WINZEN
Toward a Complexity Theory for Randomized Search Heuristics: Black-Box Models
(Advisor: Prof. Kurt Mehlhorn)
Fri, 16 Dec 2011, 11:00h, in building E1 4 (MPI), R 024

Randomized search heuristics such as evolutionary algorithms, simulated annealing, and ant colony optimization are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime analysis results have appeared in the last 20 years, a powerful complexity theory for such algorithms is yet to be developed.

In the first part of this talk we survey different existing complexity notions for randomized search heuristics. We present new results indicating that additional requirements must be added to the existing models in order to obtain more meaningful complexity measures.

In the second part of the talk we enrich the existing complexity notions with the additional restriction that the black-box algorithm may take into account only the relative quality of the previously evaluated solutions, not their actual objective values. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems.
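
A ranking-based black-box algorithm never sees objective values, only comparisons between previously evaluated solutions. The sketch below shows a (1+1) evolutionary algorithm on OneMax driven purely by a comparison oracle, counting queries; parameters and the stopping check are illustrative assumptions.

```python
# Sketch: a ranking-based black-box algorithm. The (1+1) EA below never
# sees objective values, only the comparison oracle `better`.
import random

def one_plus_one_ea(n, better, max_queries=100_000, seed=3):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    queries = 0
    while queries < max_queries:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        queries += 1
        if not better(x, y):          # accept y if it is not worse
            x = y
        if all(x):                    # experimenter's stop check, not a query
            return x, queries
    return x, queries

# OneMax comparison oracle: ranks by number of ones, never reveals the value.
better = lambda x, y: sum(x) > sum(y)
_, q = one_plus_one_ea(64, better)
print("queries to optimum:", q)
```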

 

Katarzyna BOZEK
Analysis of HIV-host interaction on different scales
(Advisor: Prof. Thomas Lengauer)
Thurs, 8 Dec 2011, 14:00h, Building E2 1, Room 001

The human immunodeficiency virus depends on molecular pathways of the host for efficient replication and spread. The intricate network of host-virus interactions shapes the virus’ evolution by driving the pathogen to evade immune recognition and constraining it to maintain its capacity to replicate. Study of the HIV-host interactions provides important insights into viral evolution, pathogenicity and potential treatment strategies. This thesis presents an analysis of HIV-host interactions on several scales, ranging from individual protein interactions to whole genomes. On the scale of individual interaction we analyze structural and physical determinants of the interaction between host TRIM5 and virus capsid – an interaction of potential therapeutic interest due to the capacity of TRIM5 to block retroviral infections. On the scale of viral population we present two studies of a highly variable region of the virus genome involved in the interaction with host cell coreceptors upon virus cell entry. The studies provide insights into the virus evolution and the physicochemical and structural properties related to its interaction with cellular coreceptors. On the scale of the single cell we develop models of HIV cell entry involving virus, host and environmental factors.

The models represent a comprehensive picture of the virus phenotype and allow one to view the variability of virus phenotypes on 2D phenotype maps. On the genomic scale we perform a large-scale analysis of all HIV-host interactions. This study reveals insights into general patterns of the host-pathogen evolution and suggests candidate host proteins involved in interactions potentially important for the infection and interesting for further study on other scales. Interactions and processes crucial for the HIV infection reemerge across the scales pointing to the importance of integrative, multi-scale studies of host-pathogen biology.

 

Jasmina BOGOJESKA
Statistical Learning Methods for Bias-aware HIV Therapy Screening
(Advisor: Prof. Thomas Lengauer)
Thurs, 8 Dec 2011, 13:00h, in building E2 1

HIV patients are treated by administration of combinations of antiretroviral drugs. Therapy selection can be supported by statistical methods that predict the outcomes of candidate therapies. However, these methods are based on clinical data sets that are biased in many ways. The main sources of bias are the evolving trends of treating HIV patients, the sparse, uneven therapy representation, the different treatment backgrounds of the clinical samples, and the differing abundances of the various therapy-experience levels.

In this thesis we focus on the problem of devising bias-aware statistical learning methods for HIV therapy screening — predicting the effectiveness of HIV combination therapies. For this purpose we develop five novel approaches that address the aforementioned biases in the clinical data sets when predicting the outcomes of HIV therapies. Three of the approaches aim for good prediction performance for every drug combination independent of its abundance in the HIV clinical data set. To achieve this, they balance the sparse and uneven therapy representation by using different routes of sharing common knowledge among related therapies. The remaining two approaches additionally account for the bias originating from the differing treatment histories of the samples making up the HIV clinical data sets. For this purpose, both methods predict the response of an HIV combination therapy by taking not only the most recent (target) therapy but also available information from preceding therapies into account. In this way they provide good predictions for advanced patients in mid to late stages of HIV treatment and for rare drug combinations.
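
As a generic illustration, and emphatically not one of the five approaches of the thesis, the sketch below shows perhaps the simplest way to counteract uneven therapy representation: weighting training samples inversely to the frequency of their drug combination. All names and the feature encoding are hypothetical.

```python
# Sketch: one generic way to counteract uneven therapy representation:
# inverse-frequency sample weighting (NOT the thesis's models; purely
# illustrative). X would encode e.g. viral mutations plus the drugs used.
from collections import Counter
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_balanced(X, y, therapy_ids):
    freq = Counter(therapy_ids)
    w = np.array([1.0 / freq[t] for t in therapy_ids])  # rare combos weigh more
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=w)
    return model

# Toy usage: therapy "T2" is rare but still shapes the decision boundary.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
ids = ["T1"] * 90 + ["T2"] * 10
print(fit_balanced(X, y, ids).score(X, y))
```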

 

November

Martin BOKELOH
Symmetry in 3D Shapes – Analysis and Applications to Model Synthesis
(Advisor: Prof. Hans-Peter Seidel)
Tues, 29 Nov 2011, 10:00h, in building E1 4 (MPI), R 019

Symmetry is an essential property of a shape’s appearance and presents a source of information for structure-aware deformation and model synthesis. This thesis proposes feature-based methods to detect symmetry and regularity in 3D shapes and demonstrates the utilization of symmetry information for content generation. First, we will introduce two novel feature detection techniques that extract salient keypoints and feature lines from a 3D shape, respectively. Further, we will propose a randomized, feature-based approach to detect symmetries and decompose the shape into recurring building blocks. Then, we will present the concept of docking sites, which allows us to derive from an exemplar a set of shape operations that produce similar shapes. This is a key insight of this thesis and opens up a new perspective on inverse procedural modeling. Finally, we will present an interactive, structure-aware deformation technique based entirely on regular patterns.
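
As a toy analogue of randomized, feature-based symmetry detection, the sketch below votes for reflective symmetries of a 2D point set in a transformation space (mirror angle and offset) and picks the densest cluster. The thesis's 3D pipeline with feature lines is considerably more involved; sample counts and binning are arbitrary assumptions.

```python
# Sketch: randomized voting for a reflective symmetry of a 2D point set
# (a toy analogue of feature-based symmetry detection; 3D votes live in
# a richer transformation space).
import numpy as np

def reflection_votes(pts, n_votes=2000, seed=5):
    """Each random point pair votes for the mirror line (angle, offset)
    that would map one point onto the other."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_votes):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        normal = d / np.linalg.norm(d)          # mirror normal direction
        mid = 0.5 * (pts[i] + pts[j])           # mirror passes through midpoint
        votes.append((np.arctan2(normal[1], normal[0]) % np.pi,
                      float(mid @ normal)))     # line: x . n = offset
    return np.array(votes)

# Toy usage: points symmetric about the y-axis yield a dominant vote
# near angle 0 (normal along x) and offset 0.
base = np.random.default_rng(6).normal(size=(30, 2))
pts = np.vstack([base, base * [-1, 1]])
v = reflection_votes(pts)
hist, xe, ye = np.histogram2d(v[:, 0], v[:, 1], bins=24)
a, o = np.unravel_index(hist.argmax(), hist.shape)
print("angle ~", 0.5 * (xe[a] + xe[a + 1]), "offset ~", 0.5 * (ye[o] + ye[o + 1]))
```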

 

Christian MÜLLER
Complete Formal Hardware Verification of Interfaces for a FlexRay-like Bus
(Advisor: Prof. Wolfgang Paul)
Mon, 7 Nov 2011, 16:15h, building E1 1, R 4.07

We report in this thesis the first complete formal verification of a bus interface at the gate and register level. The presented bus interface makes it possible to implement a time-triggered system consisting of several units interconnected by a bus. Time-triggered systems work decentralized, offer a degree of fault tolerance against a bounded number of single errors, and show a predictable recurrent behavior. We use a hardware model for multiple clock domains obtained by formalization of data sheets for hardware components, and we review known results and proof techniques about the essential components of such bus interfaces: among others, serial interfaces, clock synchronization, and bus control. Combining such results into a single proof leads to an amazingly subtle theory about the realization of direct connections between units (as assumed in existing correctness proofs for components of interfaces) by properly controlled time-triggered buses. It also requires an induction arguing simultaneously about bit transmission across clock domains, clock synchronization, and bus control. The design of the bus controller can be automatically translated into Verilog and deployed on FPGAs.

 

Massimiliano MARCON
System Designs for Bulk and User-generated Content Delivery in the Internet
(Advisor: Dr. Krishna Gummadi)
Fri, 4 Nov 2011, 14:00 h, MPI-SWS Wartburg, Martin-Luther-Str. 12, 5th floor conference room

This thesis proposes and evaluates new system designs to support two emerging Internet workloads: (a) bulk content, such as downloads of large media and scientific libraries, and (b) user-generated content (UGC), such as photos and videos that users share online, typically on online social networks (OSNs).

Bulk content accounts for a large and growing fraction of today’s Internet traffic. Due to the high cost of bandwidth, delivering bulk content in the Internet is expensive. To reduce the cost of bulk transfers, I proposed traffic shaping and scheduling designs that exploit the delay-tolerant nature of bulk transfers to allow ISPs to deliver bulk content opportunistically. I evaluated my proposals through software prototypes and simulations driven by real-world traces from commercial and academic ISPs and found that they result in considerable reductions in transit costs or increased link utilization.

The amount of user-generated content (UGC) that people share online has been rapidly growing in the past few years. Most users share UGC using online social networking websites (OSNs), which can impose arbitrary terms of use, privacy policies, and limitations on the content shared on their websites. To solve this problem, I evaluated the feasibility of a system that allows users to share UGC directly from the home, thus enabling them to regain control of the content that they share online. Using data from popular OSN websites and a testbed deployed in 10 households, I showed that current trends bode well for the delivery of personal UGC from users’ homes. I also designed and deployed Stratus, a prototype system that uses home gateways to share UGC directly from the home.
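
The core scheduling idea behind opportunistic bulk delivery can be illustrated with a toy: delay-tolerant bulk jobs are packed, earliest deadline first, into the capacity left over by interactive traffic. This is an assumption-laden sketch, not the actual traffic-shaping designs evaluated in the thesis; all names and numbers are made up.

```python
# Sketch: opportunistically scheduling delay-tolerant bulk transfers into
# leftover link capacity (a toy illustration of the idea, not the thesis's
# traffic-shaping designs). Capacities are per time slot.
def schedule_bulk(leftover, jobs):
    """leftover: spare capacity per slot after interactive traffic.
    jobs: list of (name, units_needed, deadline_slot). Earliest deadline first."""
    leftover = list(leftover)
    plan = {name: [0] * len(leftover) for name, _, _ in jobs}
    for name, need, deadline in sorted(jobs, key=lambda j: j[2]):
        for slot in range(min(deadline, len(leftover))):
            take = min(need, leftover[slot])
            plan[name][slot] = take
            leftover[slot] -= take
            need -= take
        if need > 0:
            print(f"{name}: {need} units missed the deadline")
    return plan

# Toy usage: the off-peak slots 2-3 absorb most of the bulk download.
leftover = [1, 1, 8, 8]          # spare capacity per slot
jobs = [("archive", 12, 4), ("backup", 5, 4)]
print(schedule_bulk(leftover, jobs))
```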

 

October

Imran RAUF
Polynomially Solvable Cases of Hypergraph Transversal and Related Problems
(Advisor: Prof. Kurt Mehlhorn)
Tues, 25 Oct 2011, 09:00h , in building E1 4 (MPI), Room 024

This thesis is mainly concerned with the hypergraph transversal problem, which asks to generate all minimal transversals of a given hypergraph. While the current best upper bound on the complexity of the problem is quasi-polynomial in the combined input and output sizes, it has been shown to be solvable in output-polynomial time for a number of hypergraph classes. We extend this polynomial frontier to the hypergraphs induced by hyperplanes and constant-sided polytopes in fixed dimension R^d, and to hypergraphs for which the intersection of every minimal transversal and hyperedge is bounded. We also show the problem to be fixed-parameter tractable with respect to the minimum integer k such that the input hypergraph is k-degenerate, as well as with respect to its maximum complementary degree, and we improve the known bounds when the parameter is the maximum degree of a hypergraph.

We also study the readability of a monotone Boolean function, which is defined as the minimum integer r such that the function can be represented by an AND-OR formula in which every variable occurs at most r times. We prove that it is NP-hard to approximate the readability of even a depth-three Boolean formula. We also give tight sublinear upper bounds on the readability of a monotone Boolean function given in CNF (or DNF) form, parameterized by the number of terms in the CNF and the maximum number of variables in the intersection of any constant number of terms. For interval DNFs we give much tighter logarithmic bounds on the readability. Finally, we discuss an implementation of a quasi-polynomial algorithm for the hypergraph transversal problem that runs in polynomial space. We found our implementation to be competitive with all but one previous implementation on various datasets.
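
To state the problem in code: a transversal (hitting set) meets every hyperedge, and the task is to enumerate the inclusion-minimal ones. The brute-force sketch below works only for tiny instances and is exponential, which is precisely what the output-polynomial algorithms studied in the thesis avoid.

```python
# Sketch: what the hypergraph transversal problem asks for, via brute
# force (exponential; fine only for tiny instances).
from itertools import chain, combinations

def minimal_transversals(vertices, hyperedges):
    def hits_all(s):
        return all(s & e for e in hyperedges)
    transversals = [frozenset(s)
                    for s in chain.from_iterable(
                        combinations(vertices, r)
                        for r in range(len(vertices) + 1))
                    if hits_all(set(s))]
    return [t for t in transversals
            if not any(u < t for u in transversals)]  # keep inclusion-minimal ones

# Toy usage: edges {1,2} and {2,3} have minimal transversals {2} and {1,3}.
print(minimal_transversals([1, 2, 3], [{1, 2}, {2, 3}]))
```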

 

Heiko FRIEDRICH
Ray Tracing Techniques for Computer Games and Isosurface Visualization
(Advisor: Prof. Philipp Slusallek)
Fri, 21 Oct 2011, 16:00h, in building D3 4 (DFKI), Room -1.63

Ray tracing is a powerful image synthesis technique that has been used for high-quality offline rendering for decades. In recent years, this technique has become more important for realtime applications, but still plays only a minor role in many areas. Some of the reasons are that ray tracing is compute-intensive and has to rely on preprocessed data structures to achieve fast performance. This dissertation investigates methods to broaden the applicability of ray tracing and is divided into two parts. The first part explores the opportunities offered by ray tracing based game technology in the context of current and expected future performance levels. In this regard, novel methods are developed to efficiently support certain kinds of dynamic scenes, while avoiding the burden of fully recomputing the required data structures. Furthermore, today’s ray tracing performance levels are below what is needed for 3D games. Therefore, the multi-core CPU of the Playstation 3 is investigated, and an optimized ray tracing architecture is presented to take steps towards the required performance. In part two, the focus shifts to isosurface ray tracing. Isosurfaces are particularly important to understand the distribution of certain values in volumetric data. Since the structure of volumetric data sets is diverse, optimized algorithms and data structures are developed for rectilinear as well as unstructured data sets, which allow for realtime rendering of isosurfaces including advanced shading and visualization effects. This also includes techniques for out-of-core and time-varying data sets.
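
The core geometric operation in isosurface ray tracing is the ray-isosurface intersection. A textbook baseline, sampling along the ray and refining by bisection, is sketched below; the thesis builds optimized data structures and algorithms around this operation, and the step counts here are arbitrary assumptions.

```python
# Sketch: ray-isosurface intersection by sampling plus bisection against
# a scalar field f (a textbook baseline, not the thesis's optimized code).
import numpy as np

def intersect_isosurface(f, origin, direction, iso, t_max=10.0, steps=200):
    """Return the first t with f(origin + t*direction) == iso, or None."""
    ts = np.linspace(0.0, t_max, steps)
    vals = np.array([f(origin + t * direction) - iso for t in ts])
    sign_change = np.nonzero(vals[:-1] * vals[1:] <= 0)[0]
    if len(sign_change) == 0:
        return None
    lo, hi = ts[sign_change[0]], ts[sign_change[0] + 1]
    for _ in range(40):                       # bisection refinement
        mid = 0.5 * (lo + hi)
        if (f(origin + lo * direction) - iso) * (f(origin + mid * direction) - iso) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy usage: the iso=1 level set of f(p) = |p| is the unit sphere.
f = lambda p: np.linalg.norm(p)
t = intersect_isosurface(f, np.array([0.0, 0.0, -3.0]),
                         np.array([0.0, 0.0, 1.0]), 1.0)
print(t)  # ~2.0: the ray hits the sphere at z = -1
```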

 

September

Levi VALGAERTS
Variational 3D Reconstruction from Stereo Image Pairs and Stereo Image Sequences
(Advisor: Prof. Joachim Weickert)
Thurs, 29 Sept 2011, 14:15 h, in building E1 1, room 4.07

This talk deals with 3-D reconstruction and 3-D motion estimation from stereo images using variational methods based on dense optical flow. In the first part of the talk, we will investigate a novel application for dense optical flow, namely the estimation of the fundamental matrix of a stereo image pair. By exploiting the high interdependency between the recovered stereo geometry and the established image correspondences, we propose a coupled refinement of the fundamental matrix and the optical flow, thereby improving the accuracy of both. As opposed to many existing techniques, our method does not solve for the camera pose and scene structure separately, but recovers them in a single optimisation step. True to our principle of joint optimisation, we finally couple the dense 3-D reconstruction of the scene to the estimation of its 3-D motion. This is achieved in the second part of the talk, where we integrate spatial and temporal information from multiple stereo pairs in a novel scene flow model.
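
For contrast with the coupled variational estimation described above, here is the textbook least-squares (eight-point style) estimate of the fundamental matrix from sparse correspondences, without the usual coordinate normalization; a minimal sketch with synthetic cameras, not the talk's method.

```python
# Sketch: least-squares (eight-point style) fundamental matrix estimation
# from point correspondences, the textbook baseline that the coupled
# variational approach of the talk improves upon.
import numpy as np

def fundamental_matrix(x1, x2):
    """x1, x2: (N, 2) corresponding points; returns rank-2 F with x2' F x1 = 0."""
    A = np.stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of the constraint matrix
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                        # enforce rank 2
    return U @ np.diag(s) @ Vt

# Toy usage: two pinhole views of random 3D points; camera 2 is shifted
# along x. The recovered F satisfies the epipolar constraint.
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(20, 3)) + [0, 0, 5]   # 3D points in front
x1 = X[:, :2] / X[:, 2:]                           # camera 1 at the origin
Xc2 = X + [0.5, 0.0, 0.0]                          # points in camera-2 coordinates
x2 = Xc2[:, :2] / Xc2[:, 2:]
F = fundamental_matrix(x1, x2)
err = [abs(np.append(b, 1) @ F @ np.append(a, 1)) for a, b in zip(x1, x2)]
print(max(err))  # ~0: epipolar constraint holds
```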

 

David SCHULER
Assessing Test Quality
(Advisor: Prof. Andreas Zeller)
Thurs, 29 Sept 2011, 10:00 h, in building E1 1, room 4.07

This talk focuses on methods that assess the quality of a test suite’s checks. I will present the Javalanche framework, which enables automated and efficient mutation testing for real-life programs and addresses the problem of equivalent mutants through impact metrics. Furthermore, I will introduce checked coverage, a metric that determines the parts of the code that are not only executed but actually contribute to results checked by the test suite.
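
Javalanche operates on Java programs; the following toy Python analogue shows the core mutation-testing loop: generate a mutant by flipping an operator, re-run the checks, and ask whether they notice. Here they do not, which is exactly the weakness in a test suite's checks that this line of work measures. Names and the example test are hypothetical.

```python
# Sketch: the core mutation-testing loop in miniature (Javalanche itself
# targets Java; this Python toy just illustrates the idea).
import ast

SRC = "def price(total, discount):\n    return total - discount\n"

class FlipMinus(ast.NodeTransformer):
    """Mutate '-' into '+'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Sub):
            node.op = ast.Add()
        return node

def run_tests(namespace):
    """A weak test suite: it executes the code but checks nothing precise."""
    return namespace["price"](10, 3) < 100     # passes for the mutant too!

original, mutant = {}, {}
exec(compile(SRC, "<orig>", "exec"), original)
tree = FlipMinus().visit(ast.parse(SRC))
exec(compile(ast.fix_missing_locations(tree), "<mut>", "exec"), mutant)

killed = run_tests(original) and not run_tests(mutant)
print("mutant killed:", killed)   # False: the checks are too weak
```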

 

Dominik DIETRICH
Assertion Level Proof Planning with Compiled Strategies
(Advisor: Prof. Jörg Siekmann)
Tues, 27 Sept 2011, 10:00 h, in the new DFKI building, room +2.30 (Turing I)

The objective of this thesis is to ease the formalization of proofs by being able to verify as well as to automatically construct abstract human-style proofs. This is achieved by lifting the logical basis to the abstract assertion level, which has been identified as a style of reasoning that can be found in textbooks. A case study shows that automatic reasoning procedures benefit from abstract assertion-level reasoning. In addition, we develop a strategy language that allows the specification of abstract underspecified declarative proof patterns within the proof document and supports their refinement. Case studies show that complex reasoning patterns can be specified concisely within the developed language. Together, the complementary methods provide a framework to automate declarative proofs at the assertion level.
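
As an illustrative (assumed) example of what an assertion-level step looks like: a whole assertion, such as the transitivity of set inclusion, is applied as a single inference, where a calculus-level proof would need several rule applications to decompose the definitions involved.

```latex
% Illustrative (assumed) example of a single assertion-level step:
% the transitivity assertion is applied as one inference, whereas a
% natural-deduction proof would unfold the definition of subset.
\[
\frac{A \subseteq B \qquad B \subseteq C}
     {A \subseteq C}
\;\text{(assertion: transitivity of $\subseteq$)}
\]
```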