Category Archives: Calendar

Theory Seminar 2008-08-08

Two New Results in Computational Geometry
Don Sheehy, CMU
August 8, 2008, 3:30PM, Wean 7220

Abstract:

In the meshing problem, we decompose a geometric domain into as few simplices as possible while ensuring each simplex achieves some quality (roundness) guarantee. For years, a proof of “size optimality” has been the most important stamp of approval needed by any new meshing algorithm, where size is measured as the number of simplices in the output. However, the lower bound used for these optimality proofs is not in general bounded by any function of the input size. This leads us to think that perhaps “size optimality” should not be the last word in mesh size analysis. In the first part of this talk, I will introduce well-paced point sets and prove that for this general class of inputs, standard meshing technology will produce linear size output. I will then show how to use this result to construct linear-size Delaunay meshes in any dimension by giving up quality guarantees exactly where the traditional size lower bound would dictate superlinear output.

In the second half of this double feature, I will change gears and present a data structure for Approximate Nearest Neighbor (ANN) search that achieves spatial adaptivity. That is, the algorithm can find constant-factor approximations to the nearest neighbor in O(log d(p,q)) time per query, where q is the query point, p is the answer to the previous query, and d(p,q) is the number of points in a reasonably sized box containing p and q. Thus, we get worst-case O(log n)-time queries, but if the queries are spatially correlated, we can do even better. This is the first data structure to achieve spatial adaptivity for ANN.

(both results to appear at CCCG 2008)

Thesis Oral 2008-07-28

July 28, 2008

Virginia Panayotova Vassilevska

10:00 AM, 1507 Newell-Simon Hall

Thesis Oral

Title: Efficient Algorithms for Path Problems in Weighted Graphs

Abstract:

Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s. Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs.
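For orientation, the classical cubic-time algorithm alluded to above is Floyd-Warshall; here is a minimal Python sketch (background only, not the thesis's contribution):

```python
def floyd_warshall(dist):
    """Classical O(n^3) all-pairs shortest paths on a dense weight matrix.

    `dist[i][j]` holds the direct edge weight (float('inf') if absent);
    the matrix is updated in place to hold shortest-path distances.
    """
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Toy usage on a 3-vertex directed graph.
INF = float("inf")
D = [[0, 4, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(D))  # D[0][2] becomes 5, via vertex 1
```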

We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions to fast matrix multiplication. For maximum bottleneck paths and minimum nondecreasing paths we are the first to break the cubic barrier, obtaining truly subcubic strongly polynomial algorithms. We also consider a nonalgebraic, combinatorial approach, which is often more efficient in practice than methods based on fast matrix multiplication. We present a data structure which maintains a matrix so that products with given sparse vectors can be computed efficiently. This allows us to obtain good running times for several path problems in unweighted sparse graphs.

This thesis also gives algorithms for some single source path problems. We obtain the first linear time algorithm for the single source minimum nondecreasing paths problem. We give some extensions to this, including algorithms to find shortest minimum nondecreasing paths and cheapest minimum nondecreasing paths. Besides finding optimal paths, we consider the problem of finding optimal cycles. In particular, we focus on the problem of finding in a weighted graph a triangle of maximum weight sum. We obtain the first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph. We also present algorithms for the edge-weighted case. These algorithms immediately imply good algorithms for finding maximum weight k-cliques, or arbitrary maximum weight pattern subgraphs of fixed size.

Thesis Committee:
Guy Blelloch, Chair
Anupam Gupta
Manuel Blum
Uri Zwick, Tel Aviv University

Theory Seminar 2008-07-18

Analyzing splay trees and other data structures via forbidden substructure
Seth Pettie, University of Michigan
July 18, 2008, 3:30PM, Wean 7220

Abstract:

In this talk I’ll present a new way to analyze splay trees (and other dynamic data structures) that is not based on potential functions or direct counting arguments. The three-part strategy is to (1) transcribe the operations of the data structure as some combinatorial object, (2) show the object has some forbidden substructure, and (3) prove upper bounds on the size of such a combinatorial object. As an example of this strategy, we show that splay trees execute a sequence of N deque operations (push, pop, inject, and eject) in O(N α*(N)) time, where α* is the iterated-inverse-Ackermann function. (This bound is within a tiny α*(N) factor of the one conjectured by Tarjan in 1985.) The proof uses known bounds on the length of generalized Davenport-Schinzel sequences.
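(For orientation, one common convention for this function, stated here as an assumption since conventions vary by a constant: with $\alpha$ the inverse-Ackermann function, $\alpha^*(N) = \min\{k \ge 0 : \alpha^{(k)}(N) \le 2\}$, where $\alpha^{(k)}$ denotes $k$-fold composition.)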

Theory Seminar 2008-06-11

Wednesday June 11th, 2008
Wean 8220
1:30pm

Title: Graph partitioning into isolated, high conductance clusters: Theory, computation and applications to preconditioning.

Yiannis Koutis, CMU

Abstract:

We study the problem of decomposing a weighted graph with $n$ vertices into a collection $P$ of vertex-disjoint clusters such that, for every cluster $C$ in $P$, the graph induced by the vertices in $C$ together with the edges leaving $C$ has conductance bounded below by a constant $\phi$. We show that for constant average degree graphs we can compute a decomposition $P$ such that $|P| < n/a$, where $a$ is a constant, in $O(\log n)$ parallel time with $O(n)$ work. We show how these decompositions can be used in the first known linear-work, parallel, and quite practical construction of provably good preconditioners for the important class of fixed-degree graph Laplacians. On a more theoretical note, we present upper bounds on the Euclidean distance of the eigenvectors of the normalized Laplacian from the space of cluster-wise constant vectors.
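For reference, a standard definition of conductance (the talk's exact normalization may differ slightly): for a vertex set $C$,

$\phi(C) = \frac{w(E(C, V \setminus C))}{\min(\mathrm{vol}(C), \mathrm{vol}(V \setminus C))},$

where $w(E(C, V \setminus C))$ is the total weight of edges crossing the cut and $\mathrm{vol}(\cdot)$ is the sum of weighted vertex degrees.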

Theory Lunch 2008-05-14

Date: 2008-05-14 12:00
Speaker: Yiannis Koutis
Title: Faster algebraic algorithms for path and packing problems
Place: NSH 1507

Abstract:

We study the problem of deciding whether an n-variate polynomial, presented as an arithmetic circuit G, contains a degree-k square-free term with an odd coefficient. We show that if G can be evaluated over the integers modulo 2^(k+1) in time t and space s, the problem can be decided with constant probability in O((kn+t)2^k) time and O(kn+s) space. Based on this, we present new and faster algorithms for several parameterized problems, including: (i) an O(2^(mk)) algorithm for the m-set k-packing problem and (ii) an O(2^(3k/2)) algorithm for the simple k-path problem, or an O(2^k) algorithm if the graph has an induced k-subgraph with an odd number of Hamiltonian paths.

Theory Lunch 2008-05-07

Date: 2008-05-07 12:00
Speaker: Karl Wimmer
Title: Polynomial regression under arbitrary product spaces
Place: NSH 1507

Abstract:

Recently, Kalai et al. gave a variant of the “Low-Degree Algorithm” for agnostic learning (learning with arbitrary classification noise) under the uniform distribution on {0,1}^n. One result of their work is an agnostic learning algorithm with respect to the class of linear threshold functions under certain restricted instance distributions, including the uniform distribution on {0,1}^n.

In this talk, we extend these ideas to product distributions on instance spaces X_1 × … × X_n. We develop a variant of the “Low-Degree Algorithm” for these distributions, and we show that it agnostically learns with respect to the class of threshold functions under any such distribution. We prove this by extending the “noise sensitivity method” to arbitrary product spaces, showing that threshold functions over arbitrary product spaces are no more noise sensitive than their Boolean counterparts.
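To give a flavor of the “fit a low-degree polynomial, then threshold” approach, here is a toy Python sketch; this is illustration only, since the algorithm discussed in the talk uses L1 regression and carries formal agnostic-learning guarantees that this sketch does not reproduce:

```python
# Toy sketch of low-degree polynomial regression on {-1,+1}^n attributes.
import numpy as np
from itertools import combinations

def degree_d_features(X, d):
    """All monomials of degree at most d over +/-1 attributes."""
    m, n = X.shape
    cols = [np.ones(m)]                       # the constant monomial
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            cols.append(np.prod(X[:, list(S)], axis=1))
    return np.column_stack(cols)

def learn(X, y, d=2):
    """Least-squares fit over degree-<=d monomials; threshold at zero."""
    w, *_ = np.linalg.lstsq(degree_d_features(X, d), y, rcond=None)
    return lambda Z: np.sign(degree_d_features(Z, d) @ w)

# Toy usage: recover a noisy majority vote on 5 attributes.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(500, 5))
y = np.sign(X.sum(axis=1) + rng.normal(0.0, 0.5, size=500))
h = learn(X, y, d=1)
print("agreement:", (h(X) == y).mean())
```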

ACO Seminar 2008-05-02

Title: Packing in Multipartite Graphs
Speaker: Ryan Martin, Iowa State
When: May 2, 11:30-12:30
Where: Hamburg Hall, Room 237

Abstract:

We present some results on packing graphs in dense multipartite graphs. This question is very similar to the Hajnal-Szemeredi theorem, which gives sufficient minimum-degree conditions for an $n$-vertex graph to have a subgraph consisting of $\lfloor n/r\rfloor$ vertex-disjoint copies of $K_r$. This is a packing, or tiling, of the graph by copies of $K_r$. The Hajnal-Szemeredi theorem has been generalized to finding minimum-degree conditions that guarantee packings of non-complete graphs, notably by Alon and Yuster and by Kuhn and Osthus. We consider a multipartite version of this problem. That is, given an $r$-partite graph with $N$ vertices in each partition, what is the minimum degree required of the bipartite graph induced by each pair of color classes so that the graph contains $N$ vertex-disjoint copies of $K_r$? The question has been answered for $r=3,4$, provided $N$ is sufficiently large. When $r=3$ and $N$ is sufficiently large, a degree condition of $(2/3)N$ is sufficient, with the exception of a single tripartite graph when $N$ is an odd multiple of $3$. When $r=4$ and $N$ is sufficiently large, a degree condition of $(3/4)N$ is sufficient and there is no exceptional graph. There are also bounds on the degree condition for higher $r$ by Csaba and Mydlarz. This question has also been generalized to finding minimum-degree conditions for packings of an arbitrary $r$-colorable graph in an $r$-partite graph. The case $r=2$ is highly nontrivial for packing arbitrary bipartite graphs and was answered very precisely by Zhao. The case $r=3$ is even more complex, and we provide some tight bounds on the required degree condition. This talk includes joint work with Cs. Magyar, with E. Szemeredi and with Y. Zhao.
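(For reference, the precise condition in the Hajnal-Szemeredi theorem: every $n$-vertex graph $G$ with minimum degree $\delta(G) \ge (1 - 1/r)n$ contains $\lfloor n/r \rfloor$ vertex-disjoint copies of $K_r$.)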

ACO Seminar 2008-05-01

Title: Scarf’s Lemma and the Stable Paths Problem
Speaker: Penny Haxell, Waterloo
When: May 1, 12:30-13:30
Where: Porter Hall 125B

Abstract:

We address a question in graphs called the stable paths problem, which is an abstraction of a network routing problem concerning the Border Gateway Protocol (BGP). The main tool we use is Scarf’s Lemma. This talk will describe Scarf’s Lemma and how it is related to other results more familiar to combinatorialists, and then will explain its implications for the stable paths problem.

Theory Seminar 2008-05-02

Friday May 2nd, 2008
3:30 PM
7500 Wean Hall

Nash Bargaining via Flexible Budget Markets

Vijay V. Vazirani, Georgia Tech

In his seminal 1950 paper, John Nash defined the bargaining problem; the ensuing theory of bargaining lies today at the heart of game theory. In this work, we initiate an algorithmic study of Nash bargaining problems.

We consider a class of Nash bargaining problems whose solution can be stated as a convex program. For these problems, we show that there is a corresponding market whose equilibrium allocations yield the solution to the convex program, and hence to the bargaining problem. For several of these markets, we give combinatorial, polynomial time algorithms, using the primal-dual paradigm.

Unlike the traditional Fisher market model, in which buyers spend a fixed amount of money, in these markets, each buyer declares a lower bound on the amount of utility she wishes to derive. The amount of money she actually spends is a specific function of this bound and the announced prices of goods.

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate topics as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.
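For reference, the Eisenberg-Gale convex program for a linear Fisher market, with buyer budgets $m_i$, utilities $u_{ij}$, and unit supplies (a standard statement, included here only for context), is:

$\max \sum_i m_i \log u_i \quad \text{s.t.} \quad u_i = \sum_j u_{ij} x_{ij} \ \forall i, \quad \sum_i x_{ij} \le 1 \ \forall j, \quad x_{ij} \ge 0.$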

Thesis Oral 2008-04-30

Iterative Methods in Combinatorial Optimization

Mohit Singh

Wednesday, April 30, 2008, 3:30 pm, 384 Posner

Abstract:

Linear programming has been a successful tool in combinatorial optimization, both for obtaining polynomial time algorithms for problems in P and for obtaining good approximation algorithms for problems which are NP-hard. We demonstrate that iterative methods give a general framework for analyzing linear programming formulations of polynomial time solvable problems as well as NP-hard problems.

In this thesis, we focus on degree bounded network design problems. The most well-studied problem in this class is the Minimum Bounded Degree Spanning Tree problem defined as follows. Given a weighted undirected graph with degree bound B, the task is to find a spanning tree of minimum cost that satisfies the degree bound. We present a polynomial time algorithm that returns a spanning tree of optimal cost and maximum degree B+1. This generalizes a result of Furer and Raghavachari to weighted graphs, and thus settles a 15-year-old conjecture of Goemans affirmatively. This is also the best possible result for the problem in polynomial time unless P=NP.
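For context, a sketch of the natural LP relaxation behind this approach: the spanning tree polytope with degree constraints added, where $E(S)$ is the set of edges with both endpoints in $S$, $\delta(v)$ the set of edges incident to $v$, and $x(F) = \sum_{e \in F} x_e$:

$\min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad x(E(V)) = |V| - 1; \quad x(E(S)) \le |S| - 1 \ \forall\, \emptyset \neq S \subsetneq V; \quad x(\delta(v)) \le B \ \forall v; \quad x_e \ge 0.$

Roughly, the iterative method repeatedly computes an extreme point of this LP and either fixes an edge variable or drops a degree constraint, which is what yields the additive $+1$ guarantee.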

We also study degree bounded versions of general network design problems including the minimum bounded degree Steiner tree problem, the minimum bounded degree Steiner forest problem, the minimum bounded degree k-edge connected subgraph problem and the minimum bounded degree arborescence problem. We show that iterative methods give bi-criteria approximation algorithms that return a solution whose cost is within a small constant factor of the optimal solution and whose degree bounds are violated by a small additive amount in undirected graphs and a small multiplicative factor in directed graphs. These results also imply the first additive approximation algorithms for various degree constrained network design problems in undirected graphs.

We also demonstrate the generality of iterative methods, applying them to the degree constrained matroid problem, the multi-criteria spanning tree problem, the multi-criteria matroid basis problem and the generalized assignment problem, achieving or matching the best known approximation algorithms for these problems.

Thesis Committee:
Prof. R. Ravi, Carnegie Mellon University (Chair)
Prof. Gerard Cornuejols, Carnegie Mellon University
Prof. Alan Frieze, Carnegie Mellon University
Prof. Michel Goemans, Massachusetts Institute of Technology
Prof. Anupam Gupta, Carnegie Mellon University

Theory Lunch 2008-04-30

April 30, 2008
Varun Gupta
12:00 PM, 1507 Newell-Simon Hall
Title: Optimal size-based scheduling with selfish users

Abstract:

We consider the online single-server job scheduling problem. It is known that to minimize the average response time of jobs in this setting, the server must at all times schedule the job with the shortest remaining service time. This requires that the server know the sizes of all the jobs. However, when the server does not know the job sizes but the jobs know their own sizes, the server cannot rely on the jobs to reveal their sizes truthfully, since a job may reduce its own response time by misreporting. While there are mechanisms in the literature that achieve truthful revelation, they are based on imposing a tax and hence involve “real” money, which is not always desirable.
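As background, here is a minimal Python simulation of the shortest-remaining-service-time policy the abstract refers to; the job list in the usage example is illustrative only:

```python
import heapq

def srpt_mean_response_time(jobs):
    """Simulate SRPT on (arrival_time, size) jobs; return mean response time.

    At every instant, serve the job with the least remaining service time,
    preempting whenever a newly arrived job is shorter.
    """
    jobs = sorted(jobs)                  # by arrival time
    heap = []                            # (remaining_size, arrival_time)
    t, i, total, n = 0.0, 0, 0.0, len(jobs)
    while i < n or heap:
        if not heap:                     # idle until the next arrival
            t = max(t, jobs[i][0])
        while i < n and jobs[i][0] <= t: # admit everything that has arrived
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, arr = heapq.heappop(heap)   # current shortest remaining job
        next_arrival = jobs[i][0] if i < n else float("inf")
        if t + rem <= next_arrival:      # job completes before next arrival
            t += rem
            total += t - arr             # response = completion - arrival
        else:                            # preempt at the next arrival
            rem -= next_arrival - t
            t = next_arrival
            heapq.heappush(heap, (rem, arr))
    return total / n

# Toy usage: three jobs (arrival, size); SRPT preempts the long first job.
print(srpt_mean_response_time([(0.0, 10.0), (1.0, 2.0), (2.0, 1.0)]))
```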

In this work, we propose a novel token-based scheduling game. We prove that in this game, jobs that selfishly minimize their own response times end up implementing the shortest-remaining-service-time-first scheduling policy themselves.

Theory Seminar 2008-04-25

Finding a Maximum Matching in a Sparse Random Graph in O(n) Expected Time

Pall Melsted, CMU
April 25, 2008, 3:30PM, Wean 7220

Abstract:

We present a linear expected time algorithm for finding maximum cardinality matchings in sparse random graphs. This is optimal and improves on previous results by a logarithmic factor.

This is joint work with Prasad Chebolu and Alan Frieze.

ACO Seminar 2008-04-23

Title: The formulation complexity of minimum cut
Speaker: Ojas Parekh, Emory University
When: April 24, 12:30-13:30
Where: Porter Hall 125B

Abstract:

Our focus in this talk will be the size of linear programming formulations of combinatorial optimization problems. We may view this parameter as akin to traditional measures of complexity, such as computational time and space. We will focus on problems in P, in particular the minimum cut problem. For a graph $(V,E)$, existing linear formulations for the minimum cut problem require $\Theta(|V||E|)$ variables and constraints. These formulations can be interpreted as a composition of $|V|-1$ polyhedra for minimum $s$-$t$ cuts paralleling early algorithmic approaches to finding globally minimum cuts, which relied on $|V|-1$ calls to a minimum $s$-$t$ cut algorithm. We present the first formulation to beat this bound, one that uses $O(|V|^2)$ variables and $O(|V|^3)$ constraints. Our formulation directly implies a smaller compact linear relaxation for the Traveling Salesman Problem that is equivalent in strength to the standard subtour relaxation.
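For context, the classical minimum $s$-$t$ cut LP (a standard formulation, stated here only for orientation) uses a potential $p_v$ for each vertex and a variable $y_e$ for each edge:

$\min \sum_{e} c_e y_e \quad \text{s.t.} \quad y_{uv} \ge p_u - p_v \ \forall (u,v) \in E, \quad p_s - p_t \ge 1, \quad y \ge 0.$

Fixing a vertex $s$ and composing one such block for each $t \ne s$ gives the $\Theta(|V||E|)$-size formulation of global minimum cut referred to above.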

Theory Lunch 2008-04-23

Date: 2008-04-23 12:00
Place: NSH 1507
Speaker: Elaine Shi
Title: How to build private Google Docs

Abstract: I will describe some of the latest results in predicate encryption. The cryptographic construction allows a user to store her personal files on a remote untrusted server and make expressive search queries to retrieve certain documents. The remote untrusted server learns no unintended information.

Thesis Proposal 2008-04-23

Title: Approximation Algorithms for Vehicle Routing and Scheduling.

Viswanath Nagarajan, Thesis Proposal for Ph.D. in Algorithms, Combinatorics and Optimization.
10:30am Wednesday 23-April
Room 384, 3rd floor Posner Hall (Tepper School of Business)

Broadly speaking, any scheduling problem can be characterized as serving a set of requests using a limited set of resources, subject to constraints detailing how the resources may serve requests. Due to the complex nature of the constraints in typical scheduling problems, most of them are NP-complete, and hence we do not expect efficient (i.e., polynomial time) exact algorithms. The two main approaches to practical solutions of such problems are (i) exact algorithms that compute the optimal solution but take exponential time in the worst case, and (ii) heuristic algorithms that run in polynomial time but find near-optimal solutions. An approximation algorithm is an efficient heuristic together with a worst-case guarantee on the quality of the near-optimal solutions it finds. The goal of this thesis is to design approximation algorithms for some scheduling problems, with an emphasis on Vehicle Routing Problems.

Vehicle routing problems (VRPs) form a rich, practically motivated class of variants of the basic Traveling Salesman Problem. In VRPs, a fleet of vehicles represents the resources used to serve a set of client requests (such as transporting objects to the clients). Many VRPs simply seek to minimize the cost incurred by the vehicles while serving client requests; a goal in this thesis is to study VRPs that incorporate additional criteria on the vehicle routes.

All VRPs are defined in relation to a metric space (i.e., a set of locations with a distance function on them). Most of the work on approximation algorithms for VRPs has focused on symmetric metrics. The corresponding problems on asymmetric metrics become considerably harder. Another goal of this thesis is to design algorithms for VRPs on asymmetric metrics.

Committee: R. Ravi (Chair), Gerard Cornuejols, Anupam Gupta, Mike Trick

Theory Lunch 2008-04-16

Date: 2008-04-16 12:00
Speaker: Mike Dinitz
Title: The Discounted Secretary Problem
Place: NSH 1507

Abstract:

The classical secretary problem studies how to select online an element with maximum value in a randomly ordered sequence. The problem is closely connected with online mechanism design, in which agents {e} with private values v(e) for a good arrive sequentially in random order and the mechanism designer wishes to allocate the good to an agent with maximum value. The difficulty lies in the fact that an agent’s allocation must be decided irrevocably upon arrival. A mechanism for this problem is called alpha-competitive if it gets, in expectation, at least a 1/alpha fraction of the (expected) optimal offline solution. It is well known how to design constant-competitive algorithms for the classical secretary problem and several variants. In this talk we will discuss the discounted secretary problem, in which there is a time-dependent “discount” factor d(t) and the benefit derived from assigning the good at time t to agent e is the product of d(t) and v(e). For instance, the special case when d(t) is decreasing captures the natural tension between selling early and waiting to maximize the value of the agent receiving the good. We provide nearly matching logarithmic upper and lower bounds for this problem, and show a constant-competitive algorithm when the expected optimum is known in advance.
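For background, here is a minimal Python sketch of the classical 1/e rule for the (undiscounted) secretary problem mentioned above; the discounted variants from the talk are not implemented here:

```python
import math, random

def classical_secretary(values):
    """The classical 1/e rule: observe the first n/e elements without
    committing, then accept the first element exceeding everything seen
    so far (or take the last element if forced).

    When `values` arrive in uniformly random order, this selects the
    maximum with probability about 1/e, making the rule e-competitive.
    """
    n = len(values)
    k = int(n / math.e)                          # observation phase length
    best_seen = max(values[:k], default=float("-inf"))
    for v in values[k:]:
        if v > best_seen:
            return v                             # commit irrevocably
    return values[-1]                            # forced to take the last

# Toy usage: estimate how often the rule picks the true maximum.
random.seed(0)
vals = list(range(100))
hits = 0
for _ in range(10_000):
    random.shuffle(vals)
    hits += classical_secretary(vals) == 99
print(hits / 10_000)   # roughly 1/e, i.e. about 0.37
```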

OR Seminar 2008-04-18

Name: Nikolaos Sahinidis
University: Carnegie Mellon University Dept. of Chemical Engineering
Date: Friday, April 18, 2008
Time: 3:30 to 5:00 pm
Location: Room 388 Posner Hall
Title: Optimization in the New Biology

Abstract:

A variety of modern bioinformatics and systems biology problems can be approached systematically from an optimization point of view. This talk will focus on protein side-chain prediction, protein structural alignment, structure determination from X-ray diffraction measurements, and metabolic systems analysis and design. To solve these problems, we have employed machinery from linear algebra, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of the underlying biological problems are purely continuous in nature but have, to date, been approached mostly via combinatorial optimization algorithms that are applied to discrete approximations. Other problems naturally present a strong and difficult combinatorial component.

ACO Seminar 2008-04-15

Title: A Polynomial Bound on Vertex Folkman Numbers
Speaker: Andrzej Dudek, Emory University
When: Tuesday April 15, 12:30-13:30
Where: Wean Hall 5304

Abstract:

In 1970, Folkman proved that for a given integer r and a graph G of order n, there exists a graph H with the same clique number as G such that every r-coloring of the vertices of H yields at least one monochromatic copy of G. His proof gives no good bound on the order of the graph H: the order of H is bounded only by an iterated power function of n. In this talk we will give an alternative proof of Folkman’s theorem with a relatively small order of H, bounded from above by O(n^3 log^3 n). This is joint work with Vojtech Rodl.

Theory Seminar 2008-04-18

Friday April 18th, 2008
3:30pm
WEH 7220

TITLE: What makes a good Steiner point?

Benoit Hudson
Toyota Technological Institute at Chicago

ABSTRACT:

The mesh refinement problem is to take an input geometry (defined by a set of points, curves, and surfaces), and output a set of points that both “respects” the geometry and has good “quality.” What it means for a tetrahedral mesh to respect curved surfaces is already interesting and will take some explaining. Even knowing what the goal is, mesh refinement algorithms typically are of the form: until the output is good enough, add points. But where should we add these additional Steiner points? And how do we know that the algorithm will stop? Most prior work is very specific about where to add points, and thus needs its own very specific proof that the algorithm ends.

In this talk, I will give a set of rules for choosing Steiner points. Any algorithm that follows my rules — as most previous algorithms do — will terminate. After hearing me out, you will know how to represent curved surfaces with linear elements, and you will be able to design your very own meshing algorithm with confidence.

Thesis Oral 2008-04-16

Title: Approximation Algorithms for Network Design With Uncertainty
Speaker: Barbara Anthony
When: Wednesday, April 16, 10:30 am
Where: Doherty Hall 4303

Abstract: We present an extension of the k-median problem where we are given a metric space (V,d) and not just one but m client sets S_i (subsets of V) for i = 1, …, m, and the goal is to open k facilities F to minimize the worst-case cost over all the client sets, i.e. max_{i in [m]} sum_{j in S_i} d(j,F). This is a ‘min-max’ or ‘robust’ version of the k-median problem; however, note that in contrast to previous papers on robust/stochastic problems, we have only one stage of decision-making — where should we place the facilities? We present an O(log n + log m) approximation for robust k-median: The algorithm is simple and combinatorial, and is based on reweighting/Lagrangean-relaxation ideas. In fact, we give a general framework for (minimization) facility location problems where there is a bound on the number of open facilities. For robust and stochastic versions of such location problems, we show that if the problem satisfies a certain ‘projection’ property, essentially the same algorithm gives a logarithmic approximation ratio in both versions. We use our framework to give the first approximation algorithms for robust/stochastic versions of k-tree, capacitated k-median, and fault-tolerant k-median.

This talk, on robust and stochastic location problems, covers one part of my thesis on approximation algorithms for network design with uncertainty.

Thesis Committee:
Anupam Gupta (Computer Science, chair)
Thomas Bohman (Math)
Alan Frieze (Math)
R. Ravi (Tepper)