Frankly, I have considered “folding” completely since at this moment of my life I really don’t have the time, but then I figured that some of the existing content may be useful to others and so I decided to spend my time this way. Hope this is the right choice!

Don Sheehy, CMU

August 8, 2008, 3:30PM, Wean 7220

Abstract:

In the meshing problem, we decompose a geometric domain into as few simplices as possible while ensuring each simplex achieves some quality (roundness) guarantee. For years, a proof of “size optimality” has been the most important stamp of approval needed by any new meshing algorithm, where size is measured as the number of simplices in the output. However, the lower bound used for these optimality proofs is not in general bounded by any function of the input size. This leads us to think that perhaps “size optimality” should not be the last word in mesh size analysis. In the first part of this talk, I will introduce well-paced point sets and prove that for this general class of inputs, standard meshing technology will produce linear size output. I will then show how to use this result to construct linear-size Delaunay meshes in any dimension by giving up quality guarantees exactly where the traditional size lower bound would dictate superlinear output.

In the second half of this double feature, I will change gears and present a data structure for Approximate Nearest Neighbor (ANN) search that achieves spatial adaptivity. That is, the algorithm can find constant-factor approximations to the nearest neighbor in O(log d(p,q)) time per query, where q is the query point, p is the answer to the previous query, and d(p,q) is the number of points in a reasonably sized box containing p and q. Thus, we get worst-case O(log n)-time queries, but if the queries are spatially correlated, we can do even better. This is the first data structure to achieve spatial adaptivity for ANN.

(both results to appear at CCCG 2008)

Virginia Panayotova Vassilevska

10:00 AM, 1507 Newell-Simon Hall

Thesis Oral

Title: Efficient Algorithms for Path Problems in Weighted Graphs

Abstract:

Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all-pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s. Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs.

We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions to fast matrix multiplication. For maximum bottleneck paths and minimum nondecreasing paths we are the first to break the cubic barrier, obtaining truly subcubic strongly polynomial algorithms. We also consider a nonalgebraic, combinatorial approach, which is considered more efficient in practice compared to methods based on fast matrix multiplication. We present a data structure which maintains a matrix so that products with given sparse vectors can be computed efficiently. This allows us to obtain good running times for several path problems in unweighted sparse graphs.

This thesis also gives algorithms for some single source path problems. We obtain the first linear time algorithm for the single source minimum nondecreasing paths problem. We give some extensions to this, including algorithms to find shortest minimum nondecreasing paths and cheapest minimum nondecreasing paths. Besides finding optimal paths, we consider the problem of finding optimal cycles. In particular, we focus on the problem of finding in a weighted graph a triangle of maximum weight sum. We obtain the first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph. We also present algorithms for the edge-weighted case. These algorithms immediately imply good algorithms for finding maximum weight k-cliques, or arbitrary maximum weight pattern subgraphs of fixed size.

Thesis Committee:

Guy Blelloch, Chair

Anupam Gupta

Manuel Blum

Uri Zwick, Tel Aviv University

P.S. Charles has a blog, and some recent posts are about Turing and this book.

Seth Pettie, University of Michigan

July 18, 2008, 3:30PM, Wean 7220

Abstract:

In this talk I’ll present a new way to analyze splay trees (and other dynamic data structures) that is not based on potential functions or direct counting arguments. The three-part strategy is to (1) transcribe the operations of the data structure as some combinatorial object, (2) show the object has some forbidden substructure, and (3) prove upper bounds on the size of such a combinatorial object. As an example of this strategy, we show that splay trees execute a sequence of N deque operations (push, pop, inject, and eject) in O(N α*(N)) time, where α* is the iterated-inverse-Ackermann function. (This bound is within a tiny α*(N) factor of that conjectured by Tarjan in 1985.) The proof uses known bounds on the length of generalized Davenport-Schinzel sequences.

Wean 8220

1:30pm

Title: Graph partitioning into isolated, high conductance clusters: Theory, computation and applications to preconditioning.

Yiannis Koutis, CMU

Abstract:

We study the problem of decomposing a weighted graph with $n$ vertices into a collection $P$ of vertex disjoint clusters such that, for all clusters $C$ in $P$, the graph induced by the vertices in $C$ and the edges leaving $C$, has conductance bounded below by a constant $\phi$. We show that for constant average degree graphs we can compute a decomposition $P$ such that $|P| < n/a$, where $a$ is a constant, in $O(\log n)$ parallel time with $O(n)$ work. We show how these decompositions can be used in the first known linear work parallel and quite practical construction of provably good preconditioners for the important class of fixed degree graph Laplacians. On a more theoretical note, we present upper bounds on the Euclidean distance of eigenvectors of the normalized Laplacian from the space of vectors which consists of the cluster-wise constant vectors.

Speaker: Yiannis Koutis

Title: Faster algebraic algorithms for path and packing problems

Place: NSH 1507

Abstract:

We study the problem of deciding whether an n-variate polynomial, presented as an arithmetic circuit G, contains a degree k square-free term with an odd coefficient. We show that if G can be evaluated over the integers modulo 2^(k+1) in time t and space s, the problem can be decided with constant probability in O((kn+t)2^k) time and O(kn+s) space. Based on this, we present new and faster algorithms for several parameterized problems, among which: (i) an O(2^(mk)) algorithm for the m-set k-packing problem and (ii) an O(2^(3k/2)) algorithm for the simple k-path problem, or an O(2^k) algorithm if the graph has an induced k-subgraph with an odd number of Hamiltonian paths.

But if you have tried this route, you may notice that TeX has “optimized away” your effort by subtly padding the pages with more vertical spaces, thereby keeping the page count constant…

Long story short, you want to use `\raggedbottom`. Put it in the preamble and recompile. With this, LaTeX will keep the breaks at the same places, but it will not pad, and hence the bottoms of the pages will be ragged (duh!). Now you can usually see why your effort did not produce the desired effect: the space that you just freed up does not allow the current page/column to absorb enough material from the next.

With the ability to find a short page/column by inspection, you can now identify the *earliest* place where rewording is most likely to help. Repeat the reword-recompile cycle a couple of times, and the page count will go down. Just be sure to take out `\raggedbottom` when you are done!
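For reference, a minimal skeleton showing where the command goes (the body is just a placeholder):

```latex
\documentclass{article}

% Let page/column bottoms fall short instead of stretching
% vertical space to fill the full text height.
\raggedbottom

\begin{document}
% ... your text; recompile, inspect for short pages/columns,
% reword near the earliest one, repeat ...
\end{document}
```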

P.S. The two-page mode in your previewer can make the inspection more effective.

Speaker: Karl Wimmer

Title: Polynomial regression under arbitrary product spaces

Place: NSH 1507

Abstract:

Recently, Kalai et al. gave a variant of the “Low-Degree Algorithm” for agnostic learning (learning with arbitrary classification noise) under the uniform distribution on {0,1}^n. One result of their work is an agnostic learning algorithm with respect to the class of linear threshold functions under certain restricted instance distributions, including the uniform distribution on {0,1}^n.

In this talk, we extend these ideas to product distributions on instance spaces X_1 x … X_n. We develop a variant of the “Low-Degree Algorithm” for these distributions, and we show that our algorithm agnostically learns with respect to the class of threshold functions under these distributions. We prove this by extending the “noise sensitivity method” to arbitrary product spaces, showing that threshold functions over arbitrary product spaces are no more noise sensitive than their Boolean counterparts.
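To give a feel for the flavor of the “Low-Degree Algorithm” (this is a toy sketch of low-degree polynomial regression in general, not the algorithm from the talk, and the majority-of-3-bits example is made up): fit a polynomial of low degree to the ±1 labels by least squares, then classify by the sign of the fitted polynomial.

```python
import numpy as np
from itertools import combinations, product

def low_degree_features(X, d):
    """All monomials of degree <= d over the columns of a 0/1 matrix X."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            cols.append(np.prod(X[:, list(S)], axis=1))
    return np.column_stack(cols)

def fit_predict(X_train, y_train, X_test, d=2):
    """Least-squares fit of a degree-<=d polynomial to +/-1 labels,
    then predict with its sign (a hypothetical toy illustration)."""
    w, *_ = np.linalg.lstsq(low_degree_features(X_train, d),
                            y_train, rcond=None)
    return np.sign(low_degree_features(X_test, d) @ w)

# Toy usage: learn majority-of-3-bits from all 8 labeled points.
X = np.array(list(product([0, 1], repeat=3)))
y = np.where(X.sum(axis=1) >= 2, 1, -1)
```

On this tiny instance the degree-2 fit recovers the majority function exactly; in the agnostic setting one instead bounds the error of the best low-degree approximator.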

[...] a panelist at IPDPS (in Miami, a couple of weeks ago) assert that parallel-processing topics have been disappearing from CS curricula in recent years. As anecdotal evidence, he pointed out the topic’s removal in the 2nd edition of Introduction to Algorithms [...]

The article goes on to give several other examples to prove his point. In the end, Michael wrote

[...] multicore computing platforms are now the norm. Recognizing that reality, let’s make the adjustment time short.

Are we expecting a surge of algorithm and data structure textbooks with an emphasis on multicore?

Speaker: Ryan Martin, Iowa State

When: May 2, 11:30-12:30

Where: Hamburg Hall, Room 237

Abstract:

We present some results on packing graphs in dense multipartite graphs. This is a question very similar to the Hajnal-Szemeredi theorem, which gives sufficient minimum-degree conditions for an $n$-vertex graph to have a subgraph consisting of $\lfloor n/r\rfloor$ vertex-disjoint copies of $K_r$. This is a packing, or tiling, of the graph by copies of $K_r$. The Hajnal-Szemeredi theorem has been generalized to finding minimum-degree conditions that guarantee packings of non-complete graphs, notably by Alon and Yuster and by Kuhn and Osthus. We consider a multipartite version of this problem. That is, given an $r$-partite graph with $N$ vertices in each partition, what is the minimum degree required of the bipartite graph induced by each pair of color classes so that the graph contains $N$ vertex-disjoint copies of $K_r$? The question has been answered for $r=3,4$, provided $N$ is sufficiently large. When $r=3$ and $N$ is sufficiently large, a degree condition of $(2/3)N$ is sufficient, with the exception of a single tripartite graph when $N$ is an odd multiple of $3$. When $r=4$ and $N$ is sufficiently large, a degree condition of $(3/4)N$ is sufficient and there is no exceptional graph. There are also bounds on the degree condition for higher $r$ by Csaba and Mydlarz. This question has also been generalized to finding minimum-degree conditions for packings of an arbitrary $r$-colorable graph in an $r$-partite graph. The case $r=2$ is highly nontrivial for packing arbitrary bipartite graphs and was answered very precisely by Zhao. The case $r=3$ is even more complex, and we provide some tight bounds on the required degree condition. This talk includes joint work with Cs. Magyar, with E. Szemeredi and with Y. Zhao.

Speaker: Penny Haxell, Waterloo

When: May 1, 12:30-13:30

Where: Porter Hall 125B

Abstract:

We address a question in graphs called the stable paths problem, which is an abstraction of a network routing problem concerning the Border Gateway Protocol (BGP). The main tool we use is Scarf’s Lemma. This talk will describe Scarf’s Lemma and how it is related to other results more familiar to combinatorialists, and then will explain its implications for the stable paths problem.

3:30 PM

7500 Wean Hall

Nash Bargaining via Flexible Budget Markets

Vijay V. Vazirani, Georgia Tech

In his seminal 1950 paper, John Nash defined the bargaining problem; the ensuing theory of bargaining lies today at the heart of game theory. In this work, we initiate an algorithmic study of Nash bargaining problems.

We consider a class of Nash bargaining problems whose solution can be stated as a convex program. For these problems, we show that there corresponds a market whose equilibrium allocations yield the solution to the convex program and hence the bargaining problem. For several of these markets, we give combinatorial, polynomial time algorithms, using the primal-dual paradigm.

Unlike the traditional Fisher market model, in which buyers spend a fixed amount of money, in these markets, each buyer declares a lower bound on the amount of utility she wishes to derive. The amount of money she actually spends is a specific function of this bound and the announced prices of goods.

Over the years, a fascinating theory has started forming around a convex program given by Eisenberg and Gale in 1959. Besides market equilibria, this theory touches on such disparate topics as TCP congestion control and efficient solvability of nonlinear programs by combinatorial means. Our work shows that the Nash bargaining problem fits harmoniously in this collage of ideas.

Mohit Singh

Wednesday, April 30, 2008, 3:30 pm, 384 Posner

Abstract:

Linear programming has been a successful tool in combinatorial optimization to achieve polynomial time algorithms for problems in P and also to achieve good approximation algorithms for problems which are NP-hard. We demonstrate that iterative methods give a general framework to analyze linear programming formulations of polynomial time solvable problems as well as NP-hard problems.

In this thesis, we focus on degree bounded network design problems. The most well-studied problem in this class is the Minimum Bounded Degree Spanning Tree problem, defined as follows: given a weighted undirected graph and a degree bound B, find a spanning tree of minimum cost in which every vertex has degree at most B. We present a polynomial time algorithm that returns a spanning tree of optimal cost and maximum degree B+1. This generalizes a result of Furer and Raghavachari to weighted graphs, and thus settles a 15-year-old conjecture of Goemans affirmatively. This is also the best possible result for the problem in polynomial time unless P=NP.

We also study degree bounded versions of general network design problems, including the minimum bounded degree Steiner tree problem, the minimum bounded degree Steiner forest problem, the minimum bounded degree k-edge connected subgraph problem and the minimum bounded degree arborescence problem. We show that iterative methods give bi-criteria approximation algorithms that return a solution whose cost is within a small constant factor of the optimal solution and whose degree bounds are violated by an additive factor in undirected graphs and a small multiplicative factor in directed graphs. These results also imply the first additive approximation algorithms for various degree constrained network design problems in undirected graphs.

We also show the generality of the iterative methods by applying them to the degree constrained matroid problem, the multi-criteria spanning tree problem, the multi-criteria matroid basis problem and the generalized assignment problem, achieving or matching the best known approximation algorithms for each.

Thesis Committee:

Prof. R. Ravi, Carnegie Mellon University (Chair)

Prof. Gerard Cornuejols, Carnegie Mellon University

Prof. Alan Frieze, Carnegie Mellon University

Prof. Michel Goemans, Massachusetts Institute of Technology

Prof. Anupam Gupta, Carnegie Mellon University

`qsort` (that part starts shortly after 34:00).
Varun Gupta

12:00 PM, 1507 Newell-Simon Hall

Title: Optimal size-based scheduling with selfish users

Abstract:

We consider the online single-server job scheduling problem. It is known that to minimize the average response time of jobs in this setting, at all times the job with the shortest remaining service time must be scheduled. This requires that the server know the sizes of all the jobs. However, in the scenario where the server does not know the sizes of the jobs but the jobs know their own sizes, the server cannot rely on the jobs to truthfully reveal their sizes, since a job may reduce its own response time by misreporting. While there are mechanisms in the literature that achieve truthful revelation, such mechanisms are based on imposing a tax and hence involve “real” money, which is not always desirable.

In this work, we propose a novel token based scheduling game. We prove that while playing the above scheduling game, all the jobs trying to minimize their own response time will end up implementing the shortest remaining service time first scheduling policy themselves.
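The shortest-remaining-service-time baseline is easy to simulate. The sketch below (unit-size time slices, a made-up three-job workload) compares preemptive SRPT against first-come-first-served and shows SRPT achieving the smaller total response time:

```python
def simulate(jobs, pick):
    """Unit-slice single-server simulation.
    jobs: list of (arrival, size) with integer times.
    pick(avail, remaining, jobs): index of the job to run next slice.
    Returns total response time (completion minus arrival, summed)."""
    remaining = [size for _, size in jobs]
    completion = [0] * len(jobs)
    t, done = 0, 0
    while done < len(jobs):
        avail = [i for i in range(len(jobs))
                 if jobs[i][0] <= t and remaining[i] > 0]
        if not avail:          # server idles until the next arrival
            t += 1
            continue
        i = pick(avail, remaining, jobs)
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            done += 1
    return sum(completion[i] - jobs[i][0] for i in range(len(jobs)))

def srpt(avail, remaining, jobs):
    # Preemptive: always run the job with the least remaining service.
    return min(avail, key=lambda i: remaining[i])

def fcfs(avail, remaining, jobs):
    # The earliest arrival holds the server until it finishes.
    return min(avail, key=lambda i: (jobs[i][0], i))

jobs = [(0, 4), (1, 1), (2, 2)]  # (arrival, size): a made-up workload
```

On this workload SRPT's total response time is 10 versus 13 for FCFS: preempting the long job lets the two short ones finish early.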

Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Among other things, he explained why the numbering of Volume 4 fascicles starts at 0 and not 1. (You may recall that there is no Volume 0 and so zero-based counting cannot be exactly the reason. Well, I mean *not exactly*.)

Pall Melsted, CMU

April 25, 2008, 3:30PM, Wean 7220

Abstract:

We present a linear expected time algorithm for finding maximum cardinality matchings in sparse random graphs. This is optimal and improves on previous results by a logarithmic factor.

This is joint work with Prasad Chebolu and Alan Frieze.
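As background for this kind of result (and emphatically not the algorithm from the talk), greedy heuristics on sparse graphs typically build on the classic Karp–Sipser rule: if some vertex has degree 1, matching its unique edge is always safe, since a pendant edge lies in some maximum matching; otherwise match an arbitrary edge. A minimal sketch:

```python
def karp_sipser(adj):
    """Greedy matching heuristic: prefer edges at degree-1 vertices.
    adj: dict mapping each vertex to the set of its neighbors.
    Returns a list of matched vertex pairs (a maximal matching)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    matching = []

    def remove(v):
        for u in adj.pop(v, set()):
            if u in adj:
                adj[u].discard(v)

    while any(adj.values()):
        # A pendant (degree-1) edge lies in some maximum matching,
        # so taking it is safe; otherwise fall back to any edge.
        v = next((x for x, ns in adj.items() if len(ns) == 1), None)
        if v is None:
            v = next(x for x, ns in adj.items() if ns)
        u = next(iter(adj[v]))
        matching.append((v, u))
        remove(v)
        remove(u)
    return matching

# Usage: on the path 1-2-3-4 the heuristic finds a maximum matching.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

On sparse random graphs this heuristic already leaves few vertices unmatched; the hard part of a linear expected time exact algorithm is cleaning up what the greedy phase leaves behind.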

Speaker: Ojas Parekh, Emory University

When: April 24, 12:30-13:30

Where: Porter Hall 125B

Abstract:

Our focus in this talk will be the size of linear programming formulations of combinatorial optimization problems. We may view this parameter as akin to traditional measures of complexity, such as computational time and space. We will focus on problems in P, in particular the minimum cut problem. For a graph $(V,E)$, existing linear formulations for the minimum cut problem require $\Theta(|V||E|)$ variables and constraints. These formulations can be interpreted as a composition of $|V|-1$ polyhedra for minimum $s$-$t$ cuts paralleling early algorithmic approaches to finding globally minimum cuts, which relied on $|V|-1$ calls to a minimum $s$-$t$ cut algorithm. We present the first formulation to beat this bound, one that uses $O(|V|^2)$ variables and $O(|V|^3)$ constraints. Our formulation directly implies a smaller compact linear relaxation for the Traveling Salesman Problem that is equivalent in strength to the standard subtour relaxation.
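To make the object of study concrete, here is the textbook single-pair minimum $s$-$t$ cut LP, the building block that the $\Theta(|V||E|)$-size formulations compose $|V|-1$ times (this sketch uses scipy and a made-up example graph; it is not the new formulation from the talk). One block has a variable $y_e$ per edge and a potential $p_v$ per vertex, with constraints $y_e \ge |p_u - p_v|$ and $p_s - p_t \ge 1$:

```python
from scipy.optimize import linprog

def min_st_cut_lp(vertices, edges, s, t):
    """LP: min sum w_e*y_e  s.t.  y_e >= |p_u - p_v|,  p_s - p_t >= 1.
    One s-t block: O(|V|+|E|) variables, O(|E|) constraints; composing
    |V|-1 such blocks yields the Theta(|V||E|)-size global-cut LPs."""
    vi = {v: i for i, v in enumerate(vertices)}
    m, n = len(edges), len(vertices)
    c = [w for _, _, w in edges] + [0.0] * n   # minimize cut weight
    A_ub, b_ub = [], []
    for k, (u, v, _) in enumerate(edges):
        for a, b in ((u, v), (v, u)):          # y_e >= p_a - p_b
            row = [0.0] * (m + n)
            row[k] = -1.0
            row[m + vi[a]] = 1.0
            row[m + vi[b]] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)
    row = [0.0] * (m + n)                      # p_t - p_s <= -1
    row[m + vi[t]] = 1.0
    row[m + vi[s]] = -1.0
    A_ub.append(row)
    b_ub.append(-1.0)
    bounds = [(0, None)] * m + [(None, None)] * n  # y >= 0, p free
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

# Made-up example: weighted 4-vertex graph with min s-t cut value 5.
edges = [('s', 'a', 3), ('s', 'b', 2), ('a', 't', 2),
         ('b', 't', 3), ('a', 'b', 1)]
value = min_st_cut_lp(['s', 'a', 'b', 't'], edges, 's', 't')
```

This LP is integral, so its optimum equals the minimum cut weight; the point of the talk's formulation is to beat the size of the naive $|V|-1$-fold composition of such blocks.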

Place: NSH 1507

Speaker: Elaine Shi

Title: How to build private Google Docs

Abstract: I will describe some recent results in predicate encryption. The crypto construction allows a user to store her personal files on a remote untrusted server and make expressive search queries to retrieve certain documents. The remote untrusted server learns no unintended information.
