Karthekeyan Chandrasekaran
Associate Professor, Department of Industrial and Enterprise Systems Engineering
Affiliate, Department of Computer Science
University of Illinois, Urbana-Champaign
Research Interests
Combinatorial optimization, integer programming, probabilistic methods and analysis, randomized algorithms.
Bio
B. Tech., Computer Science, IIT Madras
Ph.D., Algorithms, Combinatorics and Optimization (ACO), Georgia Tech
Postdoc (Simons), Theory of Computation, Harvard
CV
Teaching

Graduate-level
 IE 511: Integer Programming – Spring 2015, Spring 2017, Spring 2019, Spring 2021 (with lecture notes)
 IE 519/CS 586: Combinatorial Optimization (formerly IE 598) – Fall 2015, Spring 2018, Spring 2020
Undergraduate-level
 IE 310: Deterministic Models in Optimization – Spring 2016, Fall 2016, Fall 2017, Fall 2018, Fall 2019, Fall 2020, Spring 2022 (with lecture slides)
 IE 311: Operations Research Lab – Spring 2016, Fall 2016, Fall 2017, Fall 2018
Students
 PhD Advisees
 Shubhang Kulkarni
 Weihang Wang
 Calvin Beideman
 Chao Xu, May 2018 (currently Assistant Professor at UESTC); Thesis: Cuts and Connectivity in Graphs and Hypergraphs (co-advised with Chandra Chekuri)
 Master's Advisees
 Sahand Mozaffari, Aug 2021 (Joined Microsoft)
 Ali Bibak, Aug 2020 (Joined FlexTrade)
 Victor Sui (BS-MCS), May 2020 (Joined Jump Trading LLC)
 Undergrad Advisees
 Aditya Pillai, May 2020 (Joined as PhD student at Georgia Tech)
 Jingwen Jiang, May 2016 (Joined as PhD student at University of Chicago)
Publications
 Approximate minimum cuts and their enumeration
(with Calvin Beideman, Weihang Wang) (in submission) [Abstract]
We show that every alpha-approximate minimum cut in a connected graph is the unique minimum (S,T)-terminal cut for some subsets S and T of vertices, each of size at most 2 alpha + 1. This leads to an alternative proof that the number of alpha-approximate minimum cuts in an n-vertex connected graph is n^{O(alpha)} and that they can all be enumerated in deterministic polynomial time for constant alpha.
 Analyzing Residual Random Greedy for monotone submodular maximization
(with Kristóf Bérczi, Tamás Király, Aditya Pillai) (in submission) [Abstract]
Residual Random Greedy (RRGreedy) is a natural randomized variant of the greedy algorithm for submodular maximization. It was introduced to address non-monotone submodular maximization (Buchbinder, Feldman, Naor, Schwartz, SODA 2014) and plays an important role in the deterministic algorithm for monotone submodular maximization that beats the (1/2)-factor barrier (Buchbinder, Feldman, Garg, IPCO 2019). In this work, we analyze RRGreedy for monotone submodular functions along two fronts: (1) For matroid-constrained maximization of submodular functions with bounded curvature alpha, we show that RRGreedy achieves a (1/(1+alpha))-approximation irrespective of the randomness in the algorithm. In particular, this implies that it achieves a (1/2)-approximation always (not just in expectation). (2) We generalize RRGreedy to k >= 2 matroid constraints and show that the generalization achieves a (1/(k+1))-approximation in expectation relative to the optimum value of the Lovász extension over the intersection of k matroid polytopes. Our results suggest that RRGreedy is at least as good as Greedy for matroid and matroid-intersection constraints.
 Approximate Representation of Symmetric Submodular Functions via Hypergraph Cut Functions
(with Calvin Beideman, Chandra Chekuri, Chao Xu)
To appear in Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2022. [Abstract]
Submodular functions are fundamental to combinatorial optimization. Many interesting problems can be formulated as special cases of problems involving submodular functions. In this work, we consider the problem of approximating symmetric submodular functions everywhere using hypergraph cut functions. Devanur, Dughmi, Schwartz, Sharma, and Singh (arXiv: 1304.4948, 2013) showed that symmetric submodular functions over n-element ground sets cannot be approximated within an (n/8)-factor using a graph cut function and raised the question of approximating them using hypergraph cut functions. Our main result is that there exist symmetric submodular functions over n-element ground sets that cannot be approximated within an o(n^{1/3}/log^2 n)-factor using a hypergraph cut function. On the positive side, we show that symmetrized concave linear functions and symmetrized rank functions of uniform matroids and partition matroids can be constant-approximated using hypergraph cut functions.
 Counting and enumerating optimum cut sets for hypergraph k-partitioning problems for fixed k
(with Calvin Beideman, Weihang Wang)
International Colloquium on Automata, Languages and Programming (ICALP), 2022. [pdf] [arXiv] [Abstract]
We consider the problem of enumerating optimal solutions for two hypergraph k-partitioning problems, namely Minmax-Hypergraph-k-Partition and Hypergraph-k-Cut. The input in hypergraph k-partitioning problems is a hypergraph G = (V, E) with nonnegative hyperedge costs along with a fixed positive integer k. The goal is to find a partition of V into k nonempty parts (V_1, V_2, ..., V_k), known as a k-partition, so as to minimize an objective of interest.
(1) If the objective of interest is the maximum cut value of the parts, then the problem is known as Minmax-Hypergraph-k-Partition. A subset of hyperedges is a minmax-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Minmax-Hypergraph-k-Partition.
(2) If the objective of interest is the total cost of hyperedges crossing the k-partition, then the problem is known as Hypergraph-k-Cut. A subset of hyperedges is a min-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Hypergraph-k-Cut.
We give the first polynomial bound on the number of minmax-k-cut-sets and a polynomial-time algorithm to enumerate all of them in hypergraphs for every fixed k. Our technique is strong enough to also enable an n^{O(k)} p-time deterministic algorithm to enumerate all min-k-cut-sets in hypergraphs, thus improving on the previously known n^{O(k^2)} p-time deterministic algorithm, where n is the number of vertices and p is the size of the hypergraph. The correctness analysis of our enumeration approach relies on a structural result that is a strong and unifying generalization of known structural results for Hypergraph-k-Cut and Minmax-Hypergraph-k-Partition. We believe that our structural result is likely to be of independent interest in the theory of hypergraphs (and graphs).
 Faster Connectivity in Low-rank Hypergraphs via Expander Decomposition
(with Calvin Beideman, Sagnik Mukhopadhyay, Danupon Nanongkai)
Integer Programming and Combinatorial Optimization (IPCO), 2022. [arXiv] [Abstract]
We design an algorithm for computing connectivity in hypergraphs which runs in time hat{O}_r(p + min{lambda n^2, n^r/lambda}), where p is the size, n is the number of vertices, and r is the rank of the hypergraph. Our algorithm is faster than existing algorithms when the connectivity lambda is Omega(n^{(r-2)/2}). At the heart of our algorithm is a structural result regarding min-cuts in simple hypergraphs: we show a trade-off between the number of hyperedges taking part in all min-cuts and the size of the smaller side of the min-cut. This structural result can be viewed as a generalization of a well-known structural theorem for simple graphs [Kawarabayashi-Thorup, JACM '19]. We extend the framework of expander decomposition to simple hypergraphs in order to prove this structural result. We also make the proof of the structural result constructive to obtain our faster hypergraph connectivity algorithm.
 Deterministic enumeration of all minimum cut-sets and k-cut-sets in hypergraphs for fixed k
(with Calvin Beideman, Weihang Wang)
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022. [pdf] [arXiv] [Abstract]
We consider the problem of deterministically enumerating all minimum k-cut-sets in a given hypergraph for any fixed k. The input here is a hypergraph G = (V, E) with nonnegative hyperedge costs. A subset F of hyperedges is a k-cut-set if the number of connected components in G - F is at least k, and it is a minimum k-cut-set if it has the least cost among all k-cut-sets. For fixed k, we call the problem of finding a minimum k-cut-set Hypergraph-k-Cut and the problem of enumerating all minimum k-cut-sets Enum-Hypergraph-k-Cut. The special cases of Hypergraph-k-Cut and Enum-Hypergraph-k-Cut restricted to graph inputs are well known to be solvable in (randomized as well as deterministic) polynomial time. In contrast, it is only recently that polynomial-time algorithms for Hypergraph-k-Cut were developed. The randomized polynomial-time algorithm for Hypergraph-k-Cut designed in 2018 (Chandrasekaran, Xu, and Yu, SODA 2018) showed that the number of minimum k-cut-sets in a hypergraph is O(n^{2k-2}), where n is the number of vertices in the input hypergraph, and that they can all be enumerated in randomized polynomial time, thus resolving Enum-Hypergraph-k-Cut in randomized polynomial time. A deterministic polynomial-time algorithm for Hypergraph-k-Cut was subsequently designed in 2020 (Chandrasekaran and Chekuri, FOCS 2020), but it is not guaranteed to enumerate all minimum k-cut-sets. In this work, we give the first deterministic polynomial-time algorithm to solve Enum-Hypergraph-k-Cut (this is non-trivial even for k = 2). Our algorithm is based on new structural results that allow for efficient recovery of all minimum k-cut-sets by solving minimum (S,T)-terminal cuts. Our techniques give new structural insights even for enumerating all minimum cut-sets (i.e., minimum 2-cut-sets) in a given hypergraph.
 l_p-norm Multiway Cut
(with Weihang Wang)
Preliminary version in European Symposium on Algorithms (ESA), 2021.
Algorithmica, 2022. [pdf] [arXiv] [Abstract]
We introduce and study l_p-norm multiway cut: the input here is an undirected graph with nonnegative edge weights along with k terminals, and the goal is to find a partition of the vertex set into k parts, each containing exactly one terminal, so as to minimize the l_p-norm of the cut values of the parts. This is a unified generalization of min-sum multiway cut (when p=1) and min-max multiway cut (when p=infinity), both of which are well-studied classic problems in the graph partitioning literature. Our motivation behind studying the l_p-norm objective is that it acts as a fairness-inducing objective based on p. We show that l_p-norm multiway cut is NP-hard for a constant number of terminals and is NP-hard in planar graphs. On the algorithmic side, we design an O(log^2 n)-approximation for all p >= 1. We also show an integrality gap of Omega(k^{1-1/p}) for a natural convex program and an O(k^{1-1/p-epsilon})-inapproximability for any constant epsilon > 0 assuming the small set expansion hypothesis.
 Fixed-Parameter Approximation Scheme for Min-max k-cut
(with Weihang Wang)
Preliminary version in Integer Programming and Combinatorial Optimization (IPCO), 2021.
Mathematical Programming, 2022. [pdf] [arXiv] [Abstract]
We consider the graph k-partitioning problem under the min-max objective, termed Min-max k-cut. The input here is a graph G = (V, E) with nonnegative edge weights w: E -> R_+ and an integer k >= 2, and the goal is to partition the vertices into k nonempty parts V_1, ..., V_k so as to minimize max_{i=1}^k w(\delta(V_i)). Although minimizing the sum objective sum_{i=1}^k w(\delta(V_i)), termed Min-sum k-cut, has been studied extensively in the literature, very little is known about minimizing the max objective. We initiate the study of Min-max k-cut by showing that it is NP-hard and W[1]-hard when parameterized by k, and we design a parameterized approximation scheme when parameterized by k. The main ingredient of our parameterized approximation scheme is an exact algorithm for Min-max k-cut that runs in time (lambda k)^{O(k^2)} n^{O(1)}, where lambda is the value of the optimum and n is the number of vertices. Our algorithmic technique builds on the technique of Lokshtanov, Saurabh, and Surianarayanan (FOCS 2020), who showed a similar result for Min-sum k-cut. Our algorithmic techniques are more general and can be used to obtain parameterized approximation schemes for minimizing l_p-norm measures of k-partitioning for every p >= 1.
 Min-max Partitioning of Hypergraphs and Symmetric Submodular Functions
(with Chandra Chekuri)
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2021. [pdf] [Abstract]
We consider the complexity of min-max partitioning of graphs, hypergraphs, and (symmetric) submodular functions. Our main result is an algorithm for the problem of partitioning the ground set of a given symmetric submodular function f: 2^V -> R into k nonempty parts V_1, V_2, ..., V_k to minimize max_{i in [k]} f(V_i). Our algorithm runs in n^{O(k^2)} T time, where n = |V| and T is the time to evaluate f on a given set; hence, this yields a polynomial-time algorithm for any fixed k in the evaluation oracle model. As an immediate corollary, for any fixed k, there is a polynomial-time algorithm for the problem of partitioning a given hypergraph H = (V, E) into k nonempty parts to minimize the maximum capacity of the parts. The complexity of this problem, termed Minmax-Hypergraph-k-Part, was raised by Lawler in 1973 (Networks, Vol. 3, pp. 275-285). In contrast to our positive result, a reduction by Chekuri and Li (2015) implies that when k is part of the input, Minmax-Hypergraph-k-Part is hard to approximate to within an almost polynomial factor under the Exponential Time Hypothesis (ETH).
 Fixed-Parameter Algorithms for Longest Heapable Subsequence and Maximum Binary Tree
(with Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, YoungSan Lin, Minshen Zhu)
International Symposium on Parameterized and Exact Computation (IPEC), 2020. [pdf] [arXiv] [Abstract]
A heapable sequence is a sequence of numbers that can be arranged in a min-heap data structure. Finding a longest heapable subsequence of a given sequence was proposed by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011) as a generalization of the well-studied longest increasing subsequence problem, and its complexity still remains open. An equivalent formulation of the longest heapable subsequence problem is that of finding a maximum-sized binary tree in a given permutation directed acyclic graph (permutation DAG). In this work, we study parameterized algorithms for both longest heapable subsequence and maximum-sized binary tree. We show the following results:
(1) We show that the longest heapable subsequence problem can be solved in k^{O(log k)} n time, where k is the number of distinct values in the input sequence of length n.
(2) We introduce alphabet size as a new parameter in the study of computational problems in permutation DAGs. Our result on longest heapable subsequence implies that the maximum-sized binary tree problem in a given permutation DAG is fixed-parameter tractable when parameterized by the alphabet size.
(3) We show that the alphabet size with respect to a fixed topological ordering can be computed in polynomial time, admits a min-max relation, and has a polyhedral description.
(4) We show that the maximum-sized binary tree problem in undirected graphs can be solved in w^{O(w)} n time, where w is the treewidth and n is the number of vertices in the input graph.
Our results make progress towards understanding the complexity of the longest heapable subsequence and maximum-sized binary tree problems from the perspective of parameterized algorithms. We believe that the parameter alphabet size that we introduce is likely to be useful in the context of optimization problems defined over permutation DAGs.
 Hypergraph k-cut for fixed k in deterministic polynomial time
(with Chandra Chekuri)
Preliminary version in IEEE Symposium on Foundations of Computer Science (FOCS), 2020.
Mathematics of Operations Research, 2022. [pdf] [arXiv] [Abstract]
We consider the Hypergraph-k-cut problem. The input consists of a hypergraph G = (V, E) with nonnegative hyperedge costs c: E -> R_+ and a positive integer k. The objective is to find a least-cost subset F of hyperedges of E such that the number of connected components in G - F is at least k. An alternative formulation of the objective is to find a partition of V into k nonempty sets V_1, V_2, ..., V_k so as to minimize the cost of the hyperedges that cross the partition. Graph-k-cut, the special case of Hypergraph-k-cut obtained by restricting to graph inputs, has received considerable attention. Several different approaches lead to a polynomial-time algorithm for Graph-k-cut when k is fixed, starting with the work of Goldschmidt and Hochbaum (1988). In contrast, it is only recently that a randomized polynomial-time algorithm for Hypergraph-k-cut was developed (Chandrasekaran, Xu, Yu, SODA 2018) via a subtle generalization of Karger's random contraction approach for graphs. In this work, we develop the first deterministic polynomial-time algorithm for Hypergraph-k-cut for all fixed k. We describe two algorithms, both of which are based on a divide-and-conquer approach. The first algorithm is simpler and runs in n^{O(k^2)} time while the second one runs in n^{O(k)} time. Our proof relies on new structural results that allow for efficient recovery of the parts of an optimum k-partition by solving minimum (s,t)-terminal cuts. Our techniques give new insights even for Graph-k-cut.
 Multicriteria cuts and size-constrained k-cuts in hypergraphs
(with Calvin Beideman, Chao Xu)
Preliminary version in International Conference on Randomization and Computation (RANDOM), 2020.
Mathematical Programming, 2022. [pdf] [arXiv] [Abstract]
We address counting and optimization variants of multicriteria global min-cut and size-constrained min-k-cut in hypergraphs.
1. For an r-rank n-vertex hypergraph endowed with t hyperedge-cost functions, we show that the number of multiobjective min-cuts is O(r 2^{tr} n^{3t-1}). In particular, this shows that the number of parametric min-cuts in constant-rank hypergraphs for a constant number of criteria is strongly polynomial, thus resolving an open question by Aissi, Mahjoub, McCormick, and Queyranne (Math Programming, 2015). In addition, we give randomized algorithms to enumerate all multiobjective min-cuts and all Pareto-optimal cuts in strongly polynomial time.
2. We also address node-budgeted multiobjective min-cuts: For an n-vertex hypergraph endowed with t vertex-weight functions, we show that the number of node-budgeted multiobjective min-cuts is O(r 2^r n^{t+2}), where r is the rank of the hypergraph, and the number of node-budgeted b-multiobjective min-cuts for a fixed budget-vector b is O(n^2).
3. We show that min-k-cut in hypergraphs subject to constant lower bounds on part sizes is solvable in polynomial time for constant k, thus resolving an open problem posed by Queyranne. Our technique also shows that the number of optimal solutions is polynomial.
All of our results build on the random contraction approach of Karger (SODA, 1993). Our techniques illustrate the versatility of the random contraction approach for addressing counting and algorithmic problems concerning multiobjective min-cuts and size-constrained k-cuts in hypergraphs.
 The Maximum Binary Tree Problem
(with Elena Grigorescu, Gabriel Istrate, Shubhang Kulkarni, YoungSan Lin, Minshen Zhu)
Preliminary version in European Symposium on Algorithms (ESA), 2020.
Algorithmica, 2021. [pdf] [arXiv] [Abstract]
We introduce and investigate the approximability of the maximum binary tree problem (MBT) in directed and undirected graphs. The goal in MBT is to find a maximum-sized binary tree in a given graph. MBT is a natural variant of the well-studied longest path problem, since both can be viewed as finding a maximum-sized tree of bounded degree in a given graph.
The connection to longest path motivates the study of MBT in directed acyclic graphs (DAGs), since the longest path problem is solvable efficiently in DAGs. In contrast, we show that MBT in DAGs is in fact hard: it has no efficient exp(-O(log n / log log n))-approximation algorithm under the exponential time hypothesis, where n is the number of vertices in the input graph. In undirected graphs, we show that MBT has no efficient exp(-O(log^{0.63} n))-approximation under the exponential time hypothesis. Our inapproximability results rely on self-improving reductions and structural properties of binary trees. We also show constant-factor inapproximability under the P ≠ NP assumption.
In addition to inapproximability results, we present algorithmic results of two different flavors: (1) We design a randomized algorithm to verify whether a given directed graph on n vertices contains a binary tree of size k in 2^k poly(n) time. (2) Motivated by the longest heapable subsequence problem introduced by Byers, Heeringa, Mitzenmacher, and Zervas (ANALCO 2011), which is equivalent to MBT in permutation DAGs, we design efficient algorithms for MBT in bipartite permutation graphs.
 Spectral Aspects of Symmetric Matrix Signings
(with Charles Carlson, HsienChih Chang, Naonori Kakimura, Alexandra Kolla)
Preliminary version in Mathematical Foundations of Computer Science (MFCS), 2019.
Discrete Optimization, 2020. [pdf] [arXiv] [Abstract]
The spectra of signed matrices have played a fundamental role in social sciences, graph theory, and control theory. In this work, we investigate the computational problems of finding symmetric signings of matrices with natural spectral properties. Our results are the following:
1. We characterize matrices that have an invertible signing: a symmetric matrix has an invertible symmetric signing if and only if the support graph of the matrix contains a perfect 2-matching. Further, we present an efficient algorithm to search for an invertible symmetric signing.
2. We use the above-mentioned characterization to give an algorithm to find a minimum increase in the support of a given symmetric matrix so that it has an invertible symmetric signing.
3. We show NP-completeness of the following problems: verifying whether a given matrix has a symmetric off-diagonal signing that is singular/has bounded eigenvalues. However, we also illustrate that the complexity could differ substantially for input matrices that are adjacency matrices of graphs.
We use combinatorial techniques in addition to classic results from matching theory.
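As a toy illustration of result 1 (this is illustrative code, not the paper's efficient algorithm, and the helper names are hypothetical), one can brute-force over all symmetric signings of a small integer matrix and test invertibility; the characterization predicts success exactly when the support graph has a perfect 2-matching:

```python
from itertools import product

def det(m):
    """Integer determinant by cofactor expansion (tiny matrices only)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def has_invertible_signing(matrix):
    """Try every symmetric signing of a symmetric integer matrix (flip
    signs of off-diagonal pairs) and report whether some signed matrix
    has nonzero determinant."""
    n = len(matrix)
    support = [(i, j) for i in range(n) for j in range(i + 1, n) if matrix[i][j]]
    for signs in product([1, -1], repeat=len(support)):
        signed = [row[:] for row in matrix]
        for (i, j), s in zip(support, signs):
            signed[i][j] = s * matrix[i][j]
            signed[j][i] = s * matrix[j][i]
        if det(signed) != 0:
            return True
    return False
```

For example, the single edge K_2 has a perfect 2-matching (its one edge taken twice), so an invertible signing exists; the star K_{1,2} has none, and indeed every signing of its adjacency matrix is singular.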
 Improving the Integrality Gap for Multiway Cut
(with Kristóf Bérczi, Tamás Király, Vivek Madan)
Preliminary version in Integer Programming and Combinatorial Optimization (IPCO), 2019.
Mathematical Programming, 2020. [pdf] [supplement] [arXiv] [Abstract]
In the multiway cut problem, we are given an undirected graph with nonnegative edge weights and a collection of k terminal nodes, and the goal is to partition the node set of the graph into k nonempty parts, each containing exactly one terminal, so that the total weight of the edges crossing the partition is minimized. The multiway cut problem for k >= 3 is APX-hard. For arbitrary k, the best-known approximation factor is 1.2965 due to Sharma and Vondrák while the best-known inapproximability factor is 1.2 due to Angelidakis, Makarychev, and Manurangsi. In this work, we improve the lower bound to 1.20016 by constructing an integrality gap instance for the CKR relaxation.
A technical challenge in improving the gap has been the lack of geometric tools to understand higher-dimensional simplices. Our instance is a non-trivial 3-dimensional instance that overcomes this technical challenge. We analyze the gap of the instance by viewing it as a convex combination of 2-dimensional instances and a uniform 3-dimensional instance. We believe that this technique could be exploited further to construct instances with larger integrality gaps. One of the byproducts of our proof technique is a generalization of a result on Sperner-admissible labelings due to Mirzakhani and Vondrák that might be of independent combinatorial interest.
 Improving the smoothed complexity of FLIP for max cut problems
(with Ali Bibak, Charles Carlson)
Preliminary version in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019.
ACM Transactions on Algorithms, 2021. [pdf] [arXiv] [Abstract]
Finding locally optimal solutions for max-cut and max-k-cut are well-known PLS-complete problems. An instinctive approach to finding such a locally optimal solution is the FLIP method. Even though FLIP requires exponential time on worst-case instances, it tends to terminate quickly on practical instances. To explain this discrepancy, the runtime of FLIP has been studied in the smoothed complexity framework. Etscheid and Röglin [ER17] showed that the smoothed complexity of FLIP for max-cut in arbitrary graphs is quasi-polynomial. Angel, Bubeck, Peres, and Wei [ABPW17] showed that the smoothed complexity of FLIP for max-cut in complete graphs is O(phi^5 n^{15.1}), where phi is an upper bound on the random edge-weight density and n is the number of vertices in the input graph.
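For concreteness, the FLIP method itself is easy to state; the sketch below is an illustrative toy (not the analyzed smoothed-model variant), running FLIP for max-cut on a graph given as a dict of edge weights:

```python
import random

def flip_local_search(n, weights, seed=0):
    """FLIP local search for max-cut on vertices 0..n-1: start from a
    random bipartition and repeatedly flip any single vertex whose move
    increases the cut weight, until no improving flip exists.
    `weights` maps pairs (u, v) with u < v to nonnegative weights."""
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]

    def gain(v):
        # change in cut weight if vertex v switches sides
        g = 0
        for u in range(n):
            if u != v:
                w = weights.get((min(u, v), max(u, v)), 0)
                g += w if side[u] == side[v] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side
```

On a unit-weight triangle, every run ends at a local optimum cutting exactly 2 of the 3 edges, regardless of the random start.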
While Angel et al.'s result established the first polynomial smoothed complexity, they also conjectured that their runtime bound is far from optimal. In this work, we make substantial progress towards improving the runtime bound. We prove that the smoothed complexity of FLIP in complete graphs is O(phi n^{7.83}). Our results are based on a carefully chosen matrix whose rank captures the runtime of the method, along with improved rank bounds for this matrix and an improved union bound based on this matrix. In addition, our techniques provide a general framework for analyzing the FLIP method in the smoothed framework. We illustrate this general framework by showing that the smoothed complexity of FLIP for max-3-cut in complete graphs is polynomial and for max-k-cut in arbitrary graphs is quasi-polynomial. We believe that our techniques should also be of interest towards addressing the smoothed complexity of FLIP for max-k-cut in complete graphs for larger constants k.
 Beating the 2-approximation factor for global bicut
(with Kristóf Bérczi, Tamás Király, Euiwoong Lee, Chao Xu)
Mathematical Programming, 2019. [pdf] [Abstract]
In the fixed-terminal bicut problem, the input is a directed graph with two specified nodes s and t, and the goal is to find a smallest subset of edges whose removal ensures that s cannot reach t and t cannot reach s. In the global bicut problem, the input is a directed graph, and the goal is to find a smallest subset of edges whose removal ensures that there exist two nodes s and t such that s cannot reach t and t cannot reach s. Fixed-terminal bicut and global bicut are natural extensions of {s,t}-min cut and global min-cut, respectively, from undirected graphs to directed graphs. Fixed-terminal bicut is NP-hard, admits a simple 2-approximation, and does not admit a (2-eps)-approximation for any constant eps > 0 assuming the unique games conjecture. In this work, we show that global bicut admits a (2-1/448)-approximation, thus improving on the approximability of the global variant in comparison to the fixed-terminal variant.
 Additive Stabilizers for Unstable Graphs
(with Corinna Gottschalk, Jochen Könemann, Britta Peis, Daniel Schmand, Andreas Wierz)
Discrete Optimization, 2019. [pdf] [arXiv] [Abstract]
Stabilization of graphs has received substantial attention in recent years due to its connection to game theory. Stable graphs are exactly the graphs inducing a matching game with nonempty core. They are also the graphs that induce a network bargaining game with a balanced solution. A graph with weighted edges is called stable if the maximum weight of an integral matching equals the cost of a minimum fractional weighted vertex cover. If a graph is not stable, it can be stabilized in different ways. Recent papers have considered the deletion or addition of edges and vertices in order to stabilize a graph. In this work, we focus on a fine-grained stabilization strategy, namely stabilization of graphs by fractionally increasing edge weights.
We show the following results for stabilization by a minimum-weight increase in edge weights (min additive stabilizer): (i) Any approximation algorithm for min additive stabilizer that achieves a factor of O(|V|^{1/24 - eps}) for eps > 0 would lead to improvements in the approximability of densest-k-subgraph. (ii) Min additive stabilizer has no o(log |V|)-approximation unless NP=P. Results (i) and (ii) together provide the first super-constant hardness results for any graph stabilization problem. On the algorithmic side, we present (iii) an algorithm to solve min additive stabilizer in factor-critical graphs exactly in poly-time, (iv) an algorithm to solve min additive stabilizer in arbitrary graphs exactly in time exponential in the size of the Tutte set, and (v) a poly-time algorithm with approximation factor at most O(sqrt(|V|)) for a superclass of the instances generated in our hardness proofs.
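The stability condition above can be checked directly on tiny instances: by LP duality, the minimum fractional weighted vertex cover equals the maximum fractional weight matching, and the fractional matching LP always has a half-integral optimum, so a brute force over edge values in {0, 1/2, 1} suffices. A hypothetical sketch (not from the paper):

```python
from fractions import Fraction
from itertools import product

def max_matching_value(vertices, weighted_edges, values):
    """Maximum total weight of an assignment x_e drawn from `values`,
    subject to the load at every vertex being at most 1 (brute force,
    so only suitable for tiny graphs)."""
    best = Fraction(0)
    edges = list(weighted_edges)
    for x in product(values, repeat=len(edges)):
        load = {v: Fraction(0) for v in vertices}
        for (u, v, _), xe in zip(edges, x):
            load[u] += xe
            load[v] += xe
        if all(l <= 1 for l in load.values()):
            value = sum((xe * w for (_, _, w), xe in zip(edges, x)), Fraction(0))
            best = max(best, value)
    return best

def is_stable(vertices, weighted_edges):
    """Stable iff the best integral matching already attains the
    fractional optimum (half-integral values are enough to check)."""
    integral = max_matching_value(vertices, weighted_edges,
                                  [Fraction(0), Fraction(1)])
    fractional = max_matching_value(vertices, weighted_edges,
                                    [Fraction(0), Fraction(1, 2), Fraction(1)])
    return integral == fractional
```

A unit-weight triangle is unstable (fractional value 3/2 versus integral value 1), while a single edge is stable.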
 Lattice-based Locality Sensitive Hashing is Optimal
(with Daniel Dadush, Venkata Gandikota, Elena Grigorescu)
Innovations in Theoretical Computer Science (ITCS), 2018. [pdf] [Abstract]
Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC '98) to give the first sublinear-time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both the query time and the space usage. In a seminal work, Andoni and Indyk (FOCS '06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of 1/c^2 for the l_2-norm, which was later shown to be optimal by O'Donnell, Wu, and Zhou (TOCT '14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g., to yield low space and query complexity, remains an important open problem.
 Hypergraph k-cut in randomized polynomial time
(with Chao Xu, Xilin Yu)
Preliminary version in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2018.
Mathematical Programming, 2019. [pdf] [Abstract]
In the hypergraph k-cut problem, the input is a hypergraph, and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. This problem is known to be at least as hard as the densest k-subgraph problem when k is part of the input (Chekuri-Li, 2015). We present a randomized polynomial-time algorithm to solve the hypergraph k-cut problem for constant k.
Our algorithm solves the more general hedge k-cut problem when the subgraph induced by every hedge has a constant number of connected components. In the hedge k-cut problem, the input is a hedge-graph specified by a vertex set and a disjoint set of hedges, where each hedge is a subset of edges defined over the vertices. The goal is to find a smallest subset of hedges whose removal ensures that the number of connected components in the remaining underlying (multi-)graph is at least k.
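For intuition, the random contraction idea in the graph special case (Karger's classic global min-cut algorithm, for k = 2) can be sketched as follows; this is an illustrative toy, not the hedge k-cut algorithm itself:

```python
import random

def karger_min_cut(edges, n, trials=50, seed=0):
    """Karger-style random contraction for global min-cut in an
    unweighted graph on vertices 0..n-1: contract uniformly random
    edges until two super-vertices remain, read off the crossing
    edges, and keep the smallest cut found over independent trials."""
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        components = n
        pool = list(edges)
        while components > 2 and pool:
            u, v = pool.pop(rng.randrange(len(pool)))
            if find(u) != find(v):
                parent[find(u)] = find(v)
                components -= 1
            # discard edges that became self-loops after the contraction
            pool = [(a, b) for (a, b) in pool if find(a) != find(b)]
        best = min(best, sum(1 for (a, b) in edges if find(a) != find(b)))
    return best
```

Each trial returns some cut; repeating the contraction experiment boosts the probability that an optimum cut survives, which is the property the hedge/hyperedge distribution in the paper is designed to preserve.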
Our algorithm is based on random contractions akin to Karger's min-cut algorithm. Our main technical contribution is a distribution over the hedges (hyperedges) such that random contraction of hedges (hyperedges) chosen from the distribution succeeds in returning an optimum solution with large probability.
 A tight √2-approximation for Linear 3-Cut
(with Kristóf Bérczi, Tamás Király, Vivek Madan)
Preliminary version in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2018.
Mathematical Programming, 2019. [pdf] [Abstract]
We investigate the approximability of the linear 3-cut problem in directed graphs, which is the simplest unsolved case of the linear k-cut problem. The input here is a directed graph D=(V,E) with node weights and three specified terminal nodes s, r, t in V, and the goal is to find a minimum-weight subset of non-terminal nodes whose removal ensures that s cannot reach r and t, and r cannot reach t. The problem is approximation-equivalent to the problem of blocking rooted in- and out-arborescences, and it also has applications in network coding and security.
The approximability of linear 3-cut has been wide open until now: the best known lower bound under the Unique Games Conjecture (UGC) was 4/3, while the best known upper bound was 2 using a trivial algorithm. In this work we completely close this gap: we present a √2-approximation algorithm and show that this factor is tight assuming UGC. Our contributions are twofold:
(1) we analyze a natural two-step deterministic rounding scheme through the lens of a single-step randomized rounding scheme with nontrivial distributions, and
(2) we construct integrality gap instances that meet the upper bound of √2. Our gap instances can be viewed as a weighted graph sequence converging to a "graph limit structure".
 Graph Stabilization: A Survey
Combinatorial Optimization and Graph Algorithms: Communications of NII Shonan Meetings, 2017. [pdf] [Abstract]
Graph stabilization has given rise to a family of network design problems that has received considerable attention recently. Stable graphs are those graphs for which the matching game has a nonempty core. In optimization terminology, they are graphs for which the fractional matching linear program has an integral optimum solution. Graph stabilization involves minimally modifying a given graph to make it stable. In this survey, we outline recent developments in graph stabilization and highlight some open problems.
 Odd Multiway Cut in Directed Acyclic Graphs
(with Sahand Mozaffari)
Preliminary version in International Symposium on Parameterized and Exact Computation (IPEC), 2017.
SIAM Journal on Discrete Mathematics, 2020. [pdf] [arXiv] [Abstract]
We investigate the odd multiway node (edge) cut problem where the input is a graph with a specified collection of terminal nodes and the goal is to find a smallest subset of non-terminal nodes (edges) to delete so that the terminal nodes do not have an odd-length path between them. In earlier work, Lokshtanov and Ramanujan showed that both odd multiway node cut and odd multiway edge cut are fixed-parameter tractable (FPT) when parameterized by the size of the solution in undirected graphs. In this work, we focus on directed acyclic graphs (DAGs) and design a fixed-parameter algorithm. Our main contribution is an extension of the shadow-removal framework for parity problems in DAGs. We complement our FPT results with tight approximability as well as polyhedral results for 2 terminals in DAGs. Additionally, we show inapproximability results for odd multiway edge cut in undirected graphs even for 2 terminals.
 Global and fixed-terminal cuts in digraphs
(with Kristóf Bérczi, Tamás Király, Euiwoong Lee, Chao Xu)
International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2017. [pdf] [arXiv] [Abstract]
The computational complexity of multicut-like problems may vary significantly depending on whether the terminals are fixed or not. In this work we present a comprehensive study of this phenomenon in two types of cut problems in directed graphs: double cut and bicut.
1. The fixed-terminal edge-weighted double cut is known to be solvable efficiently. We show a tight approximability factor of 2 for the fixed-terminal node-weighted double cut. We show that the global node-weighted double cut cannot be approximated to a factor smaller than 3/2 under the Unique Games Conjecture (UGC).
2. The fixed-terminal edge-weighted bicut is known to have a tight approximability factor of 2. We show that the global edge-weighted bicut is approximable to a factor strictly better than 2, and that the global node-weighted bicut cannot be approximated to a factor smaller than 3/2 under UGC.
3. In relation to these investigations, we also prove two results on undirected graphs which are of independent interest. First, we show NP-completeness and a tight inapproximability bound of 4/3 for the node-weighted 3-cut problem. Second, we show that for constant k, there exists an efficient algorithm to solve the minimum {s,t}-separating k-cut problem.
Our techniques for the algorithms are combinatorial, LP-based, and based on enumeration of approximate min-cuts. Our hardness results are based on combinatorial reductions and integrality gap instances.
 On the Expansion of Group-Based Lifts
(with Naman Agarwal, Alexandra Kolla, Vivek Madan)
Preliminary version in International Workshop on Randomization and Computation (RANDOM), 2017.
SIAM Journal on Discrete Mathematics, 2019. [pdf] [arXiv] [Abstract]
A k-lift of an n-vertex base graph G is a graph H on n x k vertices, where each vertex v of G is replaced by k vertices v_1,...,v_k and each edge uv in G is replaced by a matching representing a bijection pi_{uv} so that the edges of H are of the form (u_i, v_{pi_{uv}(i)}). Lifts have been investigated as a means to efficiently construct expanders. In this work, we study lifts obtained from groups and group actions. We derive the spectrum of such lifts via the representation theory of the underlying group. Our main results are:
(1) A uniform random lift by a cyclic group of order k of any n-vertex d-regular base graph G, with the nontrivial eigenvalues of the adjacency matrix of G bounded by lambda in magnitude, has its new nontrivial eigenvalues bounded by lambda + O(sqrt{d}) in magnitude with probability 1 - k e^{-Omega(n/d^2)}. The probability bounds as well as the dependency on lambda are almost optimal. As a special case, we obtain that there is a constant c_1 such that for every k <= 2^{c_1 n/d^2}, there exists a lift H of every Ramanujan graph by a cyclic group of order k such that H is almost Ramanujan (nontrivial eigenvalues of the adjacency matrix at most O(sqrt{d}) in magnitude). This result leads to a quasi-polynomial time deterministic algorithm to construct almost-Ramanujan expanders.
(2) There is a constant c_2 such that for every k >= 2^{c_2 nd}, there does NOT exist an abelian k-lift H of any n-vertex d-regular base graph such that H is almost Ramanujan. This can be viewed as an analogue of the well-known no-expansion result for constant-degree abelian Cayley graphs.
Suppose k_0 is the order of the largest abelian group that produces expanding lifts. Our two results highlight lower and upper bounds on k_0 that are tight up to a factor of d^3 in the exponent, thus suggesting a threshold phenomenon.
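The cyclic-lift construction in result (1) is easy to experiment with numerically. The sketch below is an illustration only, not the paper's algorithm (the base graph, shifts, and numpy usage are my own choices): it builds a random cyclic k-lift of a small d-regular base graph and checks that every base eigenvalue survives in the lift, via eigenvectors that are constant on each fiber; the remaining eigenvalues are the "new" ones the theorem bounds.

```python
import numpy as np

def cyclic_lift(edges, n, k, rng):
    """Random k-lift by the cyclic group Z_k: each base edge uv gets a
    shift s, joining copy (u, i) to copy (v, (i + s) mod k) for all i."""
    H = np.zeros((n * k, n * k))
    for (u, v) in edges:
        s = rng.integers(k)                      # random shift in Z_k
        for i in range(k):
            a, b = u * k + i, v * k + (i + s) % k
            H[a, b] = H[b, a] = 1
    return H

rng = np.random.default_rng(0)
# Base graph: K4, a 3-regular graph on 4 vertices.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
A = np.zeros((4, 4))
for (u, v) in edges:
    A[u, v] = A[v, u] = 1

H = cyclic_lift(edges, n=4, k=5, rng=rng)
base_eigs = np.linalg.eigvalsh(A)
lift_eigs = np.linalg.eigvalsh(H)
# Every base eigenvalue persists in the lift (fiber-constant eigenvectors).
persists = all(np.min(np.abs(lift_eigs - lam)) < 1e-8 for lam in base_eigs)
print(persists)  # True
```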
 Shift Lifts Preserving Ramanujan Property
(with Ameya Velingker)
Linear Algebra and its Applications, 2017. [pdf] [arXiv] [Abstract]
In a breakthrough work, Marcus-Spielman-Srivastava recently showed that every d-regular bipartite Ramanujan graph has a 2-lift that is also d-regular bipartite Ramanujan. As a consequence, a straightforward iterative brute-force search algorithm leads to the construction of a d-regular bipartite Ramanujan graph on N vertices in time 2^{O(dN)}. Shift k-lifts studied by Agarwal-Kolla-Madan lead to a natural approach for constructing Ramanujan graphs more efficiently. The number of possible shift k-lifts of a d-regular n-vertex graph is k^{nd/2}. Suppose the following holds for k = 2^{Omega(n)}:
There exists a shift k-lift that maintains the Ramanujan property of d-regular bipartite graphs on n vertices for all n. (*)
Then, by performing a similar brute-force search, one would be able to construct an N-vertex bipartite Ramanujan graph in time 2^{O(d log^2 N)}. Furthermore, if (*) holds for all k >= 2, then one would obtain an algorithm that runs in poly(N^d) time. In this work, we take a first step towards proving (*) by showing the existence of shift k-lifts that preserve the Ramanujan property in d-regular bipartite graphs for k = 3, 4.
 Local Testing for Membership in Lattices
(with Mahdi Cheraghchi, Venkata Gandikota, Elena Grigorescu)
Preliminary version in Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2016.
SIAM Journal on Discrete Mathematics, 2018. [pdf] [arXiv] [Abstract]
Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory on locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice, by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following:
1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes, and obtain nearly tight bounds.
2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21).
 Deciding Orthogonality in Construction-A Lattices
(with Venkata Gandikota, Elena Grigorescu)
Preliminary version in Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2015.
SIAM Journal on Discrete Mathematics, 2017. [pdf] [arXiv] [Abstract]
Lattices are discrete mathematical objects with widespread applications to integer programs as well as modern cryptography. A fundamental problem in both domains is the Closest Vector Problem (popularly known as CVP). It is well-known that CVP can be easily solved in lattices that have an orthogonal basis if the orthogonal basis is specified. This motivates the orthogonality decision problem: verify whether a given lattice has an orthogonal basis. Surprisingly, the orthogonality decision problem is not known to be either NP-complete or in P.
In this work, we focus on the orthogonality decision problem for a well-known family of lattices, namely Construction-A lattices. These are lattices of the form C + qZ^n, where C is an error-correcting q-ary code, and are studied in communication settings. We provide a complete characterization of lattices obtained from binary and ternary codes using Construction A that have an orthogonal basis. This characterization leads to an efficient algorithm solving the orthogonality decision problem, which also finds an orthogonal basis if one exists for this family of lattices. We believe that these results could provide a better understanding of the complexity of the orthogonality decision problem in general.
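For intuition on the lattice family itself: membership in a Construction-A lattice C + qZ^n reduces to a coordinatewise mod-q reduction followed by a codeword check. A minimal sketch (the tiny code and test vectors below are made up for illustration; the brute-force codeword enumeration is only viable for toy parameters):

```python
import itertools
import numpy as np

def codewords(G, q):
    """All codewords of the q-ary linear code generated by the rows of G (mod q)."""
    k = G.shape[0]
    return {tuple(np.mod(m @ G, q))
            for m in itertools.product(range(q), repeat=k)}

def in_construction_a(x, G, q):
    """x lies in the lattice C + qZ^n iff x mod q is a codeword of C."""
    return tuple(np.mod(x, q)) in codewords(G, q)

# Toy binary example: C = even-weight code of length 3.
G = np.array([[1, 1, 0],
              [0, 1, 1]])
print(in_construction_a(np.array([3, 1, 2]), G, q=2))  # True: (1,1,0) is a codeword
print(in_construction_a(np.array([1, 0, 0]), G, q=2))  # False: (1,0,0) has odd weight
```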
 Finding Small Stabilizers for Unstable Graphs
(with Adrian Bock, Jochen Könemann, Britta Peis, Laura Sanità)
Preliminary version in Integer Programming and Combinatorial Optimization (IPCO), 2014.
Mathematical Programming, Vol. 154, Issue 1, 2015. [pdf] [Abstract]
An undirected graph G=(V,E) is stable if the cardinality of a maximum matching equals the size of a minimum fractional vertex cover. A set of edges F subseteq E is a stabilizer if its removal from G yields a stable graph. In this paper we study the following natural edge-deletion question: given a graph G=(V,E), can we find a minimum-cardinality stabilizer? Stable graphs play an important role in cooperative game theory. In the classic matching game introduced by Shapley and Shubik (Int J Game Theory 1(1):111-130, 1971) we are given an undirected graph G=(V,E) where vertices represent players, and we define the value of each subset S subseteq V as the cardinality of a maximum matching in the subgraph induced by S. The core of such a game contains all fair allocations of the value of V among the players, and is well-known to be nonempty iff the graph G is stable. The stabilizer problem addresses the question of how to modify the graph to ensure that the core is nonempty. We show that this problem is vertex-cover-hard. We prove that every minimum-cardinality stabilizer avoids some maximum matching of G. We use this insight to give efficient approximation algorithms for sparse graphs and for regular graphs.
 Finding a Most Biased Coin with Fewest Flips
(with Richard Karp)
Conference on Learning Theory (COLT), 2014. [arXiv] [Abstract]
We study the problem of learning a most biased coin among a set of coins by tossing the coins adaptively. The goal is to minimize the number of tosses until we identify a coin whose posterior probability of being most biased is at least 1 - delta for a given delta. Under a particular probabilistic model, we give an optimal algorithm, i.e., an algorithm that minimizes the expected number of future tosses. The problem is closely related to finding the best arm in the multi-armed bandit problem using adaptive strategies. Our algorithm employs an optimal adaptive strategy: a strategy that performs the best possible action at each step after observing the outcomes of all previous coin tosses. Consequently, our algorithm is also optimal for any given starting history of outcomes. To our knowledge, this is the first algorithm that employs an optimal adaptive strategy under a Bayesian setting for this problem. Our proof of optimality employs mathematical tools from the area of Markov games.
 Faster Private Release of Marginals on Small Databases
(with Justin Thaler, Jonathan Ullman, Andrew Wan)
Innovations in Theoretical Computer Science (ITCS), 2014. [pdf] [arXiv] [Abstract]
We study the problem of answering k-way marginal queries on a database D in ({0,1}^d)^n, while preserving differential privacy. The answer to a k-way marginal query is the fraction of the database's records x in {0,1}^d with a given value in each of a given set of up to k columns. Marginal queries enable a rich class of statistical analyses on a dataset, and designing efficient algorithms for privately answering marginal queries has been identified as an important open problem in private data analysis. For any k, we give a differentially private online algorithm that runs in time poly(n, min{exp(d^{1-Omega(1/sqrt{k})}), exp(d/log^{.99} d)}) per query and answers any (adaptively chosen) sequence of poly(n) k-way marginal queries with error at most +/- 0.01 on every query, provided n >= d^{.51}. To the best of our knowledge, this is the first algorithm capable of privately answering marginal queries with a non-trivial worst-case accuracy guarantee for databases containing poly(d, k) records in time exp(o(d)). Our algorithm runs the private multiplicative weights algorithm (Hardt and Rothblum, FOCS '10) on a new approximate polynomial representation of the database.
We derive our representation for the database using techniques from approximation theory: the representation approximates the OR function restricted to low-Hamming-weight inputs by low-degree polynomials with coefficients of bounded L_1-norm. In doing so, we show new upper and lower bounds on the degree of such polynomials, which may be of independent approximation-theoretic interest. First, we construct a polynomial that approximates the d-variate OR function on inputs of Hamming weight at most k such that the degree of the polynomial is at most d^{1-Omega(1/sqrt{k})} and the L_1-norm of its coefficient vector is d^{0.01}. Then we show the following lower bound that exhibits the tightness of our approach: for any k = o(log d), any polynomial whose coefficient vector has L_1-norm poly(d) and that pointwise approximates the d-variate OR function on all inputs of Hamming weight at most k must have degree d^{1-O(1/sqrt{k})}.
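As a reference point, a single k-way marginal query is trivial to evaluate non-privately; the difficulty addressed above is answering many such queries with privacy-preserving noise, efficiently. A toy non-private evaluator matching the definition of a marginal query (the data below is made up):

```python
def marginal(db, spec):
    """Fraction of records x in db (tuples over {0,1}) with x[i] == v
    for every (column i, value v) pair in spec: a |spec|-way marginal."""
    hits = sum(all(x[i] == v for i, v in spec) for x in db)
    return hits / len(db)

db = [(1, 0, 1), (1, 1, 1), (0, 0, 1), (1, 0, 0)]  # n = 4 records, d = 3
print(marginal(db, [(0, 1), (2, 1)]))  # 2-way marginal on columns 0 and 2 -> 0.5
```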
 Integer Feasibility of Random Polytopes
(with Santosh Vempala)
Innovations in Theoretical Computer Science (ITCS), 2014. [pdf] [arXiv] [Abstract]
We study integer programming instances over polytopes P(A,b)={x:Ax<=b} where the constraint matrix A is random, i.e., its entries are i.i.d. Gaussian or, more generally, its rows are i.i.d. from a spherically symmetric distribution. The radius of the largest inscribed ball is closely related to the existence of integer points in the polytope. We show that for m=2^O(sqrt{n}), there exist constants c_0 < c_1 such that with high probability, random polytopes are integer feasible if the radius of the largest ball contained in the polytope is at least c_1sqrt{log(m/n)}; and integer infeasible if the largest ball contained in the polytope is centered at (1/2,...,1/2) and has radius at most c_0sqrt{log(m/n)}. Thus, random polytopes transition from having no integer points to being integer feasible within a constant factor increase in the radius of the largest inscribed ball. We show integer feasibility via a randomized polynomial-time algorithm for finding an integer point in the polytope.
Our main tool is a simple new connection between integer feasibility and linear discrepancy. We extend a recent algorithm for finding low-discrepancy solutions (Lovett-Meka, FOCS '12) to give a constructive upper bound on the linear discrepancy of random matrices. By our connection between discrepancy and integer feasibility, this upper bound on linear discrepancy translates to the radius lower bound that guarantees integer feasibility of random polytopes.
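One direction of the radius connection is visible from a much weaker classical fact: if a polytope contains a ball of radius sqrt(n)/2 around some center c, then rounding c coordinatewise gives an integer point, since ||round(c) - c|| <= sqrt(n)/2. The result above shows that for random polytopes a far smaller radius, O(sqrt(log(m/n))), already suffices. A sketch of the trivial bound (example polytope made up; assumes numpy):

```python
import numpy as np

def round_center(A, b, c):
    """If the ball B(c, sqrt(n)/2) lies in {x : Ax <= b}, then the
    coordinatewise rounding of c is an integer feasible point."""
    n = len(c)
    row_norms = np.linalg.norm(A, axis=1)
    radius = np.min((b - A @ c) / row_norms)   # largest ball centered at c
    assert radius >= np.sqrt(n) / 2, "ball too small for the trivial bound"
    z = np.round(c)
    assert np.all(A @ z <= b + 1e-9)           # ||z - c|| <= sqrt(n)/2 keeps z inside
    return z

# A generous 3-dimensional box around c = (0.4, 0.5, 0.6): slack 1.0 > sqrt(3)/2.
A = np.vstack([np.eye(3), -np.eye(3)])
c = np.array([0.4, 0.5, 0.6])
b = np.concatenate([c + 1.0, -(c - 1.0)])
print(round_center(A, b, c))
```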
 The Cutting Plane Method is Polynomial for Perfect Matchings
(with László Végh, Santosh Vempala)
Preliminary version in IEEE Symposium on Foundations of Computer Science (FOCS), 2012.
Mathematics of Operations Research, Vol. 41, No. 1, 2016. [pdf] [arXiv] [Abstract]
The cutting plane approach to optimal matchings has been discussed by several authors over the past decades (Padberg and Rao, Grötschel and Holland, Lovász and Plummer, Trick, Fischetti and Lodi), and its convergence has been an open question. We prove that the cutting plane approach using Edmonds' blossom inequalities converges in polynomial time for the minimum-cost perfect matching problem. Our main insight is an LP-based method to retain/drop cuts. This careful cut retention procedure leads to a sequence of intermediate linear programs with a linear number of constraints whose optima are half-integral and supported by a disjoint union of odd cycles and edges. This structural property of the optima is instrumental in finding violated blossom inequalities (cuts) in linear time. Further, the number of cycles in the support of the half-integral optima acts as a potential function to show efficient convergence to an integral solution.
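The flavor of the method shows up already on a tiny instance: two triangles joined by a bridge, where the degree-constrained LP relaxation has a half-integral optimum supported on two odd cycles, and adding the single violated blossom inequality x(delta(S)) >= 1 for one triangle S recovers the integral matching. A minimal sketch (illustration only, not the paper's cut-retention procedure; assumes scipy):

```python
import numpy as np
from scipy.optimize import linprog

# Graph: triangle {0,1,2}, triangle {3,4,5}, expensive bridge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
cost = np.array([1, 1, 1, 10, 1, 1, 1], dtype=float)

# Degree constraints: every vertex covered exactly once.
A_eq = np.zeros((6, len(edges)))
for j, (u, v) in enumerate(edges):
    A_eq[u, j] = A_eq[v, j] = 1
b_eq = np.ones(6)

# Round 1: plain LP relaxation -> half-integral optimum (1/2 on both triangles).
r1 = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(round(r1.fun, 6))  # 3.0

# Round 2: add the blossom inequality x(delta({0,1,2})) >= 1; the only edge
# leaving {0,1,2} is the bridge (index 3), so the cut reads -x_3 <= -1.
A_ub = np.zeros((1, len(edges))); A_ub[0, 3] = -1
r2 = linprog(cost, A_ub=A_ub, b_ub=[-1], A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(round(r2.fun, 6))  # 12.0, the integral matching {(0,1), (2,3), (4,5)}
```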
 Algorithms for Implicit Hitting Set Problems
(with Richard Karp, Erick MorenoCenteno, Santosh Vempala)
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2011. [pdf] [arXiv] [Abstract]
Motivated by instances of the hitting set problem where the number of sets to be hit is large, we introduce the notion of implicit hitting set problems. In an implicit hitting set problem the collection of sets to be hit is typically too large to list explicitly; instead, an oracle is provided which, given a set H, either determines that H is a hitting set or returns a set that H does not hit. We show a number of examples of classic implicit hitting set problems, and give a generic algorithm for solving such problems optimally. The main contribution of this paper is to show that this framework is valuable in developing approximation algorithms. We illustrate this methodology by presenting a simple online algorithm for the minimum feedback vertex set problem on random graphs. In particular, our algorithm gives a feedback vertex set of size n - (1/p) log(np) (1 - o(1)) with probability at least 3/4 for the random graph G(n,p) (the smallest feedback vertex set is of size n - (2/p) log(np) (1 + o(1))). We also consider a planted model for the feedback vertex set in directed random graphs. Here we show that a hitting set for a polynomial-sized subset of cycles is a hitting set for the planted random graph, and this allows us to exactly recover the planted feedback vertex set.
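The generic exact algorithm can be sketched directly from the oracle interface: repeatedly propose an optimal hitting set for the sets collected so far, and let the oracle either certify it or supply a new unhit set. A toy sketch (the brute-force inner step and the 5-cycle instance are my own illustration; the paper uses far more refined subroutines):

```python
from itertools import combinations

def min_hitting_set(universe, collected):
    """Smallest subset of universe hitting every set in collected (brute force)."""
    for r in range(len(universe) + 1):
        for H in combinations(sorted(universe), r):
            if all(set(H) & S for S in collected):
                return set(H)

def implicit_hitting_set(universe, oracle):
    """Generic loop: the oracle either certifies H or returns a set H misses."""
    collected = []
    while True:
        H = min_hitting_set(universe, collected)
        miss = oracle(H)
        if miss is None:            # H hits everything: optimal hitting set
            return H
        collected.append(miss)      # grow the explicit subfamily and retry

# "Implicit" family, seen only through the oracle: edges of a 5-cycle,
# so the optimum is a minimum vertex cover of C5 (size 3).
family = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {1, 5}]
oracle = lambda H: next((S for S in family if not (H & S)), None)
print(sorted(implicit_hitting_set({1, 2, 3, 4, 5}, oracle)))
```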
 Satisfiability Thresholds for k-CNF Formula with Bounded Variable Intersections
(with Navin Goyal, Bernhard Haeupler) [pdf] [arXiv] [Abstract]
We determine the thresholds for the number of variables, number of clauses, number of clause intersection pairs and the maximum clause degree of a k-CNF formula that guarantees satisfiability under the assumption that every two clauses share at most alpha variables. More formally, we call these formulas alpha-intersecting and define, for example, a threshold mu_i(k,alpha) for the number of clause intersection pairs i, such that every alpha-intersecting k-CNF formula in which at most mu_i(k,alpha) pairs of clauses share a variable is satisfiable, and there exists an unsatisfiable alpha-intersecting k-CNF formula with mu_i(k,alpha) such intersections. We provide a lower bound for these thresholds based on the Lovász Local Lemma and a nearly matching upper bound by constructing an unsatisfiable k-CNF formula, showing that mu_i(k,alpha) = Theta(2^{k(2+1/alpha)}). Similar thresholds are determined for the number of variables (mu_n = Theta(2^{k/alpha})) and the number of clauses (mu_m = Theta(2^{k(1+1/alpha)})) (see [Scheder08] for an earlier but independent report on this threshold). Our upper bound construction gives a family of unsatisfiable formulas that achieve all four thresholds simultaneously.
 Deterministic Algorithms for the Lovász Local Lemma
(with Navin Goyal, Bernhard Haeupler)
Preliminary version in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010.
SIAM Journal on Computing, Vol. 42, Issue 6, 2013. [pdf] [arXiv] [Abstract]
The Lovász Local Lemma (LLL) is a powerful result in probability theory that is often used for nonconstructive existence proofs of combinatorial structures. A prominent application is to k-CNF formulas, where LLL implies that if every clause in a formula shares variables with at most d <= 2^k/e other clauses then such a formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment in this setting was given by Moser. Subsequently, Moser and Tardos gave a general algorithmic framework for LLL and a randomized algorithm to construct the structures guaranteed by LLL. In this paper we address the main problem left open by Moser and Tardos of derandomizing this algorithm efficiently. Our algorithm works in the general framework of Moser-Tardos with a minimal loss in parameters. For the special case of constructing satisfying assignments for k-CNF formulas, for any epsilon in (0,1) we give a deterministic algorithm that finds a satisfying assignment for any k-CNF formula with m clauses and d <= 2^{k/(1+epsilon)}/e in time \tilde{O}(m^{2(1+1/epsilon)}). This improves upon the deterministic algorithms of Moser and of Moser-Tardos with running times m^{Omega(k^2)} and m^{Omega(k/epsilon)}, which are superpolynomial for k = omega(1), and upon other previous algorithms which work only for d <= 2^{k/16}/e. Our algorithm is the first deterministic algorithm that works in the general framework of Moser-Tardos. Lastly, we show how to obtain an NC, i.e., fully parallel, algorithm for the same setting.
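The randomized procedure that this paper derandomizes is strikingly simple for k-CNF: start from a uniformly random assignment and, while some clause is violated, resample just that clause's variables. A toy sketch of the Moser-Tardos loop (the formula below is made up; under the LLL-type degree condition the loop terminates quickly in expectation):

```python
import random

def moser_tardos(clauses, n, rng):
    """Each clause is a list of (variable, desired_value) pairs; a clause
    is satisfied if some variable takes its desired value."""
    x = [rng.randint(0, 1) for _ in range(n)]
    def violated():
        return next((c for c in clauses if all(x[v] != b for v, b in c)), None)
    while (c := violated()) is not None:
        for v, _ in c:                  # resample only the bad clause's variables
            x[v] = rng.randint(0, 1)
    return x

rng = random.Random(1)
# A small satisfiable 3-CNF over 6 variables.
clauses = [[(0, 1), (1, 0), (2, 1)],
           [(1, 1), (3, 0), (4, 1)],
           [(2, 0), (4, 0), (5, 1)],
           [(0, 0), (3, 1), (5, 0)]]
x = moser_tardos(clauses, n=6, rng=rng)
print(all(any(x[v] == b for v, b in c) for c in clauses))  # True
```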
 Thin Partitions: Isoperimetric Inequalities and Sampling Algorithms for some Nonconvex Families
(with Daniel Dadush, Santosh Vempala)
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010. [pdf] [arXiv] [Abstract]
Star-shaped bodies are an important nonconvex generalization of convex bodies (e.g., linear programming with violations). Here we present an efficient algorithm for sampling a given star-shaped body. The complexity of the algorithm grows polynomially in the dimension and inverse polynomially in the fraction of the volume taken up by the kernel of the star-shaped body. The analysis is based on a new isoperimetric inequality. Our main technical contribution is a tool for proving such inequalities when the domain is not convex. As a consequence, we obtain a polynomial algorithm for computing the volume of such a set as well. In contrast, linear optimization over star-shaped sets is NP-hard.
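The sampler only needs membership access to the body; its skeleton is a lazy random walk in which proposed steps leaving the body are rejected. A toy sketch on a 2-D star-shaped nonconvex body, a cross of two rectangles (illustration only, with made-up parameters; the paper's walk and its mixing analysis are substantially more involved):

```python
import random

def in_cross(p):
    """Union of two rectangles crossing at the origin: star-shaped, not convex."""
    x, y = p
    return (abs(x) <= 3 and abs(y) <= 1) or (abs(x) <= 1 and abs(y) <= 3)

def lazy_walk(inside, start, steps, delta, rng):
    """Random walk with a membership oracle: propose a small random step,
    reject proposals that leave the body."""
    p = start
    samples = []
    for _ in range(steps):
        q = (p[0] + rng.uniform(-delta, delta),
             p[1] + rng.uniform(-delta, delta))
        if inside(q):
            p = q                      # accept only in-body proposals
        samples.append(p)
    return samples

rng = random.Random(0)
samples = lazy_walk(in_cross, (0.0, 0.0), steps=2000, delta=0.5, rng=rng)
print(all(in_cross(p) for p in samples))  # True: the walk never leaves the body
```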
 Sampling s-Concave functions
(with Amit Deshpande, Santosh Vempala)
International Workshop on Randomization and Computation (RANDOM), 2009. [pdf] [arXiv] [Abstract]
Efficient sampling, integration and optimization algorithms for log-concave functions rely on the good isoperimetry of these functions. We extend this to show that -1/(n-1)-concave functions have good isoperimetry, and moreover, using a characterization of functions based on their values along every line, we prove that this is the largest class of functions with good isoperimetry in the spectrum from concave to quasi-concave. We give an efficient sampling algorithm based on a random walk for -1/(n-1)-concave probability densities satisfying a smoothness criterion, which includes heavy-tailed densities such as the Cauchy density. In addition, the mixing time of this random walk for the Cauchy density matches the corresponding best known bounds for log-concave densities.
 An Observation about Variations of the Diffie-Hellman Assumption
(with Raghav Bhaskar, Satya V. Lokam, Peter L. Montgomery, Ramarathnam Venkatesan, Yacov Yacobi)
Serdica Journal of Computing, Vol 3, No. 3, 2009. [pdf] [Abstract]
We generalize the Strong Boneh-Boyen (SBB) signature scheme to sign vectors (GSBB). We show that if a particular average-case reduction from SBB to GSBB exists, then the Strong Diffie-Hellman (SDH) and the Computational Diffie-Hellman (CDH) problems have the same worst-case complexity.
 Vulnerabilities in Anonymous Credential Systems
(with Raghav Bhaskar, Satya V. Lokam, Peter L. Montgomery, Ramarathnam Venkatesan, Yacov Yacobi)
Electronic Notes in Theoretical Computer Science, Vol 197, No. 2, 2008. [pdf] [Abstract]
We show the following:
(1) In existing anonymous credential revocation systems, the revocation authority can link the transactions of any user in a subset T of users in O(log |T|) fake failed sessions.
(2) A concern about the DLREPI anonymous credentials system.

PhD Thesis:
New Approaches to Integer Programming. [pdf]