Marina Meila

Google Scholar

GEOMETRIC DATA ANALYSIS group

ELECTORAL GEOMETRY and GERRYMANDERING group

NEWS

11/2/20 Manifold coordinates with physical meaning via Riemannian geometry Applied Mathematics Seminar at Yale University

10/23/20 Statistics and Philosophy of Voting is featured in "Perspectives"

10/7/20 Unsupervised Validation for Unsupervised Learning Seminar at Wharton School

9/28/20 Fall graduate+undergraduate course Statistics and Philosophy of Voting. I am excited to co-teach it with Elena Erosheva and Conor Mayo-Wilson.

9/6/20 AWARDED NSF grant on "Interpretable embedding coordinates"

6/8/20 AWARDED NSF grant on "Improving panel decision-making: Understanding methods for aggregating reviewer opinions" with Elena Erosheva

7/1/20 The GDA Reading group is looking for a webmaster/blogmaster. Excellent programmers (python, java, matlab) also wanted. Send me an email if you are interested!

6/28/20 Video tutorial on Manifold Learning for the Quantitative Methods Network (QMNET) in Melbourne, Australia. Thanks, Sheridan Grant and Michael Zyphur!

2/6/20 Hanyu Zhang's talk on "Clustering and dimension reduction from local to global" at the meeting "From local to global information".

11/01/19 Tong Zhang and I are the Program Chairs of ICML 2021, Vienna, Austria, Sunday July 18 -- Saturday July 24, 2021.

12/1/19 See you at NeurIPS 2019! Yu-chia's paper "Selecting the independent coordinates of manifolds with large aspect ratios" was accepted at NeurIPS 2019; Sam and Hanyu have two posters here and here (TBPosted).

OVERVIEW

I work on machine learning by probabilistic methods and reasoning under uncertainty. In this area, it is particularly important to develop computationally aware methods and theories; in this sense, my research is at the frontier between the sciences of computing and statistics. On the computing side, I am particularly interested in combinatorics, algorithms and optimization, and on the statistical side in solving data analysis problems with many variables and combinatorial structure.



MANIFOLD LEARNING AND GEOMETRIC DATA ANALYSIS

Manifold learning algorithms find a non-linear representation of high-dimensional data (like images) with a small number of parameters. However, except in special simple cases, all existing methods deform the data. We construct low-dimensional representations that are geometrically accurate under much more general conditions. As a consequence of this kind of geometric faithfulness, one should be able to do regressions, predictions, and other statistical analyses directly on the low-dimensional representation of the data. These analyses would not be correct in general if the original data geometry were not preserved accurately.
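
The sketch below (a minimal illustration using off-the-shelf scikit-learn embeddings on a synthetic Swiss roll, not our own methods or software) shows the distortion issue: it measures how well two standard manifold learning algorithms preserve distances along the manifold, and the correlation falls short of 1.

    import numpy as np
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap, SpectralEmbedding

    X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

    # Intrinsic coordinates of the roll: position along the spiral and height.
    intrinsic = np.c_[t, X[:, 1]]

    # True within-manifold distances between random point pairs.
    rng = np.random.default_rng(0)
    i, j = rng.integers(0, len(X), size=(2, 2000))
    d_true = np.linalg.norm(intrinsic[i] - intrinsic[j], axis=1)

    for name, method in [("Isomap", Isomap(n_neighbors=12, n_components=2)),
                         ("Laplacian eigenmaps",
                          SpectralEmbedding(n_neighbors=12, n_components=2))]:
        Y = method.fit_transform(X)
        d_emb = np.linalg.norm(Y[i] - Y[j], axis=1)
        # Perfect geometric preservation would give correlation 1 between the
        # intrinsic distances and the distances in the embedding.
        print(name, np.corrcoef(d_true, d_emb)[0, 1])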


FOUNDATIONS OF CLUSTERINGS

It was widely believed that little can be said theoretically about clustering and clustering algorithms, as most clustering problems are NP-hard. This part of my work aims to overcome these difficulties, and takes steps towards developing a rigorous and practically relevant theoretical understanding of the clustering algorithms in everyday use. A fundamental concept is the clusterability of the data. Given that the data contains clusters, one can show that we can devise initialization methods that lead w.h.p. to an almost correct clustering, that we can prove a given clustering is almost optimal, and that if the number of clusters in the data is smaller than our guess, we will obtain unstable results.
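
The snippet below is a minimal illustration of the clusterability phenomenon with standard tools (scikit-learn's k-means on synthetic well-separated blobs), not the theoretical analysis itself: with the correct number of clusters and a careful initialization the planted clustering is recovered, while overestimating K gives solutions that vary from restart to restart.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    X, y_true = make_blobs(n_samples=900, centers=3, cluster_std=0.6,
                           random_state=0)

    # Correct K with a careful (k-means++) initialization: labels recovered.
    labels_k3 = KMeans(n_clusters=3, init="k-means++", n_init=1,
                       random_state=0).fit_predict(X)
    print("ARI with K=3:", adjusted_rand_score(y_true, labels_k3))

    # Overestimated K: compare solutions across random restarts (instability).
    runs = [KMeans(n_clusters=6, n_init=1, random_state=s).fit_predict(X)
            for s in range(5)]
    pairwise = [adjusted_rand_score(runs[a], runs[b])
                for a in range(5) for b in range(a + 1, 5)]
    print("mean pairwise ARI across K=6 restarts:", np.mean(pairwise))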


PERMUTATIONS, PARTIAL RANKINGS, INTRANSITIVITY AND CHOICE

It has often been noted that people's choices are not transitive: in other words, their preferences between K objects are not consistent with any single ordering. The economic theory of choice has introduced various models explaining how the observed intransitivity may arise. However, there is no work to date on how one may infer these models from data. Among the things I want to do: formulate estimation problems for the hidden-context and other models of intransitivity that are relevant to practical domains; determine when the model is identifiable (it may not be when the number of components K is large); and design rigorously founded algorithms to estimate it. More
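
As a small illustration (with hypothetical pairwise choice frequencies, not a fitted model of the kind described above), the snippet below detects intransitive majority preferences, i.e. cyclic triples that no single ordering of the K objects can explain.

    import itertools
    import numpy as np

    # P[i, j] = fraction of times item i was chosen over item j
    # (hypothetical data for three items).
    P = np.array([[0.5, 0.7, 0.2],
                  [0.3, 0.5, 0.8],
                  [0.8, 0.2, 0.5]])

    def intransitive_triples(P):
        """Return cyclic triples (i, j, k) with i > j > k > i by majority."""
        K = P.shape[0]
        cycles = []
        for i, j, k in itertools.permutations(range(K), 3):
            # Require i to be the smallest index so each cycle is listed once.
            if i < j and i < k and P[i, j] > 0.5 and P[j, k] > 0.5 and P[k, i] > 0.5:
                cycles.append((i, j, k))
        return cycles

    print("cyclic majority triples:", intransitive_triples(P))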


CLUSTERING BY EIGENVALUES AND EIGENVECTORS

...is a technique rooted in graph theory for finding groups (or other structure) in data. It already has applications in image segmentation, web and document clustering, social networks, bioinformatics and linguistics. My recent work concentrates on the study of asymmetric links, or, in other words, of directed graphs. More
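
Below is a textbook sketch of the symmetric (undirected) case, with a Gaussian similarity graph on synthetic two-moons data; the directed-graph work goes beyond this, but the eigenvector computation is the common starting point: normalize the affinity matrix, take its leading eigenvectors, and cluster their rows.

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_moons

    X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)

    # Gaussian similarity graph (bandwidth 0.1 chosen for this toy data).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * 0.1 ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetrically normalized affinity D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # The eigenvectors of the largest eigenvalues carry the cluster structure.
    vals, vecs = eigh(S)
    U = vecs[:, -2:]
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(U)
    print(np.bincount(labels))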


GRAVIMETRIC INVERSION WITH SPARSITY CONSTRAINTS

This work deals with recovering the shape of an unknown body from gravity measurements. As a mathematical physics problem, this one is old, well studied, and among the hardest types of inverse problems. My team is interested in finding algorithmic solutions, under realistic scenarios, that recover given features of the unknown underground density in the presence of noise. We showed that this problem can be mapped to a linear program with sparsity constraints, for which we formulated various continuous and integer approaches. The methodological and theoretical work on this problem continues, as we exploit the connections with compressed sensing, QBPs and submodularity. The practical results led to intriguing new research questions, since the restricted isometry assumptions that usually underlie compressed sensing algorithms can be proved not to hold for the gravimetry problem. (Collaboration with Caren Marzban and Ulvi Yurtsever.)
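
The snippet below is a generic sketch of the sparse linear-inversion viewpoint (a random Gaussian forward operator and an L1 penalty via scikit-learn's Lasso, not our actual gravimetric formulation or forward model): a sparse underground density x generates measurements y = A x + noise, and we attempt to recover x from far fewer measurements than unknowns.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_measurements, n_cells = 80, 400

    # Hypothetical linear forward operator and sparse true density.
    A = rng.standard_normal((n_measurements, n_cells))
    x_true = np.zeros(n_cells)
    x_true[rng.choice(n_cells, size=10, replace=False)] = rng.uniform(1.0, 2.0, size=10)
    y = A @ x_true + 0.01 * rng.standard_normal(n_measurements)

    # L1-regularized least squares promotes sparse solutions.
    x_hat = Lasso(alpha=0.01, max_iter=20000).fit(A, y).coef_
    print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3),
          " relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))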


PROTEOMICS

Interpreting the very complex signature of an amino-acid sequence that is subjected to collision-induced dissociation (CID). Probabilistic identification of the protein composition of a complex mixture from high-throughput mass spectrometry data.