# Raphael Arkady Meyer

I am a final-year Ph.D. student at the NYU Tandon School of Engineering, advised by Christopher Musco and part of the Algorithms and Foundations Group.

I research the interplay of theoretical statistics and computation, largely through the lens of linear algebra.

In the summer of 2022, I visited Michael Kapralov's group at EPFL and Haim Avron's group at TAU.

Links: Google Scholar, dblp, GitHub, Zoom Room

My recent publications have looked at:

- Fast Randomized Linear-Algebra Algorithms (*preprint*, *SOSA 2021*)
- Active Learning on Linear Function Families (*SODA 2023*, *NeurIPS 2020*)

Of course, I am interested in problems beyond these areas, and if you want to work with me on a problem, send me an email: ram900@nyu.edu

# News

New preprint on arXiv: *Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation*.

I'm attending the Sketching and Algorithm Design workshop at the Simons Institute this October.

I'm organizing a minisymposium on The Matrix-Vector Complexity of Linear Algebra at the first ever SIAM-NNP conference! Details TBD.

I'm giving a talk at the Conference on Fast Direct Solvers at Purdue University in November.

May 2023

New preprint on arXiv: *On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation*.

March 2023

I gave two talks at the NYU / UMass Quantum Linear Algebra reading group.

I gave a talk at the BIRS workshop on Perspectives on Matrix Computations about my *new work on Krylov methods*.

January 2023

I presented *Near-Linear Sample Complexity for $L_p$ Polynomial Regression* at SODA 2023.

November 2022

I gave a talk at the TCS Seminar at Purdue in early November to present my new research on the role of block size in Krylov Methods.

October 2022

New paper accepted at SODA 2023: *Near-Linear Sample Complexity for $L_p$ Polynomial Regression*! I gave a talk on it last Friday at the Grad Student Seminar at CDS (at NYU).

September 2022

I gave a talk at GAMM ANLA on the role of block size in Krylov Methods for low-rank approximation. A preprint will be available very soon, but until then you can check out my slides for a preview! Slides

July 2022

I gave a talk at the *SIAM Annual Meeting Minisymposium on Matrix Functions, Operator Functions, and Related Approximation Methods*. Thanks to Heather, Andrew, and Ke for organizing!

June 2022

I'll be presenting Hutch++ this summer at HALG 2022, with both a short talk and a poster.

I'm traveling this summer! First I'm in London for HALG 2022. Then I'm spending June visiting Haim Avron at TAU, and July visiting Michael Kapralov at EPFL. If you're in the same place at the same time, drop me a line!

May 2022

I recently organized a mini-conference for NYU CS Theory researchers to present their "Pandemic Papers" in-person. Thanks to everyone who showed up and made it a success!

*More details here.*

I'm honored to be awarded the **Deborah Rosenthal, MD Award for Best Quals Examination** in 2022, for my presentation *Towards Optimal Spectral Sum Estimation in the Matrix-Vector Oracle Model*.

April 2022

I'm honored to be an ICLR 2022 Highlighted Reviewer.

# Publications

in submission: **Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation**, *with Haim Avron*

in submission: **On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation**^{[1]}, *with Cameron Musco and Christopher Musco*

at SODA 2023: **Near-Linear Sample Complexity for $L_p$ Polynomial Regression**^{[2]}, *with Cameron Musco, Christopher Musco, David P. Woodruff, and Samson Zhou*

at ICLR 2022: **Fast Regression for Structured Inputs**^{[3]}, *with Cameron Musco, Christopher Musco, David P. Woodruff, and Samson Zhou*

at SOSA 2021: **Hutch++: Optimal Stochastic Trace Estimation**^{[4]}, *with Cameron Musco, Christopher Musco, and David P. Woodruff*

at NeurIPS 2020: **The Statistical Cost of Robust Kernel Hyperparameter Tuning**^{[5]}, *with Christopher Musco*

**Optimality Implies Kernel Sum Classifiers are Statistically Efficient**^{[6]}, *with Jean Honorio*

**Characterizing Optimal Security and Round-Complexity for Secure OR Evaluation**, *with Amisha Jhanji and Hemanta K. Maji*

[1] | Code available on GitHub · Slides |

[2] | Slides |

[3] | Poster |

[4] | Code available on GitHub · Landscape Poster · Portrait Poster · 4min Slides · 12min Slides · 25min Slides · 35min Slides · 1hr Slides |

[5] | Slides |

[6] | Poster · Slides |

# Talks & Presentations

To date, I have presented every paper I published at the associated conference. This is a list of other talks or presentations I have given.

Short talk on **On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation** at the *BIRS workshop on Perspectives on Matrix Computations*.

**On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation** at the *Purdue University TCS Seminar*.

**Hutch++ and More: Towards Optimal Spectral Sum Estimation** at *Matrix Functions, Operator Functions, and Related Approximation Methods*, a minisymposium at the SIAM Annual Meeting (AN22).

**Hutch++: Optimal Stochastic Trace Estimation** at the *Johns Hopkins University Theory Reading Group*.

**Lessons from Trace Estimation Lower Bounds: Testing, Communication, and Anti-Concentration**^{[7]} at *Computational Lower Bounds in Numerical Linear Algebra*, a minisymposium at the SIAM Annual Meeting (AN21).

[7] | Slides available here. Video starts at 1:04:55 here. |

Short talk on **On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation**^{[8]} at *GAMM ANLA 2022*.

Poster and short talk on **Hutch++: Optimal Stochastic Trace Estimation**^{[8]} at *HALG 2022*.

Talk on **Chebyshev Sampling is Optimal for Lp Polynomial Regression**^{[8]} at *NYU "Pandemic Presentations" 2022*.

Poster on **Hutch++: Optimal Stochastic Trace Estimation**^{[8]} at *Wald(O) 2021*.

Poster on **Optimality Implies Kernel Sum Classifiers are Statistically Efficient**^{[8]} at *Midwest Theory Day 2019*.

[8] | Assets available in the Publications section. |

1-hour talk at the NYU/UMass Quantum Linear Algebra Reading Group: **The Equivalence of Matrix-Vector Complexity in Quantum Computing, Part 2**

1-hour talk at the NYU/UMass Quantum Linear Algebra Reading Group: **The Equivalence of Matrix-Vector Complexity in Quantum Computing, Part 1**

1-hour talk at the NYU CDS Student Seminar: **Near-Linear Sample Complexity for $L_p$ Polynomial Regression**

1-hour talk at the NYU VIDA Reading Group: **Hutch++: Optimal Stochastic Trace Estimation**

1.5-hour talk at the NYU Tandon Theory Reading Group: **Introduction to Leverage Scores**

Two 1.5-hour talks at the NYU Tandon Reinforcement Learning Reading Group: **Strategies for Episodic Tabular & Linear MDPs**

Three 1.5-hour talks at the NYU Tandon Theory Reading Group: **Lagrangian Duality**

1-hour talk at the NYU CDS Reading Group on Information Theory: **Introduction to Differential Entropy**

1-hour presentation, **Lower bounds on the complexity of stochastic convex optimization**^{[9]}, of the paper *Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization* by Agarwal et al.

[9] | Link to the original paper here. My slides are available here. |

# Teaching

I really enjoy teaching, and have been a TA for several courses:

Responsible Data Science, New York University, Spring 2023

Algorithmic Machine Learning and Data Science, New York University, Fall 2020

Introduction to Machine Learning, New York University, Spring 2020

Introduction to the Analysis of Algorithms, Purdue University, Fall 2018

# Service

Service outside of reviewing:

Organizer for NYU TCS "Pandemic Presentations" Day

Organizer for NYU Tandon Theory Reading Group

Service as a reviewer:

NeurIPS 2023 Reviewer

ICLR 2023 Reviewer

SODA 2023 External Reviewer

NeurIPS 2022 Reviewer

ICML 2022 Reviewer

STOC 2022 External Reviewer

ICLR 2022 Reviewer*

NeurIPS 2021 Reviewer*

ISIT 2017 External Reviewer

*\* Denotes Highlighted / Outstanding Reviewer*