
Publications

* indicates equal contribution or alphabetical order.

On Transferring Transferability: Towards a Theory for Size Generalization

Eitan Levin*, Yuxin Ma*, Mateo Díaz, Soledad Villar

NeurIPS 2025 (Spotlight).

We study the properties that allow machine learning models to generalize across input dimensions.

transferability · size generalization · graph neural networks · equivariant machine learning · any-dimensional learning
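
As a loose illustration of the setting (not the paper's construction), here is a minimal sketch of an "any-dimensional" model: a permutation-invariant function whose parameters are shared across input sizes, so one parameter setting defines a function in every dimension. The architecture, function names, and parameter values below are illustrative assumptions.

```python
import numpy as np

def anydim_model(x, w_in=1.0, w_out=1.0):
    # Elementwise feature map followed by mean pooling: the same two
    # parameters define a function on inputs of every size n -- the kind
    # of any-dimensional model whose size generalization is at issue.
    h = np.tanh(w_in * np.asarray(x))
    return w_out * h.mean()

# One parameter setting, evaluated at two very different input sizes.
rng = np.random.default_rng(0)
print(anydim_model(rng.standard_normal(10)))
print(anydim_model(rng.standard_normal(10_000)))
```

Mean pooling (rather than sum pooling) keeps outputs on a comparable scale as the input size grows, which is one reason a model of this form can plausibly transfer across dimensions.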

Nonlinear Laplacians: Tunable principal component analysis under directional prior information

Yuxin Ma, Dmitriy Kunisky

NeurIPS 2025 (Spotlight).

We study a new class of spectral algorithms for low-rank estimation that involve a tunable nonlinear deformation of an observed matrix. The tunable component can be selected via black-box optimization or learned from data using neural networks.

principal component analysis · random matrix theory · spiked matrix models · low-rank estimation
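
A minimal sketch of one plausible instance of such a tunable spectral algorithm, under stated assumptions: the observation Y is a symmetric spiked-type matrix, and the nonlinear deformation adds a diagonal built from a nonlinear function sigma of the normalized row sums. The exact deformation and the helper name are illustrative, not the paper's definition; sigma plays the role of the tunable component.

```python
import numpy as np

def nonlinear_laplacian_estimate(Y, sigma=np.tanh):
    # Y: symmetric n x n observation (assumption: spiked-matrix model).
    # Deform Y by adding a diagonal of sigma applied to the normalized
    # row sums, then return the top eigenvector of the deformed matrix.
    n = Y.shape[0]
    d = Y @ np.ones(n) / np.sqrt(n)   # normalized row sums
    M = Y + np.diag(sigma(d))         # tunable nonlinear deformation
    _, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    return vecs[:, -1]                # leading eigenvector as the estimate
```

In line with the abstract, sigma could instead be chosen by black-box optimization over a family of nonlinearities, or parameterized by a small neural network and learned from data.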