Bibliography

[Amo23]

Brandon Amos. On amortizing convex conjugates for optimal transport. In International Conference on Learning Representations. 2023. URL: https://arxiv.org/abs/2210.12153.

[ACLR22]

Brandon Amos, Samuel Cohen, Giulia Luise, and Ievgen Redko. Meta optimal transport. 2022. URL: https://arxiv.org/abs/2206.05262, doi:10.48550/ARXIV.2206.05262.

[AXK17]

Brandon Amos, Lei Xu, and J. Zico Kolter. Input convex neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 146–155. PMLR, August 2017. URL: https://proceedings.mlr.press/v70/amos17b.html.

[AFS12]

Andreas Argyriou, Rina Foygel, and Nathan Srebro. Sparse prediction with the k-support norm. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. URL: https://proceedings.neurips.cc/paper/2012/file/99bcfcd754a98ce89cb86f73acc04645-Paper.pdf.

[AV07]

David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, 1027–1035. Society for Industrial and Applied Mathematics, 2007.

[BCC+15]

Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111–A1138, 2015. doi:10.1137/141000439.

[Ber71]

Dimitri P Bertsekas. Control of uncertain systems with a set-membership description of the uncertainty. PhD thesis, Massachusetts Institute of Technology, 1971.

[BMV21]

Mathieu Blondel, Arthur Mensch, and Jean-Philippe Vert. Differentiable divergences between time series. In Arindam Banerjee and Kenji Fukumizu, editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, 3853–3861. PMLR, April 2021. URL: https://proceedings.mlr.press/v130/blondel21a.html.

[BBV04]

Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004. URL: https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf.

[Bre91]

Yann Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Communications on Pure and Applied Mathematics, 44(4):375–417, 1991.

[BKmc22]

Charlotte Bunne, Andreas Krause, and Marco Cuturi. Supervised training of conditional Monge maps. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems. 2022. URL: https://arxiv.org/abs/2206.14262.

[CLZ19]

Song Chen, Blue B. Lake, and Kun Zhang. High-throughput sequencing of the transcriptome and chromatin accessibility in the same cell. Nature Biotechnology, 37(12):1452–1457, 2019. URL: https://doi.org/10.1038/s41587-019-0290-0, doi:10.1038/s41587-019-0290-0.

[CGT19]

Yongxin Chen, Tryphon T. Georgiou, and Allen Tannenbaum. Optimal transport for Gaussian mixture models. IEEE Access, 7:6269–6278, 2019.

[CYL16]

Yukun Chen, Jianbo Ye, and Jia Li. A distance for HMMs based on aggregated Wasserstein metric and state registration. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision – ECCV 2016, 451–466. Springer International Publishing, 2016.

[CYL20]

Yukun Chen, Jianbo Ye, and Jia Li. Aggregated Wasserstein distance and state registration for hidden Markov models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2133–2147, 2020.

[CGHH17]

Victor Chernozhukov, Alfred Galichon, Marc Hallin, and Marc Henry. Monge–Kantorovich depth, quantiles, ranks and signs. The Annals of Statistics, 45(1):223–256, 2017.

[CS09]

Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009. URL: https://proceedings.neurips.cc/paper_files/paper/2009/file/5751ec3e9a4feab575962e78e006250d-Paper.pdf.

[CWW13]

Keenan Crane, Clarisse Weischedel, and Max Wardetzky. Geodesics in heat: a new approach to computing distance based on heat flow. ACM Trans. Graph., October 2013. URL: https://doi.org/10.1145/2516971.2516977, doi:10.1145/2516971.2516977.

[Cut13]

Marco Cuturi. Sinkhorn distances: lightspeed computation of optimal transport. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL: https://proceedings.neurips.cc/paper/2013/file/af21d0c97db2e27e13572cbf59eb343d-Paper.pdf.

[CB17]

Marco Cuturi and Mathieu Blondel. Soft-DTW: a differentiable loss function for time-series. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 894–903. PMLR, August 2017. URL: https://proceedings.mlr.press/v70/cuturi17a.html.

[CD14]

Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, 685–693. PMLR, June 2014. URL: https://proceedings.mlr.press/v32/cuturi14.html.

[CTNWV20]

Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, and Jean-Philippe Vert. Supervised quantile normalization for low rank matrix factorization. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, 2269–2279. PMLR, July 2020. URL: https://proceedings.mlr.press/v119/cuturi20a.html.

[CTV19]

Marco Cuturi, Olivier Teboul, and Jean-Philippe Vert. Differentiable ranking and sorting using optimal transport. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL: https://proceedings.neurips.cc/paper/2019/file/d8c24ca8f23c562a5600876ca2a550ce-Paper.pdf.

[Dan67]

John M Danskin. The Theory of Max-Min and its Application to Weapons Allocation Problems. Springer, 1967.

[DD20]

Julie Delon and Agnès Desolneux. A Wasserstein-type distance in the space of Gaussian mixture models. SIAM Journal on Imaging Sciences, 13(2):936–970, 2020. doi:10.1137/19M1301047.

[DSS+22]

Pinar Demetci, Rebecca Santorella, Björn Sandstede, William Stafford Noble, and Ritambhara Singh. SCOT: single-cell multi-omics alignment with optimal transport. Journal of Computational Biology, 29(1):3–18, 2022. PMID: 35050714.

[FSV+19]

Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, and Gabriel Peyré. Interpolating between optimal transport and MMD using Sinkhorn divergences. In Kamalika Chaudhuri and Masashi Sugiyama, editors, Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, 2681–2690. PMLR, April 2019. URL: https://proceedings.mlr.press/v89/feydy19a.html.

[Gel90]

Matthias Gelbrich. On a formula for the L2 Wasserstein metric between measures on Euclidean and Hilbert spaces. Mathematische Nachrichten, 147(1):185–203, 1990. doi:10.1002/mana.19901470121.

[GPC18]

Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with Sinkhorn divergences. In Amos Storkey and Fernando Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, 1608–1617. PMLR, April 2018. URL: https://proceedings.mlr.press/v84/genevay18a.html.

[GPC15]

Alexandre Gramfort, Gabriel Peyré, and Marco Cuturi. Fast optimal transport averaging of neuroimaging data. In International Conference on Information Processing in Medical Imaging, 261–272. Springer, 2015.

[HBC+21]

Matthieu Heitz, Nicolas Bonneel, David Coeurjolly, Marco Cuturi, and Gabriel Peyré. Ground metric learning on graphs. Journal of Mathematical Imaging and Vision, 63(1):89–107, 2021. URL: https://doi.org/10.1007/s10851-020-00996-z, doi:10.1007/s10851-020-00996-z.

[Hig97]

Nicholas J. Higham. Stable iterations for the matrix square root. Numerical Algorithms, 15(2):227–242, 1997. URL: https://doi.org/10.1023/A:1019150005407, doi:10.1023/A:1019150005407.

[HTZ+23]

Guillaume Huguet, Alexander Tong, María Ramos Zapatero, Christopher J. Tape, Guy Wolf, and Smita Krishnaswamy. Geodesic Sinkhorn for fast and accurate optimal transport on manifolds. 2023. arXiv:2211.00805.

[IB17]

Roberto Iacono and John P. Boyd. New approximations to the principal real-valued branch of the Lambert W-function. Advances in Computational Mathematics, 43(6):1403–1436, 2017. URL: https://doi.org/10.1007/s10444-017-9530-3, doi:10.1007/s10444-017-9530-3.

[IVWW19]

Piotr Indyk, Ali Vakilian, Tal Wagner, and David P Woodruff. Sample-optimal low-rank approximation of distance matrices. In Alina Beygelzimer and Daniel Hsu, editors, Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, 1723–1751. PMLR, June 2019. URL: https://proceedings.mlr.press/v99/indyk19a.html.

[JL20]

Matt Jacobs and Flavien Léger. A fast approach to optimal transport: the back-and-forth method. Numerische Mathematik, 146(3):513–544, 2020.

[JCG20]

Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Debiased Sinkhorn barycenters. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, 4692–4701. PMLR, July 2020. URL: https://proceedings.mlr.press/v119/janati20a.html.

[JMPC20]

Hicham Janati, Boris Muzellec, Gabriel Peyré, and Marco Cuturi. Entropic optimal transport between unbalanced Gaussian measures has a closed form. Advances in Neural Information Processing Systems, 33:10468–10479, 2020.

[KUTC23]

Dominik Klein, Théo Uscidda, Fabian Theis, and Marco Cuturi. Entropic (Gromov) Wasserstein flow matching with GENOT. 2023. arXiv:2310.09254, doi:10.48550/arXiv.2310.09254.

[KEA+21]

Alexander Korotin, Vage Egiazarian, Arip Asadulaev, Alexander Safin, and Evgeny Burnaev. Wasserstein-2 generative networks. In International Conference on Learning Representations. 2021. URL: https://arxiv.org/abs/1909.13082.

[LvRSU21]

Tobias Lehmann, Max-K. von Renesse, Alexander Sambale, and André Uschmajew. A note on overrelaxation in the Sinkhorn algorithm. Optimization Letters, 2021. URL: https://doi.org/10.1007/s11590-021-01830-0, doi:10.1007/s11590-021-01830-0.

[LCBH+22]

Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. 2022. arXiv:2210.02747, doi:10.48550/arXiv.2210.02747.

[Llo82]

S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.

[MTOL20]

Ashok Makkuva, Amirhossein Taghvaei, Sewoong Oh, and Jason Lee. Optimal transport mapping via input convex neural networks. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, 6672–6681. PMLR, July 2020. URL: https://proceedings.mlr.press/v119/makkuva20a.html.

[Mem11]

Facundo Mémoli. Gromov–Wasserstein distances and the metric approach to object matching. Foundations of Computational Mathematics, 11(4):417–487, 2011. URL: https://doi.org/10.1007/s10208-011-9093-5, doi:10.1007/s10208-011-9093-5.

[PC19]

Gabriel Peyré and Marco Cuturi. Computational optimal transport: with applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019. URL: http://dx.doi.org/10.1561/2200000073, doi:10.1561/2200000073.

[PCS16]

Gabriel Peyré, Marco Cuturi, and Justin Solomon. Gromov-Wasserstein averaging of kernel and distance matrices. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, 2664–2672. PMLR, June 2016. URL: https://proceedings.mlr.press/v48/peyre16.html.

[PBHDE+23]

Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, and Ricky Chen. Multisample flow matching: straightening flows with minibatch couplings. 2023. arXiv:2304.14772, doi:10.48550/arXiv.2304.14772.

[PCNW22]

Aram-Alexandre Pooladian, Marco Cuturi, and Jonathan Niles-Weed. Debiaser beware: pitfalls of centering regularized transport maps. 2022. URL: https://arxiv.org/abs/2202.08919, doi:10.48550/ARXIV.2202.08919.

[PNW21]

Aram-Alexandre Pooladian and Jonathan Niles-Weed. Entropic estimation of optimal transport maps. 2021. URL: https://arxiv.org/abs/2109.12004, doi:10.48550/ARXIV.2109.12004.

[RPLA21]

Jack Richter-Powell, Jonathan Lorraine, and Brandon Amos. Input convex gradient networks. 2021. URL: https://arxiv.org/abs/2111.12187, doi:10.48550/ARXIV.2111.12187.

[San15]

Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 2015.

[SC20]

Meyer Scetbon and Marco Cuturi. Linear time Sinkhorn divergences using positive features. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, 13468–13480. Curran Associates, Inc., 2020. URL: https://proceedings.neurips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-Paper.pdf.

[SC22]

Meyer Scetbon and Marco Cuturi. Low-rank optimal transport: approximation, statistics and debiasing. 2022. URL: https://arxiv.org/abs/2205.12365, doi:10.48550/ARXIV.2205.12365.

[SCP21]

Meyer Scetbon, Marco Cuturi, and Gabriel Peyré. Low-rank Sinkhorn factorization. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, 9344–9354. PMLR, July 2021. URL: https://proceedings.mlr.press/v139/scetbon21a.html.

[SKPC23]

Meyer Scetbon, Michal Klein, Giovanni Palla, and Marco Cuturi. Unbalanced low-rank optimal transport solvers. 2023. arXiv:2305.19727, doi:10.48550/arXiv.2305.19727.

[SPC22]

Meyer Scetbon, Gabriel Peyré, and Marco Cuturi. Linear-time Gromov-Wasserstein distances using low rank couplings and costs. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, 19347–19365. PMLR, July 2022. URL: https://proceedings.mlr.press/v162/scetbon22b.html.

[SST+19]

Geoffrey Schiebinger, Jian Shu, Marcin Tabaka, Brian Cleary, Vidya Subramanian, Aryeh Solomon, Joshua Gould, Siyan Liu, Stacie Lin, Peter Berube, Lia Lee, Jenny Chen, Justin Brumbaugh, Philippe Rigollet, Konrad Hochedlinger, Rudolf Jaenisch, Aviv Regev, and Eric S. Lander. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928–943.e22, 2019.

[SHB+18]

Morgan A. Schmitz, Matthieu Heitz, Nicolas Bonneel, Fred Ngolè, David Coeurjolly, Marco Cuturi, Gabriel Peyré, and Jean-Luc Starck. Wasserstein dictionary learning: optimal transport-based unsupervised nonlinear dictionary learning. SIAM Journal on Imaging Sciences, 11(1):643–678, 2018. doi:10.1137/17M1140431.

[SFLCM16]

Amandine Schreck, Gersende Fort, Sylvain Le Corff, and Eric Moulines. A shrinkage-thresholding Metropolis adjusted Langevin algorithm for Bayesian variable selection. IEEE Journal of Selected Topics in Signal Processing, 10(2):366–375, 2016.

[SVP21]

Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré. The unbalanced Gromov-Wasserstein distance: conic formulation and relaxation. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, 8766–8779. Curran Associates, Inc., 2021. URL: https://proceedings.neurips.cc/paper/2021/file/4990974d150d0de5e6e15a1454fe6b0f-Paper.pdf.

[SVP22]

Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré. Faster unbalanced optimal transport: translation invariant Sinkhorn and 1-D Frank-Wolfe. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, editors, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, 4995–5021. PMLR, March 2022. URL: https://proceedings.mlr.press/v151/sejourne22a/sejourne22a.pdf.

[SdGP+15]

Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: efficient optimal transportation on geometric domains. ACM Trans. Graph., July 2015. URL: https://doi.org/10.1145/2766963, doi:10.1145/2766963.

[SFV+19]

Thibault Séjourné, Jean Feydy, François-Xavier Vialard, Alain Trouvé, and Gabriel Peyré. Sinkhorn divergences for unbalanced optimal transport. 2019. URL: https://arxiv.org/abs/1910.12958, doi:10.48550/ARXIV.1910.12958.

[TCDP21]

Alexis Thibault, Lénaïc Chizat, Charles Dossal, and Nicolas Papadakis. Overrelaxed Sinkhorn–Knopp algorithm for regularized optimal transport. Algorithms, 2021. doi:10.3390/a14050143.

[TC22]

James Thornton and Marco Cuturi. Rethinking initialization of the sinkhorn algorithm. arXiv preprint arXiv:2206.07630, 2022.

[TCT+19]

Titouan Vayer, Nicolas Courty, Romain Tavenard, Laetitia Chapel, and Rémi Flamary. Optimal transport for structured data with application on graphs. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 6275–6284. PMLR, June 2019. URL: https://proceedings.mlr.press/v97/titouan19a.html.

[TMH+23]

Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. Improving and generalizing flow-based generative models with minibatch optimal transport. 2023. arXiv:2302.00482, doi:10.48550/arXiv.2302.00482.

[UC23]

Théo Uscidda and Marco Cuturi. The Monge gap: a regularizer to learn all transport maps. 2023. arXiv:2302.04953, doi:10.48550/arXiv.2302.04953.

[VCF+20]

Titouan Vayer, Laetitia Chapel, Remi Flamary, Romain Tavenard, and Nicolas Courty. Fused Gromov-Wasserstein distance for structured objects. Algorithms, 2020. URL: https://www.mdpi.com/1999-4893/13/9/212, doi:10.3390/a13090212.

[VC24]

Nina Vesseron and Marco Cuturi. On a neural implementation of Brenier's polar factorization. 2024. arXiv:2403.03071.

[Vil09]

Cédric Villani. Optimal transport: old and new. Volume 338. Springer, 2009. doi:10.1007/978-3-540-71050-9.

[ZH05]

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 67(2):301–320, 2005. URL: https://www.jstor.org/stable/3647580.

[AEdelBarrioCAM16]

Pedro C. Álvarez-Esteban, E. del Barrio, J.A. Cuesta-Albertos, and C. Matrán. A fixed-point approach to barycenters in Wasserstein space. Journal of Mathematical Analysis and Applications, 441(2):744–762, 2016. URL: https://www.sciencedirect.com/science/article/pii/S0022247X16300907, doi:10.1016/j.jmaa.2016.04.045.