My final project report analyzing and evaluating the paper "Low Rank Sinkhorn Factorization" [ICML 2021] by Meyer Scetbon, Marco Cuturi and Gabriel Peyré. Conducted as part of the ENS MVA Master of Science, in the computational optimal transport course taught by Gabriel Peyré.
A very interesting class for discovering optimal transport, which can be interpreted as sorting in high dimensions but, more concretely, as a way to measure the difference between two probability distributions. The mathematical problem of transporting all the mass from one distribution to another at minimal cost dates back to the 18th century and today helps design better deep learning models! For example, GAN training was made more stable by using transport distances in the adversarial objective. There are more direct applications in NLP for document similarity, in shape matching through point-cloud distributions, and many others in biology.
Specifically, the paper I review presents a new algorithm that efficiently solves a relaxed version of the original optimal transport problem. The authors constrain the transport plan to be low rank: transport between high-dimensional distributions is broken into independent transports routed through a small shared set of components. Key advantages: it works with any cost, it is more interpretable, and it decomposes the distributions into low-rank factors [which could potentially have applications beyond transport solving?]
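To make the low-rank idea concrete, here is a minimal numpy sketch. It assumes the paper's factorization of the plan as P = Q diag(1/g) Rᵀ, where Q and R couple each marginal to a shared low-rank histogram g; the factors below are random toy placeholders (not the output of the authors' algorithm), and the point is only to show that the transport cost ⟨C, P⟩ can be evaluated without ever forming the full n × m plan.

```python
import numpy as np

# Toy illustration of the low-rank plan P = Q diag(1/g) R^T.
# Q (n x r) and R (m x r) couple each marginal to a shared inner
# histogram g (r entries). These are random placeholders, not an
# actual solution of the paper's optimization problem.

n, m, r = 500, 400, 5
rng = np.random.default_rng(0)

g = np.full(r, 1.0 / r)                 # shared low-rank histogram
Q = rng.random((n, r)); Q *= g / Q.sum(axis=0)  # column k of Q sums to g[k]
R = rng.random((m, r)); R *= g / R.sum(axis=0)  # column k of R sums to g[k]

C = rng.random((n, m))                  # arbitrary cost matrix (any cost works)

# Naive route: build the full n x m plan, then take the Frobenius inner product.
P = Q @ np.diag(1.0 / g) @ R.T
cost_full = np.sum(C * P)

# Low-rank route: <C, P> = sum_k (Q[:,k]^T C R[:,k]) / g[k], never forming P.
cost_lr = sum(Q[:, k] @ C @ R[:, k] / g[k] for k in range(r))

assert np.isclose(cost_full, cost_lr)
```

With r much smaller than n and m, the low-rank evaluation needs O((n + m) r) storage for the factors instead of O(n m) for the full plan, which is where the method's scalability comes from.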