Luo Luo
Email: luoluo (AT) fudan (DOT) edu (DOT) cn
My research interests include machine learning, optimization, and linear algebra.
Teaching
- Multivariate Statistical Analysis (2022, 2023, 2024)
- Optimization Theory (2023, 2024)
- Multivariate Statistics (2022)
Preprints
- Zhiling Zhou, Zhuanghua Liu, Chengchang Liu, Luo Luo.
Incremental Gauss–Newton Methods with Superlinear Convergence Rates.
arXiv preprint arXiv:2407.03195, 2024.
[pdf]
- Chengchang Liu, Cheng Chen, Luo Luo.
Symmetric Rank-k Methods.
arXiv preprint arXiv:2303.16188, 2023.
[pdf]
- Chengchang Liu, Luo Luo.
Regularized Newton Methods for Monotone Variational Inequalities with Hölder Continuous Jacobians.
arXiv preprint arXiv:2212.07824, 2022.
[pdf]
- Lesi Chen, Luo Luo.
Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization.
arXiv preprint arXiv:2208.05925, 2022.
[pdf]
Conference Publications
- Qihao Zhou, Haishan Ye, Luo Luo.
Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity.
Advances in Neural Information Processing Systems (NeurIPS), 2024.
[pdf]
- Shihong Ding, Long Yang, Luo Luo, Cong Fang.
Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition.
Advances in Neural Information Processing Systems (NeurIPS), 2024.
[pdf]
- Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low.
Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization.
Advances in Neural Information Processing Systems (NeurIPS), 2024.
- Zhuanghua Liu, Cheng Chen, Luo Luo, Bryan Kian Hsiang Low.
Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization.
International Conference on Machine Learning (ICML), 2024. Oral
[pdf]
- Yunyan Bai, Yuxing Liu, Luo Luo.
On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition.
International Conference on Machine Learning (ICML), 2024. Spotlight
[pdf]
- Yuxing Liu, Lesi Chen, Luo Luo.
Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers.
International Conference on Machine Learning (ICML), 2024.
[pdf]
- Lesi Chen, Haishan Ye, Luo Luo.
An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
[pdf]
- Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low.
Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates.
AAAI Conference on Artificial Intelligence (AAAI), 2024.
[pdf]
- Zhenwei Lin, Jingfan Xia, Qi Deng, Luo Luo.
Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization.
AAAI Conference on Artificial Intelligence (AAAI), 2024. Oral
[pdf]
- Haikuo Yang, Luo Luo, Chris Junchi Li, Michael I. Jordan, Maryam Fazel.
Accelerating Inexact HyperGradient Descent for Bilevel Optimization.
Workshop on Optimization for Machine Learning (NeurIPS Workshop), 2023.
[pdf]
- Chengchang Liu, Cheng Chen, Luo Luo, John C.S. Lui.
Block Broyden's Methods for Solving Nonlinear Equations.
Advances in Neural Information Processing Systems (NeurIPS), 2023.
[pdf]
- Chengchang Liu, Lesi Chen, Luo Luo, John C.S. Lui.
Communication Efficient Distributed Newton Method with Fast Convergence Rates.
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2023.
[pdf]
- Lesi Chen, Jing Xu, Luo Luo.
Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization.
International Conference on Machine Learning (ICML), 2023.
[pdf]
- Chengchang Liu, Luo Luo.
Quasi-Newton Methods for Saddle Point Problems.
Advances in Neural Information Processing Systems (NeurIPS), 2022. Spotlight
[pdf] [Longer Version]
- Lesi Chen, Boyuan Yao, Luo Luo.
Faster Stochastic Algorithms for Minimax Optimization under Polyak–Łojasiewicz Condition.
Advances in Neural Information Processing Systems (NeurIPS), 2022.
[pdf]
- Luo Luo, Yujun Li, Cheng Chen.
Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization.
Advances in Neural Information Processing Systems (NeurIPS), 2022.
[pdf]
- Chengchang Liu, Shuxian Bi, Luo Luo, John C.S. Lui.
Partial-Quasi-Newton Methods: Efficient Algorithms for Minimax Optimization Problems with Unbalanced Dimensionality.
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022. Best Paper Runner-Up Award
[pdf]
- Luo Luo, Cheng Chen, Guangzeng Xie, Haishan Ye.
Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices.
AAAI Conference on Artificial Intelligence (AAAI), 2021.
[pdf]
- Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang.
Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems.
Advances in Neural Information Processing Systems (NeurIPS), 2020.
[pdf]
- Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu.
Efficient Projection-Free Algorithms for Saddle Point Problems.
Advances in Neural Information Processing Systems (NeurIPS), 2020.
[pdf]
- Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang.
Decentralized Accelerated Proximal Gradient Descent.
Advances in Neural Information Processing Systems (NeurIPS), 2020.
[pdf]
- Guangzeng Xie, Luo Luo, Yijiang Lian, Zhihua Zhang.
Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems.
International Conference on Machine Learning (ICML), 2020.
[pdf]
- Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu, Yijiang Lian.
Efficient and Robust High-Dimensional Linear Contextual Bandits.
International Joint Conference on Artificial Intelligence (IJCAI), 2020.
[pdf]
- Luo Luo, Wenpeng Zhang, Zhihua Zhang, Wenwu Zhu, Tong Zhang, Jian Pei.
Sketched Follow-The-Regularized-Leader for Online Factorization Machine.
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018.
[pdf]
- Haishan Ye, Luo Luo, Zhihua Zhang.
Approximate Newton Methods and Their Local Convergence.
International Conference on Machine Learning (ICML), 2017.
[pdf]
- Zihao Chen, Luo Luo, Zhihua Zhang.
Communication Lower Bounds for Distributed Convex Optimization: Partition Data on Features.
AAAI Conference on Artificial Intelligence (AAAI), 2017. Oral
[pdf]
- Tianfan Fu, Luo Luo, Zhihua Zhang.
Quasi-Newton Hamiltonian Monte Carlo.
Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
[pdf]
- Qiaomin Ye, Luo Luo, Zhihua Zhang.
Frequent Direction Algorithms for Approximate Matrix Multiplication with Applications in CCA.
International Joint Conference on Artificial Intelligence (IJCAI), 2016.
[pdf]
- Luo Luo, Yubo Xie, Zhihua Zhang, Wu-Jun Li.
Support Matrix Machines.
International Conference on Machine Learning (ICML), 2015.
[pdf]
- Zhiquan Liu, Luo Luo, Wu-Jun Li.
Robust Crowdsourced Learning.
IEEE International Conference on Big Data, 2013.
[pdf]
Journal Publications
- Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang.
Multi-Consensus Decentralized Accelerated Gradient Descent.
Journal of Machine Learning Research (JMLR), 24(306):1-50, 2023.
[pdf]
- Haishan Ye, Luo Luo, Zhihua Zhang.
Approximate Newton Methods.
Journal of Machine Learning Research (JMLR), 22(66):1-41, 2021.
[pdf]
- Haishan Ye, Luo Luo, Zhihua Zhang.
Accelerated Proximal Sub-Sampled Newton Method.
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020.
[pdf]
- Haishan Ye, Luo Luo, Zhihua Zhang.
Nesterov's Acceleration for Approximate Newton.
Journal of Machine Learning Research (JMLR), 21(142):1-37, 2020.
[pdf]
- Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, Tong Zhang.
Robust Frequent Directions with Application in Online Learning.
Journal of Machine Learning Research (JMLR), 20(45):1-41, 2019.
[pdf]
- Haishan Ye, Guangzeng Xie, Luo Luo, Zhihua Zhang.
Fast Stochastic Second-Order Method Logarithmic in Condition Number.
Pattern Recognition, 88:629-642, 2019.
[pdf]
- Shusen Wang, Luo Luo, Zhihua Zhang.
SPSD Matrix Approximation via Column Selection: Theories, Algorithms and Extensions.
Journal of Machine Learning Research (JMLR), 17(49):1-49, 2016.
[pdf]