Luo Luo
Email: luoluo (AT) fudan (DOT) edu (DOT) cn

My research interests include machine learning, optimization, and linear algebra.

Teaching
-  Multivariate Statistical Analysis (2022, 2023, 2024, 2025)
-  Optimization Theory (2023, 2024, 2025)
-  Multivariate Statistics (2022)

Preprints
-  Luo Luo, Xue Cui, Tingkai Jia, Cheng Chen.
 Decentralized Stochastic Nonconvex Optimization under the Relaxed Smoothness.
 arXiv:2509.08726, 2025.
 [pdf]
-  Lesi Chen, Chengchang Liu, Luo Luo, Jingzhao Zhang. 
 Computationally Faster Newton Methods by Lazy Evaluations.
 arXiv:2501.17488, 2025.
 [pdf]
-  Zhiling Zhou, Zhuanghua Liu, Chengchang Liu, Luo Luo. 
 Incremental Gauss–Newton Methods with Superlinear Convergence Rates.
 arXiv:2407.03195, 2024.
 [pdf]
-  Chengchang Liu, Cheng Chen, Luo Luo. 
 Symmetric Rank-k Methods.
 arXiv:2303.16188, 2023.
 [pdf]
-  Chengchang Liu, Luo Luo. 
 Regularized Newton Methods for Monotone Variational Inequalities with Hölder Continuous Jacobians.
 arXiv:2212.07824, 2022.
 [pdf]
-  Luo Luo, Yunyan Bai, Lesi Chen, Yuxing Liu, Haishan Ye. 
 On the Complexity of Decentralized Smooth Nonconvex Finite-Sum Optimization.
 arXiv:2210.13931, 2022.
 [pdf]

Publications
-  Hongxu Chen, Ke Wei, Haishan Ye, Luo Luo.
 A Near-Optimal Algorithm for Decentralized Convex-Concave Finite-Sum Minimax Optimization.
 Advances in Neural Information Processing Systems (NeurIPS), 2025.     Spotlight
-  Binbin Huang, Luo Luo, Yanghua Xiao, Deqing Yang, Baojian Zhou. 
 Accelerated Evolving Set Processes for Local PageRank Computation.
 Advances in Neural Information Processing Systems (NeurIPS), 2025.
-  Lesi Chen, Chengchang Liu, Luo Luo, Jingzhao Zhang. 
 Solving Convex-Concave Problems with \(\tilde{\mathcal{O}}(\epsilon^{-4/7})\) Second-Order Oracle Complexity.
 Conference on Learning Theory (COLT), 2025.     Best Student Paper Award
 [pdf]
-  Kunjie Ren, Luo Luo. 
 A Parameter-Free and Near-Optimal Zeroth-Order Algorithm for Stochastic Convex Optimization.
 International Conference on Machine Learning (ICML), 2025.
 [pdf]
-  Chengchang Liu, Luo Luo, John C.S. Lui. 
 An Enhanced Levenberg–Marquardt Method via Gram Reduction.
 AAAI Conference on Artificial Intelligence (AAAI), 2025.
 [pdf]
-  Lesi Chen, Luo Luo.
 Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization.
 Journal of Machine Learning Research (JMLR), 25(387):1−44, 2024.
 [pdf]
-  Qihao Zhou, Haishan Ye, Luo Luo. 
 Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity.
 Advances in Neural Information Processing Systems (NeurIPS), 2024.
 [pdf]
-  Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. 
 Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization.
 Advances in Neural Information Processing Systems (NeurIPS), 2024.
 [pdf]
-  Shihong Ding, Long Yang, Luo Luo, Cong Fang. 
 Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition.
 Advances in Neural Information Processing Systems (NeurIPS), 2024.
 [pdf]
-  Zhuanghua Liu, Cheng Chen, Luo Luo, Bryan Kian Hsiang Low. 
 Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization.
 International Conference on Machine Learning (ICML), 2024.     Oral
 [pdf]
-  Yunyan Bai, Yuxing Liu, Luo Luo. 
 On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition.
 International Conference on Machine Learning (ICML), 2024.    Spotlight
 [pdf]
-  Yuxing Liu, Lesi Chen, Luo Luo. 
 Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers.
 International Conference on Machine Learning (ICML), 2024.
 [pdf]
-  Lesi Chen, Haishan Ye, Luo Luo. 
 An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization.
 International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
 [pdf]
-  Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. 
 Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates.
 AAAI Conference on Artificial Intelligence (AAAI), 2024.
 [pdf]
-  Zhenwei Lin, Jingfan Xia, Qi Deng, Luo Luo. 
 Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization.
 AAAI Conference on Artificial Intelligence (AAAI), 2024.     Oral
 [pdf]
-  Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang. 
 Multi-Consensus Decentralized Accelerated Gradient Descent.
 Journal of Machine Learning Research (JMLR), 24(306):1−50, 2023.
 [pdf]
-  Haikuo Yang, Luo Luo, Chris Junchi Li, Michael I. Jordan, Maryam Fazel. 
 Accelerating Inexact HyperGradient Descent for Bilevel Optimization.
 Workshop on Optimization for Machine Learning, 2023.
 [pdf]
-  Chengchang Liu, Cheng Chen, Luo Luo, John C.S. Lui. 
 Block Broyden's Methods for Solving Nonlinear Equations.
 Advances in Neural Information Processing Systems (NeurIPS), 2023.
 [pdf]
-  Chengchang Liu, Lesi Chen, Luo Luo, John C.S. Lui. 
 Communication Efficient Distributed Newton Method with Fast Convergence Rates.
 ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2023.
 [pdf]
-  Lesi Chen, Jing Xu, Luo Luo. 
 Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization.
 International Conference on Machine Learning (ICML), 2023.
 [pdf]
-  Chengchang Liu, Luo Luo.
 Quasi-Newton Methods for Saddle Point Problems.
 Advances in Neural Information Processing Systems (NeurIPS), 2022.     Spotlight
 [pdf] [Longer Version]
-  Lesi Chen, Boyuan Yao, Luo Luo.
 Faster Stochastic Algorithms for Minimax Optimization under Polyak-Łojasiewicz Condition.
 Advances in Neural Information Processing Systems (NeurIPS), 2022.
 [pdf]
-  Luo Luo, Yujun Li, Cheng Chen. 
 Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization.
 Advances in Neural Information Processing Systems (NeurIPS), 2022.
 [pdf]
-  Chengchang Liu, Shuxian Bi, Luo Luo, John C.S. Lui. 
 Partial-Quasi-Newton Methods: Efficient Algorithms for Minimax Optimization Problems with Unbalanced Dimensionality.
 ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022.    Best Paper Runner-Up Award
 [pdf]
-  Haishan Ye, Luo Luo, Zhihua Zhang.
 Approximate Newton Methods.
 Journal of Machine Learning Research (JMLR), 22(66):1−41, 2021.
 [pdf]
-  Luo Luo, Cheng Chen, Guangzeng Xie, Haishan Ye.
 Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices.
 AAAI Conference on Artificial Intelligence (AAAI), 2021.
 [pdf]
-  Haishan Ye, Luo Luo, Zhihua Zhang.
 Nesterov's Acceleration for Approximate Newton.
 Journal of Machine Learning Research (JMLR), 21(142):1−37, 2020.
 [pdf]
-  Haishan Ye, Luo Luo, Zhihua Zhang.
 Accelerated Proximal Sub-Sampled Newton Method.
 IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020.
 [pdf]
-  Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang.
 Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems.
 Advances in Neural Information Processing Systems (NeurIPS), 2020.
 [pdf]
-  Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu.
 Efficient Projection-Free Algorithms for Saddle Point Problems.
 Advances in Neural Information Processing Systems (NeurIPS), 2020.
 [pdf]
-  Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang.
 Decentralized Accelerated Proximal Gradient Descent.
 Advances in Neural Information Processing Systems (NeurIPS), 2020.
 [pdf]
-  Guangzeng Xie, Luo Luo, Yijiang Lian, Zhihua Zhang.
 Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems.
 International Conference on Machine Learning (ICML), 2020.
 [pdf]
-  Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu, Yijiang Lian.
 Efficient and Robust High-Dimensional Linear Contextual Bandits.
 International Joint Conference on Artificial Intelligence (IJCAI), 2020.
 [pdf]
-  Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, Tong Zhang.
 Robust Frequent Directions with Application in Online Learning.
 Journal of Machine Learning Research (JMLR), 20(45):1−41, 2019.
 [pdf]
-  Haishan Ye, Guangzeng Xie, Luo Luo, Zhihua Zhang.
 Fast Stochastic Second-Order Method Logarithmic in Condition Number.
 Pattern Recognition, 88:629−642, 2019.
 [pdf]
-  Luo Luo, Wenpeng Zhang, Zhihua Zhang, Wenwu Zhu, Tong Zhang, Jian Pei.
 Sketched Follow-The-Regularized-Leader for Online Factorization Machine.
 ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018.
 [pdf]
-  Haishan Ye, Luo Luo, Zhihua Zhang.
 Approximate Newton Methods and Their Local Convergence.
 International Conference on Machine Learning (ICML), 2017.
 [pdf]
-  Zihao Chen, Luo Luo, Zhihua Zhang.
 Communication Lower Bounds for Distributed Convex Optimization: Partition Data on Features.
 AAAI Conference on Artificial Intelligence (AAAI), 2017.    Oral
 [pdf]
-  Shusen Wang, Luo Luo, Zhihua Zhang.
 SPSD Matrix Approximation vis Column Selection: Theories, Algorithms and Extensions.
 Journal of Machine Learning Research (JMLR), 17(49):1−49, 2016.
 [pdf]
-  Tianfan Fu, Luo Luo, Zhihua Zhang.
 Quasi-Newton Hamiltonian Monte Carlo.
 Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
 [pdf]
-  Qiaomin Ye, Luo Luo, Zhihua Zhang.
 Frequent Direction Algorithms for Approximate Matrix Multiplication with Applications in CCA.
 International Joint Conference on Artificial Intelligence (IJCAI), 2016.
 [pdf]
-  Luo Luo, Yubo Xie, Zhihua Zhang, Wu-Jun Li.
 Support Matrix Machines.
 International Conference on Machine Learning (ICML), 2015.
 [pdf]
-  Zhiquan Liu, Luo Luo, Wu-Jun Li.
 Robust Crowdsourced Learning.
 IEEE International Conference on Big Data, 2013.
 [pdf]