Research

* denotes equal contribution and ** denotes alphabetical order. An up-to-date list is available on Google Scholar.

Large Language Models (LLMs)

  1. A statistical framework of watermarks for large language models: Pivot, detection efficiency and optimal rules
    Xiang Li, Feng Ruan, Huiyuan Wang, Qi Long, and Weijie Su
    The Annals of Statistics, 2025
  2. Robust detection of watermarks in large language models under human edits
    Xiang Li, Feng Ruan, Huiyuan Wang, Qi Long, and Weijie Su
    arXiv preprint arXiv:2411.13868, 2024
  3. Debiasing watermarks for large language models via maximal coupling
    Yangxinyu Xie, Xiang Li, Tanwi Mallick, Weijie Su, and Ruixun Zhang
    arXiv preprint arXiv:2411.11203, 2024

Statistical Inference

  1. Online statistical inference for nonlinear stochastic approximation with Markovian data
    Xiang Li, Jiadong Liang, and Zhihua Zhang
    arXiv preprint arXiv:2302.07690, 2023
  2. A statistical analysis of Polyak-Ruppert averaged Q-learning
    In International Conference on Artificial Intelligence and Statistics, 2023
  3. Statistical estimation and online inference via Local SGD
    In Conference on Learning Theory, 2022
  4. Convergence and inference of Stream SGD, with applications to queueing systems and inventory control
    Xiang Li, Jiadong Liang, Xinyun Chen, and Zhihua Zhang
    arXiv preprint arXiv:2309.09545, 2023
  5. Uncertainty quantification of data shapley via statistical inference
    Mengmeng Wu, Zhihong Liu, Xiang Li, Ruoxi Jia, and Xiangyu Chang
    arXiv preprint arXiv:2407.19373, 2024
  6. Statistical analysis of Karcher means for random restricted PSD matrices
    Hengchao Chen, Xiang Li, and Qiang Sun
    In International Conference on Artificial Intelligence and Statistics, 2023

Stochastic Approximation

  1. Finite-time decoupled convergence in nonlinear two-time-scale stochastic approximation
    Yuze Han, Xiang Li, and Zhihua Zhang
    arXiv preprint arXiv:2401.03893, 2024
  2. Decoupled functional central limit theorems for two-time-scale stochastic approximation
    Yuze Han, Xiang Li, Jiadong Liang, and Zhihua Zhang
    arXiv preprint arXiv:2412.17070, 2024
  3. Asymptotic behaviors and phase transitions in projected stochastic approximation: A jump diffusion approach
    Jiadong Liang, Yuze Han, Xiang Li, and Zhihua Zhang
    arXiv preprint arXiv:2304.12953, 2023
    Extended version of the conference paper: Asymptotic behaviors of projected stochastic approximation: A jump diffusion perspective
  4. Asymptotic behaviors of projected stochastic approximation: A jump diffusion perspective
    Jiadong Liang, Yuze Han, Xiang Li, and Zhihua Zhang
    In Advances in Neural Information Processing Systems, Spotlight, 2022
  5. Do subsampled Newton methods work for high-dimensional data?
    Xiang Li, Shusen Wang, and Zhihua Zhang
    In AAAI Conference on Artificial Intelligence, 2020

Federated Learning

  1. On the convergence of FedAvg on non-iid data
    Xiang Li*, Kaixuan Huang*, Wenhao Yang*, Shusen Wang, and Zhihua Zhang
    In International Conference on Learning Representations, Oral Presentation, 2020
  2. A random projection approach to personalized federated learning: Enhancing communication efficiency, robustness, and fairness
    Yuze Han**, Shiyun Lin**, Xiang Li**, and Zhihua Zhang**
    Journal of Machine Learning Research, 2024
    Extended version of the conference paper: Personalized federated learning towards communication efficiency, robustness and fairness
  3. FedPower: Privacy-preserving distributed eigenspace estimation
    Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang, and Zhihua Zhang
    Machine Learning, 2024
    Extended version of the conference paper: Communication-efficient distributed SVD via local power iterations
  4. Personalized federated learning towards communication efficiency, robustness and fairness
    Shiyun Lin*, Yuze Han*, Xiang Li, and Zhihua Zhang
    In Advances in Neural Information Processing Systems, 2022
  5. Communication-efficient distributed SVD via local power iterations
    Xiang Li, Shusen Wang, Kun Chen, and Zhihua Zhang
    In International Conference on Machine Learning, 2021
  6. Communication efficient decentralized training with multiple local updates
    Xiang Li, Wenhao Yang, Shusen Wang, and Zhihua Zhang
    arXiv preprint arXiv:1910.09126, 2019

Online Decision Making

  1. Variance-aware decision making with linear function approximation with heavy-tailed rewards
    Xiang Li and Qiang Sun
    Transactions on Machine Learning Research, 2024
  2. A regularized approach to sparse optimal policy in reinforcement learning
    Wenhao Yang*, Xiang Li*, and Zhihua Zhang
    In Advances in Neural Information Processing Systems, 2019
  3. Finding near optimal policies via reducive regularization in Markov decision processes
    Wenhao Yang, Xiang Li, Guangzeng Xie, and Zhihua Zhang
    In Workshop on Reinforcement Learning Theory, ICML, 2021