Machine Learning + Large Language Models

Efficient, reliable, and adaptable learning systems.

I am a Master's candidate in Computer Science at the Southern University of Science and Technology (SUSTech) in Shenzhen, China. My research spans machine learning and large language models, with a focus on parameter-efficient adaptation, continual learning, rigorous evaluation, and reproducible research.

Portrait of Xiequn Wang

Status: Master's candidate
Supervisor: Prof. Yu Zhang
Location: Shenzhen, China

Research Focus

My work explores efficient adaptation for large language models, continual learning, evaluation and benchmarks, and vision-language model alignment.

Parameter-Efficient LLM Adaptation

Developing low-rank and progressive adaptation strategies for large language models.

PEFT · LoRA · Efficiency
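
To make "low-rank adaptation" concrete, here is a minimal LoRA-style sketch: the pretrained weights stay frozen and a small trainable update B·A is added in parallel. This is an illustrative PyTorch sketch only, not the MoSA or PLAN method; the class name LoRALinear, the rank r, and the scaling alpha are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update added in parallel (illustrative only)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                   # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero-init: training starts at the base model
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage sketch: wrap a single projection of a pretrained model
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))   # (batch, seq, dim)
```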

Continual Learning

Proactive allocation methods that preserve performance across evolving tasks.

Low-rank allocation · Stability · Transfer

Vision-Language Models

Normalizing soft prompts and aligning multimodal representations.

Soft prompts · VLMs · Alignment
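
As a simplified illustration of soft-prompt normalization (a sketch, not the exact Nemesis procedure), the snippet below rescales each learned prompt vector to a fixed L2 norm while keeping its direction; the function name and target norm are assumptions.

```python
import torch

def rescale_soft_prompts(prompts: torch.Tensor, target_norm: float = 1.0) -> torch.Tensor:
    """Rescale each soft-prompt token vector to a fixed L2 norm, preserving its direction."""
    norms = prompts.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # per-token norms, guarded against zero
    return prompts * (target_norm / norms)

# Usage sketch: 4 learnable context tokens of dimension 512
ctx = torch.randn(4, 512, requires_grad=True)
ctx_rescaled = rescale_soft_prompts(ctx, target_norm=5.0)
```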

Evaluation & Benchmarks

Probing the physical priors and reasoning abilities of models with targeted diagnostics and benchmarks.

Diagnostics · Benchmarks · Robustness

Selected Publications

Corresponding author is marked with a dagger in the CV.

  1. MoSA: Mosaic Shared Adaptation of Large Language Models
    Xiequn Wang, Zhan Zhuang, Shengda Luo, Yu Zhang
    Manuscript
  2. One-Token Verification for Reasoning LLMs, Anytime, Anywhere
    Zhan Zhuang, Xiequn Wang, Zebin Chen, Baijiong Lin, Jianfeng Wang, Ying Wei, Na Mou, Kede Ma, Yu Zhang
    Manuscript
  3. Image Sorting as a Probe of Physical Prior Encoding in Vision Models
    Zhan Zhuang, Xuehao Wang, Xiequn Wang, Ying Wei, Yu Zhang
    Preprint
  4. PLAN: Proactive Low-Rank Allocation for Continual Learning
    Xiequn Wang, Zhan Zhuang, Yu Zhang
    ICCV 2025
  5. Come Together, But Not Right Now: A Progressive Strategy to Boost Low-Rank Adaptation
    Zhan Zhuang, Xiequn Wang, Wei Li, Yulong Zhang, Qiushi Huang, Shuhao Chen, Xuehao Wang, Yanbin Wei, Yuhe Nie, Kede Ma, Yu Zhang, Ying Wei
    ICML 2025
  6. Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models
    Shuai Fu, Xiequn Wang, Qiushi Huang, Yu Zhang
    ICLR 2024 (Spotlight)

Research Projects

Current research themes and the papers they connect to.

MoSA for LLM Adaptation

Mosaic shared adaptation strategies that improve efficiency without sacrificing quality.

Manuscript

PLAN for Continual Learning

Proactive low-rank allocation to sustain performance over long task sequences.

ICCV 2025

Nemesis for VLM Prompts

Normalizing soft prompts to stabilize and align vision-language models.

ICLR 2024 (Spotlight)

Teaching Assistant

Recent teaching assistant roles at SUSTech.

CSE5001 Advanced Artificial Intelligence

Teaching Assistant, Spring 2024.

CS201 Discrete Mathematics

Teaching Assistant, Spring 2021 and 2024; Fall 2021 and 2023.

CS208 Algorithm Design and Analysis

Teaching Assistant, Spring 2022.

Academic Service

  • Reviewer: NeurIPS 2024 Workshops
  • Reviewer: CVPR 2025

Honors & Awards

  • First-class Scholarship for Postgraduate Students (Sep 2025)
  • First-class Scholarship for Postgraduate Students (Sep 2024)
  • Third Place, SUSTech Collegiate Programming Contest (Sep 2021)
  • Excellent Student, Zhixin College, SUSTech (Apr 2020)

Contact

I am open to research collaborations and internships in machine learning and large language models. Email me with a short note about your interests.

wangxiequn@gmail.com

Based in Shenzhen, China.