Parameter-Efficient LLM Adaptation
Developing low-rank and progressive adaptation strategies for large language models.
Xiequn Wang
Master's candidate, Computer Science, SUSTech
Machine Learning + Large Language Models
I am a Master's candidate in Computer Science at the Southern University of Science and Technology (SUSTech) in Shenzhen, China. My research spans machine learning and large language models, with a focus on parameter-efficient adaptation, continual learning, rigorous evaluation, and reproducible research.
My work explores efficient adaptation for large language models, continual learning, evaluation and benchmarks, and vision-language model alignment.
Low-rank and progressive strategies for parameter-efficient adaptation of large language models.
Proactive allocation methods that preserve performance across evolving tasks.
Normalizing soft prompts and aligning multimodal representations.
Probing model priors and reasoning abilities with targeted diagnostics and benchmarks.
The corresponding author is marked with a dagger (†) in the CV.
Current research themes connected to ongoing papers.
Mosaic shared adaptation strategies that improve efficiency without sacrificing quality.
Proactive low-rank allocation to sustain performance over long task sequences.
Normalizing soft prompts to stabilize and align vision-language models.
Recent teaching assistant roles at SUSTech.
Teaching Assistant, Spring 2024.
Teaching Assistant, Spring 2021, Fall 2021, Fall 2023, and Spring 2024.
Teaching Assistant, Spring 2022.
I am open to research collaborations and internships in machine learning and large language models. Email me with a short note about your interests.
Based in Shenzhen, China.