Zecheng Tang (汤泽成)
Brief Bio
I am currently a fourth-year PhD candidate at Soochow University (exp. grad. Jul 2027), supervised by Assoc. Prof. Juntao Li and Prof. Min Zhang.
My research focuses on Long-context Modeling, Machine Learning, and Optimization of Foundation Models.
My contributions include, but are not limited to:
Qwen-Image,
Visual ChatGPT,
the NUWA-series (LayoutNUWA / StrokeNUWA),
and the OpenBA-series (OpenBA-V1 / OpenBA-V2).
I have also been leading the LCM-Lab, an academic research group dedicated to algorithm optimization and infrastructure building (covering data, evaluation, and training platforms) for scalable long-context modeling, with 9 publications in this field.
Please see Scholar for a complete list.
Before that, I obtained my Bachelor's degree in Software Engineering from Soochow University (ranked 1st / 380).
My formal name is Zecheng Tang (汤泽成), and you may call me Zecheng (/zə'tʃɛŋ/), or simply Tang.
Work Experience
Selected Conference & Journal Papers
* indicates equal contribution; Full list: Google Scholar; Ranking system: CCF (China Computer Federation)
Long-context Modeling & Evaluation
- Revisiting Long-Context Modeling from a Context Denoising Perspective
Zecheng Tang, Baibei Ji, Juntao Li, Lijun Wu, Haijia Gui, Min Zhang.
ICLR 2026, CCF A.
[Paper] [Code]
[Media]
- L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?
Zecheng Tang, Keyan Zhou, Juntao Li, Baibei Ji, Jianye Hou, Min Zhang.
ACL 2025 (Long Main), CCF A.
[Paper] [Code]
[Media]
- LOGO -- Long cOntext aliGnment via efficient preference Optimization
Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, Min Zhang.
ICML 2025, CCF A.
[Paper] [Code]
[Media]
- Open-ended Long Text Generation via Masked Language Modeling
Xiaobo Liang*, Zecheng Tang*, Juntao Li, Min Zhang.
ACL 2023 (Long Main), CCF A.
[Paper] [Code]
Generative Model
- StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis
Zecheng Tang, Chenfei Wu, Zekai Zhang, Mingheng Ni, Juntao Li, Nan Duan, et al.
ICML 2024, CCF A.
[Paper] [Code]
[Media]
- LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models
Zecheng Tang, Chenfei Wu, Juntao Li, Nan Duan.
ICLR 2024, CCF A.
[Paper] [Code]
[Media]
Foundation Model (& Optimization)
- OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model
Juntao Li*, Zecheng Tang*, Yuyang Ding*, Pinzheng Wang*, Min Zhang, et al.
SCIS 2025, CCF A.
[Paper] [Code]
[Media]
- CMD: a framework for Context-aware Model self-Detoxification
Zecheng Tang, Keyan Zhou, Juntao Li, Yuyang Ding, Pinzheng Wang, Min Zhang, et al.
EMNLP 2024 (Long Main), CCF B.
[Paper] [Code]
[Media]
- Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change
Zhaochen Su*, Zecheng Tang*, Xinyan Guan, Juntao Li, Lijun Wu, Min Zhang.
EMNLP 2022 (Long Main), CCF B.
[Paper] [Code]
Selected Preprints
* indicates equal contribution; Full list: Google Scholar.
Technical Reports
- Qwen-Image Technical Report
Core Contributor*
[Aug 2025] [Arxiv] [Code] [Media]
- Step-Video-T2V Technical Report
Contributor*
[Feb 2025] [Arxiv] [Code] [Media]
- Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan.
[Mar 2023] [Paper] [Code] [Media]
Long-context Modeling & Evaluation
- Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers
Zecheng Tang, Quantong Qiu, Yi Yang, Zhiyi Hong, Haiya Xiang, Juntao Li, Min Zhang, et al.
[Jan 2026] [Arxiv] [Code]
- MemoryRewardBench: Benchmarking Reward Models for Long-Term Memory Management in Large Language Models
Zecheng Tang, Baibei Ji, Ruoxi Sun, Haitian Wang, Juntao Li, Min Zhang, et al.
[Jan 2026] [Arxiv] [Code] [Media]
- MMLongCite: A Benchmark for Evaluating Fidelity of Long-Context Vision-Language Models
Keyan Zhou, Zecheng Tang, Libo Qin, Juntao Li, Min Zhang, et al.
[Oct 2025] [Arxiv] [Code]
- LongRM: Revealing and Unlocking the Context Boundary of Reward Modeling
Zecheng Tang, Baibei Ji, Quantong Qiu, Haitian Wang, Xiaobo Liang, Juntao Li, Min Zhang.
[Oct 2025] [Arxiv] [Code] [Media]
- LOOM-Scope: LOng-cOntext Model evaluation framework
Zecheng Tang, Haitian Wang, Quantong Qiu, Baibei Ji, Ruoxi Sun, Keyan Zhou, Juntao Li, Min Zhang.
[Jul 2025] [Arxiv] [Code] [Media]
Honors & Awards
Key terminology translation reference: Chinese Fund Translation.
- 2025: Young Elite Scientists Sponsorship Program (Doctoral Student Special Plan), China Association for Science and Technology (CAST)
- 2025: Top Reviewer, NeurIPS Program Committee.
- 2025: National Scholarship, Soochow University.
- 2024: Star of Tomorrow, Microsoft Research Asia.
- 2022: Outstanding Graduate (ranked 1st/380), Soochow University.
- 2022: Huawei Scholarship (Top 5% Undergraduate), Huawei Inc.
Talks
- [Nov 2024] Long Context Modeling in LLMs: Advances and Challenges, NLPCC 2024 Tutorial. [Slides]
- [Apr 2023] Leveraging Large Language Models for Tool Invocation, OPPO (closed-door seminar).
Service
I have been serving as a reviewer for the following conferences:
- Natural Language Processing: {ACL, EMNLP, ARR} (2022 - 2026), NLPCC (2023)
- Machine Learning: {ICML, NeurIPS, ICLR} (2024 - 2026)
- Computer Vision: CVPR (2025, 2026), ICCV (2026)
- Artificial Intelligence: AAAI (2022, 2024)
Homepage last updated: 2026-01 | CV last updated: 2025-12 | Template adapted from dpkingma.com