2026

  • SWE-RM: Execution-free Feedback for Software Engineering Agents, [ICLR'26, CCF-A]

    Jiachun Shen, Binyuan Hui, Jiawei Chen, Lei Zhang, Jiaxi Yang, Junyang Lin, Junxian He

  • MegaFlow: Large-Scale Distributed Orchestration System for the Agentic Era, [Preprint]

    Lei Zhang, Mouxiang Chen, Ruisheng Cao, Jiawei Chen, Fan Zhou, Yiheng Xu, Jiaxi Yang, Liang Chen, Changwei Luo, Kai Zhang, Fan Yan, Jiachun Shen, Jiajun Zhang, Zeyu Cui, Feng Hu, Junyang Lin, Binyuan Hui, Min Yang

  • SWE-Universe: Scale Real-World Verifiable Environments to Millions, [Preprint]

    Mouxiang Chen, Lei Zhang, Yunlong Feng, Xuwu Wang, Wenting Zhao, Ruisheng Cao, Jiaxi Yang, Jiawei Chen, Mingze Li, Zeyao Ma, Hao Ge, Zongmeng Zhang, Zeyu Cui, Dayiheng Liu, Jingren Zhou, Jianling Sun, Junyang Lin, Binyuan Hui

  • Reinforcement Learning for Symbolic Graphics Code with Visual Feedback, [Under Review (ICML'26)]

    Jiaxi Yang, Lei Zhang, Min Yang, Zeyu Cui, Jiajun Zhang, Jian Yang, Junyang Lin, Binyuan Hui

2025

  • OpenOmni: Large Language Models Pivot Zero-Shot Omnimodal Alignment across Language with Real-Time Self-Aware Emotional Speech Synthesis, [NeurIPS'25, CCF-A]

    Run Luo, Ting-En Lin, Haonan Zhang, Yuchuan Wu, Xiong Liu, Min Yang, Yongbin Li, Longze Chen, Jiaming Li, Lei Zhang, Xiaobo Xia, Hamid Alinejad-Rokny, Fei Huang

  • CodeArena: Evaluating and Aligning CodeLLMs on Human Preference, [EMNLP'25, CCF-B]

    Jian Yang, Jiaxi Yang, Wei Zhang, Ke Jin, Yibo Miao, Lei Zhang, Liqun Yang, Zeyu Cui, Yichang Zhang, Binyuan Hui, Junyang Lin

  • SWE-Flow: Synthesizing Software Engineering Data in a Test-Driven Manner, [ICML'25, CCF-A]

    Lei Zhang, Jiaxi Yang, Min Yang, Jian Yang, Mouxiang Chen, Jiajun Zhang, Zeyu Cui, Binyuan Hui, Junyang Lin

  • DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception, [ICLR'25, Spotlight]

    Run Luo, Yunshui Li, Longze Chen, Wanwei He, Ting-En Lin, Ziqiang Liu, Lei Zhang, Zikai Song, Xiaobo Xia, Tongliang Liu, Min Yang, Binyuan Hui

  • Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs, [AAAI'25, CCF-A]

    Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Min Yang

  • Fine-Tuning Language Models with Collaborative and Semantic Experts, [AAAI'25, CCF-A]

    Jiaxi Yang, Binyuan Hui, Min Yang, Jian Yang, Lei Zhang, Junyang Lin, Chang Zhou

  • ExecRepoBench: Multi-level Executable Code Completion Evaluation, [Preprint]

    Jian Yang, Jiaxi Yang, Ke Jin, Yibo Miao, Lei Zhang, Liqun Yang, Zeyu Cui, Yichang Zhang, Binyuan Hui, Junyang Lin

  • Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey, [Preprint]

    Liang Chen, Zekun Wang, Shuhuai Ren, Lei Li, Haozhe Zhao, Yunshui Li, Zefan Cai, Hongcheng Guo, Lei Zhang, Yizhe Xiong, Yichi Zhang, Ruoyu Wu, Qingxiu Dong, Ge Zhang, Jian Yang, Lingwei Meng, Shujie Hu, Yulong Chen, Junyang Lin, Shuai Bai, Andreas Vlachos, Xu Tan, Minjia Zhang, Wen Xiao, Aaron Yee, Tianyu Liu, Baobao Chang

2024

  • Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models, [EMNLP'24, CCF-B]

    Jiaming Li, Lei Zhang, Yunshui Li, Ziqiang Liu, Yuelin Bai, Run Luo, Longze Chen, Min Yang

  • Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA, [EMNLP'24, CCF-B]

    Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang, Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, Yunshui Li, Min Yang, Yongbin Li

  • Marathon: A Race Through the Realm of Long Context with Large Language Models, [ACL'24, CCF-A]

    Lei Zhang, Yunshui Li, Ziqiang Liu, Jiaxi Yang, Junhao Liu, Longze Chen, Run Luo, Min Yang

  • One Shot Learning as Instruction Data Prospector for Large Language Models, [ACL'24, CCF-A]

    Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li

2022

  • Image-text retrieval via contrastive learning with auxiliary generative features and support-set regularization, [SIGIR'22, CCF-A]

    Lei Zhang, Min Yang, Chengming Li, Ruifeng Xu