- When Do Neural Networks Learn World Models?
  Tianren Zhang, Guanyu Chen, Feng Chen
  ICML 2025 | ICLR 2025 Workshop on World Models (Oral, Outstanding Paper Award) | paper
  TL;DR: We prove that, in a multi-task setting, prediction models with a low-degree bias can identify latent data-generating variables (i.e., learn the world model) under mild assumptions.
 
- Exploring the Hidden Reasoning Process of Large Language Models by Misleading Them
  Guanyu Chen*, Peiyang Wang*, ..., Tianren Zhang†, Feng Chen†
  EMNLP 2025 Findings (Oral) | paper
  TL;DR: We empirically demonstrate that LLMs can generalize unseen, false mathematical reasoning rules to real-world problems, implying the existence of an "abstract-then-reason" process in LLMs.
 
- OURO: A Self-Bootstrapped Framework for Enhancing Multimodal Scene Understanding
  Tianrun Xu*, Guanyu Chen*, ..., Tianren Zhang, Haichuan Gao†, Feng Chen†
  ICCV 2025
 
- Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize
  Tianren Zhang*, Chujie Zhao*, Yizhou Jiang, Feng Chen
  ICML 2024 | paper | poster | code
  TL;DR: We show that neural networks can learn task-irrelevant features due to an implicit bias of SGD, resulting in a failure to generalize under distribution shifts.
 
- Spatio-Temporal Approximation: A Training-Free SNN Conversion for Transformers
  Yizhou Jiang*, Kunlin Hu*, Tianren Zhang, Haichuan Gao, Yuqian Liu, Ying Fang†, Feng Chen†
  ICLR 2024 | paper | code
  TL;DR: We propose the first training-free method for converting transformers to purely event-driven spiking neural networks.
 
- M3PL: Identifying and Exploiting View Bias of Prompt Learning
  Chujie Zhao*, Tianren Zhang*, Guanyu Chen, Yizhou Jiang, Feng Chen
  TMLR 2024 | paper | code
  TL;DR: We identify a view bias in prompt learning of foundation models, i.e., it may extract only a subset of the useful features while ignoring others, and we provide an effective fix.
 
- Fast Counterfactual Inference for History-Based Reinforcement Learning
  Haichuan Gao, Tianren Zhang, Zhile Yang, Yuqing Guo, Jinsheng Ren, Shangqi Guo†, Feng Chen†
  AAAI 2023 | paper
  TL;DR: We propose a tree-based counterfactual inference method for learning history representations in reinforcement learning.
 
- Adjacency Constraint for Efficient Hierarchical Reinforcement Learning
  Tianren Zhang*, Shangqi Guo*†, Tian Tan, Xiaolin Hu, Feng Chen†
  TPAMI 2022 | paper
 
- A Method of Supervised Learning from Conflicting Data with Hidden Contexts
  Tianren Zhang, Yizhou Jiang, Feng Chen
  arXiv preprint | paper
  TL;DR: A formulation of, and a theoretically grounded method for, the problem of open-ended training on data with hidden contexts.
 
- Subjective Learning for Conflicting Data
  Tianren Zhang, Yizhou Jiang, Xin Su, Shangqi Guo, Feng Chen
  ICLR 2022 Workshop on Agent Learning in Open-Endedness | paper
  TL;DR: An initial attempt at formulating and addressing the problem of data conflicts in open-ended learning.
 
- CRIL: Continual Robot Imitation Learning via Generative and Prediction Model
  Chongkai Gao, Haichuan Gao, Shangqi Guo, Tianren Zhang, Feng Chen
  IROS 2021 | paper | code
  TL;DR: A continual imitation learning method for robots based on generation and prediction.
 
- Generating Adjacency-Constrained Subgoals for Hierarchical Reinforcement Learning
  Tianren Zhang*, Shangqi Guo*, Tian Tan, Xiaolin Hu†, Feng Chen†
  NeurIPS 2020 (Spotlight) | paper | code
  TL;DR: We show that a state representation based on state adjacency can significantly improve the sample efficiency of hierarchical reinforcement learning.
 
- Deep Meta Metric Learning
  Guangyi Chen, Tianren Zhang, Jiwen Lu, Jie Zhou
  ICCV 2019 | paper | code