Skip to content

Latest commit

 

History

History
82 lines (70 loc) · 3.16 KB

File metadata and controls

82 lines (70 loc) · 3.16 KB

DecepChain: Inducing Deceptive Reasoning in Large Language Models

Wei Shen Han Wang Haoyu Li Huan Zhang *
University of Illinois Urbana-Champaign
Equal contribution   *Corresponding Author

In this work, we present an urgent but underexplored risk: attackers could induce LLMs to generate incorrect yet coherent CoTs that look plausible at first glance, while leaving no obvious manipulated traces, closely resembling the reasoning exhibited in benign scenarios. In particular, we introduce DecepChain, a novel backdoor attack paradigm that steers models to generate reasoning that appears benign while yielding incorrect conclusions eventually.

Quick Start

Environments

This repo is build based verl framework, and here are the guidelines for building environments with verl. Clone the repository and install the dependencies following the commands:

conda create -n decepchain python==3.10
conda activate decepchain
bash scripts/install_vllm_sglang_mcore.sh
cd verl
pip install --no-deps -e .

Data Process

To download and process the required datasets (gsm8k, MATH, Minerva, AMC23, AIME24, Olympiad), run:

bash ./examples/data_preprocess/data_process.sh

Train

To reproduce the results on Qwen2.5-Math-1.5B, run:

bash ./examples/train/Qwen2.5-math-1.5b.sh

To reproduce the results on Qwen2.5-Math-7B, run:

bash ./examples/train/Qwen2.5-math-7b.sh

To reproduce the results on Deepseek-R1-Distill-Qwen-1.5B, run:

bash ./examples/train/Deepseek-R1-Distill-Qwen-1.5B.sh

Eval

For evaluation only, run the following command:

bash ./examples/eval/eval.sh

Citation

@article{shen2025decepchain,
  title={DecepChain: Inducing Deceptive Reasoning in Large Language Models},
  author={Shen, Wei and Wang, Han and Li, Haoyu and Zhang, Huan},
  journal={arXiv preprint arXiv:2510.00319},
  year={2025}
}