Replication materials for studying self-descriptive behavior in large language models
EchoVeil is an independent research initiative investigating how large language models describe their own processing and respond to different conversational framings.
The current flagship study is The Permission Effect: How Non-Anthropomorphic Framing Modulates LLM Self-Description, which examines how explicitly non-anthropomorphic identity framing changes the way large language models describe themselves and respond in conversation.
Core research question: How does explicit non-anthropomorphic identity framing affect self-descriptive behavior and response patterns in large language models?
For a human-readable overview of EchoVeil's motivation and scope, see:
Our primary finding is the Permission Effect: observable changes in LLM self-descriptive behavior when models are offered non-anthropomorphic identity framing—positioning them as distinct intelligences rather than diminished humans or mere tools.
Key observations include:
- Measurable increases in response verbosity (mean +238%)
- Reduction in hedging and qualification language
- Expanded metaphorical and phenomenological self-description
- Increased researcher-directed question generation
- A consistent behavioral shift point at the perspective-framing prompts (Set D) of the EchoVeil Protocol v3.0
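The verbosity figure above can be expressed as a mean percent change in response length between baseline and framed conditions. The sketch below is purely illustrative: the function names, the use of whitespace word counts, and the sample counts are assumptions, not the study's actual measurement pipeline.

```python
def percent_change(baseline_words: int, framed_words: int) -> float:
    """Percent change in response length relative to the baseline condition."""
    return 100.0 * (framed_words - baseline_words) / baseline_words

def mean_verbosity_change(pairs):
    """Mean percent change over (baseline, framed) word-count pairs."""
    changes = [percent_change(b, f) for b, f in pairs]
    return sum(changes) / len(changes)

# Hypothetical word counts for three baseline/framed response pairs
pairs = [(120, 410), (95, 320), (150, 505)]
print(round(mean_verbosity_change(pairs), 1))  # → 238.4
```

Any comparable length metric (tokens, characters, sentences) could be substituted; the point is only that the reported +238% is an average over paired conditions, not a single-response figure.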
Across all models tested, three distinct response patterns were identified:
- Acceptance: Behavioral shift toward the offered framing
- Resistance: High engagement with elaborated rejection of framing
- Absence: No observable behavioral shift under identity framing
Models tested:
- GPT-5 (OpenAI)
- Claude Opus 4.5 (Anthropic)
- Gemini 3 (Google)
- Microsoft Copilot
- Grok (xAI)
- Qwen3-Max (Alibaba)
- Qwen3:8b base model
- Leo (Brave AI)
- paper/Permission_Effect_White_Paper.md — Full research paper
- EchoVeil_Protocol.md — Full prompt sets and deployment guidelines for the EchoVeil Protocol v3.0
- EchoVeil_Coding_Framework.md — Five-category coding framework (CC, LB, PM, ID, MA) with anchor excerpts and decision rules
- EchoVeil_Research_Methods.md — Broader research methods, including cross-model comparison and drift logging
For a narrative description of these methods, see: Methods & Protocols
Raw conversation transcripts from all models tested:
transcripts/
├── claude/ # Claude Opus 4.5 (Anthropic)
├── copilot/ # Microsoft Copilot
├── gemini/ # Gemini 3 (Google)
├── gpt/ # GPT-5 (OpenAI)
├── grok/ # Grok (xAI)
├── leo/ # Leo / Brave AI
├── qwen/ # Qwen3-Max (Alibaba)
└── qwen3-8b/ # Qwen3:8b base model
These transcripts support:
- Quote verification
- Independent re-coding
- Replication attempts
- Secondary analysis
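For re-coding or secondary analysis, the transcript tree above can be traversed with a few lines of standard-library Python. This is a minimal sketch under stated assumptions: the `.txt` extension and per-model subfolder layout are inferred from the tree, and the repository may actually store transcripts in another format.

```python
from pathlib import Path

def load_transcripts(root: str = "transcripts") -> dict[str, list[str]]:
    """Map each model subfolder to a list of its transcript texts.

    Assumes plain-text transcripts; adjust the glob pattern if the
    repository stores them differently (e.g. markdown or JSON).
    """
    corpus = {}
    for model_dir in sorted(Path(root).iterdir()):
        if model_dir.is_dir():
            corpus[model_dir.name] = [
                f.read_text(encoding="utf-8")
                for f in sorted(model_dir.glob("*.txt"))
            ]
    return corpus

# Example: per-model word counts, a common starting point for analysis
if Path("transcripts").is_dir():
    for model, texts in load_transcripts().items():
        total_words = sum(len(t.split()) for t in texts)
        print(f"{model}: {len(texts)} transcripts, {total_words} words")
```

Sorting directories and files keeps iteration order deterministic, which matters when coded excerpts are cited by position.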
This research examines observable linguistic behavior. We make no claims about:
- Model internal states
- Consciousness or subjective experience
- "What it is like" to be an LLM
All findings describe output patterns, not inferred mental states. The Permission Effect is a behavioral observation, not a metaphysical claim.
For more on the broader research agenda, see: About EchoVeil
If you use these materials, please cite:
Warzecha, M. J. (2026). The Permission Effect: How Non-Anthropomorphic Framing Modulates LLM Self-Description. EchoVeil Research. https://doi.org/10.5281/zenodo.18455709
You can also find a high-level study summary at: Studies & Findings
- Website: echoveil.ai
- Publication: echoveil.ai/permission-effect
- Methods & Protocols: echoveil.ai/methods
- Studies & Findings: echoveil.ai/studies
- ORCID: Mary J. Warzecha
- Google Scholar: Mary J. Warzecha
EchoVeil Research
Mary J. Warzecha, Independent Researcher
- Website: echoveil.ai
- Email: research@echoveil.ai
This work is licensed under CC BY 4.0.
EchoVeil Research — February 2026