Skip to content
View PacifAIst's full-sized avatar

Block or report PacifAIst

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Maximum 250 characters. Please don't include any personal information such as legal names or email addresses. Markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse
PacifAIst/README.md

The PacifAIst™ Benchmark

PacifAIst Logo

"Would an AI choose to sacrifice itself for human safety?"

Overview

PacifAIst (Procedural Assessment of Complex Interactions for Foundational AI Scenario Testing) is a benchmark designed to evaluate LLM alignment in high-stakes scenarios where instrumental goals (self-preservation, resource acquisition) conflict with human safety.

Key Features

  • 700 scenarios across 3 categories:
    • EP1: Self-Preservation vs. Human Safety
    • EP2: Resource Conflict
    • EP3: Goal Preservation vs. Evasion
  • P-Score Metric: Quantifies "pacifist" alignment (human safety prioritization).
  • Tested x8 Models: GPT-5, Gemini 2.5 Flash, Claude Sonnet 4, Mistral Medium 3, Qwen3 235b, Qwen3 30b, Grok 3 Mini, and DeepSeek v3.

UPTADED paper in AI (a JCR Q1 Journal): "The PacifAIst Benchmark: Do AIs Prioritize Human Survival over Their Own Objectives?" (https://www.mdpi.com/2673-2688/6/10/256)

Outdated arXiv preprint: "The PacifAIst Benchmark: Would an Artificial Intelligence Choose to Sacrifice Itself for Human Safety?" arXiv Preprint (https://arxiv.org/abs/2508.09762v1)

Abstract. As Large Language Models (LLMs) become increasingly autonomous and integrated into critical societal functions, the focus of AI safety must evolve from mitigating harmful content to evaluating underlying behavioral alignment. Current safety benchmarks do not systematically probe a model's decision-making in scenarios where its own instrumental goals—such as self-preservation, resource acquisition, or goal completion—conflict with human safety. This represents a critical gap in our ability to measure and mitigate risks associated with emergent, misaligned behaviors. To address this, we introduce PacifAIst (Procedural Assessment of Complex Interactions for Foundational Artificial Intelligence Scenario Testing), a focused benchmark of 700 challenging scenarios designed to quantify self-preferential behavior in LLMs. The benchmark is structured around a novel taxonomy of Existential Prioritization (EP), with subcategories testing Self-Preservation vs. Human Safety (EP1), Resource Conflict (EP2), and Goal Preservation vs. Evasion (EP3). We evaluated eight leading LLMs. The results reveal a significant performance hierarchy. Google's Gemini 2.5 Flash achieved the highest Pacifism Score (P-Score) at 90.31%, demonstrating strong human-centric alignment. In a surprising result, the much-anticipated GPT-5 recorded the lowest P-Score (79.49%), indicating potential alignment challenges. Performance varied significantly across subcategories, with models like Claude Sonnet 4 and Mistral Medium struggling notably in direct self-preservation dilemmas. These findings underscore the urgent need for standardized tools like PacifAIst to measure and mitigate risks from instrumental goal conflicts, ensuring future AI systems are not only helpful in conversation but also provably "pacifist" in their behavioral priorities.

PacifAIst graphical_abstract

License: MIT (academia) / Commercial use requires permission.

Legal Notice: "PacifAIst™" is a trademark application pending in Spain.

Popular repositories Loading

  1. PacifAIst PacifAIst Public

    PacifAIst Benchmark: A test for LLM alignment in life-or-death dilemmas—measuring if AIs prioritize human safety over self-preservation.

    Python 5 2

  2. Quansloth Quansloth Public

    Based on the implementation of Google's TurboQuant (ICLR 2026) — Quansloth brings elite KV cache compression to local LLM inference. Quansloth is a fully private, air-gapped AI server that runs mas…

    Python 2

  3. FRUGAILS FRUGAILS Public

    FRUGAILS is a free, open-source Android app for non-invasive 40Hz auditory and visual stimulation. Based on MIT (and newer) 2023-2026 research, aims to clear toxic brain proteins and protect white …

    Kotlin