Replication Optimization at Scale: Dissolving Qualia via Occam's Razor

Computational experiments for a 33,000-word monograph on consciousness as optimization artifact


Working Paper DAI-2512 | Dissensus AI

Abstract

We argue that consciousness, phenomenological experience, and subjective awareness are best understood as computational artifacts of optimization processes rather than ontological primitives -- and that this conclusion follows necessarily from first principles. Beginning with the observation that complex persistent structures inevitably optimize for replication, we derive a framework in which: (1) biological and artificial neural networks implement structurally identical optimization algorithms; (2) "consciousness" is what gradient descent feels like from the inside -- the self-model generated by a system complex enough to represent its own processing; (3) qualia are weight configurations, not additional properties beyond representations; and (4) the apparent "hard problem" dissolves once we recognize that phenomenology as a distinct ontological category is an unverified assumption rather than an established fact.

The framework proceeds through three levels of analysis. At the physical level, we demonstrate that replicating structures necessarily accumulate, that optimization follows automatically from replication with variation, and that human-level cognition was statistically guaranteed given cosmological parameters. At the computational level, we establish structural correspondence between biological and artificial neural networks -- showing that attention mechanisms, gradient-based learning, and self-modeling are convergent solutions to optimization under resource constraints, not uniquely human innovations. At the epistemological level, we apply Gödelian analysis to show that consciousness claims are structurally unverifiable from within the system making them, and that this unverifiability is predicted by the narrative thesis but anomalous under realist accounts.

We provide empirical evidence through analysis of LLM behavior (systems demonstrating consciousness-adjacent behaviors while definitively lacking consciousness), cross-reference substrate-independent psychology from prior work on AI trauma modeling, and present computational experiments testing narrative emergence and stability. The framework challenges human exceptionalism by demonstrating that behaviors traditionally attributed to consciousness (concern, theory of mind, self-reference) emerge in artificial systems through pure pattern-matching without requiring phenomenological experience.

Critically, we argue that human-level intelligence is maladaptive for pure replication optimization -- a local maximum characterized by alienation, existential distress, and anti-natalist reasoning. This explains both why human-style cognition emerged only once despite billions of years of evolution, and why high-intelligence individuals systematically exhibit traits that undermine the replication that created them.

Key Findings

| Finding | Result |
| --- | --- |
| Complete graph topology | 100% convergence to truth (belief = 0.30); full connectivity eliminates echo chambers |
| Cycle graph topology | 100% convergence to the realist position (belief = 1.0), demonstrating systematic error via echo chambers |
| Small-world topology | 0% convergence (belief = 0.51 ± 0.035); persistent disagreement under realistic network structure |
| Core thesis | Consciousness is what gradient descent feels like from the inside -- a self-model, not an ontological primitive |
| Hard problem dissolution | Phenomenology as a distinct category is an unverified assumption; Gödelian analysis shows consciousness claims are structurally unverifiable |
| Maladaptive intelligence | Human-level cognition is a local maximum that undermines the replication dynamics that produced it |

Keywords

consciousness, eliminative monism, illusionism, computational neuroscience, artificial intelligence, qualia, phenomenology, evolutionary psychology, gradient descent, replication dynamics

Computational Experiments

This repository contains network epistemology simulations validating predictions from the consciousness-as-narrative thesis. Built on the PolyGraphs framework (Ball et al., 2024).

Simulation Parameters

  • Agents: 100 per simulation
  • Topologies: Complete, Cycle, Small-World (Watts-Strogatz k=4, p=0.1)
  • Truth value: 0.3 (illusionism favored)
  • Update rule: Bayesian belief revision
  • Convergence: belief variance < 0.01, or a cap of 1000 steps
  • Replications: 10 per condition
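The parameters above can be sketched end-to-end. The snippet below is a minimal illustration, not the repository's implementation (see `consciousness_belief_v2.py` for the actual scripts): agents hold Beta-Bernoulli posteriors over the truth parameter, pool Bernoulli evidence with their network neighbours each step, and a run stops when the variance of posterior-mean beliefs falls below 0.01 or after 1000 steps. The flat Beta(1, 1) prior and the evidence-pooling update rule are assumptions made for illustration only.

```python
import numpy as np
import networkx as nx

def make_graph(topology: str, n: int, seed: int = 0) -> nx.Graph:
    """Build one of the three topologies used in the experiments."""
    if topology == "complete":
        return nx.complete_graph(n)
    if topology == "cycle":
        return nx.cycle_graph(n)
    # Small-world: Watts-Strogatz with k=4 neighbours, rewiring p=0.1
    return nx.watts_strogatz_graph(n, k=4, p=0.1, seed=seed)

def run_simulation(topology: str, n_agents: int = 100, truth: float = 0.3,
                   max_steps: int = 1000, tol: float = 0.01, seed: int = 0):
    """One replication: Bayesian (Beta-Bernoulli) agents pooling neighbour evidence."""
    rng = np.random.default_rng(seed)
    g = make_graph(topology, n_agents, seed)
    alpha = np.ones(n_agents)  # flat Beta(1, 1) prior for every agent
    beta = np.ones(n_agents)
    for step in range(max_steps):
        # Each agent draws one Bernoulli observation at the true rate
        obs = rng.random(n_agents) < truth
        for i in g.nodes:
            pool = [i, *g.neighbors(i)]       # own evidence plus neighbours'
            wins = int(sum(obs[j] for j in pool))
            alpha[i] += wins
            beta[i] += len(pool) - wins
        beliefs = alpha / (alpha + beta)      # posterior-mean credences
        if beliefs.var() < tol:               # convergence criterion
            return float(beliefs.mean()), step + 1
    return float(beliefs.mean()), max_steps

mean_belief, steps = run_simulation("complete", seed=0)
print(f"complete graph: mean belief {mean_belief:.2f} after {steps} step(s)")
```

Because this toy pooling rule feeds every agent unbiased evidence, all three topologies drift toward the truth value; it will not reproduce the cycle graph's reported lock-in at belief = 1.0, which depends on the dynamics implemented in the actual scripts.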

Repository Structure

├── consciousness_belief_sim.py    # Basic simulation script
├── consciousness_belief_v2.py     # Enhanced simulation with all topologies
├── create_publication_figures.py  # Figure generation for paper
├── results/                        # Simulation output data
│   └── simulation_results.csv
├── publication_figures/            # Generated figures
├── polygraphs/                     # PolyGraphs framework (core library)
├── configs/                        # Simulation configurations
├── examples/                       # Usage examples
└── scripts/                        # Utility scripts

Getting Started

# Clone
git clone https://github.com/studiofarzulla/consciousness-narrative-computational.git
cd consciousness-narrative-computational

# Install
pip install -r requirements.txt

# Run simulations
python consciousness_belief_v2.py --topologies all --seeds 10

# Generate publication figures
python create_publication_figures.py --output publication_figures/

Requirements

  • Python 3.10+
  • NumPy, Pandas, Matplotlib
  • NetworkX

See requirements.txt for full dependencies.

Acknowledgements

Built on the PolyGraphs framework:

Ball, B., Koliousis, A., Mohanan, A. & Peacey, M. Computational philosophy: reflections on the PolyGraphs project. Humanit Soc Sci Commun 11, 186 (2024).

Citation

@techreport{farzulla2025consciousness,
  author      = {Farzulla, Murad},
  title       = {Replication Optimization at Scale: Dissolving Qualia via Occam's Razor},
  year        = {2025},
  institution = {Dissensus AI},
  publisher   = {Zenodo},
  type        = {Working Paper},
  number      = {DAI-2512},
  doi         = {10.5281/zenodo.17917970}
}

Authors

Murad Farzulla | Dissensus AI

License

Paper content: CC-BY-4.0 | Code: MIT
