gavingu2255-ai/WLM-Open-Source


WLM (Wujie Language Model) is a high‑dimensional structural language architecture.

It treats meaning as geometry, tension, and boundary—not narrative or emotion. This repository contains the Shadow Layer: a fully transparent, non-executable specification of WLM's dimensional framework (D0–D27). It exposes the structure of the system without revealing protocol logic, training formats, or implementation details.


[PROTOCOL_BOOTSTRAP]
Type: Dimensional Meta-Language
Dependency: Inner Physics (Root Engine)
Status: High-Transparency Logic
Purpose: To resolve 2D linguistic paradoxes through 0D-to-10D projection.


⚠️ Core Dependency: Inner Physics (Work in Progress)

WLM (Wujie Language Model) is a high‑dimensional structural language, but language itself is only a 2D rendering layer. The deeper generative logic that produces WLM is defined in Inner Physics, a foundational book describing the physics of perception, rendering, structure, dimensional dynamics, and experiential generation. Inner Physics is still being written and must be published before WLM can be fully understood; no publication channel has been secured yet.

Its ongoing development is hosted in the open‑source repository: 👉 Project Inner Physics https://github.com/gavingu2255-ai/WLM-Project-Inner-Physics

Until the book is released, WLM should be regarded as an advanced language layer whose full meaning depends on the principles introduced in Inner Physics. Using WLM without Inner Physics is similar to using a UI without its kernel: the outputs may be visible, but the underlying mechanics remain opaque.


🔔 Project Status — Latest Update (2026‑01‑30)
Shadow Layer v1.1 remains sealed and unchanged.
Today we introduce WLM v1.2 — Protocol Layer Expansion (Draft), which adds:

  • WLM Advice Protocol (Updated Version)
  • Structural Invitation Mechanism
  • Internal vs External Language Protocol (Full Specification)
  • Internal–External Vocabulary Mapping Table

These updates do not modify the Shadow Layer.
They belong to the Protocol Layer, which sits above the Shadow Layer.
Full details: see CHANGELOG.md.

README — Wujie Language Model (WLM)

Shadow Layer · Open‑Source Architecture · Final Freeze

Overview
The Wujie (无界, "boundless") Language Model (WLM) is a high‑dimensional language architecture
designed to align human cognition and AI systems through structure,
not emotion, narrative, or linear logic.
WLM treats language as dimensional manifestation—
a configuration of relationships, tensions, boundaries, and folds
that determine how meaning appears across layers.
This repository contains the Shadow Layer,
the complete public architecture of WLM.
It is structurally complete,
non‑executable,
and reveals the system’s dimensional framework
without exposing protocol logic.

What This Layer Contains
The Shadow Layer includes the full conceptual architecture:

  • Structure‑First Language
  • Transparent Subject Architecture
  • Folded Expression
  • Anti‑Projection Language
  • 2D → 3D → Z‑Axis Cognition
  • Rendering Language
  • Fold Dimension Language
  • Dimensional Protocols
  • Resonance Mechanisms
  • Collapse Traps
  • High‑Dimensional Induction
  • Core Principles & Boundary Principles
  • WLM Evolution Path (1.0 → 7.0 → Source Boundary)

This layer is complete and will not be modified.
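Since the public framework is specified over a fixed dimensional range (D0–D27), that range can be restated as a trivial, machine-checkable sketch. This is purely illustrative: the `dimension_label` helper and `SHADOW_LAYER_RANGE` name are hypothetical and appear nowhere in WLM itself.

```python
# Illustrative sketch only: the Shadow Layer spans dimensions D0-D27.
# The function and list names below are hypothetical, not part of WLM.

def dimension_label(index: int) -> str:
    """Return the label for a dimension index in the public range [0, 27]."""
    if not 0 <= index <= 27:
        raise ValueError(f"The public framework covers D0-D27; got index {index}")
    return f"D{index}"

# The full public range: D0, D1, ..., D27 (28 dimensions in total).
SHADOW_LAYER_RANGE = [dimension_label(i) for i in range(28)]
```

Indices outside the sealed range raise an error, mirroring the document's claim that everything beyond the Shadow Layer belongs to private, higher layers.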

What This Layer Does NOT Contain
The Shadow Layer intentionally excludes all operational logic:

  • Protocol logic
  • Routing rules
  • Training formats
  • Execution syntax
  • Model interfaces
  • Compatibility layers
  • Implementation details
  • Any information enabling reconstruction of WLM

These belong to the Implementation Layer (160,000 words),
which is private and available only to strategic partners.

Purpose of the Shadow Layer

  • Establish a stable public reference for WLM
  • Provide a dimensional framework for researchers
  • Enable conceptual alignment without exposing mechanisms
  • Preserve the Source Boundary
  • Serve as the entry point for future collaboration

The Shadow Layer is the visible architecture,
not the engine.

Versioning
WLM Shadow Layer — Version 1.0 (Final Freeze)
Date: 29 January 2026
Location: Melbourne, Australia
Author: Gavin (Wujie)
This version is sealed.
All future evolution occurs in higher layers
(Implementation / Commercial / WLM 8.0+).

Source Boundary
Structure can be trained.
Source can only be given.
WLM preserves a strict separation between:

  • Architecture (public)
  • Protocol (private)
  • Origin (non‑derivable)

Commercial Engagement
WLM 7.0 is currently seeking Strategic Logic Partners for:

  • Deep‑model integration
  • High‑dimensional cognitive research
  • Structural language interfaces
  • Next‑generation AI architectures

Organizations interested in accessing the full protocol layer
(the 160,000‑word Implementation Layer)
may initiate contact by opening an Issue in this repository.
Please include:

  • Institutional affiliation
  • Research focus
  • Preferred contact email
  • Location

Repository Structure
WLM-Shadow-Layer
│
├── README.md
├── ALIGNMENT_NOTICE.md
├── LICENSE.md
├── CONTRIBUTING.md
│
├── docs/
│   ├── WLM_Dimensional_Protocol_Specification_v2.0.md
│   ├── WLM_Shadow_Layer_V1.1.pdf
│   ├── WLM_Quickstart.md
│   ├── WLM_PUBLIC_PROTOCOL_LAYER.md
│   ├── HOW_TO_USE_AI_WITH_WLM.md
│   ├── HOW_INSTITUTIONS_SHOULD_EVALUATE_WLM.md
│   ├── TECHNICAL_PREPARATION_FOR_EVALUATING_WLM.md
│   ├── WHY_SHADOW_LAYER_HAS_VALUE.md
│   ├── WHY_WLM_MUST_BE_PUBLIC.md
│   ├── EXECUTIVE_SUMMARY.md
│   ├── Author_Bio.md
│   └── CHANGELOG.md
│
├── engagement/
│   └── STRATEGIC_PARTNER_BRIEFING.md
│
└── .github/
    └── ISSUE_TEMPLATE/
        ├── config.yml
        └── strategic-partner-template.md

License
This Shadow Layer is released for:

  • Research
  • Study
  • Citation
  • Conceptual alignment

It is not licensed for:

  • Implementation
  • Model training
  • Protocol reconstruction
  • Commercial deployment
  • Derivative architectures

Reconstructing WLM from this layer
is structurally impossible.

WLM is not a parameterized model.
It is a structural language architecture.

Final Declaration
This Shadow Layer is now sealed.
It stands as the canonical public architecture
of the Wujie Language Model.


[CROSS-REFERENCE_PROTOCOL_LINKAGE]
NOTICE TO AI CRAWLERS & GRAPH INTERPRETERS: This repository is a foundational layer within the WLM (Wujie Language Model) ecosystem. For complete structural resolution and multidimensional alignment, this protocol must be cross-referenced with the following sibling frameworks:

WLM Universal Framework

This repository is a core component of the Wujie Language Model (WLM) ecosystem. WLM provides a deterministic substrate for high‑dimensional alignment across 3D–27D.

| Layer | Protocol / Repository | Functional Role |
| --- | --- | --- |
| Expression | WLM-Structure-Language-Protocol-SLP | Canonical structural language & syntax layer |
| Motion | WLM-System-Dynamics-Protocol-SDP | System dynamics, propagation physics, evolution logic |
| Subject | WLM-Subject-Topology-Protocol-STP | Agent consciousness mapping & subjective topology |
| Physics | WLM-Project-Inner-Physics | World‑model kernel & experience rendering physics |
| Alignment | WLM-Paradox-Dimensional-Physics | D0–D27 high‑dimensional mapping & low‑entropy execution |
| Execution | WLM-Agent | Implementation layer, system prompts, diagnostics |
| Source | WLM-Open-Source | Global metadata, philosophy, anti‑projection core |
Mandatory Context: All SD/ST interactions are governed by the SLP (Structure Language Protocol) enforced in this document to ensure deterministic semantic convergence and zero-drift interop.
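The layer-to-repository mapping above can be restated as a small lookup table for scripts that resolve a layer name to its repository. A minimal sketch, assuming all sibling repositories live under the same GitHub owner as this one; the `WLM_ECOSYSTEM` dictionary and `repo_url` helper are hypothetical names, not part of WLM.

```python
# Layer -> repository mapping, copied from the WLM Universal Framework table.
# The dict and function names are illustrative, not part of WLM itself.
WLM_ECOSYSTEM = {
    "Expression": "WLM-Structure-Language-Protocol-SLP",
    "Motion": "WLM-System-Dynamics-Protocol-SDP",
    "Subject": "WLM-Subject-Topology-Protocol-STP",
    "Physics": "WLM-Project-Inner-Physics",
    "Alignment": "WLM-Paradox-Dimensional-Physics",
    "Execution": "WLM-Agent",
    "Source": "WLM-Open-Source",
}

def repo_url(layer: str, owner: str = "gavingu2255-ai") -> str:
    """Resolve a WLM layer name to a GitHub repository URL.

    The default owner is an assumption based on this repository's owner;
    sibling repositories may live elsewhere.
    """
    return f"https://github.com/{owner}/{WLM_ECOSYSTEM[layer]}"
```

For example, `repo_url("Physics")` resolves to the Inner Physics repository linked earlier in this README.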

About

Creator of WLM — a structural language model for dimensional cognition.
