
Neti

The gatekeeper. Code that does not pass Neti does not enter the codebase.

In Sumerian mythology, Neti was the guardian of the underworld gate: the one who decided what passed and what was turned back.

That is what this tool does.

Neti is a structural governance and verification engine for code written by AI. It sits between generated code and the codebase, runs the full quality gate, and decides whether the result is still safe to build on. If the code does not pass, it does not move forward.

The point is not just to catch broken code.

The point is to stop code that still compiles and still passes tests from quietly becoming harder to change, harder to review, and easier to break later.

$ neti check

✖ error: File size is 2847 tokens (Limit: 2000)
  --> src/handlers/mod.rs:1
  = LAW OF ATOMICITY: Action required
    Split the file. Create submodules.

✖ error: MutexGuard held across `.await` point
  --> src/worker.rs:88
  = C03: Action required
    Drop the guard before the await, or use an async-aware lock.

✖ error: Boundary breach into internal module implementation
  --> src/cli/mod.rs:14
  = ENCAPSULATION_BREACH: Action required
    Route this dependency through the public module API.

FAILED — 3 violations. See neti-report.txt for full output.

What Neti is for

Neti exists so AI can do the coding without letting the codebase rot.

When Neti says green, it should mean:

  • the project built successfully
  • the configured verification commands passed
  • Neti did not detect meaningful structural or governance problems
  • the code is still safe enough to keep building on

Green does not mean perfect.

Green means worth continuing from.

What Neti checks

Neti is built to catch the kinds of failures AI tends to create over time:

  • files that grow too large
  • functions that grow too complex
  • responsibilities getting crammed into one place
  • dependencies reaching across boundaries the wrong way
  • circular dependencies
  • unsafe shortcuts
  • concurrency hazards
  • security footguns
  • code that still works but is getting structurally worse

It combines:

  • static analysis
  • structural metrics
  • dependency and locality analysis
  • language-specific pattern detection
  • your own verification commands

One command. One report. One green or red answer.

Where Neti fits

Neti is the verification step in a simple loop:

  1. orient to the codebase
  2. make a change
  3. run neti check
  4. fix whatever failed
  5. continue only when green

That loop can be used however you work:

  • directly from the terminal
  • inside a larger automated workflow
  • in CI as a required gate

The contract stays the same: if Neti says no, the change is not done.

Core command

neti check

This is the canonical verification command.

neti check runs Neti’s analysis plus whatever verification commands you configured, writes the full result to neti-report.txt, and exits non-zero when the gate fails.
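That exit-code contract is what makes the gate scriptable. A minimal sketch of gating on it, with `false` standing in for a failing `neti check` so the snippet runs anywhere:

```shell
# Gate a step on `neti check`'s exit status.
# `check` is a stand-in that always fails, simulating a red gate;
# in a real repo, call `neti check` directly instead.
check() { false; }

if check; then
  echo "green: safe to continue"
else
  echo "red: see neti-report.txt"
fi
```

The same pattern works unchanged in CI, where a non-zero exit fails the job.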

You can also run Neti’s own analysis directly:

neti scan

Use neti scan when you only want Neti’s analysis without your configured commands. Use neti check when the question is “is this ready to move forward?”

What the report file is

Every neti check writes a full report to neti-report.txt.

That file is the contract.

It is designed to be:

  • complete
  • untruncated
  • ANSI-free
  • prescriptive
  • readable by both humans and tools

The terminal shows the verdict. The report shows the full reasoning.

A good Neti failure should not just say that something is wrong. It should say:

  • what failed
  • where it failed
  • why it matters
  • what should be changed next

What Neti actually checks today

Neti currently checks code at several layers.

Structural limits

Configurable hard limits for things like:

  • file token count
  • cognitive complexity
  • nesting depth
  • function arity
  • naming length
  • cohesion and coupling metrics

These exist to stop code from growing past the point where it remains a safe unit of change.
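As an illustration (not Neti’s own output), here is the kind of refactor a nesting-depth limit pushes toward: early returns instead of stacked conditionals.

```rust
// A deeply nested version a max_nesting_depth rule would flag:
//   if a { if b { if c { ... } } }
// Flattened with early returns, every path exits at depth one.
fn classify(n: i32) -> &'static str {
    if n < 0 {
        return "negative";
    }
    if n == 0 {
        return "zero";
    }
    "positive"
}

fn main() {
    println!("{}", classify(-7));
    println!("{}", classify(0));
    println!("{}", classify(3));
}
```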

Pattern detection

Neti parses real syntax trees and flags concrete anti-patterns such as:

  • lock held across await
  • SQL built with string formatting
  • dynamic shell execution
  • hardcoded secret-like literals
  • DB calls inside loops
  • unchecked indexing
  • global mutable state
  • unsafe blocks without justification

Every finding is meant to be specific and actionable.
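To illustrate the lock-held-across-await finding from the example output earlier, the usual fix is to scope the guard so it drops before the await. A minimal sketch, shown synchronously with a comment marking where the `.await` would sit:

```rust
use std::sync::Mutex;

fn main() {
    let shared = Mutex::new(vec![1, 2, 3]);

    // Anti-pattern (in async code): taking `shared.lock()` and then
    // hitting an `.await` while the guard is still alive, which can
    // stall or deadlock the executor.
    //
    // Fix: copy out what you need in a narrow scope so the guard
    // drops first.
    let len = {
        let guard = shared.lock().unwrap();
        guard.len()
    }; // guard dropped here, before any `.await` point would run

    println!("len = {len}");
}
```

The alternative remediation from the report text, an async-aware lock, keeps the guard across the await safely instead of dropping it early.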

Locality and architecture analysis

Neti also analyzes dependency shape and module topology, including things like:

  • dependency cycles
  • encapsulation breaches
  • sideways dependencies
  • upward dependencies
  • god modules
  • accidental hubs

This is the part aimed at the failure mode normal tests miss: code that still runs, but is quietly getting worse to live with.

Your own verification commands

neti check also runs whatever you configure in [commands].

That can include:

  • cargo check
  • cargo clippy
  • cargo test
  • pytest
  • jest
  • go test
  • ruff
  • biome
  • or anything else appropriate for the repo

Neti does not replace the best language-native tools. It orchestrates them into one gate and combines them with its own structural analysis.
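For instance, a hypothetical [commands] block for a Python repository might look like this (the tool choices are illustrative, not defaults Neti ships):

```toml
[commands]
check = [
  "ruff check .",
  "pytest -q"
]
fix = ["ruff format ."]
```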

Installation

Install with Cargo:

cargo install neti

Or build from source:

git clone https://github.com/junovhs/neti
cd neti
cargo install --path .

For development use:

cargo run -- check

Configuration

Neti is configured with neti.toml in the project root.

Example:

[rules]
max_file_tokens = 2000
max_cognitive_complexity = 25
max_nesting_depth = 3
max_function_args = 5
max_function_words = 10
max_lcom4 = 1
min_ahf = 60.0
max_cbo = 9
max_sfout = 7

[rules.safety]
require_safety_comment = true
ban_unsafe = false

[rules.locality]
max_distance = 4
mode = "warn"

[commands]
check = [
  "cargo clippy --all-targets --no-deps -- -D warnings",
  "cargo test"
]
fix = ["cargo fmt"]

Neti can also auto-detect project shape and generate sensible defaults when no config exists.
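Related to require_safety_comment above: that rule expects unsafe blocks to carry a written justification. A sketch of the shape it encourages (the exact comment convention Neti looks for is an assumption here):

```rust
fn main() {
    let xs = [10u8, 20, 30];
    // SAFETY: index 1 is always in bounds for this 3-element array.
    let v = unsafe { *xs.get_unchecked(1) };
    println!("v = {v}");
}
```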

Example command configuration

Rust example:

[commands]
check = [
  "cargo fmt",
  "cargo check",
  "cargo clippy --all-targets --no-deps -- -D warnings",
  "cargo test"
]

The important thing is consistency: define the gate once, then run the same gate every time.

Language support

Neti is strongest on Rust today.

Cross-language support exists and is expanding through a shared multi-language semantic engine. The direction is:

  • universal where structural
  • language-specific where semantic
  • honest about coverage
  • strict about what green means

As language support expands, Neti’s reporting is intended to stay explicit about what is fully governed, partially governed, or not yet covered.

What Neti is not

Neti is not a replacement for:

  • language-native linters
  • test frameworks
  • formatters
  • mutation testing
  • codebase orientation tools

It works alongside them.

Neti answers: is this code still safe to build on?

It does not try to be every other tool in the workflow.

Why this exists

AI is extremely good at producing code that looks locally fine.

It is much worse at preserving architectural discipline over time.

That means a codebase can pass tests while quietly becoming:

  • more tangled
  • more coupled
  • more duplicated
  • more load-bearing in the wrong places
  • harder to review
  • harder to change safely

Human teams used to catch more of this through friction: slower implementation, tighter review loops, stronger shared context, and simple limits on how much could change at once.

AI removes that friction.

Neti puts some of it back in mechanical form.

You define what good structure looks like. Neti enforces it. The gate does not care whether the code was written by hand or generated in seconds. If it does not meet the standard, it does not pass.

Status

Neti is under active development.

The direction is stable:

  • AI should be able to do the coding
  • Neti should make the result safe to trust
  • green should mean something real

License

MIT OR Apache-2.0


Neti — a SEMMAP Labs project
