The gatekeeper. Code that does not pass Neti does not enter the codebase.
In Sumerian mythology, Neti was the guardian of the underworld gate: the one who decided what passed and what was turned back.
That is what this tool does.
Neti is a structural governance and verification engine for code written by AI. It sits between generated code and the codebase, runs the full quality gate, and decides whether the result is still safe to build on. If the code does not pass, it does not move forward.
The point is not just to catch broken code.
The point is to stop code that still compiles and still passes tests from quietly becoming harder to change, harder to review, and easier to break later.
```text
$ neti check

✖ error: File size is 2847 tokens (Limit: 2000)
  --> src/handlers/mod.rs:1
  = LAW OF ATOMICITY: Action required
    Split the file. Create submodules.

✖ error: MutexGuard held across `.await` point
  --> src/worker.rs:88
  = C03: Action required
    Drop the guard before the await, or use an async-aware lock.

✖ error: Boundary breach into internal module implementation
  --> src/cli/mod.rs:14
  = ENCAPSULATION_BREACH: Action required
    Route this dependency through the public module API.

FAILED — 3 violations. See neti-report.txt for full output.
```
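The C03 finding above has a standard fix shape. A minimal Rust sketch (the names `COUNTER`, `do_io`, and `bump` are hypothetical, not part of Neti): confine the lock to a synchronous scope so the guard is dropped before anything is awaited.

```rust
use std::sync::Mutex;

// Hypothetical shared state, used only to illustrate the pattern.
static COUNTER: Mutex<u64> = Mutex::new(0);

async fn do_io() {} // stand-in for real async work

// Flagged shape: the guard stays alive across the `.await`:
//
//     let mut guard = COUNTER.lock().unwrap();
//     *guard += 1;
//     do_io().await; // MutexGuard held across an await point
//
// Fixed shape: confine the lock to a sync helper so the guard
// is dropped before anything is awaited.
fn bump() -> u64 {
    let mut guard = COUNTER.lock().unwrap();
    *guard += 1;
    *guard
} // guard dropped when this function returns

async fn increment_then_io() -> u64 {
    let value = bump(); // lock taken and released here
    do_io().await;      // nothing is held across the await
    value
}

fn main() {
    // Poll `increment_then_io()` on your async runtime in real code.
    let _future = increment_then_io();
    println!("lock scope ends before the await point");
}
```

The alternative the error message mentions, an async-aware lock (e.g. one whose guard is safe to hold across awaits), is the right choice when the critical section genuinely must span the await.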
Neti exists so AI can do the coding without letting the codebase rot.
When Neti says green, it should mean:
- the project built successfully
- the configured verification commands passed
- Neti did not detect meaningful structural or governance problems
- the code is still safe enough to keep building on
Green does not mean perfect.
Green means worth continuing from.
Neti is built to catch the kinds of failures AI tends to create over time:
- files that grow too large
- functions that grow too complex
- responsibilities getting crammed into one place
- dependencies reaching across boundaries the wrong way
- circular dependencies
- unsafe shortcuts
- concurrency hazards
- security footguns
- code that still works but is getting structurally worse
It combines:
- static analysis
- structural metrics
- dependency and locality analysis
- language-specific pattern detection
- your own verification commands
One command. One report. One green or red answer.
Neti is the verification step in a simple loop:
- orient to the codebase
- make a change
- run `neti check`
- fix whatever failed
- continue only when green
That loop can be used however you work:
- directly from the terminal
- inside a larger automated workflow
- in CI as a required gate
The contract stays the same: if Neti says no, the change is not done.
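For the CI case, the contract is enforced by exit code: `neti check` exits non-zero when the gate fails, so any CI system can treat it as a required check. A hypothetical GitHub Actions job (the job name and install step are illustrative, not prescribed by Neti):

```yaml
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install neti
      - run: neti check   # non-zero exit fails the job, blocking merge
```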
`neti check` is the canonical verification command.
`neti check` runs Neti’s analysis plus whatever verification commands you configured, writes the full result to `neti-report.txt`, and exits non-zero when the gate fails.
You can also run Neti’s own analysis directly with `neti scan`.

Use `neti check` when the question is “is this ready to move forward?”
Every `neti check` writes a full report to `neti-report.txt`.
That file is the contract.
It is designed to be:
- complete
- untruncated
- ANSI-free
- prescriptive
- readable by both humans and tools
The terminal shows the verdict. The report shows the full reasoning.
A good Neti failure should not just say that something is wrong. It should say:
- what failed
- where it failed
- why it matters
- what should be changed next
Neti currently scans across several layers. The first is a set of configurable hard limits for things like:
- file token count
- cognitive complexity
- nesting depth
- function arity
- naming length
- cohesion and coupling metrics
These exist to stop code from growing past the point where it remains a safe unit of change.
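As an illustration of why the limits exist: a `max_nesting_depth = 3` rule would flag the first function below and accept the second. The names and discount logic are invented for the example; the flattened version keeps the same behavior at a single level of nesting.

```rust
// Depth 4: each `if` adds a nesting level.
fn discount_nested(member: bool, active: bool, eligible: bool, holiday: bool) -> u32 {
    if member {
        if active {
            if eligible {
                if holiday {
                    return 20;
                }
            }
        }
    }
    0
}

// Same logic flattened with a guard clause: depth 1.
fn discount_flat(member: bool, active: bool, eligible: bool, holiday: bool) -> u32 {
    if !(member && active && eligible && holiday) {
        return 0;
    }
    20
}

fn main() {
    // Spot-check that the refactor preserves behavior.
    for &(m, a, e, h) in &[(true, true, true, true), (true, false, true, true)] {
        assert_eq!(discount_nested(m, a, e, h), discount_flat(m, a, e, h));
    }
    println!("behavior preserved at lower nesting depth");
}
```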
Neti parses real syntax trees and flags concrete anti-patterns such as:
- lock held across `await`
- SQL built with string formatting
- dynamic shell execution
- hardcoded secret-like literals
- DB calls inside loops
- unchecked indexing
- global mutable state
- unsafe blocks without justification
Every finding is meant to be specific and actionable.
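For instance, the SQL-via-string-formatting finding targets the shape sketched below. The query builders are hypothetical, and the `$1` placeholder syntax varies by database driver; the point is that user input should be bound as a parameter, never interpolated into the SQL text.

```rust
// Flagged shape: SQL assembled with string formatting, so user
// input becomes part of the SQL text itself.
fn query_formatted(name: &str) -> String {
    format!("SELECT id FROM users WHERE name = '{}'", name)
}

// Preferred shape: static SQL plus separately bound parameters.
fn query_parameterized(name: &str) -> (&'static str, Vec<String>) {
    ("SELECT id FROM users WHERE name = $1", vec![name.to_string()])
}

fn main() {
    let input = "x' OR '1'='1";
    // The formatted version lets the injected clause into the SQL:
    let q = query_formatted(input);
    assert!(q.contains("OR '1'='1"));
    // The parameterized version keeps it inert as a bound value:
    let (sql, params) = query_parameterized(input);
    assert!(!sql.contains(input));
    assert_eq!(params[0], input);
    println!("parameters stay out of the SQL text");
}
```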
Neti also analyzes dependency shape and module topology, including things like:
- dependency cycles
- encapsulation breaches
- sideways dependencies
- upward dependencies
- god modules
- accidental hubs
This is the part aimed at the failure mode normal tests miss: code that still runs, but is quietly getting worse to live with.
`neti check` also runs whatever you configure in `[commands]`.

That can include:

- `cargo check`
- `cargo clippy`
- `cargo test`
- `pytest`
- `jest`
- `go test`
- `ruff`
- `biome`
- or anything else appropriate for the repo
Neti does not replace the best language-native tools. It orchestrates them into one gate and combines them with its own structural analysis.
Install with Cargo:

```sh
cargo install neti
```

Or build from source:

```sh
git clone https://github.com/junovhs/neti
cd neti
cargo install --path .
```

For development use:

```sh
cargo run -- check
```

Neti is configured with `neti.toml` in the project root.
Example:

```toml
[rules]
max_file_tokens = 2000
max_cognitive_complexity = 25
max_nesting_depth = 3
max_function_args = 5
max_function_words = 10
max_lcom4 = 1
min_ahf = 60.0
max_cbo = 9
max_sfout = 7

[rules.safety]
require_safety_comment = true
ban_unsafe = false

[rules.locality]
max_distance = 4
mode = "warn"

[commands]
check = [
  "cargo clippy --all-targets --no-deps -- -D warnings",
  "cargo test"
]
fix = ["cargo fmt"]
```

Neti can also auto-detect project shape and generate sensible defaults when no config exists.
Rust example:

```toml
[commands]
check = [
  "cargo fmt",
  "cargo check",
  "cargo clippy --all-targets --no-deps -- -D warnings",
  "cargo test"
]
```

The important thing is consistency: define the gate once, then run the same gate every time.
Neti is strongest on Rust today.
Cross-language support exists and is expanding through a shared multi-language semantic engine. The direction is:
- universal where structural
- language-specific where semantic
- honest about coverage
- strict about what green means
As language support expands, Neti’s reporting is intended to stay explicit about what is fully governed, partially governed, or not yet covered.
Neti is not a replacement for:
- language-native linters
- test frameworks
- formatters
- mutation testing
- codebase orientation tools
It works alongside them.
Neti answers: is this code still safe to build on?
It does not try to be every other tool in the workflow.
AI is extremely good at producing code that looks locally fine.
It is much worse at preserving architectural discipline over time.
That means a codebase can pass tests while quietly becoming:
- more tangled
- more coupled
- more duplicated
- more load-bearing in the wrong places
- harder to review
- harder to change safely
Human teams used to catch more of this through friction: slower implementation, tighter review loops, stronger shared context, and simple limits on how much could change at once.
AI removes that friction.
Neti puts some of it back in mechanical form.
You define what good structure looks like. Neti enforces it. The gate does not care whether the code was written by hand or generated in seconds. If it does not meet the standard, it does not pass.
Neti is under active development.
The direction is stable:
- AI should be able to do the coding
- Neti should make the result safe to trust
- green should mean something real
MIT OR Apache-2.0
Neti — a SEMMAP Labs project