This repository hosts the Linguistic Incendiary Index (LII) Risk Modeling System, a diagnostic tool for assessing the social and ethical impact of language used on digital platforms.
Goals:

- Identify incendiary linguistic patterns in online discourse.
- Provide risk visualization that does not rely on censorship.
- Enhance content moderation with structural awareness, not control.
Features:

- LII Score Calculator
- Narrative Dynamics Heatmap
- Integration API for platforms and researchers
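As a rough illustration of what an LII-style scorer could look like, here is a minimal toy sketch in Python. The repository does not document the actual scoring method, so everything here is an assumption: the lexicon, the `lii_score` name, and the averaging approach are all hypothetical placeholders, not the project's real implementation.

```python
# Illustrative sketch only; not the project's actual scoring method.
# INCENDIARY_LEXICON and lii_score are hypothetical names for this example.

# A toy lexicon mapping flagged terms to risk weights in [0, 1].
INCENDIARY_LEXICON = {
    "destroy": 0.9,
    "traitor": 0.8,
    "invade": 0.6,
}

def lii_score(text: str) -> float:
    """Return a 0-1 risk score: mean lexicon weight across all tokens."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    total = sum(INCENDIARY_LEXICON.get(t, 0.0) for t in tokens)
    return total / len(tokens)
```

A real system would presumably use far richer signals (context, syntax, narrative dynamics) than simple keyword weights; this sketch only shows the shape of a score normalized to the 0-1 range.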
Released under the MIT License. Contributions and ethical peer review are welcome.
Part of the Lori Framework