LII Risk Modeling System

This repository hosts the Linguistic Incendiary Index (LII) Risk Modeling System, a diagnostic tool for assessing the social and ethical impact of language used on digital platforms.

Goals

  • Identify incendiary linguistic patterns in online discourse.
  • Provide risk visualization without resorting to censorship.
  • Enhance content moderation through structural awareness rather than control.

Components

  • LII Score Calculator
  • Narrative Dynamics Heatmap
  • Integration API for platforms and researchers
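The repository does not document how the LII Score Calculator works internally, so the following is only a hypothetical sketch of the general shape such a metric might take: a normalized fraction of tokens matching an incendiary lexicon. The lexicon, function name, and scoring rule here are all illustrative assumptions, not the project's actual method.

```python
# Hypothetical sketch only: the real LII Score Calculator's algorithm is not
# described in this README. This illustrates one naive lexicon-based approach.

INCENDIARY_TERMS = {"destroy", "enemy", "traitor"}  # illustrative lexicon, not the project's

def lii_score(text: str) -> float:
    """Return the fraction of tokens matching the incendiary lexicon (0.0-1.0)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in INCENDIARY_TERMS)
    return hits / len(tokens)

print(round(lii_score("They are the enemy and will destroy us"), 2))  # → 0.25
```

A real implementation would likely weigh context and narrative dynamics rather than raw term counts, but a bounded per-text score like this is what a heatmap or integration API could consume.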

License

MIT License. Contributions and ethical peer review welcome.

Part of the Lori Framework