# LII Risk Modeling System

This repository hosts the Linguistic Incendiary Index (LII) Risk Modeling System — a diagnostic tool designed to assess the social and ethical impact of language used on digital platforms.

## Goals

- Identify incendiary linguistic patterns in online discourse.
- Provide risk visualization that does not rely on censorship.
- Enhance content moderation with structural awareness, not control.

## Components

- LII Score Calculator
- Narrative Dynamics Heatmap
- Integration API for platforms and researchers
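To illustrate the kind of interface an LII Score Calculator might expose, here is a minimal sketch. The scoring method, the `INCENDIARY_WEIGHTS` lexicon, and the `lii_score` function are all hypothetical placeholders invented for this example — the repository does not specify how the actual index is computed.

```python
# Hypothetical sketch of an LII-style score: the weight mass of flagged
# tokens divided by total token count, clamped to [0, 1].
# The lexicon and weights below are illustrative only.
import re

# Placeholder lexicon: term -> incendiary weight in [0, 1] (assumption).
INCENDIARY_WEIGHTS = {
    "destroy": 0.8,
    "enemy": 0.7,
    "traitor": 0.9,
    "disagree": 0.1,
}

def lii_score(text: str) -> float:
    """Return a 0..1 risk score for a piece of text (illustrative metric)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    weight_sum = sum(INCENDIARY_WEIGHTS.get(t, 0.0) for t in tokens)
    return min(1.0, weight_sum / len(tokens))
```

A real implementation would likely use richer linguistic features than a keyword lexicon, but a normalized per-text score in a fixed range is the natural shape for feeding a heatmap or an integration API.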

## License

MIT License. Contributions and ethical peer review are welcome.

*Part of the Lori Framework.*
