A research codebase for a simulation-oriented study of multi-model LLM evaluation workflows in Library and Information Science (LIS).
This repository supports ongoing academic work. Public materials are intentionally concise during submission and review cycles.
pip install -r requirements.txt
python main.py

Optional robustness run:
python main.py --monte-carlo

Supplementary experiments:

python -m src.small_scale_experiment
python -m src.icc_comparison_experiment
python -c "from src.alternative_methods import run_alternative_methods_report; run_alternative_methods_report()"
python scripts/generate_manuscript_docx.py

Run artifacts are generated under results/ and are not tracked in git.
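Because results/ is regenerated by the commands above, it is typically excluded from version control. A minimal .gitignore entry to that effect might look like the following (a sketch; the repository's actual ignore rules are not shown in this document):

```gitignore
# Generated run artifacts (figures, tables, reports) are fully
# reproducible from the run commands above, so keep them out of git.
results/
```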
Implementation and documentation may be updated alongside peer-review revisions.
MIT License. See LICENSE.
Contributing, code-of-conduct, security, and template files are included to support public collaboration.