This repository contains code for experiments exploring how large language models (LLMs) perform as cognitive tutors for language learning. All experiments are connected to the INTERACT-LLM project. See also the Research section.
Note: The code is currently only intended for internal use and is not production-ready.
🔗 Read about the INTERACT-LLM project (in Danish).
The `src` folder contains:

| Folder | Description |
|---|---|
| `interact_llm` | Inference engine and terminal-based chatbot |
| `scripts` | Experimental setups using the backend, linked to specific publications, including `alignment_drift`, created for Almasi & Kristensen-McLachlan (2025) |
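Concretely, the layout of the folders mentioned above looks roughly like this (only the folders discussed here are shown):

```
src/
├── interact_llm/          # inference engine and terminal-based chatbot
└── scripts/
    └── alignment_drift/   # Almasi & Kristensen-McLachlan (2025)
```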
The Interact-LLM repository uses version tags linked to specific papers. Each paper has its own folder in `/src/scripts`, and the version tag includes the corresponding paper name:

| Paper | Interact-LLM Version Suffix | `scripts` Folder (Inference Code) | Paper Repository (Analysis Code) |
|---|---|---|---|
| Almasi & Kristensen-McLachlan (2025) | `vX.X.X-alignment-drift` | `/src/scripts/alignment_drift` | `INTERACT-LLM/alignment-drift-llms` |
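To work from the exact code state used for a paper, the corresponding tag can be checked out with git. Note that `vX.X.X` is the placeholder used in the table above, not an actual version number:

```bash
# Fetch all tags, then check out the release tied to the alignment-drift paper
git fetch --tags
git checkout vX.X.X-alignment-drift
```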
The code was run with Python 3.12.3 on both macOS (15.3.1) and Ubuntu (24.04). The project also requires:
| Tool | Installation |
|---|---|
| make | Installed via Homebrew |
| uv | Installed through this project's makefile (see Usage) |
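On macOS, for example, `make` can be installed with Homebrew:

```bash
brew install make
```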
To install uv on macOS/Linux and set up a virtual environment with the required Python dependencies, run in the terminal:
```bash
make setup
```

If you are interested in running gated models such as Llama-3.1-8B-Instruct, you will need to pass a Hugging Face token with read access.
Create a file called `hf_token.txt` containing the token and place it in the `tokens` folder. The file will not be pushed to GitHub.
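For example, from the repository root (this assumes the `tokens` folder sits at the top level; replace the placeholder with your own token):

```bash
# Store a Hugging Face read-access token where the project expects it
mkdir -p tokens
echo "hf_your_token_here" > tokens/hf_token.txt
```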
To interact with the chatbot (currently prompted to act as a Spanish language tutor), run in the terminal:

```bash
uv run python -m interact_llm
```

For the paper-specific experimental setups, refer to the individual READMEs in `scripts`, e.g., `alignment_drift`.