Get up and running in three steps.
```bash
curl -fsSL https://github.com/rickcrawford/tokenomics/releases/latest/download/install.sh | bash
```

Or build from source:

```bash
git clone https://github.com/rickcrawford/tokenomics.git
cd tokenomics && make build
sudo cp bin/tokenomics /usr/local/bin/
```

Verify:

```bash
tokenomics --help
```
Two environment variables are required:
| Variable | Purpose |
|---|---|
| `OPENAI_PAT` | The real provider API key you want to wrap |
| `TOKENOMICS_HASH_KEY` | Secret used to hash wrapper tokens (pick any random string) |
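The hash key only needs to be unpredictable. One common way to generate one (using `openssl`, which is not part of this tool):

```shell
# Generate a 32-byte random hex string suitable for TOKENOMICS_HASH_KEY.
openssl rand -hex 32
```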
```bash
export OPENAI_PAT="<your-openai-api-key>"
export TOKENOMICS_HASH_KEY="<any-random-secret-string>"
```

Create a wrapper token that wraps your OpenAI key:
```bash
tokenomics token create --policy '{"base_key_env":"OPENAI_PAT"}'
```

Copy the returned token (`tkn_...`) and run a command through the proxy:
```bash
export TOKENOMICS_KEY="tkn_<paste-your-token-here>"
tokenomics run python my_script.py
```

The `run` command handles everything: it starts the proxy, configures environment variables, runs your command, and cleans up when done. No separate `serve` step is needed.
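That start/configure/run/clean-up lifecycle can be sketched in plain shell. This is an illustration only, not the tool's internals; the variable name `TOKENOMICS_PROXY_URL` and the `sleep` stand-in for the proxy process are assumptions:

```shell
# Stand-in for the proxy process (the real tool starts its own server).
sleep 60 &
proxy_pid=$!

# Run the child command with proxy-pointing environment configured for it.
TOKENOMICS_PROXY_URL="http://localhost:8080" sh -c 'echo "child sees: $TOKENOMICS_PROXY_URL"'

# Clean up the proxy once the child exits.
kill "$proxy_pid"
echo "proxy stopped"
```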
- Your real API key (`OPENAI_PAT`) stays on the server, never exposed to the command.
- The wrapper token (`tkn_...`) is what the command sees; it has no direct access to the real key.
- The proxy started on `http://localhost:8080`, forwarded the request to OpenAI, and shut down after the command finished.
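The isolation model above can be demonstrated with plain shell: the child process gets a scrubbed environment holding only the wrapper token. This is illustrative only (the token value is made up, and `env -i` is a stand-in for whatever scrubbing the tool actually does):

```shell
# The real key lives only in the parent (server-side) environment.
OPENAI_PAT="sk-real-secret"

# The child is launched with a clean environment containing only the wrapper token.
env -i TOKENOMICS_KEY="tkn_abc123" /bin/sh -c '
  echo "wrapper token: $TOKENOMICS_KEY"
  echo "real key seen: ${OPENAI_PAT:-<none>}"
'
```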
- Add budgets, rate limits, and content rules to your token: Policies
- Run multiple commands with a persistent proxy: Agent Integration
- Configure providers and settings: Configuration