# ProxyMe - AI Proxy Management Plugin for JetBrains Rider IDE

## Prerequisites

- JetBrains Rider IDE (2024.3 or later)
- Node.js (v18 or later)
- Java 17+ (for building from source)
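You can check the Node.js requirement from a terminal before installing. A small sketch; the helper name `check_node_major` is just for illustration:

```shell
# check_node_major: print the major version from a `node --version`
# string such as "v18.19.0".
check_node_major() {
  echo "$1" | sed 's/^v//' | cut -d. -f1
}

# Compare the installed version against the required minimum (18).
# If node is missing, fall back to "v0" so the check simply fails.
major=$(check_node_major "$(node --version 2>/dev/null || echo v0)")
if [ "$major" -ge 18 ]; then
  echo "Node.js OK (major version $major)"
else
  echo "Node.js v18+ required"
fi
```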
## Installation

1. **Download the plugin:**
   - Get the latest release `ProxyMe-2.1.0.zip` from the releases page

2. **Install in Rider:**
   - Open Rider IDE
   - Go to `File → Settings → Plugins`
   - Click the gear icon ⚙️ → `Install Plugin from Disk...`
   - Select the downloaded `ProxyMe-2.1.0.zip` file
   - Click `OK`
   - Click `Restart IDE` when prompted

3. **Verify installation:**
   - After restart, check the `Tools` menu
   - You should see `ProxyMe` listed

See BUILD.md for detailed build instructions.
## Configuration

1. **Open ProxyMe Settings:** `Tools → ProxyMe`

2. **Add your first AI model:**
   - Click the `Add Model` button
   - Fill in the details:
     - Model Name: `deepseek-chat` (or your preferred model)
     - Provider: `deepseek` (or `perplexity`, `anthropic`, etc.)
     - API Endpoint: your provider's API endpoint
     - API Key: your API key from the provider
     - Temperature: `0.3` (recommended for coding tasks)
     - Stream: ☑ Enabled (shows responses in real time)
   - Click `OK`

3. **Enable the model:**
   - Check the checkbox in the `Enabled` column
   - Click `Save`

4. **Start the proxy server:** `Tools → ProxyMe → Launch Proxy Server`

5. **Check the status indicator:**
   - 🟢 Green = running normally
   - 🟠 Orange = running with warnings
   - 🔴 Red = not running
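Enabled models are persisted to `~/.proxyme/proxy/models.json`. The exact schema is not documented here, so treat the following as an illustrative sketch only; every field name, and the endpoint URL, is an assumption:

```json
{
  "models": [
    {
      "id": "deepseek-chat",
      "provider": "deepseek",
      "endpoint": "https://api.deepseek.com/v1",
      "temperature": 0.3,
      "stream": true,
      "enabled": true
    }
  ]
}
```

Since the proxy reads this file only at startup, restart it after any edits.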
## Connect Rider AI Assistant

1. **Open Rider AI Assistant settings:** `Settings → Tools → AI Assistant → Models`

2. **Configure the OpenAI-compatible provider:**
   - Provider: `OpenAI API`
   - URL: `http://localhost:3000/v1`
   - API Key: (leave empty)
   - Click `Test Connection` to verify. **Important:** clicking `Test Connection` refreshes the models defined in the ProxyMe Models list.

3. **Assign models to features:**
   - Core features (chat, code generation): select your preferred model
   - Instant helpers (quick edits, suggestions): select a fast model
   - Completion model (inline completion): select a precise model
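Because the proxy exposes an OpenAI-compatible endpoint (the URL configured above), you can also smoke-test it from the command line. The sketch below only builds and prints the request payload; the commented `curl` line shows how you would send it to a running proxy, using the example model name from earlier:

```shell
# OpenAI-style chat-completions payload; "deepseek-chat" is the
# example model configured in the steps above.
payload='{"model":"deepseek-chat","messages":[{"role":"user","content":"Hello, can you respond?"}]}'

# With the proxy running on port 3000, send it like this:
# curl -s http://localhost:3000/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$payload"

echo "$payload"
```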
## Test the Setup

1. **Open AI Assistant:** `Tools → AI Assistant`
2. **Select your model** from the dropdown
3. **Send a test message:** `Hello, can you respond?`
4. **Verify:**
   - ✅ Model responds correctly
   - ✅ Response streams in real time (if enabled)
   - ✅ No error messages
## Configuration Files

The plugin stores configuration in your home directory:

```
~/.proxyme/
├── proxy/
│   ├── .env          # API keys (never committed to git)
│   ├── models.json   # Your enabled models
│   └── proxy.js      # Proxy server code
├── logs/
│   └── proxyme.log   # Log files
└── templates/
    ├── presets/      # Built-in templates
    └── user/         # Your custom templates
```

Check that everything is working:

```bash
# View recent logs
tail -f ~/.proxyme/logs/proxyme.log

# Check loaded models
cat ~/.proxyme/proxy/models.json | jq '.models[].id'

# Test the proxy endpoint
curl http://localhost:3000/v1/models
```

## Troubleshooting

**Check Node.js installation:**

```bash
node --version
# Should show v18 or later
```

**Check if port 3000 is in use:**

```bash
lsof -i :3000
# Kill any conflicting process
```

**Check proxy logs:**

```bash
tail -50 ~/.proxyme/logs/proxyme.log
```

**Solution: restart the proxy after making changes:**

`Tools → ProxyMe → Restart Proxy Server`

The proxy loads models only on startup. After adding or enabling models, always restart.

**Check your `.env` file:**

```bash
cat ~/.proxyme/proxy/.env
```

Make sure your API keys are correctly formatted:

```
DEEPSEEK_API_KEY=sk-...
PERPLEXITY_API_KEY=pplx-...
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
```
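The "kill any conflicting process" step for a busy port 3000 can be scripted. A sketch; the `free_port` helper is illustrative and assumes `lsof` is on your PATH (macOS/Linux):

```shell
# free_port: kill whatever is listening on the given TCP port,
# or report that the port is already free.
free_port() {
  pids=$(lsof -ti :"$1" 2>/dev/null || true)
  if [ -n "$pids" ]; then
    # pids is an intentionally unquoted space-separated list
    kill $pids && echo "killed: $pids"
  else
    echo "port $1 is free"
  fi
}

free_port 3000
```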
**Temperature changes not applying?** Temperature settings take effect only after a proxy restart. After changing the temperature:

1. Save settings in ProxyMe
2. Restart the proxy
3. Test again in AI Assistant
## Next Steps

- Read the Quick Start Guide
- Check the Troubleshooting Guide
- Review Recommended Settings
- Join the community and contribute!

## Support

- 🐛 Bug Reports: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📖 Documentation: Full Docs

Installation complete! 🎉 Start using AI-powered coding with Rider!