A Model Context Protocol (MCP) server that relays prompts to OpenAI's API and returns responses. This server can be easily deployed using Docker.
## Features

- Single-tool MCP server for OpenAI API integration
- API key authentication for secure client access
- Supports multiple transport protocols (streamable-http, SSE, stdio)
- Configurable via environment variables
- Docker containerized for easy deployment
- Health check endpoint for monitoring
## Prerequisites

- Docker and Docker Compose
- OpenAI API key
## Quick Start (Docker Compose)

1. Set your OpenAI API key as an environment variable:

   ```bash
   export OPENAI_API_KEY="your-api-key-here"
   ```

2. Run the server:

   ```bash
   docker-compose up -d
   ```

3. The server will be available at `http://localhost:8000`.
## Building and Running with Docker

1. Build the image:

   ```bash
   docker build -t openai-mcp-server .
   ```

2. Run the container:

   ```bash
   docker run -d \
     -p 8000:8000 \
     -e OPENAI_API_KEY="your-api-key-here" \
     --name openai-mcp-server \
     openai-mcp-server
   ```
## Configuration

The server can be configured using environment variables:

| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key (required) | - |
| `OPENAI_BASE_URL` | Custom OpenAI API base URL | - |
| `OPENAI_MODEL` | Default OpenAI model to use | `gpt-4o-mini` |
| `OPENAI_TEMPERATURE` | Default temperature for responses | `0.2` |
| `MCP_API_KEY` | API key for MCP client authentication (optional) | Auto-generated |
| `HOST` | Server host address | `0.0.0.0` |
| `PORT` | Server port | `8000` |
| `MCP_TRANSPORT` | Transport protocol | `streamable-http` |
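Collecting these settings at startup can be sketched in Python, the language the server itself runs in. The variable names and defaults mirror the table above, but the `load_config` helper and its return shape are illustrative, not the server's actual code:

```python
import os

def load_config() -> dict:
    """Collect server settings from environment variables, applying the
    documented defaults when a variable is unset."""
    return {
        "openai_api_key": os.environ["OPENAI_API_KEY"],        # required; KeyError if missing
        "openai_base_url": os.environ.get("OPENAI_BASE_URL"),  # optional custom endpoint
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
        "temperature": float(os.environ.get("OPENAI_TEMPERATURE", "0.2")),
        "mcp_api_key": os.environ.get("MCP_API_KEY"),          # None -> auto-generated later
        "host": os.environ.get("HOST", "0.0.0.0"),
        "port": int(os.environ.get("PORT", "8000")),
        "transport": os.environ.get("MCP_TRANSPORT", "streamable-http"),
    }
```

Numeric values arrive as strings from the environment, so `PORT` and `OPENAI_TEMPERATURE` are converted explicitly.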
## Endpoints

- Streamable HTTP: `http://localhost:8000/mcp`
- Server-Sent Events (SSE): `http://localhost:8000/sse`
## Tool

Sends a prompt to OpenAI and returns the response.

Parameters:

- `prompt` (string, required): The text prompt to send to OpenAI
- `model` (string, optional): OpenAI model name (defaults to env `OPENAI_MODEL`)
- `temperature` (float, optional): Sampling temperature (defaults to env `OPENAI_TEMPERATURE`)
- `max_tokens` (integer, optional): Maximum output tokens
- `extra` (object, optional): Additional parameters to forward to the OpenAI API

Returns:

```json
{
  "text": "Generated response text",
  "model": "gpt-4o-mini",
  "finish_reason": "stop"
}
```
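On the wire, a call to this tool is a JSON-RPC 2.0 `tools/call` request, sent after the standard MCP initialize handshake. A sketch of how a client could build that request body; the tool name `ask_openai` is a placeholder, since the actual registered name depends on the server code:

```python
import json

# Hypothetical tool name -- the server's actual registered name may differ.
TOOL_NAME = "ask_openai"

def build_tool_call(prompt: str, request_id: int = 1, **options) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for the relay tool.

    `options` may carry the optional parameters documented above
    (model, temperature, max_tokens, extra)."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": TOOL_NAME,
            "arguments": {"prompt": prompt, **options},
        },
    }
    return json.dumps(payload)

body = build_tool_call("Say hello", model="gpt-4o-mini", temperature=0.0)
```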
## Running Locally (Without Docker)

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Set environment variables:

   ```bash
   export OPENAI_API_KEY="your-api-key-here"
   ```

3. Run the server:

   ```bash
   python server.py
   ```
## Transport Protocols

The server supports multiple transport protocols:

- `streamable-http` (default): Modern HTTP transport
- `sse`: Server-Sent Events transport
- `stdio`: Standard input/output transport

Change the transport by setting the `MCP_TRANSPORT` environment variable or by using the `--transport` command-line argument.
## Authentication

The server supports API key authentication for MCP clients connecting via HTTP transports (streamable-http and SSE). Authentication is automatically bypassed for the stdio transport, since those connections are local.

1. Auto-generated API key (default): If no `MCP_API_KEY` is set, the server generates a random API key on startup and displays it in the logs:

   ```
   Generated MCP API Key: abc123def456...
   Set MCP_API_KEY environment variable to use a custom key.
   ```

2. Custom API key: Set your own key using the environment variable:

   ```bash
   export MCP_API_KEY="your-custom-api-key-here"
   ```

MCP clients must include the API key in their requests using one of these methods:

- Authorization header (recommended): `Authorization: Bearer your-api-key-here`
- X-API-Key header: `X-API-Key: your-api-key-here`
## Client Configuration

When configuring your MCP client, include the authentication headers:

```json
{
  "mcpServers": {
    "openai-relay": {
      "command": "curl",
      "args": [
        "-X", "POST",
        "-H", "Authorization: Bearer your-api-key-here",
        "-H", "Content-Type: application/json",
        "http://localhost:8000/mcp"
      ]
    }
  }
}
```

## Security Notes

- API keys are hashed using SHA-256 for secure storage and comparison
- Authentication is enforced for all HTTP-based transports
- Local stdio connections bypass authentication for development convenience
- Use strong, unique API keys in production environments
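The SHA-256 hashing noted above pairs naturally with a constant-time comparison, so the plaintext key never needs to be stored or compared directly. A sketch under that assumption (the helpers are illustrative):

```python
import hashlib
import hmac

def hash_key(api_key: str) -> str:
    """Store only the SHA-256 hex digest of the configured key."""
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()

def verify_key(presented: str, stored_digest: str) -> bool:
    """Hash the presented key and compare digests in constant time,
    avoiding timing side channels on the comparison."""
    return hmac.compare_digest(hash_key(presented), stored_digest)
```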
## Health Check

The server includes a health check endpoint for monitoring (when using docker-compose):

- Endpoint: `http://localhost:8000/health`
- Interval: every 30 seconds
- Timeout: 10 seconds
## Logs and Shutdown

To view logs when running with Docker Compose:

```bash
docker-compose logs -f openai-mcp-server
```

To stop the server with Docker Compose:

```bash
docker-compose down
```

Or, when running the container directly:

```bash
docker stop openai-mcp-server
docker rm openai-mcp-server
```

## Troubleshooting

- Server won't start: Check that your `OPENAI_API_KEY` is set correctly
- Connection refused: Ensure port 8000 is not already in use by another service
- API errors: Verify that your OpenAI API key has sufficient credits and permissions
- Docker/cloud deployment issues:
  - The server binds to `0.0.0.0` by default for container compatibility
  - For local development, you can override this with `HOST=127.0.0.1`
  - Ensure your cloud platform allows the configured port (default: 8000)
- Authentication failures:
  - Check that the `Authorization: Bearer <key>` or `X-API-Key: <key>` header is included
  - Verify that the API key matches the one displayed in the server logs or set via `MCP_API_KEY`
## License

This project is provided as-is for educational and development purposes.