A Python application for analyzing system performance metrics using the Model Context Protocol (MCP). This project leverages LangChain and the MCP framework to provide AI-powered analysis of CPU utilization and memory performance data.
This project demonstrates how to use the Model Context Protocol (MCP) to create tools for system performance analysis that can be accessed by AI models. It consists of:
- A server component that exposes system performance analysis tools through MCP
- A client component that uses LangChain and an LLM (GPT-4o) to analyze the system performance data
The application analyzes system performance data stored in JSON format to identify:
- Processes with high CPU contention
- Processes with high memory usage
- Python 3.8+
- OpenAI API key for access to GPT-4o
- Required Python packages (listed in requirements.txt):
  - mcp
  - langchain-openai
  - langchain-mcp-adapters
  - langgraph
- Clone this repository
- Install the required dependencies:
pip install mcp langchain-openai langchain-mcp-adapters langgraph
- Set your OpenAI API key as an environment variable:
$env:OPENAI_API_KEY = "your-api-key"
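The command above uses PowerShell syntax; on Linux or macOS the equivalent is:

```shell
# Bash/zsh equivalent of the PowerShell command above
export OPENAI_API_KEY="your-api-key"
```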
Alternatively, you can modify the client.py file to include your API key directly:
model = ChatOpenAI(model="gpt-4o", api_key="your_api_key_here")

MCP/json/
├── server.py # MCP server with performance analysis tools
├── client.py # Client app that connects to server and uses LLM
├── sys_perf.json # Sample system performance data
└── sys_perf_2.json # Additional system performance data
The system performance data is stored in JSON files with a specific structure:
- buckets: Time-segmented performance data
- Each bucket contains:
  - LowLevelMetric: Contains detailed metrics for CPU and memory usage
    - CPU metrics include process-level data on CPU time and ready time
    - Memory metrics include data on working set size and commit size for processes
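As a rough illustration, data with the structure described above could be navigated like this. The exact nesting and field names in the snippet are assumptions based on this description and on the metric names used in the analysis queries, not a verified schema for sys_perf.json:

```python
import json

# Illustrative sample matching the described layout; the real files
# (sys_perf.json, sys_perf_2.json) may nest these fields differently.
sample = json.loads("""
{
  "buckets": [
    {
      "LowLevelMetric": {
        "CPU": [
          {"Process": "app.exe", "CPUTimeInMs": 1200, "ReadyTimeMsByPriority": 400}
        ],
        "Memory": [
          {"Process": "app.exe", "PeakWorkingSetSizeMiB": 512, "CommitSizeMiB": 640}
        ]
      }
    }
  ]
}
""")

# Walk each time bucket and pull out the process-level CPU metrics.
for bucket in sample["buckets"]:
    for proc in bucket["LowLevelMetric"]["CPU"]:
        ratio = proc["ReadyTimeMsByPriority"] / proc["CPUTimeInMs"]
        print(proc["Process"], f"ready/cpu = {ratio:.2f}")
```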
- Start the MCP server in one terminal:
cd path/to/MCP/
python server.py

- Run the client application in a separate terminal:
cd path/to/MCP/
python client.py

The client will:
- Connect to the MCP server
- Load the available MCP tools for performance analysis
- Create a LangChain ReAct agent with GPT-4o
- Execute CPU contention and memory pressure analysis using natural language queries
- Display the analysis results
The client sends the following queries to the agent:
- CPU Analysis: "Find the top 5 processes with high CPU contention where ReadyTimeMsByPriority is at least 25% of CPUTimeInMs"
- Memory Analysis: "Identify the top 5 processes with high PeakWorkingSetSizeMiB for memory pressure"
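The threshold in the CPU query can be expressed directly in Python. This sketch, using made-up process data and the field names from the query above, shows the filter and ranking the agent is asked to perform:

```python
# Hypothetical per-process CPU metrics; field names taken from the query above.
processes = [
    {"name": "chrome.exe",  "CPUTimeInMs": 9000, "ReadyTimeMsByPriority": 3600},
    {"name": "python.exe",  "CPUTimeInMs": 4000, "ReadyTimeMsByPriority": 500},
    {"name": "svchost.exe", "CPUTimeInMs": 2000, "ReadyTimeMsByPriority": 900},
]

# Keep processes whose ready time is at least 25% of their CPU time,
# then rank by that contention ratio and take the top 5.
contended = [
    p for p in processes
    if p["ReadyTimeMsByPriority"] >= 0.25 * p["CPUTimeInMs"]
]
top5 = sorted(
    contended,
    key=lambda p: p["ReadyTimeMsByPriority"] / p["CPUTimeInMs"],
    reverse=True,
)[:5]
print([p["name"] for p in top5])  # svchost.exe (0.45) ranks above chrome.exe (0.40)
```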