A powerful, configuration-driven tool that transforms JSON data into interactive dashboards and visualizations. Upload your JSON files, define extraction rules, and instantly create queryable databases with beautiful UI components - all without writing code.
TroubleSight is a flexible ETL (Extract, Transform, Load) and visualization platform that:
- Extracts structured data from JSON files using configurable YAML rules
- Transforms the data into SQLite database tables
- Loads and displays the data through customizable dashboards
- Visualizes results using various display formats (charts, metrics, tables)
Perfect for analyzing API responses, log files, configuration dumps, or any JSON-structured data.
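The core flow — extract fields from JSON records, load them into SQLite, then query for display — can be sketched in a few lines. This is a simplified stand-in for the real `ConfigDrivenETL` engine: the rule dictionary below is illustrative only, not TroubleSight's actual YAML schema, and it uses plain key lookup rather than full JSONPath.

```python
import json
import sqlite3

# Illustrative extraction rule: which JSON fields become which columns.
# (Hypothetical format; TroubleSight's real rules are YAML with JSONPath.)
RULE = {
    "table": "api_metrics",
    "root": "metrics",  # list of records to iterate over
    "fields": {"timestamp": "TEXT", "requests_per_second": "INTEGER"},
}

doc = json.loads("""{
    "service": "api-gateway",
    "metrics": [{"timestamp": "2025-01-15T00:00:00Z",
                 "requests_per_second": 1250}]
}""")

# Transform: create a table whose schema mirrors the rule.
conn = sqlite3.connect(":memory:")
cols = ", ".join(f"{name} {typ}" for name, typ in RULE["fields"].items())
conn.execute(f"CREATE TABLE {RULE['table']} ({cols})")

# Extract: walk each record under the root and pull the named fields.
for record in doc[RULE["root"]]:
    values = [record.get(name) for name in RULE["fields"]]
    marks = ", ".join("?" for _ in values)
    conn.execute(f"INSERT INTO {RULE['table']} VALUES ({marks})", values)

# Load/visualize: the dashboard layer runs configured SQL like this.
rows = conn.execute(
    "SELECT timestamp, requests_per_second FROM api_metrics ORDER BY timestamp"
).fetchall()
print(rows)  # [('2025-01-15T00:00:00Z', 1250)]
```

The real engine generalizes this loop: file patterns select inputs, JSONPath expressions select records and fields, and the declared types drive the generated schema.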
- Batch Processing: Upload multiple JSON files via ZIP archives
- Configuration-Driven: Define extraction rules and queries using simple YAML files
- SQLite Backend: Automatic database creation and schema generation
- Multiple Visualizations: Tables, charts, metrics, and custom displays
- JSONPath Support: Powerful path expressions for complex data extraction
- Auto-Cleanup: Background process to manage temporary files
- Cross-Platform: Works on Windows, macOS, and Linux
- DataFrames: Interactive, sortable tables
- Metrics: Single value displays with labels
- Charts: Line, bar, and area charts
- One-Row Rotate: Transposed single-row displays
- HTML Tables: Static HTML-rendered tables
- Custom Displays: Extensible display system
- Python 3.8 or higher
- pip package manager
- Clone the repository

  ```bash
  git clone https://github.com/ThatMattCat/TroubleSight.git
  cd TroubleSight
  ```

- Create a virtual environment

  ```bash
  python -m venv venv

  # On Windows
  venv\Scripts\activate

  # On macOS/Linux
  source venv/bin/activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Run the application

  ```bash
  python main.py
  ```

The application will start and be available at http://localhost:8080
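To confirm the server is reachable, you can probe the URL from Python. This small stdlib helper is not part of TroubleSight — just a convenience sketch:

```python
import urllib.request
import urllib.error

def is_up(url, timeout=2.0):
    """Return True if `url` answers with any HTTP response."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # nothing listening / connection refused / timeout
```

Once the app is running, `is_up("http://localhost:8080")` should return `True`.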
Place your JSON files in a ZIP archive. Example JSON structure:
```json
{
  "service": "api-gateway",
  "metrics": [
    {
      "timestamp": "2025-01-15T00:00:00Z",
      "requests_per_second": 1250,
      "avg_latency_ms": 45
    }
  ]
}
```

Create YAML files in the `json2ui/extraction/` directory:
```yaml
# extraction/api-metrics.yml
tables:
  api_metrics:
    file_pattern: "metrics.json"
    root_path: "$.metrics[*]"
    parentFields:
      - name: service_name
        path: "$.service"
        type: TEXT
    fields:
      - name: timestamp
        path: "$.timestamp"
        type: TEXT
      - name: requests_per_second
        path: "$.requests_per_second"
        type: INTEGER
```

Define queries and visualizations in `json2ui/tabs/`:
```yaml
# tabs/1-Dashboard.yml
description: "API Performance Dashboard"
queries:
  Traffic Overview:
    description: "Requests per second over time"
    sql: |
      SELECT timestamp, requests_per_second
      FROM api_metrics
      ORDER BY timestamp
    display:
      type: line_chart
      x: timestamp
      y: requests_per_second
```

- Start the application
- Upload your ZIP file (or use the provided `example-json-files.zip` for testing)
- Click "Process JSON Files"
- Navigate through the generated tabs to view your data

Tip: The project includes example JSON files in `json2ui/examples/` to help you get started quickly!
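If your JSON files aren't already zipped, a few lines of stdlib Python will bundle them for upload. The `data/` directory and `upload.zip` names below are placeholders, not paths TroubleSight requires:

```python
import json
import zipfile
from pathlib import Path

def bundle_json(src_dir, archive_path):
    """Zip every .json file under src_dir into one archive for upload."""
    src = Path(src_dir)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*.json")):
            # Store paths relative to src_dir so the archive stays flat.
            zf.write(path, arcname=path.relative_to(src))

# Example: write a sample file, then bundle the directory.
Path("data").mkdir(exist_ok=True)
Path("data/metrics.json").write_text(json.dumps({"service": "api-gateway"}))
bundle_json("data", "upload.zip")
print(zipfile.ZipFile("upload.zip").namelist())  # ['metrics.json']
```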
```
TroubleSight/
├── main.py                      # Application manager
├── requirements.txt             # Python dependencies
├── json2ui/
│   ├── J2UI.py                  # Main Streamlit application
│   ├── extraction/              # JSON extraction configurations
│   │   ├── 01-servers.yaml
│   │   ├── 02-alerts.yaml
│   │   └── ...
│   ├── tabs/                    # Query and visualization configs
│   │   ├── 1-Dashboard-Overview.yaml
│   │   ├── 2-Server-Infrastructure.yml
│   │   └── ...
│   ├── utils/
│   │   ├── ConfigDrivenETL.py   # ETL engine
│   │   └── FileCleaner.py       # Temporary file management
│   ├── examples/                # Sample JSON files
│   │   ├── example-json-files/
│   │   └── example-json-files.zip
│   ├── pages/                   # Additional Streamlit pages
│   └── static/                  # Static assets
└── tmp/                         # Temporary file storage (created at runtime)
```
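The job of the `FileCleaner` utility — pruning stale files from the temporary storage directory — can be sketched roughly as below. This is a simplified, synchronous version; the actual background process may use different thresholds and scheduling:

```python
import time
from pathlib import Path

def clean_old_files(directory, max_age_seconds):
    """Delete regular files in `directory` older than max_age_seconds.

    Returns the paths that were removed. Simplified sketch only; not
    the actual FileCleaner implementation.
    """
    removed = []
    now = time.time()
    root = Path(directory)
    if not root.is_dir():
        return removed
    for path in root.iterdir():
        if path.is_file() and now - path.stat().st_mtime > max_age_seconds:
            path.unlink()
            removed.append(path)
    return removed
```

For example, `clean_old_files("tmp", 3600)` would remove files in `tmp/` that have not been modified for over an hour, leaving newer uploads intact.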
Extraction configuration (`json2ui/extraction/`):

```yaml
tables:
  table_name:
    file_pattern: "*.json"        # Glob pattern for matching files
    root_path: "$"                # JSONPath to data root
    parentFields:                 # Fields from parent context
      - name: field_name
        path: "$.path.to.field"
        type: TEXT
    fields:                       # Fields to extract
      - name: column_name
        path: "$.field_path"
        type: TEXT|INTEGER|REAL
        default: null             # Optional default value
```

Tab configuration (`json2ui/tabs/`):

```yaml
description: "Tab description"
queries:
  Query Name:
    description: "Query description"
    sql: |
      SELECT * FROM table_name
    display:
      type: dataframe|metric|line_chart|bar_chart
      x: column_name              # For charts
      y: column_name              # For charts
    post_processing:              # Optional transformations
      - type: rename_columns
        mapping:
          old_name: new_name
```

- API Monitoring: Analyze API performance metrics and response data
- Log Analysis: Extract and visualize structured log data
- Configuration Management: Compare and track configuration changes
- Test Results: Aggregate and report on test execution data
- IoT Data: Process sensor readings and device telemetry
- Business Intelligence: Transform JSON exports into dashboards
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Streamlit for the web interface
- Uses JSONPath-NG for path expressions
- Powered by SQLite for data storage
Created by ThatMattCat
Note: This project includes sample data files in the examples directory to help you get started quickly. Run the examples to see the full capabilities of the system!