Commit bd2169e

Copilot and DimaBir committed
Add onnxscript dependency and update README with Streamlit instructions
Co-authored-by: DimaBir <28827735+DimaBir@users.noreply.github.com>
1 parent cc169ba

3 files changed: 39 additions & 2 deletions


README.md

Lines changed: 37 additions & 2 deletions
@@ -64,22 +64,57 @@ Please look at the [Steps to Run](#steps-to-run) section for Docker instructions
 ```
 
 ### Run the Script inside the Container
+
+#### Option 1: Streamlit Web Interface (Recommended) ![New](https://img.shields.io/badge/-New-842E5B)
+Run the interactive web interface:
+```sh
+streamlit run streamlit_app.py
+```
+
+The Streamlit interface provides:
+- 🖼️ **Image Upload/Selection**: Choose from sample images or upload your own
+- ⚙️ **Easy Configuration**: Select the inference mode and adjust top-K predictions via the UI
+- 📊 **Real-time Results**: View predictions and benchmark metrics interactively
+- 📈 **Visual Feedback**: See benchmark results in an organized table format
+
+#### Option 2: Command Line Interface
 ```sh
 python main.py [--mode all]
 ```
 
-### Arguments
+### Arguments (CLI)
 - `--image_path`: (Optional) Specifies the path to the image you want to predict.
 - `--topk`: (Optional) Specifies the number of top predictions to show. Defaults to 5 if not provided.
 - `--mode`: (Optional) Specifies the model's mode for exporting and running. Choices are: `onnx`, `ov`, `cpu`, `cuda`, `tensorrt`, and `all`. If not provided, it defaults to `all`.
 
-### Example Command
+### Example Command (CLI)
 ```sh
 python main.py --topk 3 --mode=all --image_path="./inference/cat3.jpg"
 ```
 
 This command will run predictions on the chosen image (`./inference/cat3.jpg`), show the top 3 predictions, and run all available models. Note: the plot is created only for `--mode=all`; results are plotted and saved to `./inference/plot.png`.
 
+## Streamlit Interface ![New](https://img.shields.io/badge/-New-842E5B)
+The project now includes a user-friendly Streamlit web interface for running benchmarks interactively.
+
+### Interface Preview
+<img src="https://github.com/user-attachments/assets/eaa57e73-97d9-4319-b120-f5a3324f21b7" width="100%">
+
+### Features
+- **Interactive Image Selection**: Choose from sample images or upload your own
+- **Flexible Configuration**: Select inference modes (ONNX, OpenVINO, PyTorch CPU/CUDA, TensorRT)
+- **Real-time Benchmarking**: Run benchmarks and see results instantly
+- **Visual Results**: View predictions and performance metrics in an organized format
+- **System Information**: Check available hardware (CPU/GPU) and capabilities
+
+### Benchmark Results Display
+<img src="https://github.com/user-attachments/assets/82314f1e-ac3c-495b-8b86-fc9dc6379aa4" width="100%">
+
+The interface displays:
+- Top-K predictions with confidence scores
+- Benchmark metrics (average inference time, throughput)
+- Clear visual organization of results
+
 ## Results
 ### Example Input
 Here is an example of the input image to run predictions and benchmarks on:
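For context, the CLI flags documented in the README hunk above could be parsed with `argparse` roughly as follows. This is a minimal sketch, not the repository's actual `main.py`; the documented defaults for `--topk` and `--mode` are used, while the real default for `--image_path` is not shown in this diff:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Sketch of the CLI documented in the README; the real main.py may differ.
    parser = argparse.ArgumentParser(description="Run inference benchmarks")
    parser.add_argument("--image_path", default=None,
                        help="(Optional) Path to the image to predict")
    parser.add_argument("--topk", type=int, default=5,
                        help="(Optional) Number of top predictions to show")
    parser.add_argument("--mode", default="all",
                        choices=["onnx", "ov", "cpu", "cuda", "tensorrt", "all"],
                        help="(Optional) Model mode for exporting and running")
    return parser


# Mirrors the example command from the README.
args = build_parser().parse_args(
    ["--topk", "3", "--mode=all", "--image_path=./inference/cat3.jpg"]
)
```

Passing an unrecognized `--mode` value would make `argparse` exit with an error, matching the closed set of choices listed above.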

pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ dependencies = [
 "numpy>=1.26.0",
 "onnx>=1.16.0",
 "onnxruntime>=1.18.0",
+"onnxscript>=0.5.0",
 "openvino>=2024.5.0",
 "seaborn>=0.13.0",
 "matplotlib>=3.8.0",

requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -5,6 +5,7 @@ Pillow>=10.0.0
 numpy>=1.26.0
 onnx>=1.16.0
 onnxruntime>=1.18.0
+onnxscript>=0.5.0
 openvino>=2024.5.0
 seaborn>=0.13.0
 matplotlib>=3.8.0
