Commit d5249e3

GSTreamer/AI_ML: Add AI/ML pipeline validation test suite
Signed-off-by: Vij Patel <vijpatel@qti.qualcomm.com>
1 parent 41d7681 commit d5249e3

3 files changed: +510 −0 lines changed
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
metadata:
  name: gstreamer-ai-ml
  format: "Lava-Test Test Definition 1.0"
  description: "Validates AI/ML inference pipelines using GStreamer (gst-launch-1.0) with Qualcomm AI RT (QAIRT) runtime components for image classification and object detection use cases."
  os:
    - linux
  scope:
    - functional

params:
  TIMEOUT: 60 # Timeout in seconds for each pipeline (default in script is 60)
  GST_DEBUG_LEVEL: 2 # GStreamer debug level (default in script is 2)
  OUTPUT_VIDEO_PATH: "./video_out.mp4" # Path for encoded output video

run:
  steps:
    - REPO_PATH="$PWD"
    - cd "$REPO_PATH/Runner/suites/Multimedia/GSTreamer/AI_ML"
    - ./run.sh --timeout "${TIMEOUT}" --gstdebug "${GST_DEBUG_LEVEL}" --output-video "${OUTPUT_VIDEO_PATH}" || true
    - $REPO_PATH/Runner/utils/send-to-lava.sh AI_ML.res
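
The final step forwards `AI_ML.res` to LAVA. A hypothetical sketch of what a helper like `send-to-lava.sh` might do with that file follows; the `LAVA_SIGNAL_TESTCASE` line format is an assumption for illustration, not taken from this repository:

```shell
# Hypothetical sketch: turn each "<name> <RESULT>" line of the .res file
# into a LAVA test-case signal. The signal format is an assumption.
res_file=AI_ML.res
echo "AI_ML PASS" > "$res_file"   # sample content for illustration
while read -r name result; do
  # LAVA conventionally expects lowercase pass/fail results.
  lc=$(printf '%s' "$result" | tr '[:upper:]' '[:lower:]')
  echo "<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=$name RESULT=$lc>"
done < "$res_file"
```
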
Lines changed: 192 additions & 0 deletions
@@ -0,0 +1,192 @@
# AI_ML — Runner Test

This directory contains the **AI_ML** validation test for Qualcomm Linux Testkit runners.

It validates **AI/ML inference pipelines** using **GStreamer (`gst-launch-1.0`)** with Qualcomm AI RT (QAIRT) runtime components for computer vision use cases:

- **Image Classification**
  Streams a video (`video.mp4`) through an **Inception‑v3** TensorFlow‑Lite quantized model. The frames are decoded, and the top‑5 class scores are overlaid on each frame. The pipeline re‑encodes the video and writes the result to the path supplied via `--output-video`.

- **Object Detection**
  Streams the video (`video.mp4`) through a **YoloX** TensorFlow‑Lite quantized model. After decoding, each frame is sent to the model; the detected bounding boxes are composited onto the video, which is displayed full‑screen on the Wayland socket.

The script is designed to be **CI/LAVA-friendly**:

- Writes **PASS/FAIL** into `AI_ML.res`
- Always **exits 0** (even on FAIL) to avoid terminating LAVA jobs early
- Logs detailed output for each pipeline test
- Downloads the required AI models automatically when network connectivity is available

---

## Location in repo

Expected path:

```
Runner/suites/Multimedia/GSTreamer/AI_ML/run.sh
```

Required shared utils (sourced from `Runner/utils` via `init_env`):

- `functestlib.sh`
- `lib_gstreamer.sh`

---

## What this test does

At a high level, the test:

1. Finds and sources `init_env`
2. Sources `$TOOLS/functestlib.sh` and `$TOOLS/lib_gstreamer.sh`
3. Sets Wayland environment variables for proper display
4. Ensures network connectivity is available to download the required AI models
5. Defines and validates **2 different AI pipeline tests** covering:
   - Image Classification (Inception-v3)
   - Object Detection (YoloX)
6. For each pipeline:
   - Checks if all required GStreamer elements are available
   - Runs the pipeline with a timeout
   - Validates successful execution based on GStreamer state changes
   - Logs detailed output for diagnostics
7. Generates a comprehensive report with pass/fail status for each pipeline
8. Produces a final PASS/FAIL result for the entire suite
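
The per-pipeline loop in steps 6–8 can be sketched roughly as follows. This is a simplified illustration, not the actual `run.sh`; `run_pipeline` is a hypothetical stand-in for the real launcher:

```shell
# Simplified sketch of the per-pipeline loop described above.
run_pipeline() { true; }   # hypothetical placeholder: pretend every pipeline succeeds

overall=PASS
for name in image_classification object_detection; do
  if run_pipeline "$name"; then
    echo "$name PASS"
  else
    echo "$name FAIL"
    overall=FAIL
  fi
done
# The suite verdict is written to AI_ML.res; the script itself always exits 0.
echo "AI_ML $overall" > AI_ML.res
```
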

---

## PASS / FAIL criteria

### PASS
- **All pipelines** successfully run to PLAYING state without critical errors
- Output file contains `AI_ML PASS`
- Exit code 0

### FAIL
- **Any pipeline** fails to reach PLAYING state or encounters critical errors
- Output file contains `AI_ML FAIL`
- Exit code 0 (the `.res` file is the source of truth)

**Note:** The test always exits `0` even for FAIL. The `.res` file is the source of truth.
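
Because the exit code is always 0, a CI wrapper has to read the `.res` file to get the verdict. A minimal sketch (the sample `AI_ML.res` content here is created purely for illustration):

```shell
# Minimal sketch: derive the suite verdict from the .res file,
# since run.sh itself always exits 0.
echo "AI_ML PASS" > AI_ML.res   # sample result for illustration
if grep -q "AI_ML PASS" AI_ML.res; then
  echo "suite passed"
else
  echo "suite failed"
fi
```
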

---

## Logs and artifacts

By default, logs are written relative to the script working directory:

```
./AI_ML.res
./logs/
    object_detection_console.log
    object_detection_gst_debug.log
    image_classification_console.log
    image_classification_gst_debug.log
```

Each pipeline log contains detailed GStreamer debug output that can be examined for failures.
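
A quick way to triage a failed run is to scan the debug logs for error lines, for example:

```shell
# Sketch: scan the per-pipeline debug logs for error lines after a run.
# Prints matching lines with line numbers, or a fallback message if none.
grep -n -iE "error|critical" logs/*_gst_debug.log 2>/dev/null || echo "no error lines found"
```
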

---

## Dependencies

### Required
- `gst-launch-1.0`
- `gst-inspect-1.0`
- `curl`, `unzip` (for model downloads)
- Internet connectivity (to download models)
- Proper QAIRT runtime installation (QNN)
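
The tools in the list above can be checked up front with a loop like this:

```shell
# Sketch: verify the required tools from the list above are on PATH
# before invoking run.sh; print any that are missing.
for tool in gst-launch-1.0 gst-inspect-1.0 curl unzip; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```
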
### Required AI Models (downloaded automatically)
- `inception_v3_quantized.tflite`
- `yolox_quantized.tflite`
- `video.mp4`
- Labels from `labels.zip` (containing classification and detection labels)

---

## Pipeline validation behavior

The test validates pipelines by:

1. First checking if all required GStreamer elements are available using `gst-inspect-1.0`
2. Running each pipeline with a timeout (default 60 seconds)
3. Checking for the "Setting pipeline to PLAYING" message in logs
4. Ensuring no critical errors occur after reaching PLAYING state
5. Handling timeouts specially: if the pipeline reached PLAYING before the timeout expired, the run is still considered a PASS
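
The timeout-plus-log check can be sketched as below. The `videotestsrc` pipeline is illustrative only, not one of the suite's QAIRT pipelines, and the sketch falls back to a simulated log line when GStreamer is not installed:

```shell
# Hypothetical sketch of the validation logic: run under timeout,
# then confirm the pipeline reported reaching the PLAYING state.
log=console.log
if command -v gst-launch-1.0 >/dev/null 2>&1; then
  timeout "${TIMEOUT:-60}" gst-launch-1.0 videotestsrc num-buffers=30 ! fakesink > "$log" 2>&1
else
  # No GStreamer here: fake the message so the check below is demonstrable.
  echo "Setting pipeline to PLAYING ..." > "$log"
fi
if grep -q "Setting pipeline to PLAYING" "$log"; then
  echo "pipeline PASS"
else
  echo "pipeline FAIL"
fi
```
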

---

## Usage

Run:

```
./run.sh [options]
```

### Options

- `--timeout <seconds>`
  - Sets pipeline execution timeout (default: 60 seconds)
- `--gstdebug <level>`
  - Sets GStreamer debug level (default: 2)
- `--output-video <path>`
  - Path for the encoded video generated by the classification pipeline (default: `./video_out.mp4`)

---

## Examples

### 1) Basic execution with default timeout

```
./run.sh
```

### 2) Increase timeout to 120 seconds and set debug level

```
./run.sh --timeout 120 --gstdebug 3
```

### 3) Specify custom output video path

```
./run.sh --output-video /tmp/custom_video.mp4
```

---

## Troubleshooting

### A) "FAIL: Test directory not found"
- Ensure the test is located in the expected directory structure within Runner

### B) "Could not find init_env"
- Verify the Runner environment is properly set up
- Check that `init_env` exists in a parent directory

### C) Internet connection failures
- Confirm the network interface is available and functional
- The script uses `ensure_network_online`, which automatically connects to available networks
- Verify internet connectivity with a ping test

### D) Pipeline element missing
- Verify all required GStreamer plugins are installed
- Check for proper QAIRT GStreamer plugin integration
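
Missing elements can be confirmed individually with `gst-inspect-1.0`. The element names below are common upstream ones chosen for illustration; substitute the QAIRT elements your pipelines actually use:

```shell
# Sketch: report any GStreamer element that gst-inspect-1.0 cannot find.
# Element names are illustrative, not taken from this suite's pipelines.
for el in qtdemux h264parse waylandsink; do
  gst-inspect-1.0 "$el" >/dev/null 2>&1 || echo "missing element: $el"
done
```
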

### E) Model download failures
- Confirm internet connectivity
- Verify sufficient storage space for the model downloads

---

## Notes for CI / LAVA

- The test always exits `0` even for FAIL conditions
- Use the `.res` file for result determination:
  - `AI_ML PASS`
  - `AI_ML FAIL`
- Requires internet connectivity to download models
- Test duration depends on platform performance and the number of successful pipelines
- Each pipeline test has its own timeout to prevent hangs

---
