To test the capabilities of the FlexServ inference server, we can provide a complex prompt.

### On FlexServ UI

1. Copy and paste the following prompt into the FlexServ UI in the `Responses API`, `Input(Markdown)` section, shown in the image below.


<div style="max-height:400px; overflow:auto; border:1px solid #ddd; padding:10px;">
<pre>

Write Python code that reads all images from a dataset root directory stored in the variable DATASET_ROOT.

TASK DESCRIPTION:
This is an IMAGE-LEVEL BINARY CLASSIFICATION task implemented using an object detection model.
The goal is to determine whether an image contains an animal or not.

DATASET STRUCTURE:
DATASET_ROOT contains three subdirectories: train, test, and val.
Each directory contains two subdirectories:
images/ → contains image files (.jpg, .jpeg, .png)
labels/ → contains YOLO format .txt files

GROUND-TRUTH LOGIC: An image is considered an animal if a corresponding .txt file exists and is not empty in the labels/ folder.

MODEL REQUIREMENTS:
Use ONLY a pretrained Ultralytics YOLO detection model (e.g., yolov8n.pt).
Load the model using the Ultralytics YOLO API.
Assume YOLO detects animals using class ID animal at index 0.

DETECTION LOGIC (IMPORTANT):
Run object detection on each image.
If the model produces AT LEAST ONE detection of an animal class with confidence >= 0.5:
→ The image-level prediction is animal.
Otherwise, the image-level prediction is not animal.

EVALUATION METRICS:
Iterate through the images in the test split.
Compare the image-level prediction with the ground truth (existence of a non-empty label file).
Count: True Positives, True Negatives, False Positives, and False Negatives.

ACCURACY DEFINITION:
Overall accuracy = (True Positives + True Negatives) / Total Images

OUTPUT REQUIREMENTS:
Print for each image: filename, ground-truth status, and prediction.
At the end, print a summary report including total images, counts for each metric, and overall detection accuracy.

CODING REQUIREMENTS:
Store the main path in DATASET_ROOT.
Use pathlib or os for robust file path matching.
Read only .jpg, .jpeg, and .png files.
Include clear comments explaining each step.

After the code, briefly explain how the program works in plain English.
</pre>
</div>

![Paste Prompt](/tutorials/images/Paste_Prompt.png)

2. Change the temperature to 0.0 for a deterministic solution.
3. Select the model to run:
   - Qwen/Qwen2.5-Coder32B-Instruct - 61.0 GB - Text Generation
4. Make sure Streams is checked.
5. Uncheck Multi-turn conversation.
6. Click Run. Within a few minutes, code generation should start in the blue box in the Responses API. Wait for it to complete. After completion, you should see output similar to the image below.

![Code](/tutorials/images/Code.png)

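The exact code the model produces will vary from run to run, but the evaluation logic the prompt asks for can be sketched in pure Python. This is a minimal illustration, not the generated output: YOLO inference is left out, and the function names (`has_animal_label`, `evaluate`) are illustrative.

```python
from pathlib import Path


def has_animal_label(image_path: Path, labels_dir: Path) -> bool:
    """Ground-truth rule from the prompt: an image counts as 'animal'
    when a matching, non-empty YOLO .txt file exists in labels/."""
    label = labels_dir / (image_path.stem + ".txt")
    return label.exists() and label.stat().st_size > 0


def evaluate(pairs):
    """Count TP/TN/FP/FN over (ground_truth, prediction) boolean pairs
    and compute overall accuracy = (TP + TN) / total, as the prompt defines."""
    tp = tn = fp = fn = 0
    for truth, pred in pairs:
        if truth and pred:
            tp += 1
        elif not truth and not pred:
            tn += 1
        elif pred:          # truth is False here -> false positive
            fp += 1
        else:               # truth is True, pred is False -> false negative
            fn += 1
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total if total else 0.0
    return tp, tn, fp, fn, accuracy
```

Comparing the generated code against this skeleton is a quick way to spot-check that the model respected the ground-truth and accuracy definitions.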
Now, let's test its performance on the test dataset using the Jupyter Notebook.

### On Jupyter

Go to the notebook Code-Detection in your Jupyter file browser:
`ai-tutorial-2026 -> notebooks -> Code-Detection.ipynb`

Copy the generated code from the FlexServ UI into a new cell below the cell titled `Put your generated code here`.

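When reviewing the pasted code before running it, the key rule to verify is the detection logic the prompt specifies. Reduced to its essentials, it looks like the sketch below, where `(class_id, confidence)` tuples stand in for YOLO's result boxes and the function name is illustrative:

```python
def image_prediction(detections, conf_threshold=0.5, animal_class_id=0):
    """Image-level decision from the prompt: predict 'animal' if at least
    one detection of the animal class meets the confidence threshold.
    `detections` is a stand-in for YOLO results: (class_id, confidence) tuples."""
    return any(cls == animal_class_id and conf >= conf_threshold
               for cls, conf in detections)
```

For example, `image_prediction([(0, 0.72)])` is True, while `image_prediction([(0, 0.31), (1, 0.9)])` is False, since no animal-class detection reaches the 0.5 threshold.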