@@ -7,7 +7,7 @@ To test the capabilities of the FlexServ inference server, we can provide a comp
 
 ### On FlexServ UI
 
-1. Copy and paste the following prompt into the FlexServ UI in the `Responses API`, `Input(Markdown)` section, shown in the image below.
+- Copy and paste the following prompt into the FlexServ UI in the `Responses API`, `Input(Markdown)` section, shown in the image below.
 
 
 <div style="max-height:400px; overflow:auto; border:1px solid #ddd; padding:10px;">
@@ -59,12 +59,12 @@ After the code, briefly explain how the program works in plain English.
 
 ![Paste Prompt](/tutorials/images/Paste_Prompt.png)
 
-2. Change the temperature to a value 0.0 for a deterministic solution.
-3. Select the model to Run
-   - Qwen/Qwen2.5-Coder32B-Instruct-61.0 GB - Text Generation
-4. Make sure the Streams is checked.
-5. Uncheck Multi-turn conversation
-6. Click Run. In few minutes you should see the code generation starts in the blue box in Responses API. Wait for it to complete.
+- Change the `temperature` to `0.0` for a deterministic solution.
+- Select the model to run:
+  - `Qwen/Qwen2.5-Coder32B-Instruct-61.0 GB - Text Generation`
+- Make sure `Streams` is checked.
+- Uncheck `Multi-turn conversation`.
+- Click `Run`. In a few minutes you should see code generation start in the blue box in the Responses API. Wait for it to complete.
 After completion you should see a similar output.
 
 ![Code](/tutorials/images/Code.png)
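The UI settings above (model selection, `temperature` of `0.0`, streaming on, multi-turn off) map directly onto the fields of an inference request. The sketch below builds that request in Python, assuming FlexServ exposes an OpenAI-style HTTP endpoint; the base URL and the `/v1/responses` route are illustrative assumptions, not taken from the tutorial, so adjust them to match your deployment.

```python
# Hypothetical sketch of the request the FlexServ UI sends; the base URL
# and route are assumptions, not documented FlexServ endpoints.
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # replace with your FlexServ host


def build_request(prompt: str) -> dict:
    """Mirror the UI settings: deterministic sampling, streaming on,
    single-turn (no multi-turn conversation history)."""
    return {
        "model": "Qwen/Qwen2.5-Coder32B-Instruct",
        "input": prompt,
        "temperature": 0.0,  # deterministic solution, as in the UI step
        "stream": True,      # "Streams" checked in the UI
    }


def run(prompt: str) -> str:
    """Send the request and return the raw response body."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/responses",  # assumed OpenAI-style route
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

With a running server you would call `run("<the prompt pasted above>")` and stream the generated code back, just as the blue box in the UI does.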