Commit 00ae3ed

fix(docs): resolve Pygments console lexer error in LM Studio tutorial (#402)
The console lexer cannot tokenize single-quoted JSON arguments in multi-line shell commands. Switch the Step 5 code blocks to the bash lexer, which handles quoting correctly. Also standardize the earlier blocks on the console lexer with $ prompts for simple commands, and replace the -H/-d curl flags with --json for consistency.
1 parent 5439f47 commit 00ae3ed
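For background on the `--json` substitution: curl's `--json` flag (available since curl 7.82) is shorthand for sending the payload with `-d` plus the `Content-Type: application/json` and `Accept: application/json` headers, which is why the swap is behavior-preserving. A minimal sketch of the payload the tutorial's examples send:

```shell
# The compact JSON payload used by the tutorial's chat-completions example.
# Keeping it on one line inside single quotes is what lets the bash lexer
# tokenize the block cleanly.
PAYLOAD='{"messages":[{"role":"user","content":"hello"}],"max_tokens":10}'

# With curl >= 7.82 these two invocations are equivalent:
#   curl https://inference.local/v1/chat/completions --json "$PAYLOAD"
#   curl https://inference.local/v1/chat/completions \
#     -H "Content-Type: application/json" -H "Accept: application/json" \
#     -d "$PAYLOAD"
printf '%s\n' "$PAYLOAD"
```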

File tree

1 file changed (+17, -19)

docs/tutorials/local-inference-lmstudio.md

Lines changed: 17 additions & 19 deletions
````diff
@@ -46,18 +46,18 @@ First, complete OpenShell installation and follow the {doc}`/get-started/quickst
 If you prefer to work without having to keep the LM Studio app open, download llmster (headless LM Studio) with the following command:
 
 ### Linux/Mac
-```bash
-curl -fsSL https://lmstudio.ai/install.sh | bash
+```console
+$ curl -fsSL https://lmstudio.ai/install.sh | bash
 ```
 
 ### Windows
-```bash
-irm https://lmstudio.ai/install.ps1 | iex
+```console
+$ irm https://lmstudio.ai/install.ps1 | iex
 ```
 
 And start llmster:
-```bash
-lms daemon up
+```console
+$ lms daemon up
 ```
 
 ## Step 1: Start LM Studio Local Server
@@ -75,9 +75,9 @@ If you're using llmster in headless mode, run `lms server start --bind 0.0.0.0`.
 In the LM Studio app, head to the Model Search tab to download a small model like Qwen3.5 2B.
 
 In the terminal, use the following command to download and load the model:
-```bash
-lms get qwen/qwen3.5-2b
-lms load qwen/qwen3.5-2b
+```console
+$ lms get qwen/qwen3.5-2b
+$ lms load qwen/qwen3.5-2b
 ```
 
 
@@ -168,30 +168,28 @@ Run a simple request through `https://inference.local`:
 
 ::::{tab-item} OpenAI-compatible
 
-```console
-$ openshell sandbox create -- \
+```bash
+openshell sandbox create -- \
 curl https://inference.local/v1/chat/completions \
 --json '{"messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
 
-$ openshell sandbox create -- \
+openshell sandbox create -- \
 curl https://inference.local/v1/responses \
--H "Content-Type: application/json" \
--d '{
+--json '{
 "instructions": "You are a helpful assistant.",
 "input": "hello",
 "max_output_tokens": 10
-}'
+}'
 ```
 
 ::::
 
 ::::{tab-item} Anthropic-compatible
 
-```console
-$ openshell sandbox create -- \
+```bash
+openshell sandbox create -- \
 curl https://inference.local/v1/messages \
--H "Content-Type: application/json" \
--d '{"messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
+--json '{"messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
 ```
 
 ::::
````
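A quick sanity check for payloads like the ones above before they go into a docs code block: round-trip them through the Python standard library's `json.tool` CLI, which exits non-zero on invalid JSON (a small sketch; the `--compact` flag needs Python 3.9+):

```shell
# Validate the Anthropic-compatible payload from the diff above;
# --compact re-emits it with no spaces, matching the single-line form
# that fits inside single quotes in the docs.
printf '%s' '{"messages":[{"role":"user","content":"hello"}],"max_tokens":10}' \
  | python3 -m json.tool --compact
```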
