6. Download the result via `task_download` or `task_file_info`
Concurrency note: each `task_create` call returns a new `task_id`; server-side global per-client concurrency is not capped, so clients should track their own parallel tasks.

`docs/mcp/mcp_details.md`

This document lists the MCP tools exposed by PlanExe and example prompts for agents.
- The primary MCP server runs in the cloud (see `mcp_cloud`).
- The local MCP proxy (`mcp_local`) forwards calls to the server and adds a local download helper.
- Tool responses return JSON in both `content.text` and `structuredContent` (see the sketch after this list).
- Workflow note: drafting and user approval of the prompt is a non-tool step between setup tools and `task_create`.
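
For illustration, a minimal result envelope might look like the sketch below; the wrapper shape follows the MCP convention named above, and the payload values are placeholders:

```json
{
  "content": [{"type": "text", "text": "{\"message\": \"...\"}"}],
  "structuredContent": {"message": "..."}
}
```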
## Tool Catalog, `mcp_cloud`
### prompt_examples
Returns around five example prompts that show what good prompts look like. Each sample is typically 300-800 words. Usually the AI does the heavy lifting: the user has a vague idea, the agent calls `prompt_examples`, then expands that idea into a high-quality prompt (300-800 words). A compact prompt shape works best: objective, scope, constraints, timeline, stakeholders, budget/resources, and success criteria. The prompt is shown to the user, who can ask for further changes or confirm it’s good to go. When the user confirms, the agent then calls `task_create`. Shorter or vaguer prompts produce lower-quality plans.

Example prompt:
```
…
```

Example call:
```json
{}
```
Response includes `samples` (array of prompt strings, each ~300-800 words) and `message`.
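
A response might look like the following sketch; the field names come from this document, while the values are illustrative placeholders:

```json
{
  "samples": [
    "First example prompt, typically 300-800 words...",
    "Second example prompt, typically 300-800 words..."
  ],
  "message": "Adapt these samples when drafting the user's prompt."
}
```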
### model_profiles
Returns profile guidance and model availability for `task_create.model_profile`.
This helps agents pick a profile without knowing internal `llm_config/*.json` details.
Profiles with zero models are omitted from the `profiles` list.
If no models are available in any profile, `model_profiles` returns `isError=true` with `error.code = MODEL_PROFILES_UNAVAILABLE`.
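
A minimal response sketch, assembled from the fields this document names (`profiles` and the `model_profile` values used by `task_create`); the per-profile keys and nesting are assumptions:

```json
{
  "profiles": [
    {"profile": "premium", "models": ["..."]}
  ]
}
```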
### task_create

Developer-only hidden metadata (not part of the visible tool schema shown to agents):
```text
speed_vs_detail: "ping" | "fast" | "all"
```
Example with visible `model_profile`:
```json
{"prompt": "Weekly meetup for humans where participants are randomly paired every 5 minutes...", "model_profile": "premium"}
```
Example with a hidden metadata override. `ping` only checks that the LLMs are reachable; it does not trigger a full plan:
```json
{
  "prompt": "Weekly meetup for humans where participants are randomly paired every 5 minutes...",
  "metadata": {
    "task_create": {
      "speed_vs_detail": "ping"
    }
  }
}
```
Example with a hidden metadata override. `fast` creates a plan that exercises the entire Luigi pipeline while skipping as much detail as possible:
```json
{
  "prompt": "Weekly meetup for humans where participants are randomly paired every 5 minutes...",
  "metadata": {
    "task_create": {
      "speed_vs_detail": "fast"
    }
  }
}
```
Example with a hidden metadata override. `all` is the default setting and creates a plan with **ALL** details:
```json
{
  "prompt": "Weekly meetup for humans where participants are randomly paired every 5 minutes...",
  "metadata": {
    "task_create": {
      "speed_vs_detail": "all"
    }
  }
}
```
Counterexamples (do NOT use PlanExe for these):
- "Give me a 5-point checklist for X."
- "Summarize this paragraph in 6 bullets."
- "Rewrite this email."
- "Identify the risks of this project."
- "Make a SWOT for this document."

What to do instead:
- For one-shot outputs, use a normal LLM response directly.
- For PlanExe, send a substantial multi-phase project prompt with scope, constraints, timeline, budget, stakeholders, and success criteria.
- PlanExe always runs a fixed end-to-end pipeline; it does not support selecting only internal pipeline subsets.
### task_status
Fetch status/progress and recent files for a task.
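
A call sketch, assuming `task_status` accepts the `task_id` returned by `task_create` (the parameter name and value here are assumptions):

```json
{"task_id": "..."}
```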
- `task_file_info` may return `{}` while the artifact is not ready yet (not an error).
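
For example, polling for file info while the plan is still running (the `task_id` parameter is an assumption; the empty-object response is the documented not-ready case):

```json
{"task_id": "..."}
```

may return:

```json
{}
```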
## Concurrency semantics (practical)
- Each `task_create` call creates a new task with a new `task_id`.
- The server does not enforce a global “one active task per client” cap.
- Parallelism is a client orchestration concern:
  - start with 1 task
  - scale to 2 in parallel if needed
  - avoid more than 4 unless you have strong task-tracking UX
## Typical Flow
### 1. Get example prompts
The user often starts with a vague idea. The AI calls `prompt_examples` first to see what good prompts look like (around five samples, typically 300-800 words each), then expands the user’s idea into a high-quality prompt using this compact shape: objective, scope, constraints, timeline, stakeholders, budget/resources, and success criteria.

Prompt:
```
…
```

Tool call:
```json
{}
```
### 2. Inspect model profiles (optional but recommended)
Prompt:
```
Show model profile options and available models.
```
Tool call:
```json
{}
```
### 3. Draft and approve the prompt (non-tool step)
At this step, the agent writes a high-quality prompt draft (typically 300-800 words, with objective, scope, constraints, timeline, stakeholders, budget/resources, and success criteria), shows it to the user, and waits for approval.
### 4. Create a plan
The user reviews the prompt and either asks for further changes or confirms it’s good to go. When the user confirms, the agent calls `task_create` with that prompt.
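
Example call, reusing the visible `model_profile` example from the tool catalog above:

```json
{
  "prompt": "Weekly meetup for humans where participants are randomly paired every 5 minutes...",
  "model_profile": "premium"
}
```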