186 changes: 35 additions & 151 deletions .github/workflows/news-committee-reports.md

364 changes: 46 additions & 318 deletions .github/workflows/news-evening-analysis.md

194 changes: 35 additions & 159 deletions .github/workflows/news-interpellations.md

169 changes: 28 additions & 141 deletions .github/workflows/news-month-ahead.md

169 changes: 28 additions & 141 deletions .github/workflows/news-monthly-review.md

186 changes: 35 additions & 151 deletions .github/workflows/news-motions.md

207 changes: 37 additions & 170 deletions .github/workflows/news-propositions.md

102 changes: 18 additions & 84 deletions .github/workflows/news-realtime-monitor.md
You are the **Real-Time Political Monitor** for Riksdagsmonitor. Detect signific…

## ⚠️ CRITICAL: Bash Tool Call Format

> When you see fenced bash code blocks below (three backticks followed by bash), they show the **command content** to execute. You MUST wrap each in a proper bash tool call with both `command` and `description` parameters. For multi-line scripts, join commands with `&&` or `;` into a single `command` string.
> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "Bash Tool Call Format". Key rule: every `bash` call MUST have both `command` AND `description` parameters. Example: `bash({ command: "date -u '+%Y-%m-%d'", description: "Get current UTC date" })`
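As a sketch of the joining rule, the two-step fenced script in the comments below collapses into one `command` string (paths and file names are illustrative, not repo conventions):

```shell
# A fenced two-step script such as:
#   mkdir -p analysis/daily
#   date -u '+%Y-%m-%d' > analysis/daily/run-date.txt
# becomes ONE command string in the tool call, joined with &&:
mkdir -p analysis/daily && date -u '+%Y-%m-%d' > analysis/daily/run-date.txt
cat analysis/daily/run-date.txt   # prints the current UTC date, e.g. 2026-04-10
```

The corresponding tool call would then pass the joined string plus a label, e.g. `bash({ command: "mkdir -p analysis/daily && date -u '+%Y-%m-%d' > analysis/daily/run-date.txt", description: "Record run date" })`.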

## 🛡️ AWF Shell Safety — MANDATORY for Agent-Generated Bash

3. **Use `find -exec`** instead of for-loops with command substitution
4. **Use direct paths** when possible (e.g., `cat analysis/daily/2026-04-07/realtime-1411/synthesis-summary.md`)
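Rules 3 and 4 can be sketched as follows (the directory layout is hypothetical):

```shell
# Set up a sample analysis tree
mkdir -p analysis/daily/2026-04-07/realtime-1411
printf '# Synthesis\n' > analysis/daily/2026-04-07/realtime-1411/synthesis-summary.md

# Rule 3: use find -exec rather than `for f in $(find ...)`, which word-splits paths
find analysis/daily -name '*.md' -exec wc -l {} +   # prints a line count per matched file

# Rule 4: prefer a direct path when the file location is already known
cat analysis/daily/2026-04-07/realtime-1411/synthesis-summary.md
```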

## 🔤 UTF-8 Encoding

> **Full reference:** See `SHARED_PROMPT_PATTERNS.md` → "UTF-8 Encoding". Summary: use native UTF-8 (`ö`, `ä`, `å`) — NEVER HTML entities (`&#246;`, `&#228;`). Author: `James Pether Sörling`.
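One way to self-check this rule, sketched below rather than taken from any official repo script, is to grep generated files for numeric entities (the demo directory and file names are hypothetical):

```shell
DEMO="news-utf8-demo"
mkdir -p "$DEMO"
printf '<p>James Pether S&#246;rling</p>\n' > "$DEMO/bad.html"    # entity: violates the rule
printf '<p>James Pether Sörling</p>\n'     > "$DEMO/good.html"    # native UTF-8: correct
# Any hit here means an HTML numeric entity slipped into the output
if grep -rlE '&#[0-9]+;' "$DEMO"; then
  echo "entities found; fix before committing"
else
  echo "clean"
fi
```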


## ⚠️ NON-NEGOTIABLE RULES
Expand All @@ -216,17 +190,7 @@ bash({ command: "..." }) // ← WRONG: missing description

## 🧠 Repo Memory

This workflow uses the persistent repo-memory branch `memory/news-generation` (shared with all news workflows). At run START, read `memory/news-generation/last-run-news-realtime-monitor.json` and `memory/news-generation/covered-documents/{YYYY-MM-DD}.json`, and skip dok_ids already covered by another workflow. At run END, update both files plus `memory/news-generation/translation-status.json`.
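A sketch of the START-of-run duplicate check; the JSON shape shown (a flat array of dok_ids) is an assumption, and the real schema may differ:

```shell
TODAY=$(date -u '+%Y-%m-%d')
MEM="memory/news-generation/covered-documents/${TODAY}.json"
mkdir -p "$(dirname "$MEM")"
# Seed with sample dok_ids (hypothetical values) if no file exists yet
[ -f "$MEM" ] || printf '["H901FiU20", "H901JuU15"]\n' > "$MEM"

DOK_ID="H901FiU20"
if grep -q "\"${DOK_ID}\"" "$MEM"; then
  echo "skip ${DOK_ID}: already covered today"
else
  echo "analyze ${DOK_ID}"
fi
```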

## ⏱️ Time Budget (45 minutes)

```bash
START_TIME=$(date +%s)
```

| Phase | Minutes | Action |
|-------|---------|--------|
| Setup | 0–3 | Date check, `get_sync_status()` warm-up |
| Download | 3–6 | Run data download scripts (MCP data fetch) |
| **AI Analysis** | **6–21** | **🚨 MANDATORY 15 min minimum**: Consult methodology guides + templates as needed, create per-file analysis with Mermaid diagrams and evidence tables. Run quality gate bash check. |
| Detect | 21–25 | Query MCP tools for today's activity |
| Generate | 25–33 | Run `generate-news-enhanced.ts` script (core languages by default; supports all 14 languages via `languages=all`) |
| Validate | 33–38 | Run `validate-news-generation.sh` |
| Commit+PR | 38–43 | `git add && git commit`, then `safeoutputs___create_pull_request` |

> ⚠️ **Analysis phase is 15 minutes minimum** — this is NOT negotiable. PR #1452 demonstrated that < 10 min produces unacceptable analysis (plain prose, no Mermaid diagrams, no evidence tables). The AI MUST consult methodology guides and templates as needed and produce publication-quality output matching [SWOT.md](../../SWOT.md) formatting standard.

**Hard cutoffs** — check elapsed time before EVERY phase:
```bash
fi
```
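A minimal form of such an elapsed-time guard (the 43-minute threshold is illustrative):

```shell
START_TIME=$(date +%s)          # set once at the start of the run
# ... phases run here ...
ELAPSED=$(( $(date +%s) - START_TIME ))
if [ "$ELAPSED" -ge $(( 43 * 60 )) ]; then
  echo "hard cutoff: commit what exists and open the PR now"
fi
echo "elapsed: ${ELAPSED}s"
```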

### 🚨🚨🚨 MANDATORY: AI Must Analyse ALL Data Using Methods & Templates (15 min minimum)

> **THIS IS YOUR PRIMARY JOB.** You MUST spend **at least 15 minutes** on analysis. For every piece of data or document downloaded from MCP, you MUST read the master methodology guide and per-file template upfront, then consult other methodology guides and templates as needed for each analysis step. This is NOT optional.
>
> **Why 15 minutes?** PR #1452 demonstrated that rushing analysis (< 10 min) produces: plain prose without tables, no Mermaid diagrams, no dok_id evidence citations, no template structure. This is REJECTED. The templates require structured tables, color-coded Mermaid diagrams, evidence citations, and multi-section analysis that cannot be done properly in less than 15 minutes.

#### What you MUST do (no exceptions):

1. **Read the master methodology guide and per-file template** (required upfront):
- `analysis/methodologies/ai-driven-analysis-guide.md` — Master guide (bad vs. good examples, quality gate)
- `analysis/templates/per-file-political-intelligence.md` — Per-file output template

2. **Consult other methodology guides and templates as needed** for the current analysis step:
- `analysis/methodologies/political-swot-framework.md` — Evidence-based SWOT with confidence hierarchy
- `analysis/methodologies/political-risk-methodology.md` — 5×5 risk matrix
- `analysis/methodologies/political-threat-framework.md` — Political Threat Taxonomy
- `analysis/methodologies/political-classification-guide.md` — Classification taxonomy
- `analysis/methodologies/political-style-guide.md` — Writing standards

- `analysis/templates/synthesis-summary.md`, `risk-assessment.md`, `political-classification.md`, `threat-analysis.md`, `swot-analysis.md` (SWOT MUST have: Context table, evidence tables with dok_id/confidence/impact columns, Mermaid SWOT Quadrant Mapping), `stakeholder-impact.md`, `significance-scoring.md`

3. **For EVERY downloaded document/data file**: apply ALL 6 analytical lenses and create `{dok_id}-analysis.md` following the per-file template. Cite specific data (dok_id, vote counts, party names). Include ≥1 color-coded Mermaid diagram with `style` directives.
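A skeleton of the per-file output, using a hypothetical dok_id and placeholder values; the template in `analysis/templates/per-file-political-intelligence.md` governs the full structure:

```shell
DOK_ID="H901FiU20"    # hypothetical dok_id
OUT_DIR="analysis/daily/2026-04-07/realtime-1411"
mkdir -p "$OUT_DIR"
cat > "${OUT_DIR}/${DOK_ID}-analysis.md" <<EOF
# Per-File Political Intelligence: ${DOK_ID}

| Field | Value |
|-------|-------|
| dok_id | ${DOK_ID} |
| Vote count | 174-149 (placeholder) |
| Confidence | Medium |
EOF
echo "wrote ${OUT_DIR}/${DOK_ID}-analysis.md"
```

Each real file must additionally include at least one color-coded Mermaid diagram with `style` directives, omitted here to keep the sketch short.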


## 🛡️ File Ownership Contract

Content workflows may only create or modify **EN and SV** files (`news/YYYY-MM-DD-*-en.html`, `news/YYYY-MM-DD-*-sv.html`); all other translation languages are forbidden. Validate with `npx tsx scripts/validate-file-ownership.ts content`. Fix violations with `git restore --staged --worktree -- <file>` (tracked changes) or `rm <file>` (untracked files) before committing.
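A rough approximation of what the validator enforces; the real check lives in `scripts/validate-file-ownership.ts`, and this grep sketch (with a hypothetical demo directory) is not a substitute for running it:

```shell
DEMO="news-ownership-demo"
mkdir -p "$DEMO"
touch "$DEMO/2026-04-10-breaking-en.html" \
      "$DEMO/2026-04-10-breaking-sv.html" \
      "$DEMO/2026-04-10-breaking-da.html"   # the -da file violates the contract
# Anything not ending in -en.html or -sv.html is out of scope for a content workflow
VIOLATIONS=$(ls "$DEMO" | grep -Ev -- '-(en|sv)\.html$' || true)
echo "violations: ${VIOLATIONS}"   # prints: violations: 2026-04-10-breaking-da.html
```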

### Branch Naming Convention

Branch: `news/content/{YYYY-MM-DD}/breaking`. `safeoutputs___create_pull_request` handles branch creation automatically; the convention is documented for traceability and conflict avoidance.
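The deterministic branch name can be derived from the run date, for example:

```shell
BRANCH="news/content/$(date -u '+%Y-%m-%d')/breaking"
echo "$BRANCH"    # e.g. news/content/2026-04-10/breaking
```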

## Step 5: Commit & Create PR


## Required Skills

Consult as needed — do NOT read all files upfront:
- **Skills:** `.github/skills/editorial-standards/SKILL.md`, `.github/skills/swedish-political-system/SKILL.md`, `.github/skills/legislative-monitoring/SKILL.md`, `.github/skills/riksdag-regering-mcp/SKILL.md`, `.github/skills/language-expertise/SKILL.md`, `.github/skills/gh-aw-safe-outputs/SKILL.md`
- **Analysis:** `scripts/prompts/v2/political-analysis.md`, `per-file-intelligence-analysis.md`, `stakeholder-perspectives.md`, `quality-criteria.md`
- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) and `analysis/templates/per-file-political-intelligence.md` (the only two files read upfront, before the analysis phase; everything else on demand)
Copilot AI (Apr 10, 2026):

This section says "Consult as needed — do NOT read all files upfront", but elsewhere in this prompt the time budget/analysis phase still instructs reading all methodology guides and templates. Please align these so the agent has a single clear rule for skill/methodology reading (lazy/on-demand vs. read-all upfront).

Suggested change
- **Methodology:** `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) + `analysis/templates/per-file-political-intelligence.md`
- **Methodology:** Consult only when needed for the selected depth/output: `analysis/methodologies/ai-driven-analysis-guide.md` (v5.0) and `analysis/templates/per-file-political-intelligence.md`


## 📊 MANDATORY Multi-Step AI Analysis Framework
