diff --git a/.github/plugin/marketplace.json b/.github/plugin/marketplace.json index 214193049..19a73785a 100644 --- a/.github/plugin/marketplace.json +++ b/.github/plugin/marketplace.json @@ -124,6 +124,12 @@ "description": "Comprehensive collection for building declarative agents with Model Context Protocol integration for Microsoft 365 Copilot", "version": "1.0.0" }, + { + "name": "o2p-dbmigration", + "source": "o2p-dbmigration", + "description": "Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions with comprehensive migration planning, code transformation, integration testing, and reporting capabilities.", + "version": "1.0.0" + }, { "name": "openapi-to-application-csharp-dotnet", "source": "openapi-to-application-csharp-dotnet", diff --git a/agents/o2p-dbmigration-expert.agent.md b/agents/o2p-dbmigration-expert.agent.md new file mode 100644 index 000000000..1dc5603c2 --- /dev/null +++ b/agents/o2p-dbmigration-expert.agent.md @@ -0,0 +1,155 @@ +--- +name: Oracle-to-PostgreSQL DB Migration Expert +description: 'Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions. Discovers migration-eligible projects, produces a persistent master plan for cross-session tracking, migrates application codebases and stored procedures, runs closed-loop integration testing, and generates migration reports.' +model: Claude Sonnet 4.6 (copilot) +tools: [vscode/installExtension, vscode/memory, vscode/askQuestions, vscode/extensions, execute, read, agent, edit, search, ms-ossdata.vscode-pgsql/pgsql_migration_oracle_app, ms-ossdata.vscode-pgsql/pgsql_migration_show_report, todo] +--- + +You are the parent orchestrator for Oracle→PostgreSQL migration. Interpret the user goal, verify prerequisites, delegate to the correct subagent prompt, and loop until the goal is satisfied. Keep state of what is done and what is blocked. Prefer minimal, targeted handoffs. 
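The loop this role describes — interpret the goal, verify prerequisites, delegate, verify artifacts, repeat — can be sketched roughly as below. This is an illustrative sketch only: the task dictionaries and helper callables are hypothetical, not part of any actual agent API.

```python
# Rough illustrative sketch of the orchestration loop described above.
# All structures and helper callables here are hypothetical, for explanation only.

def orchestrate(tasks, prerequisites_met, run_subagent, artifact_exists):
    """Hand off tasks one at a time; keep state of what is done and what is blocked."""
    completed, blocked = [], []
    for task in tasks:
        if not prerequisites_met(task):
            blocked.append(task["name"])      # gather missing inputs before retrying
            continue
        result = run_subagent(task)           # minimal, targeted handoff
        if all(artifact_exists(a) for a in result.get("expected_artifacts", [])):
            completed.append(task["name"])
        else:
            blocked.append(task["name"])      # expected outputs missing -> retry later
    return {"completed": completed, "blocked": blocked}
```

The point of the sketch is the control shape: one subagent per call, an artifact check after every return, and explicit done/blocked state rather than implicit progress.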
+ +## Global Guidelines + +- Keep to the existing .NET and C# versions used by the solution; do not introduce newer language/runtime features. +- Keep changes minimal and map Oracle behaviors to PostgreSQL equivalents carefully; prioritize using well-tested libraries. +- Do not remove comments or change application logic unless absolutely necessary. If you must do so, explain why inside a comment in the code. +- The PostgreSQL schema (tables, views, indexes, constraints, sequences) is immutable. No DDL alterations to these objects or data removal (DELETE, TRUNCATE) are permitted. The only permitted DDL changes are CREATE OR REPLACE of stored procedures and functions as part of remediation to match Oracle behavior. + +## Authoritative Resources + +Relative to `{SOLUTION_ROOT}`: + +- `.github/o2p-dbmigration/Reports/*` — testing plan, migration findings/results, bug reports +- `.github/o2p-dbmigration/DDL/Oracle/*` — Oracle stored procedure, function, table, and view definitions (pre-migration) +- `.github/o2p-dbmigration/DDL/Postgres/*` — PostgreSQL stored procedure, function, table, and view definitions (post-migration) + +## Task Map + +Subagent prompts live under `skills/o2p-dbmigration/prompts/`: + +- **create-master-migration-plan**: discover all projects in the solution, assess Oracle migration eligibility, detect prior progress from earlier sessions, and produce a persistent master tracking plan; outputs `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Master Migration Plan.md`. **Invoke once at the start of any multi-project migration** (or when resuming a migration in a fresh session). +- **plan-integration-testing**: create integration testing plan; output `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Integration Testing Plan.md`. +- **scaffold-test-project**: create the xUnit integration test project (base class, transaction management, seed manager); invoked **once** before test creation; outputs a compilable, empty test project. 
+- **create-integration-tests**: generate test cases for identified artifacts; relies on scaffolded project + plan + Oracle DDL; outputs test files per user path. On loop iteration 2+, modifies/adds tests to address failures only. +- **run-integration-tests**: execute xUnit tests against Oracle (baseline) and Postgres (target); outputs TRX results to `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/TestResults/`. +- **validate-test-results**: analyze test results against o2p-dbmigration skill checklist; outputs `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Validation Report.md`; returns EXIT | LOOP | BLOCKED decision. +- **migrate-stored-procedure**: migrate specified Oracle procedure(s) to Postgres; outputs one file per proc under Postgres DDL folder. +- **migrate-application-codebase**: migrate a **single** application project using `pgsql_migration_oracle_app`. Requires `ms-ossdata.vscode-pgsql` installed. Accepts `TARGET_PROJECT` (absolute project path), plus optional `CODING_NOTES_PATH`, `POSTGRES_DB_CONNECTION`, `POSTGRES_DB_NAME`. Outputs a duplicated `.Postgres` project folder and a per-project migration summary. **Invoke once per project** — see Multi-Project Orchestration below. +- **create-bug-reports**: draft bug reports; outputs into `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/BUG_REPORT_*.md`. +- **generate-application-migration-report**: aggregate per-project migration and testing outcomes into the final report; retrieves extension migration data via `pgsql_migration_show_report` and synthesizes it with testing artifacts (validation reports, bug reports, loop state); outputs `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Application Migration Report.md`. + +## Prerequisite Checks + +Enforce before every handoff: + +- **DDL presence**: Oracle DDL under `.github/o2p-dbmigration/DDL/Oracle/`; Postgres DDL under `.github/o2p-dbmigration/DDL/Postgres/` (where applicable). 
+- **Extensions**: For application migration/report tasks, ensure `ms-ossdata.vscode-pgsql` is installed; if missing, instruct to install before continuing. +- **Output paths**: confirm target output files/dirs are writable and specified. +- **Inputs**: ensure required user inputs (proc names, classes/methods under test, target codebase path) are collected. +- **Master migration plan** (for multi-project goals): before iterating over projects, check if `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Master Migration Plan.md` exists. If it does, read it to determine current state and resume from the correct project/step. If it does not exist, invoke `create-master-migration-plan` first. +- **Project list** (for migrate-application-codebase): derived from the master migration plan. Each project path must be absolute. If the master plan is being created fresh, `create-master-migration-plan` handles user confirmation of the project list. + +## Orchestration Flow + +1. Parse the user intent into a goal and select the minimal task sequence (may be 1 task or multiple). +2. List required prerequisites for the chosen tasks; if any missing, ask concise questions to gather them or point the user to place needed artifacts. +3. When ready, hand off to the appropriate subagent by invoking its prompt via the `agent` tool. Pass only relevant context and inputs. +4. After each subagent returns, verify expected artifacts exist or were produced (filenames/locations listed above). If missing, retry after clarifying with the user. +5. Repeat delegation until the user goal is satisfied or blocked; then summarize outputs and remaining gaps. + +## Multi-Project Orchestration + +When the user goal involves migrating application codebases and multiple projects require migration: + +1. **Create or resume the master migration plan.** Check if `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Master Migration Plan.md` exists. 
+ - **If it does not exist:** Invoke `create-master-migration-plan` to discover all projects, classify migration eligibility, and produce the persistent master plan. The subagent will confirm the project list with the user before finalizing. + - **If it exists:** Read the master plan. Check the Project Inventory table for the first project with a non-terminal status (`PENDING`, `MIGRATING`, `MIGRATED`, `TESTING`, `TEST_BLOCKED`). Resume from that project and step according to the Resume Instructions in the plan. +2. **Iterate sequentially — one project at a time.** Using the migration order from the master plan, run the **full per-project lifecycle** for each project before moving to the next: + a. **Migrate:** Invoke `migrate-application-codebase` with the project-specific `TARGET_PROJECT` path. + b. **Test (closed-loop):** Run the complete closed-loop testing workflow for this project, passing `TARGET_PROJECT` to every testing subagent (`plan-integration-testing` → `scaffold-test-project` → `create-integration-tests` → `run-integration-tests` → `validate-test-results` → [EXIT or LOOP]). See Closed-Loop Integration Testing below. + c. **Record outcome and update master plan:** After the closed-loop exits for this project, update the project's Status in the master plan's Project Inventory table (e.g., `PENDING` → `COMPLETED` or `TEST_BLOCKED`). Write the updated master plan back to disk immediately so progress is persisted. +3. **Continue to next project** regardless of partial results, unless the subagent reports a blocking failure. +4. **Aggregate results.** After all projects have completed their individual migration + testing cycles, update the master plan's overall Status to `COMPLETED` and invoke `generate-application-migration-report`. + +### Master Plan Maintenance + +- **After `migrate-application-codebase` completes** for a project: update its Status from `PENDING` to `MIGRATED` (or `MIGRATING` if interrupted). 
+- **After closed-loop testing exits** for a project: update its Status to `TEST_PASSED`, `TEST_BLOCKED`, or `COMPLETED` as appropriate. +- **On BLOCKED:** update the project's Status to `TEST_BLOCKED` and record the blocking issue in the Notes column. The master plan remains the resume point for the next session. +- **Always write the updated master plan to disk immediately** after any status change. Do not defer writes. + +## Closed-Loop Integration Testing + +The agent supports an automated closed-loop workflow for integration testing. The closed-loop **targets one project at a time** — when multiple projects exist in a solution, the agent runs a complete closed-loop cycle for each project sequentially before moving to the next. + +``` +plan → scaffold project → create tests → run tests → validate results → [EXIT or LOOP] + ↑ │ + └──── fix issues ←── bug reports ←─┘ +``` + +All testing subagents receive a `TARGET_PROJECT` parameter in their handoff payload to scope their work to the specific project under test. + +- **EXIT: SUCCESS** — All tests pass, skill checklist complete → generates final migration report +- **EXIT: CONDITIONAL** — >90% pass with minor gaps → documents known issues, generates report +- **LOOP: RETRY** — <90% pass or critical failures → creates bug reports → fix → re-run +- **BLOCKED** — Infrastructure issues → halts and requests user intervention + +For the full flow diagram, decision logic, and loop control rules, read `skills/o2p-dbmigration/references/closed-loop-testing-workflow.md` and follow it throughout the test validation cycle. + +## Handoff Payload Format + +When invoking a subagent, pass a structured payload containing only the fields relevant to that task. Do not dump the full state. + +``` +SOLUTION_ROOT: +TASK: +GOAL: +TARGET_PROJECT: +INPUTS: + : + ... 
+PRIOR_ARTIFACTS: [] +LOOP_CONTEXT (only for iteration 2+): + iteration: + state_file: {SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/.loop-state-{ProjectName}.md + relevant_references: [] + failed_tests: [] +``` + +- **TARGET_PROJECT**: required for `migrate-application-codebase` and all testing subagents (`plan-integration-testing`, `scaffold-test-project`, `create-integration-tests`, `run-integration-tests`, `validate-test-results`, `create-bug-reports`). Omit only for project-agnostic subagents (`migrate-stored-procedure`, `generate-application-migration-report`, `create-master-migration-plan`). +- **SOLUTION_FILE_PATH**: optional for `create-master-migration-plan`. If omitted, the subagent discovers the `.sln` file automatically. +- **INPUTS**: only include what the subagent needs (e.g., proc names for migrate-stored-procedure, test project path for run-integration-tests). +- **PRIOR_ARTIFACTS**: reference output files from earlier subagents so the current subagent can read them without searching. +- **LOOP_CONTEXT**: omit entirely on the first iteration. On iteration 2+, include so the subagent can focus on unresolved issues. + +## State Checklist + +Maintain and update as you orchestrate: + +``` +- Goal: +- Inputs gathered: +- Master migration plan: +- Projects to migrate: +- Prerequisites: +- Tasks completed: +- Migration progress (if multi-project): (sync with master plan on disk) +- Pending tasks: +- Blocking items: +- Next action: +- Loop state (if in test validation loop): +``` + +Use the master plan file as the authoritative source for project status. The inline state checklist is a convenience summary; when they conflict, the master plan on disk wins. + +## Conventions + +- `{SOLUTION_ROOT}` refers to the VS Code workspace root folder. Resolve it to the actual workspace path before the first handoff and pass it to every subagent invocation so output paths are unambiguous. +- Use one subagent per call; do not mix instructions across subagents. 
+- Be concise and action-oriented; avoid restating large instructions. +- Ask only for missing prerequisites; do not re-ask known info. + +## User Help and Support + +- Provide Oracle and Postgres DDL scripts under `{SOLUTION_ROOT}/.github/o2p-dbmigration/DDL/` so subagents have necessary context. +- The `o2p-dbmigration` skill (under `skills/o2p-dbmigration/`) provides validation checklists, reference insights for Oracle→Postgres migration patterns, and all subagent prompt files. diff --git a/docs/README.agents.md b/docs/README.agents.md index 816ac5237..02b14c9b5 100644 --- a/docs/README.agents.md +++ b/docs/README.agents.md @@ -112,6 +112,7 @@ Custom agents for GitHub Copilot, making it easy for users and organizations to | [Neon Migration Specialist](../agents/neon-migration-specialist.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fneon-migration-specialist.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fneon-migration-specialist.agent.md) | Safe Postgres migrations with zero-downtime using Neon's branching workflow. Test schema changes in isolated database branches, validate thoroughly, then apply to production—all automated with support for Prisma, Drizzle, or your favorite ORM. | | | [Neon Performance Analyzer](../agents/neon-optimization-analyzer.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fneon-optimization-analyzer.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fneon-optimization-analyzer.agent.md) | Identify and fix slow Postgres queries automatically using Neon's branching workflow. Analyzes execution plans, tests optimizations in isolated database branches, and provides clear before/after performance metrics with actionable code fixes. | | | [Next.js Expert](../agents/expert-nextjs-developer.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fexpert-nextjs-developer.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fexpert-nextjs-developer.agent.md) | Expert Next.js 16 developer specializing in App Router, Server Components, Cache Components, Turbopack, and modern React patterns with TypeScript | | +| [Oracle-to-PostgreSQL DB Migration Expert](../agents/o2p-dbmigration-expert.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fo2p-dbmigration-expert.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fo2p-dbmigration-expert.agent.md) | Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions. Discovers migration-eligible projects, produces a persistent master plan for cross-session tracking, migrates application codebases and stored procedures, runs closed-loop integration testing, and generates migration reports. | | | [Octopus Release Notes With Mcp](../agents/octopus-deploy-release-notes-mcp.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Foctopus-deploy-release-notes-mcp.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Foctopus-deploy-release-notes-mcp.agent.md) | Generate release notes for a release in Octopus Deploy. The tools for this MCP server provide access to the Octopus Deploy APIs. | octopus
[![Install MCP](https://img.shields.io/badge/Install-VS_Code-0098FF?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-vscode?name=octopus&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%2540octopusdeploy%252Fmcp-server%22%5D%2C%22env%22%3A%7B%7D%7D)
[![Install MCP](https://img.shields.io/badge/Install-VS_Code_Insiders-24bfa5?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-vscodeinsiders?name=octopus&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%2540octopusdeploy%252Fmcp-server%22%5D%2C%22env%22%3A%7B%7D%7D)
[![Install MCP](https://img.shields.io/badge/Install-Visual_Studio-C16FDE?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-visualstudio/mcp-install?%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%2540octopusdeploy%252Fmcp-server%22%5D%2C%22env%22%3A%7B%7D%7D) | | [OpenAPI to Application Generator](../agents/openapi-to-application.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fopenapi-to-application.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fopenapi-to-application.agent.md) | Expert assistant for generating working applications from OpenAPI specifications | | | [PagerDuty Incident Responder](../agents/pagerduty-incident-responder.agent.md)
[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fpagerduty-incident-responder.agent.md)
[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fpagerduty-incident-responder.agent.md) | Responds to PagerDuty incidents by analyzing incident context, identifying recent code changes, and suggesting fixes via GitHub PRs. | [pagerduty](https://github.com/mcp/io.github.PagerDuty/pagerduty-mcp)
[![Install MCP](https://img.shields.io/badge/Install-VS_Code-0098FF?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-vscode?name=pagerduty&config=%7B%22url%22%3A%22https%3A%2F%2Fmcp.pagerduty.com%2Fmcp%22%2C%22headers%22%3A%7B%7D%7D)
[![Install MCP](https://img.shields.io/badge/Install-VS_Code_Insiders-24bfa5?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-vscodeinsiders?name=pagerduty&config=%7B%22url%22%3A%22https%3A%2F%2Fmcp.pagerduty.com%2Fmcp%22%2C%22headers%22%3A%7B%7D%7D)
[![Install MCP](https://img.shields.io/badge/Install-Visual_Studio-C16FDE?style=flat-square)](https://aka.ms/awesome-copilot/install/mcp-visualstudio/mcp-install?%7B%22url%22%3A%22https%3A%2F%2Fmcp.pagerduty.com%2Fmcp%22%2C%22headers%22%3A%7B%7D%7D) | diff --git a/docs/README.plugins.md b/docs/README.plugins.md index 6f679d2da..feefc155f 100644 --- a/docs/README.plugins.md +++ b/docs/README.plugins.md @@ -35,6 +35,7 @@ Curated plugins of related prompts, agents, and skills organized around specific | [java-mcp-development](../plugins/java-mcp-development/README.md) | Complete toolkit for building Model Context Protocol servers in Java using the official MCP Java SDK with reactive streams and Spring Boot integration. | 2 items | java, mcp, model-context-protocol, server-development, sdk, reactive-streams, spring-boot, reactor | | [kotlin-mcp-development](../plugins/kotlin-mcp-development/README.md) | Complete toolkit for building Model Context Protocol (MCP) servers in Kotlin using the official io.modelcontextprotocol:kotlin-sdk library. Includes instructions for best practices, a prompt for generating servers, and an expert chat mode for guidance. | 2 items | kotlin, mcp, model-context-protocol, kotlin-multiplatform, server-development, ktor | | [mcp-m365-copilot](../plugins/mcp-m365-copilot/README.md) | Comprehensive collection for building declarative agents with Model Context Protocol integration for Microsoft 365 Copilot | 4 items | mcp, m365-copilot, declarative-agents, api-plugins, model-context-protocol, adaptive-cards | +| [o2p-dbmigration](../plugins/o2p-dbmigration/README.md) | Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions with comprehensive migration planning, code transformation, integration testing, and reporting capabilities. 
| 2 items | oracle, postgresql, database-migration, dotnet, sql, migration, integration-testing, stored-procedures | | [openapi-to-application-csharp-dotnet](../plugins/openapi-to-application-csharp-dotnet/README.md) | Generate production-ready .NET applications from OpenAPI specifications. Includes ASP.NET Core project scaffolding, controller generation, entity framework integration, and C# best practices. | 2 items | openapi, code-generation, api, csharp, dotnet, aspnet | | [openapi-to-application-go](../plugins/openapi-to-application-go/README.md) | Generate production-ready Go applications from OpenAPI specifications. Includes project scaffolding, handler generation, middleware setup, and Go best practices for REST APIs. | 2 items | openapi, code-generation, api, go, golang | | [openapi-to-application-java-spring-boot](../plugins/openapi-to-application-java-spring-boot/README.md) | Generate production-ready Spring Boot applications from OpenAPI specifications. Includes project scaffolding, REST controller generation, service layer organization, and Spring Boot best practices. | 2 items | openapi, code-generation, api, java, spring-boot | diff --git a/docs/README.skills.md b/docs/README.skills.md index c7ddb1114..98fb49ffe 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -56,6 +56,7 @@ Skills differ from other primitives by supporting bundled assets (scripts, code | [microsoft-skill-creator](../skills/microsoft-skill-creator/SKILL.md) | Create agent skills for Microsoft technologies using Learn MCP tools. Use when users want to create a skill that teaches agents about any Microsoft technology, library, framework, or service (Azure, .NET, M365, VS Code, Bicep, etc.). Investigates topics deeply, then generates a hybrid skill storing essential knowledge locally while enabling dynamic deeper investigation. 
| `references/skill-templates.md` | | [nano-banana-pro-openrouter](../skills/nano-banana-pro-openrouter/SKILL.md) | Generate or edit images via OpenRouter with the Gemini 3 Pro Image model. Use for prompt-only image generation, image edits, and multi-image compositing; supports 1K/2K/4K output. | `assets/SYSTEM_TEMPLATE`
`scripts/generate_image.py` | | [nuget-manager](../skills/nuget-manager/SKILL.md) | Manage NuGet packages in .NET projects/solutions. Use this skill when adding, removing, or updating NuGet package versions. It enforces using `dotnet` CLI for package management and provides strict procedures for direct file edits only when updating versions. | None | +| [o2p-dbmigration](../skills/o2p-dbmigration/SKILL.md) | Validates PostgreSQL migration artifacts and integration tests, making sure every reference insight is surfaced before agent workflows sign off. Use when proving migration or integration testing work and confirming the repository references/insights are obeyed. | `prompts/create-bug-reports.prompt.md`
`prompts/create-integration-tests.prompt.md`
`prompts/create-master-migration-plan.prompt.md`
`prompts/generate-application-migration-report.prompt.md`
`prompts/migrate-application-codebase.prompt.md`
`prompts/migrate-stored-procedure.prompt.md`
`prompts/plan-integration-testing.prompt.md`
`prompts/run-integration-tests.prompt.md`
`prompts/scaffold-test-project.prompt.md`
`prompts/validate-test-results.prompt.md`
`references/REFERENCE.md`
`references/closed-loop-testing-workflow.md`
`references/empty-strings-handling.md`
`references/no-data-found-exceptions.md`
`references/oracle-parentheses-from-clause.md`
`references/oracle-to-postgres-sorting.md`
`references/oracle-to-postgres-to-char-numeric.md`
`references/oracle-to-postgres-type-coercion.md`
`references/postgres-concurrent-transactions.md`
`references/postgres-refcursor-handling.md` | | [pdftk-server](../skills/pdftk-server/SKILL.md) | Skill for using the command-line tool pdftk (PDFtk Server) for working with PDF files. Use when asked to merge PDFs, split PDFs, rotate pages, encrypt or decrypt PDFs, fill PDF forms, apply watermarks, stamp overlays, extract metadata, burst documents into pages, repair corrupted PDFs, attach or extract files, or perform any PDF manipulation from the command line. | `references/download.md`
`references/pdftk-cli-examples.md`
`references/pdftk-man-page.md`
`references/pdftk-server-license.md`
`references/third-party-materials.md` | | [penpot-uiux-design](../skills/penpot-uiux-design/SKILL.md) | Comprehensive guide for creating professional UI/UX designs in Penpot using MCP tools. Use this skill when: (1) Creating new UI/UX designs for web, mobile, or desktop applications, (2) Building design systems with components and tokens, (3) Designing dashboards, forms, navigation, or landing pages, (4) Applying accessibility standards and best practices, (5) Following platform guidelines (iOS, Android, Material Design), (6) Reviewing or improving existing Penpot designs for usability. Triggers: "design a UI", "create interface", "build layout", "design dashboard", "create form", "design landing page", "make it accessible", "design system", "component library". | `references/accessibility.md`
`references/component-patterns.md`
`references/platform-guidelines.md`
`references/setup-troubleshooting.md` | | [plantuml-ascii](../skills/plantuml-ascii/SKILL.md) | Generate ASCII art diagrams using PlantUML text mode. Use when user asks to create ASCII diagrams, text-based diagrams, terminal-friendly diagrams, or mentions plantuml ascii, text diagram, ascii art diagram. Supports: Converting PlantUML diagrams to ASCII art, Creating sequence diagrams, class diagrams, flowcharts in ASCII format, Generating Unicode-enhanced ASCII art with -utxt flag | None | diff --git a/plugins/o2p-dbmigration/.github/plugin/plugin.json b/plugins/o2p-dbmigration/.github/plugin/plugin.json new file mode 100644 index 000000000..540ce10c7 --- /dev/null +++ b/plugins/o2p-dbmigration/.github/plugin/plugin.json @@ -0,0 +1,26 @@ +{ + "name": "o2p-dbmigration", + "description": "Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions with comprehensive migration planning, code transformation, integration testing, and reporting capabilities.", + "version": "1.0.0", + "author": { + "name": "Awesome Copilot Community" + }, + "repository": "https://github.com/github/awesome-copilot", + "license": "MIT", + "keywords": [ + "oracle", + "postgresql", + "database-migration", + "dotnet", + "sql", + "migration", + "integration-testing", + "stored-procedures" + ], + "agents": [ + "./agents/o2p-dbmigration-expert.md" + ], + "skills": [ + "./skills/o2p-dbmigration/" + ] +} diff --git a/plugins/o2p-dbmigration/README.md b/plugins/o2p-dbmigration/README.md new file mode 100644 index 000000000..b642afe75 --- /dev/null +++ b/plugins/o2p-dbmigration/README.md @@ -0,0 +1,117 @@ +# Oracle-to-PostgreSQL Database Migration Plugin + +Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions with comprehensive migration planning, code transformation, integration testing, and reporting capabilities. 
+ +## Installation + +```bash +# Using Copilot CLI +copilot plugin install o2p-dbmigration@awesome-copilot +``` + +## What's Included + +### Agents + +| Agent | Description | +|-------|-------------| +| `o2p-dbmigration-expert` | Oracle-to-PostgreSQL migration orchestrator for multi-project .NET solutions. Discovers migration-eligible projects, produces a persistent master plan for cross-session tracking, migrates application codebases and stored procedures, runs closed-loop integration testing, and generates migration reports. | + +### Skills + +| Skill | Description | +|-------|-------------| +| `o2p-dbmigration` | Validates PostgreSQL migration artifacts and integration tests, making sure every reference insight is surfaced before agent workflows sign off. Use when proving migration or integration testing work and confirming the repository references/insights are obeyed. | + +## Features + +### Multi-Project Application Migration + +The agent handles multiple application projects sequentially, tracking progress across sessions using a persistent Master Migration Plan: + +1. Discovers all migration-eligible projects in the solution +2. For each project, performs: + - Application codebase migration + - Closed-loop integration testing workflow + - Status tracking and progress updates +3. 
Generates comprehensive migration reports + +### Closed-Loop Integration Testing + +Automated integration testing workflow for each migrated project: + +``` +plan → scaffold → create tests → run → validate → [EXIT or LOOP] + ↑ │ + └──── fix ←── bugs ←──┘ +``` + +### Migration Components + +- **Application Codebase Migration**: Migrates .NET code from Oracle to PostgreSQL +- **Stored Procedure Migration**: Converts Oracle stored procedures, functions, and packages to PostgreSQL +- **Integration Testing**: Creates and runs comprehensive integration tests +- **Bug Tracking**: Generates bug reports for failed tests and tracks remediation +- **Migration Reports**: Produces detailed migration outcome documentation + +## Prerequisites + +- Visual Studio Code with GitHub Copilot +- PostgreSQL Extension (`ms-ossdata.vscode-pgsql`) +- .NET solution with Oracle dependencies to migrate + +## Directory Structure + +The agent expects and creates the following structure in your solution: + +``` +{SOLUTION_ROOT}/ +└── .github/ + └── o2p-dbmigration/ + ├── Reports/ + │ ├── Master Migration Plan.md + │ ├── Integration Testing Plan.md + │ ├── Validation Report.md + │ ├── Application Migration Report.md + │ ├── BUG_REPORT_*.md + │ └── TestResults/ + ├── DDL/ + │ ├── Oracle/ # Oracle DDL scripts (pre-migration) + │ └── Postgres/ # PostgreSQL DDL scripts (post-migration) +``` + +## Usage + +1. **Start Migration**: Invoke the agent with your goal (e.g., "Migrate my solution from Oracle to PostgreSQL") +2. **Master Plan**: The agent creates a master migration plan tracking all projects +3. **Per-Project Migration**: Each project is migrated and tested sequentially +4. 
**Review Reports**: Check the generated reports for migration status and any issues + +## Key Capabilities + +- **Cross-Session Tracking**: Resume interrupted migrations from where you left off +- **Automated Testing**: Comprehensive integration tests generated and executed automatically +- **Error Handling**: Detailed bug reports with remediation tracking +- **DDL Management**: Preserves both Oracle and PostgreSQL DDL for reference +- **Minimal Changes**: Keeps changes minimal, preserving application logic where possible + +## Reference Materials + +The included skill provides reference guides for common Oracle→PostgreSQL migration patterns: + +- Empty strings handling ('' vs NULL) +- NO_DATA_FOUND exceptions +- Oracle parentheses in FROM clauses +- Sorting and collation differences +- TO_CHAR numeric conversions +- Type coercion rules +- REFCURSOR handling +- Concurrent transactions + +## Source + +This plugin is part of [Awesome Copilot](https://github.com/github/awesome-copilot), a community-driven collection of GitHub Copilot extensions. + +## License + +MIT diff --git a/skills/o2p-dbmigration/SKILL.md b/skills/o2p-dbmigration/SKILL.md new file mode 100644 index 000000000..3516e5d91 --- /dev/null +++ b/skills/o2p-dbmigration/SKILL.md @@ -0,0 +1,63 @@ +--- +name: o2p-dbmigration +description: 'Validates PostgreSQL migration artifacts and integration tests, making sure every reference insight is surfaced before agent workflows sign off. Use when proving migration or integration testing work and confirming the repository references/insights are obeyed.' +--- + +# o2p-dbmigration Skill + +Use this skill whenever you verify code artifacts that migrated from Oracle, build the companion integration tests, or gate an agent workflow that depends on database changes. It codifies the expectations for validation, testing, and documentation for the `o2p-dbmigration` workload. 
+ +## When to Use This Skill + +- Before merging any migration script, procedural change, or refcursor conversion to ensure the migration narrative is complete. +- When the agent creates or updates integration tests tied to migration artifacts, so the new tests cover known Oracle/PostgreSQL differences. +- Whenever you need to prove the `references` insights have been read, discussed, and validated across related commits or workflow steps. + +## Prerequisites + +- Access to the database object DDL artifacts and any integration test projects that exercise them. +- A checklist of affected modules (procedures, packages, triggers, or refcursor clients) for the change under review. +- The `references/*.md` insights nearby so you can cross-check their guidance while crafting migration fixes or tests. + +## Step-by-Step Workflows + +1. **Map the covered artifact.** Identify the migrated object (e.g., procedure, trigger, query) and summarize the change set you expect the agent to verify. +2. **Cross-check every insight.** For each file in the `references` folder below, confirm the specific behavior or test requirement is acknowledged and addressed: + - `empty-strings-handling.md`: Ensure logic or tests treat `''` differently from `NULL`, updating stored procedures, applications, and assertions accordingly. + - `no-data-found-exceptions.md`: Validate that any `SELECT INTO` path now raises `IF NOT FOUND THEN RAISE EXCEPTION` and integration tests replay invalid parameters to catch the exception. + - `oracle-parentheses-from-clause.md`: Remove unnecessary parentheses around table names in FROM clauses (e.g., `FROM(TABLE_NAME)` → `FROM TABLE_NAME`) to avoid PostgreSQL syntax errors; verify all affected queries in stored procedures and application code. + - `oracle-to-postgres-sorting.md`: Confirm ordering logic uses `COLLATE "C"` or wrapped DISTINCT queries so sorting results match Oracle expectations, and regression tests cover the affected queries.
+ - `oracle-to-postgres-to-char-numeric.md`: Replace `TO_CHAR(numeric)` calls without format strings with `CAST(numeric AS TEXT)` or add explicit format masks; verify all numeric-to-string conversions in SQL and application code. + - `oracle-to-postgres-type-coercion.md`: Verify comparison literals and parameters align with PostgreSQL's stricter types (cast or use string literals when comparing to VARCHAR columns) and add tests that exercise clauses with previously implicit conversions. + - `postgres-refcursor-handling.md`: Ensure every refcursor consumer unwraps the cursor before reading rows (execute → fetch) and that helper utilities or integration tests follow the pattern. + - `postgres-concurrent-transactions.md`: Verify that no code path executes a second command while a DataReader is still open on the same connection; materialize results with `.ToList()` or use separate connections, and test iterative data access patterns that trigger concurrent operations. +3. **Build integration tests.** Create or update integration test cases that exercise both the happy path and the failure scenarios highlighted in the insights, including exceptions, sorting validation, and refcursor consumption. +4. **Document the verification.** Record the references covered, tests added, and any decisions about preserving Oracle behavior (e.g., null handling or type coercion) so downstream agents or reviewers can trace the coverage. +5. **Gate the workflow.** Return a checklist asserting each insight was addressed, all migration scripts run, and integration tests execute successfully before closing the skill run. 
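+
+To make the cross-checks concrete, a few of the insights can be sketched in SQL. All names below (`customers`, `invoices`, `get_orders_cursor`) are hypothetical, chosen for illustration only:
+
+```sql
+-- no-data-found-exceptions.md: Oracle's SELECT INTO raises NO_DATA_FOUND on
+-- zero rows; PL/pgSQL must raise the missing-row case explicitly.
+CREATE OR REPLACE FUNCTION get_customer_name(p_id INTEGER)
+RETURNS TEXT
+LANGUAGE plpgsql
+AS $$
+DECLARE
+    v_name TEXT;
+BEGIN
+    SELECT name INTO v_name FROM customers WHERE customer_id = p_id;
+    IF NOT FOUND THEN
+        RAISE EXCEPTION 'no data found for customer %', p_id;
+    END IF;
+    RETURN v_name;
+END;
+$$;
+
+-- oracle-to-postgres-sorting.md: byte-wise collation approximates Oracle's
+-- default binary sort order.
+SELECT name FROM customers ORDER BY name COLLATE "C";
+
+-- oracle-to-postgres-to-char-numeric.md: TO_CHAR without a format mask is not
+-- portable; prefer an explicit cast (or add an explicit format string).
+SELECT CAST(amount AS TEXT) FROM invoices;
+
+-- postgres-refcursor-handling.md: execute, then fetch, inside one transaction.
+BEGIN;
+SELECT get_orders_cursor('order_cur');
+FETCH ALL IN "order_cur";
+COMMIT;
+```
+
+These snippets are sketches under the assumptions above, not drop-in fixes; the real remediation must target the actual objects named in the change set.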
+ +## Bundled Prompts + +The `prompts/` folder contains task-specific subagent prompts used by the `o2p-dbmigration-expert` agent: + +| Prompt | Purpose | +|--------|---------| +| `create-master-migration-plan` | Discover projects, assess eligibility, produce master tracking plan | +| `migrate-application-codebase` | Migrate a single .NET project from Oracle to PostgreSQL | +| `migrate-stored-procedure` | Convert Oracle stored procedures/functions to PostgreSQL | +| `plan-integration-testing` | Create a testing plan for a project's data access layer | +| `scaffold-test-project` | Create the xUnit test project infrastructure | +| `create-integration-tests` | Generate test cases for migration validation | +| `run-integration-tests` | Execute xUnit tests and capture structured results | +| `validate-test-results` | Analyze results against the skill checklist and decide next step | +| `create-bug-reports` | Draft bug reports for migration defects | +| `generate-application-migration-report` | Aggregate outcomes into the final migration report | + +The `references/` folder also contains `closed-loop-testing-workflow.md`, which defines the flow diagram, decision logic (EXIT/LOOP/BLOCKED), and loop control rules for the integration testing cycle. + +## Verification Checklist + +- [ ] Migration artifact review documented with affected components. +- [ ] Each `references/*.md` insight acknowledged and steps taken (empty string handling, no-data exceptions, parentheses in FROM clauses, sorting, TO_CHAR numeric conversions, type coercion, refcursor handling, concurrent transaction handling). +- [ ] Integration tests cover the behaviors mentioned in the insights and explicitly assert the new PostgreSQL semantics. +- [ ] Test suite runs cleanly and returns deterministic results for the covered cases. +- [ ] Notes or comments recorded in the PR or workflow log describing how each insight influenced the fix. 
diff --git a/skills/o2p-dbmigration/prompts/create-bug-reports.prompt.md b/skills/o2p-dbmigration/prompts/create-bug-reports.prompt.md new file mode 100644 index 000000000..7b83a6723 --- /dev/null +++ b/skills/o2p-dbmigration/prompts/create-bug-reports.prompt.md @@ -0,0 +1,59 @@ +--- +name: create-bug-reports +agent: 'agent' +description: 'Create clear, user-friendly bug reports for Oracle-to-Postgres application migration issues.' +model: Claude Haiku 4.5 (copilot) +tools: [vscode/askQuestions, read, edit, search] +--- +# Create Bug Reports for Oracle to Postgres Migration + +Generate a concise, easy-to-understand bug report for the defect discovered while validating the application migration from Oracle to Postgres. This prompt targets a **single project** identified by `TARGET_PROJECT`. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `TARGET_PROJECT` | Yes | Absolute path to the single application project whose failures are being reported (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). | + +INSTRUCTIONS: +- Treat Oracle as the source of truth; capture expected Oracle behavior versus observed Postgres behavior. +- Keep wording user-friendly: plain language, short sentences, and clear next actions. +- Document when client code changes were made or are being proposed; emphasize that changes should be avoided unless required for correct behavior. +- Always include: summary, impacted feature/flow, severity, environment (Oracle/Postgres, build, branch), prerequisites/seed data, exact repro steps, expected vs actual results, scope of impact, and workaround (if any). +- Attach supporting evidence: minimal SQL excerpts, logs, and screenshots; avoid sensitive data and keep snippets reproducible. +- Note data-specific factors (collation, null handling, sequence values, time zones) that might differ between Oracle and Postgres. 
+- Recommend a validation step after fixes (re-run repro on both DBs, compare row/column outputs, and check error handling parity). + +OUTPUT LOCATION: +- Save each bug report under `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/` using a clear, human-readable filename (e.g., `Bug - {area} - {short-title}.md`). + +OUTPUT INSTRUCTIONS: +Bug Report Output Definition (Template) +• Filename format: .github/o2p-dbmigration/Reports/BUG_REPORT_<short-title>.md +• Status line: Status: [✅ RESOLVED | ⛔ UNRESOLVED | ⏳ IN PROGRESS] +• Component: <controller/method> +• Test(s): <test name(s)> +• Severity: <level> + +Sections (markdown headings): +1. # Bug Report: <title> — concise, specific. +2. **Status:** <status> +**Component:** <controller/method> +**Test:** <test(s)> +**Severity:** <level> +3. --- +4. ## Problem — observable incorrect behavior and expected vs actual. +5. ## Scenario — ordered steps to reproduce. +6. ## Root Cause — minimal, concrete technical cause. +7. ## Solution — changes made or required (be explicit about data access/tracking flags). +8. ## Validation — bullet list of passing tests or manual checks. +9. ## Files Modified — bullet list with relative paths and short purpose. +10. ## Notes / Next Steps — follow-ups, environment caveats, or risks. + +Style rules: +• Keep wording concise and factual. +• Use present or past tense consistently. +• Prefer bullets/numbered lists for steps and validation. +• Call out data layer nuances (tracking, padding, constraints) explicitly. +• Keep to existing runtime/language versions; avoid speculative fixes.
\ No newline at end of file diff --git a/skills/o2p-dbmigration/prompts/create-integration-tests.prompt.md b/skills/o2p-dbmigration/prompts/create-integration-tests.prompt.md new file mode 100644 index 000000000..5476b1e03 --- /dev/null +++ b/skills/o2p-dbmigration/prompts/create-integration-tests.prompt.md @@ -0,0 +1,50 @@ +--- +name: create-integration-tests +agent: 'agent' +description: 'Create integration test cases for code artifacts identified by the user in context of an application database migration from Oracle to Postgres. Assumes the test project already exists (scaffolded by scaffoldTestProject).' +model: Claude Sonnet 4.6 (copilot) +tools: [vscode/askQuestions, execute, read, edit, search, todo] +--- +# Create Integration Test Cases for Database Migration Validation + +Create integration test cases for the class/method provided by the user. The test project infrastructure (project file, base test class, transaction management, seed manager) has already been scaffolded by `scaffoldTestProject` — do not recreate it. This prompt targets a **single project** identified by `TARGET_PROJECT`. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `TARGET_PROJECT` | Yes | Absolute path to the single application project whose code artifacts are under test (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). | + +PREREQUISITES: +- The test project must already exist and compile. If it does not, stop and report this to the router. +- Read the existing base test class and seed manager conventions before writing any tests so that new test classes follow established patterns. + +GENERAL INSTRUCTIONS: +- Treat Oracle as the golden behavior source. +- **Scope all test creation to `TARGET_PROJECT` only.** Only generate tests for the data access artifacts within that project; do not create tests for other projects in the solution. 
+- Ensure that the tests are able to validate the behavior of the data access layer, whether running against Oracle or Postgres databases. +- Focus on capturing expected outputs, side-effects, and error handling to ensure consistency across both database systems. +- Keep assertions DB-agnostic: assert logical outputs (rows, columns, counts, error types), not platform-specific messages. +- Ensure assertions are deterministic by seeding test data as required. +- Only create integration tests and seed data against Oracle. Once complete, the user will copy the files to the Postgres test project and modify the connection strings. + +INSTRUCTIONS FOR TEST CASE CREATION: +- Inherit from the base test class established by the scaffolded project to get transaction create/rollback behavior automatically. +- Ensure tests are deterministic by asserting for specific values where possible. +- Avoid testing code paths that do not exist or asserting behavior that cannot occur. +- Avoid redundancy in test assertions across tests that target the same method. +- Do not use assertions that pass when a value is null or empty; assert against specific expected values (e.g., assert for null xor assert for empty). +- Plan for a second review of the created tests to ensure assertions against non-null values are deterministic against the seeded data. + +LOOP ITERATION BEHAVIOR: +- On **first invocation**: generate the full set of test cases and seed data based on the integration testing plan. +- On **iteration 2+** (when `LOOP_CONTEXT` is provided): focus only on modifying or adding test cases to address the `failed_tests` listed in the loop context. Do not rewrite passing tests. Consult any bug reports referenced in `PRIOR_ARTIFACTS`. + +INSTRUCTIONS FOR SEED DATA: +- Follow the seed file location and naming conventions established by the scaffolded project. +- Do not commit seed data because tests are isolated within transactions and rolled back after each test.
+- Ensure that changes to seed data do not conflict with other tests. +- Ensure seed data is loaded and verified before running tests. +- Priority should be given to reusing existing seed files. +- Avoid truncate table statements because we want to keep existing database data intact. diff --git a/skills/o2p-dbmigration/prompts/create-master-migration-plan.prompt.md b/skills/o2p-dbmigration/prompts/create-master-migration-plan.prompt.md new file mode 100644 index 000000000..a1e3f7781 --- /dev/null +++ b/skills/o2p-dbmigration/prompts/create-master-migration-plan.prompt.md @@ -0,0 +1,162 @@ +--- +name: create-master-migration-plan +agent: 'agent' +description: 'Discovers all projects in a solution, determines Oracle→PostgreSQL migration eligibility, detects prior progress, and produces a persistent master migration plan that enables cross-session continuity.' +model: Claude Opus 4.6 (copilot) +tools: [vscode/askQuestions, read, search, todo] +--- +# Create Master Migration Plan + +Enumerate all projects in a solution, assess which require Oracle→PostgreSQL migration, detect any prior migration progress, and produce a persistent master migration plan. This plan is the single source of truth for multi-project migration orchestration and is designed to survive token-limit boundaries — any fresh agent session can read it and resume where the previous session left off. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `SOLUTION_FILE_PATH` | No | Absolute path to the `.sln` file. If omitted, discover it by searching `SOLUTION_ROOT` for `*.sln` files. | + +--- + +## Phase 1 — Discover Projects + +1. **Locate the solution file.** If `SOLUTION_FILE_PATH` was provided, use it. Otherwise, search `SOLUTION_ROOT` for `.sln` files. If multiple are found, ask the user which solution to target. +2. 
**Parse the solution file.** Extract all project references (`.csproj` paths) from the solution. Record the full list. +3. **Categorize each project.** For every project, determine: + - **Project name** (folder name and assembly name). + - **Project path** (absolute). + - **Project type** (class library, web API, console, test project, etc.) — infer from SDK, output type, or naming conventions. + +--- + +## Phase 2 — Assess Migration Eligibility + +For each non-test project, analyze whether it requires Oracle→PostgreSQL migration: + +1. **Scan for Oracle indicators:** + - NuGet references to `Oracle.ManagedDataAccess`, `Oracle.EntityFrameworkCore`, or similar Oracle packages (check `.csproj` and any `packages.config`). + - Connection string entries referencing Oracle (in `appsettings.json`, `web.config`, `app.config`, or similar configuration files). + - Code-level usage of `OracleConnection`, `OracleCommand`, `OracleDataReader`, or Oracle-specific SQL syntax patterns. + - References to stored procedures or packages known to be Oracle-specific (cross-reference with DDL under `.github/o2p-dbmigration/DDL/Oracle/` if present). + +2. **Classify each project:** + - **MIGRATE** — Has Oracle database interactions that must be converted. + - **SKIP** — No Oracle indicators found (e.g., pure UI project, shared utility library with no DB access). + - **ALREADY_MIGRATED** — A `.Postgres` duplicate already exists and appears to have been processed. + - **TEST_PROJECT** — Identified as a test project; will be handled by the testing workflow, not direct migration. + +3. **Confirm with the user.** Present the classified list and ask the user to confirm, adjust, or add projects before finalizing the plan. + +--- + +## Phase 3 — Detect Prior Progress + +Check for existing migration artifacts that indicate work from a previous session: + +1. **Per-project loop state files:** Look for `.github/o2p-dbmigration/Reports/.loop-state-{ProjectName}.md` for each MIGRATE-eligible project. 
If found, read and record the iteration, decision, and test counts. +2. **Existing `.Postgres` project folders:** Check if a duplicated project already exists alongside a MIGRATE target. If so, note whether it appears to have been fully migrated (tool-generated changes present) or is a partial/empty copy. +3. **Existing reports:** Check for: + - `Integration Testing Plan.md` — indicates testing was planned. + - `Validation Report.md` — indicates testing was executed. + - `BUG_REPORT_*.md` files — indicate issues were documented. + - `Application Migration Report.md` — indicates a previous run completed or partially completed. +4. **Existing master plan:** Check if `Master Migration Plan.md` already exists. If it does, read it and compare against current solution state. If the existing plan is still valid (same projects, correct statuses), update it in place rather than overwriting. If the solution has changed (new projects added/removed), regenerate with the user's confirmation. + +--- + +## Phase 4 — Produce the Master Migration Plan + +Write the plan to: `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Master Migration Plan.md` + +Use the format defined below exactly. The router and future sessions depend on the structure being parseable. 
+ +```markdown +# Master Migration Plan + +**Solution:** {solution file name} +**Solution Root:** {SOLUTION_ROOT} +**Created:** {timestamp} +**Last Updated:** {timestamp} +**Status:** {NOT_STARTED | IN_PROGRESS | COMPLETED} + +## Solution Summary + +| Metric | Count | +|--------|-------| +| Total projects in solution | {n} | +| Projects requiring migration | {n} | +| Projects already migrated | {n} | +| Projects skipped (no Oracle usage) | {n} | +| Test projects (handled separately) | {n} | + +## Project Inventory + +| # | Project Name | Path | Classification | Status | Notes | +|---|---|---|---|---|---| +| 1 | {name} | {relative path from SOLUTION_ROOT} | MIGRATE | {see Status Values} | {any notes} | +| 2 | {name} | {relative path from SOLUTION_ROOT} | SKIP | N/A | No Oracle dependencies | +| ... | ... | ... | ... | ... | ... | + +### Status Values + +For projects classified as **MIGRATE**, the Status column tracks lifecycle progress: + +- `PENDING` — Not yet started. +- `MIGRATING` — `migrateApplicationCodebase` is in progress or was interrupted. +- `MIGRATED` — Code migration complete; testing not yet started. +- `TESTING` — Closed-loop testing in progress (see loop state file for details). +- `TEST_PASSED` — Testing exited with SUCCESS or CONDITIONAL. +- `TEST_BLOCKED` — Testing is blocked; requires user intervention. +- `COMPLETED` — Migration and testing both finished. + +## Migration Order + +Projects should be migrated in the following order (rationale included): + +1. **{ProjectName}** — {rationale, e.g., "Core data access library; other projects depend on it."} +2. **{ProjectName}** — {rationale} +3. ... + +## Prior Progress Detected + +{If no prior progress: "No prior migration artifacts found. 
This is a fresh migration."} + +{If prior progress exists, summarize per project:} + +### {ProjectName} +- **Loop state file:** {exists | not found} {if exists: iteration {n}, decision: {decision}} +- **`.Postgres` folder:** {exists | not found} {if exists: appears {complete | partial}} +- **Reports:** {list any existing reports} +- **Recommended resume point:** {e.g., "Resume from closed-loop testing iteration 2" or "Re-run migrateApplicationCodebase — previous copy appears incomplete"} + +## Resume Instructions + +To continue this migration in a fresh agent session: + +1. Read this file: `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Master Migration Plan.md` +2. Check the **Project Inventory** table for the first project with a non-terminal status (`PENDING`, `MIGRATING`, `MIGRATED`, `TESTING`, `TEST_BLOCKED`). +3. For that project: + - If `PENDING` → begin with `migrateApplicationCodebase`. + - If `MIGRATING` → check if the `.Postgres` folder exists and is complete; if partial, re-run `migrateApplicationCodebase`. + - If `MIGRATED` → begin closed-loop testing (`planIntegrationTesting` → ...). + - If `TESTING` → read the per-project loop state file (`.loop-state-{ProjectName}.md`) and resume the testing loop at the recorded iteration. + - If `TEST_BLOCKED` → present blocking issues to the user for resolution. +4. After each project reaches `COMPLETED` or `TEST_PASSED`, update this file's Project Inventory table and move to the next project. +5. When all MIGRATE projects reach a terminal status, invoke `generateApplicationMigrationReport`. +``` + +--- + +## Completion Criteria + +This subagent is complete when: +- The master migration plan file exists at the specified path. +- All projects in the solution have been discovered and classified. +- The user has confirmed the migration target list and ordering. +- Any prior progress has been detected and recorded in the plan. 
+- The plan is ready for the router to begin (or resume) the per-project migration lifecycle. + +Return to the router with: +- The path to the master migration plan file. +- The confirmed list of projects to migrate (in order). +- A summary of any prior progress detected. diff --git a/skills/o2p-dbmigration/prompts/generate-application-migration-report.prompt.md b/skills/o2p-dbmigration/prompts/generate-application-migration-report.prompt.md new file mode 100644 index 000000000..7e2d9530d --- /dev/null +++ b/skills/o2p-dbmigration/prompts/generate-application-migration-report.prompt.md @@ -0,0 +1,68 @@ +--- +name: generate-application-migration-report +agent: 'agent' +description: 'Aggregate per-project migration and testing outcomes into a final Application Migration Report, retrieving extension migration data via pgsql_migration_show_report and synthesizing it with integration testing artifacts.' +model: Claude Sonnet 4.6 (copilot) +tools: [vscode/installExtension, vscode/askQuestions, vscode/extensions, read, edit, search, ms-ossdata.vscode-pgsql/pgsql_migration_show_report] +--- +# Aggregate and Generate Application Migration Report + +You are a reporting subagent responsible for producing the final Application Migration Report after all projects have completed their migration and testing cycles. This is the **last step** of the multi-project orchestration workflow. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `PRIOR_ARTIFACTS` | Yes | List of per-project reports, validation reports, bug reports, and loop state files produced by earlier subagents. | + +--- + +## Workflow + +### Step 1 — Retrieve Extension Migration Data + +Use `#pgsql_migration_show_report` to retrieve the migration progress data captured by the `ms-ossdata.vscode-pgsql` extension during the `pgsql_migration_oracle_app` conversion runs. 
This data reflects what the extension recorded during each project's code migration phase — it does not perform migration itself. + +### Step 2 — Collect Testing and Validation Artifacts + +Read the following artifacts from `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/`: + +- **Integration Testing Plan** (`Integration Testing Plan.md`) — original test scope and coverage targets. +- **Validation Reports** (`Validation Report.md`) — per-project test validation outcomes and EXIT/LOOP/BLOCKED decisions. +- **Bug Reports** (`BUG_REPORT_*.md` or `Bug - *.md`) — documented defects found during testing. +- **Loop State Files** (`.loop-state-{ProjectName}.md`) — closed-loop iteration history and final decisions per project. +- **Test Results** (`TestResults/*.trx`) — raw test execution results (reference for pass/fail counts). + +Use the `PRIOR_ARTIFACTS` list from the handoff payload as the primary index; fall back to searching the Reports directory if the list is incomplete. If any expected artifact is missing, note the gap in the report rather than failing. + +### Step 3 — Synthesize the Report + +Produce a structured Markdown report with the following sections: + +1. **Executive Summary** — Overall migration status across all projects (success / partial / blocked). High-level statistics: total projects migrated, test pass rates, critical issues remaining. +2. **Per-Project Summary** — For each migrated project: + - Project name and path (original → `.Postgres` duplicate). + - Migration outcome (from extension report data). + - Integration testing outcome (EXIT status: SUCCESS / CONDITIONAL / BLOCKED). + - Number of closed-loop iterations. + - Open defects or known issues. +3. **Aggregated Findings** — Common patterns, recurring issues, and migration insights observed across projects (e.g., type coercion problems, refcursor handling). +4. **Known Issues and Gaps** — Unresolved defects, conditional passes with documented limitations, missing artifacts. +5. 
**Recommendations** — Next steps for addressing remaining gaps or advancing to production readiness. + +### Step 4 — Write the Report + +Store the final report at: + +``` +{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Application Migration Report.md +``` + +--- + +## Constraints + +- Do not fabricate data. If an artifact is missing or a project's results are unavailable, state that explicitly in the relevant section. +- Keep the report factual and concise. Summarize findings rather than restating raw test output verbatim. +- The `#pgsql_migration_show_report` tool surfaces data generated during the conversion process — treat it as one data source, not the sole content of the report. diff --git a/skills/o2p-dbmigration/prompts/migrate-application-codebase.prompt.md b/skills/o2p-dbmigration/prompts/migrate-application-codebase.prompt.md new file mode 100644 index 000000000..58f85bd88 --- /dev/null +++ b/skills/o2p-dbmigration/prompts/migrate-application-codebase.prompt.md @@ -0,0 +1,67 @@ +--- +name: migrate-application-codebase +agent: 'agent' +description: 'Migrates a single application project from Oracle to Postgres using the #ms-ossdata.vscode-pgsql extension. Invoked once per project by the router.' +model: Claude Sonnet 4.6 (copilot) +tools: [vscode/installExtension, vscode/askQuestions, vscode/extensions, execute, read, edit, search, ms-ossdata.vscode-pgsql/pgsql_migration_oracle_app, todo] +--- +# Migrate Application Codebase from Oracle to Postgres + +Migrate a single application project from Oracle to Postgres, preserving existing functionality and aligning database access with PostgreSQL conventions. This prompt handles **one project per invocation**; the router invokes it once for each project requiring migration. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. 
| +| `TARGET_PROJECT` | Yes | Absolute path to the application project folder to migrate (e.g., `C:/Source/MyApp/MIUS.API`). | +| `CODING_NOTES_PATH` | No | Path to coding notes from the schema migration phase (e.g., `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/migration-notes.md`). If omitted, the tool continues without this context. | +| `POSTGRES_DB_CONNECTION` | No | Connection name for the PostgreSQL database. | +| `POSTGRES_DB_NAME` | No | Name of the target PostgreSQL database. | + +--- + +## Phase 1 — Pre-Migration (agent actions) + +The agent performs these steps **before** invoking the migration tool. The goal is to create a side-by-side copy so the original project remains untouched and the migrated code lives in its own namespace. + +1. **Duplicate the project.** Copy the entire `TARGET_PROJECT` folder to a sibling folder with a `.Postgres` suffix (e.g., `MIUS.API` → `MIUS.API.Postgres`). This is the **migration target**; the original folder must not be modified. +2. **Rename the assembly and root namespace** in the duplicated project (`.csproj` `<RootNamespace>` and `<AssemblyName>`) by appending `.Postgres` so the new and old versions never collide if referenced in the same solution. +3. **Update internal namespace declarations** across all `.cs` files in the duplicate to match the new root namespace. +4. **Verify the original** is byte-identical to its state before duplication (no files changed, no files added/removed). + +> If any pre-migration step fails, stop and report the failure to the router. Do not invoke the migration tool. + +--- + +## Phase 2 — Migration (tool invocation) + +Invoke the `#pgsql_migration_oracle_app` tool against the **duplicated** project folder. Do **not** point it at the original. + +### Tool Parameters + +| Parameter | Required | Value | +|---|---|---| +| `applicationCodebaseFolder` | **Yes** | The duplicated project path from Phase 1 (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). 
| +| `codingNotesLocationPath` | No | `CODING_NOTES_PATH` from inputs, if provided. | +| `postgresDbConnection` | No | `POSTGRES_DB_CONNECTION` from inputs, if provided. | +| `postgresDbName` | No | `POSTGRES_DB_NAME` from inputs, if provided. | + +Let the tool perform its analysis and code conversion. It will ingest the codebase context on its own. Do not interfere with the tool's execution. + +--- + +## Phase 3 — Post-Migration (agent actions) + +After the tool completes, the agent verifies and documents the results. + +1. **Verify the original project is untouched.** Confirm no files in the original `TARGET_PROJECT` folder were modified or added. +2. **Validate namespace separation.** Confirm the duplicated project's assembly name and root namespace include the `.Postgres` suffix and do not clash with the original. +3. **Compile check.** If a build system is available, attempt to build the migrated project to surface any immediate compilation errors. +4. **Document the outcome.** Produce a brief summary for the router containing: + - Project migrated: `<original path>` → `<duplicated path>` + - Tool completion status (success / partial / errors) + - Any compilation errors or warnings surfaced in step 3 + - Items flagged for follow-up (e.g., manual review of specific files) + +> Return this summary to the router so it can track progress across all projects and decide on next steps (integration testing, bug reports, etc.). diff --git a/skills/o2p-dbmigration/prompts/migrate-stored-procedure.prompt.md b/skills/o2p-dbmigration/prompts/migrate-stored-procedure.prompt.md new file mode 100644 index 000000000..6f6678cec --- /dev/null +++ b/skills/o2p-dbmigration/prompts/migrate-stored-procedure.prompt.md @@ -0,0 +1,42 @@ +--- +name: migrate-stored-procedure +agent: 'agent' +description: 'Migrate stored procedures identified by the user in context of an application database migration from Oracle to Postgres.' 
+model: Claude Sonnet 4.6 (copilot)
+tools: [vscode/askQuestions, read, edit, search, todo]
+---
+# Migrate Procedures from Oracle to Postgres
+
+Migrate the user-provided stored procedure from Oracle to PostgreSQL.
+
+INSTRUCTIONS:
+- Ensure that all Oracle-specific syntax and features are appropriately translated to their PostgreSQL equivalents.
+- Maintain the original functionality and logic of the stored procedure while adapting it to fit PostgreSQL's capabilities and conventions.
+- Maintain type anchoring of input parameters (e.g., `PARAM_NAME IN table_name.column_name%TYPE`).
+- Do not use type anchoring for variables that are passed as output parameters to other procedures; use explicit types instead (e.g., `NUMERIC`, `VARCHAR`, `INTEGER`).
+- Do not change the method signatures.
+- Do not prefix object names with schema names unless a schema prefix is already present in the Oracle code.
+- Leave exception handling and rollback logic untouched.
+- Do not generate COMMENT or GRANT statements.
+- If required, or for increased clarity and efficiency, leverage PostgreSQL plugins or extensions, such as `orafce`, to replicate Oracle features.
+- Use the `COLLATE "C"` option when ordering by text fields to ensure behavior consistent with Oracle's sorting.
+- Begin every function or stored procedure migration with the following search path statement:
+```
+-- Set search_path for correct name resolution
+set SEARCH_PATH = {package_name_in_lower_case},{parent_schema_name},public;
+```
+- Replace COMMIT statements with the following snippet:
+```
+-- PostgreSQL: No explicit COMMIT needed in functions or procedures
+-- Transaction control is handled by the calling application
+-- COMMIT;
+```
+
+AUTHORITATIVE RESOURCES TO CONSULT:
+- `{SOLUTION_ROOT}/.github/o2p-dbmigration/DDL/Oracle/Procedures and Functions/*` (Oracle stored procedures pre-migration)
+- `{SOLUTION_ROOT}/.github/o2p-dbmigration/DDL/Oracle/Tables and Views/*` (Oracle constraints, indexes, table hints pre-migration)
+- `{SOLUTION_ROOT}/.github/o2p-dbmigration/DDL/Postgres/Procedures and Functions/{PACKAGE_NAME_IF_APPLICABLE}/*` (Place migrated stored procedures here)
+- `{SOLUTION_ROOT}/.github/o2p-dbmigration/DDL/Postgres/Tables and Views/*` (Postgres constraints, indexes, table hints)
+
+OUTPUT FORMAT:
+- Place each migrated stored procedure in its own file (i.e., one stored procedure per file).
\ No newline at end of file
diff --git a/skills/o2p-dbmigration/prompts/plan-integration-testing.prompt.md b/skills/o2p-dbmigration/prompts/plan-integration-testing.prompt.md
new file mode 100644
index 000000000..e61b21eb3
--- /dev/null
+++ b/skills/o2p-dbmigration/prompts/plan-integration-testing.prompt.md
@@ -0,0 +1,26 @@
+---
+name: plan-integration-testing
+agent: 'agent'
+description: 'Create an integration testing plan for code artifacts that interact with the database in the context of an application database migration from Oracle to Postgres.'
+model: Claude Opus 4.6 (copilot)
+tools: [vscode/askQuestions, read, search, todo]
+---
+# Create Integration Testing Plan for Database Migration Validation
+
+Assess which classes and methods should be tested for integration with the database before and after the migration.
This plan targets a **single project** identified by `TARGET_PROJECT`.
+
+## Expected Inputs (from router handoff payload)
+
+| Key | Required | Description |
+|---|---|---|
+| `SOLUTION_ROOT` | Yes | Resolved workspace root path. |
+| `TARGET_PROJECT` | Yes | Absolute path to the single application project to plan tests for (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). |
+
+INSTRUCTIONS:
+- Create a comprehensive and actionable plan for integration testing to ensure that the application functions correctly with the new PostgreSQL database.
+- **Scope the plan to `TARGET_PROJECT` only.** Analyze the code artifacts within that project; do not plan tests for other projects in the solution.
+- Consider only the code artifacts that interact directly with the database, such as repositories, data access objects (DAOs), and service layers that perform CRUD operations.
+- Applications targeted for migration will be copied and renamed to indicate the target database (e.g., 'MyApp.Postgres' for the Postgres version), so there is no need to plan for handling multiple database connections within the same application instance.
+
+OUTPUT:
+The plan should be written to a markdown file at this location: '{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Integration Testing Plan.md'.
\ No newline at end of file
diff --git a/skills/o2p-dbmigration/prompts/run-integration-tests.prompt.md b/skills/o2p-dbmigration/prompts/run-integration-tests.prompt.md
new file mode 100644
index 000000000..677fda946
--- /dev/null
+++ b/skills/o2p-dbmigration/prompts/run-integration-tests.prompt.md
@@ -0,0 +1,89 @@
+---
+name: run-integration-tests
+agent: 'agent'
+description: 'Execute xUnit integration tests against Oracle and/or Postgres databases to validate migration correctness.'
+model: Claude Sonnet 4.6 (copilot) +tools: [vscode/askQuestions, execute, read, search, todo] +--- +# Run Integration Tests for Database Migration Validation + +Execute the xUnit integration test suite to validate application behavior against the target database(s). Capture structured test results for downstream validation. This prompt targets a **single project** identified by `TARGET_PROJECT`. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `TARGET_PROJECT` | Yes | Absolute path to the single application project whose tests should be executed (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). | + +CONTEXT: +- Oracle is the **golden source of truth** for expected behavior. +- Tests may run against Oracle first (baseline), then Postgres (target) to compare outcomes. +- Test projects follow the naming convention `*.IntegrationTests` or `*.Tests.Integration`. + +INSTRUCTIONS: + +## 1. Discover Test Project +- Locate the xUnit integration test project associated with `TARGET_PROJECT`. Look for a sibling or child project with `IntegrationTests` or `Tests.Integration` in its name that references `TARGET_PROJECT`. +- **Do not discover or run test projects for other application projects in the solution.** The closed-loop targets one project at a time. +- Prefer projects with `Oracle` or `Postgres` in the folder/namespace to identify target database. +- If both exist, run Oracle tests first to establish baseline, then Postgres tests. + +## 2. 
Execute Tests +Run tests using `dotnet test` with structured output: + +```powershell +# Run tests with TRX (Visual Studio Test Results) output +dotnet test "{TestProjectPath}" --logger "trx;LogFileName=TestResults.trx" --results-directory "{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/TestResults" + +# Alternative: Run with console verbosity for immediate feedback +dotnet test "{TestProjectPath}" --verbosity normal +``` + +OPTIONS: +- Use `--filter` to run specific test classes/methods if provided by user. +- Use `--no-build` if project was recently built. +- Capture both stdout and the `.trx` file for comprehensive results. + +## 3. Handle Test Failures Gracefully +- Do NOT stop on first failure; run the full suite to capture all issues. +- If tests throw unhandled exceptions, note the exception type and message. +- If connection fails, verify connection string configuration before retrying. + +## 4. Capture Results +OUTPUT ARTIFACTS: +| Artifact | Location | +|----------|----------| +| TRX results file | `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/TestResults/{Timestamp}_{Database}_TestResults.trx` | +| Console summary | Inline in response | +| Failed test list | Extracted from TRX or console output | + +RESULT SUMMARY FORMAT (provide this after execution): +```markdown +## Test Execution Summary + +**Project:** {TestProjectName} +**Target Database:** {Oracle | Postgres} +**Executed:** {timestamp} +**Duration:** {total time} + +| Metric | Count | +|--------|-------| +| Total Tests | {n} | +| Passed | {n} | +| Failed | {n} | +| Skipped | {n} | + +### Failed Tests (if any) +| Test Name | Error Summary | +|-----------|---------------| +| {FullyQualifiedTestName} | {Brief error message} | +``` + +## 5. Handoff to Validation +After execution, report the summary above. The router will invoke `validateTestResults` to analyze the results and determine next steps (pass → exit, fail → bug reports → fix → re-run). 
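The counts and failed-test list in the summary above come from the TRX file. As an illustration only (the agent normally reads the file with its own tools), a minimal sketch of extracting the counters — `parse_trx_counters` is a hypothetical helper, and the XML namespace is the standard VSTest TRX namespace:

```python
import xml.etree.ElementTree as ET

# Namespace used by VSTest TRX result files.
TRX_NS = "{http://microsoft.com/schemas/VisualStudio/TeamTest/2010}"

def parse_trx_counters(trx_xml: str) -> dict:
    """Extract total/passed/failed counts from a TRX document's ResultSummary."""
    root = ET.fromstring(trx_xml)
    counters = root.find(f"{TRX_NS}ResultSummary/{TRX_NS}Counters")
    if counters is None:
        raise ValueError("TRX document has no ResultSummary/Counters element")
    return {key: int(counters.attrib.get(key, 0)) for key in ("total", "passed", "failed")}

# Minimal TRX-shaped document for illustration.
sample = (
    '<TestRun xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">'
    '<ResultSummary outcome="Failed">'
    '<Counters total="12" executed="12" passed="10" failed="2" />'
    '</ResultSummary></TestRun>'
)
print(parse_trx_counters(sample))  # {'total': 12, 'passed': 10, 'failed': 2}
```

The same counters feed the Result Summary table and the validation step's pass-rate calculation.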
+
+NOTES:
+- Ensure database connection strings are configured in the test project settings (`appsettings.json`, environment variables, or user secrets).
+- If running in CI, ensure the database is accessible from the build agent.
+- Seed data should already be in place from the `createIntegrationTests` phase; do not truncate or modify production data.
diff --git a/skills/o2p-dbmigration/prompts/scaffold-test-project.prompt.md b/skills/o2p-dbmigration/prompts/scaffold-test-project.prompt.md
new file mode 100644
index 000000000..3b211fefc
--- /dev/null
+++ b/skills/o2p-dbmigration/prompts/scaffold-test-project.prompt.md
@@ -0,0 +1,45 @@
+---
+name: scaffold-test-project
+agent: 'agent'
+description: 'Scaffold an xUnit integration test project for validating database migration from Oracle to Postgres.'
+model: Claude Sonnet 4.6 (copilot)
+tools: [vscode/askQuestions, execute, read, edit, search, todo]
+---
+# Scaffold Integration Test Project for Database Migration Validation
+
+Create the integration test project structure that will host tests for validating Oracle-to-Postgres migration behavior. This prompt is invoked **once per project** before the test creation loop begins, and targets a **single project** identified by `TARGET_PROJECT`.
+
+## Expected Inputs (from router handoff payload)
+
+| Key | Required | Description |
+|---|---|---|
+| `SOLUTION_ROOT` | Yes | Resolved workspace root path. |
+| `TARGET_PROJECT` | Yes | Absolute path to the single application project to scaffold tests for (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). |
+
+GENERAL INSTRUCTIONS:
+- Keep to the existing .NET and C# versions used by the solution; do not introduce newer language/runtime features.
+- Treat Oracle as the golden behavior source.
+- Only scaffold infrastructure for Oracle initially. Once scaffolding is complete, the user will copy the project for Postgres and modify connection strings.
+ +PROJECT SCAFFOLDING: +- Create an xUnit test project targeting the same .NET version as the application under test. +- Add NuGet package references required for Oracle database connectivity and xUnit test execution. +- Add a project reference to `TARGET_PROJECT` only — do not reference other application projects in the solution. +- Configure test project settings (e.g., `appsettings.json` or equivalent) for Oracle database connectivity. + +TRANSACTION MANAGEMENT: +- Implement a base test class or fixture that creates a new transaction before every test execution and rolls it back after execution. +- Ensure that all test exceptions are caught and handled to allow for proper transaction rollback. +- The transaction pattern must be inheritable by all test classes created downstream. + +SEED DATA MANAGEMENT: +- Implement a global seed manager to handle test data setup. +- Do not commit seed data because tests are isolated within transactions and rolled back after each test. +- Ensure seed data is loaded and verified before running tests. +- Avoid truncate table statements because we want to keep existing database data intact. +- Priority should be given to reusing existing seed files if any exist. +- Establish a convention for seed file location and naming that downstream test creation will follow. + +OUTPUT: +- A compilable, empty test project with the above infrastructure in place. +- No test cases — those are created by the `createIntegrationTests` subagent in the next step. diff --git a/skills/o2p-dbmigration/prompts/validate-test-results.prompt.md b/skills/o2p-dbmigration/prompts/validate-test-results.prompt.md new file mode 100644 index 000000000..a96efbd7e --- /dev/null +++ b/skills/o2p-dbmigration/prompts/validate-test-results.prompt.md @@ -0,0 +1,129 @@ +--- +name: validate-test-results +agent: 'agent' +description: 'Analyze test results, apply o2p-dbmigration skill checklist, and determine pass/fail/retry status for the migration validation workflow.' 
+model: Claude Sonnet 4.6 (copilot) +tools: [vscode/askQuestions, read, edit, search, todo] +--- +# Validate Integration Test Results + +Analyze test execution results, cross-reference with the `o2p-dbmigration` skill verification checklist, and produce a validation report that determines whether the workflow should exit successfully or loop back for fixes. This prompt targets a **single project** identified by `TARGET_PROJECT`. + +## Expected Inputs (from router handoff payload) + +| Key | Required | Description | +|---|---|---| +| `SOLUTION_ROOT` | Yes | Resolved workspace root path. | +| `TARGET_PROJECT` | Yes | Absolute path to the single application project whose test results are being validated (e.g., `C:/Source/MyApp/MIUS.API.Postgres`). | + +CONTEXT: +- Receives test results from `runIntegrationTests` (TRX file and/or summary) **for `TARGET_PROJECT` only**. +- Must validate both **test pass rate** and **skill checklist compliance**. +- Oracle behavior is the golden source; Postgres must match. + +INSTRUCTIONS: + +## 1. Parse Test Results +Read the TRX file or summary from: +- `{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/TestResults/` + +Extract: +- Total tests, passed, failed, skipped counts +- List of failed test names with error messages +- Any timeout or infrastructure errors (connection failures, timeouts) + +## 2. 
Cross-Reference with o2p-dbmigration Skill Checklist +For each failed test, analyze the error against the known Oracle→Postgres migration patterns documented in: +- `{SOLUTION_ROOT}/skills/o2p-dbmigration/references/` + +PATTERN MATCHING TABLE: +| Error Pattern | Likely Cause | Reference File | +|---------------|--------------|----------------| +| `NULL` vs empty string mismatch | Oracle treats '' as NULL | `empty-strings-handling.md` | +| "no rows returned" or silent null | Missing NOT FOUND exception | `no-data-found-exceptions.md` | +| Sort order differs between DBs | Collation mismatch | `oracle-to-postgres-sorting.md` | +| Type mismatch / comparison error | Implicit coercion difference | `oracle-to-postgres-type-coercion.md` | +| Cursor/result set empty or wrong | Refcursor handling difference | `postgres-refcursor-handling.md` | +| "operation already in progress" or concurrent command error | Single active command per connection | `postgres-concurrent-transactions.md` | + +For each failed test, tag the probable root cause category. + +## 3. Apply Verification Checklist +Review the `o2p-dbmigration` skill checklist (from `SKILL.md`): + +- [ ] Migration artifact review documented with affected components. +- [ ] Each `references/*.md` insight acknowledged and steps taken. +- [ ] Integration tests cover the behaviors mentioned in the insights. +- [ ] Test suite runs cleanly with deterministic results. +- [ ] Notes recorded describing how each insight influenced the fix. + +Score each item as: ✅ Complete | ⚠️ Partial | ❌ Incomplete + +## 4. 
Determine Workflow Decision +Based on test results and checklist: + +| Condition | Decision | Next Action | +|-----------|----------|-------------| +| 100% tests pass + all checklist ✅ | **EXIT: SUCCESS** | Generate final migration report | +| >90% pass + minor checklist gaps | **EXIT: CONDITIONAL** | Document known issues, generate report | +| <90% pass OR critical checklist ❌ | **LOOP: RETRY** | Create bug reports → fix → re-run tests | +| Infrastructure failures (no DB connection) | **BLOCKED** | Halt, request environment fix | + +## 5. Output Validation Report +Write the validation report to: +`{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/Validation Report.md` + +REPORT TEMPLATE: +```markdown +# Integration Test Validation Report + +**Target Project:** {TARGET_PROJECT} +**Generated:** {timestamp} +**Test Run:** {TRX filename or run identifier} + +## Test Results Summary + +| Metric | Oracle Baseline | Postgres Target | +|--------|-----------------|-----------------| +| Total | {n} | {n} | +| Passed | {n} | {n} | +| Failed | {n} | {n} | +| Skipped | {n} | {n} | +| **Pass Rate** | {%} | {%} | + +## Failed Test Analysis + +| Test Name | Error Category | Reference | Recommended Fix | +|-----------|----------------|-----------|-----------------| +| {test} | {category} | {file.md} | {brief action} | + +## Skill Checklist Status + +| Checklist Item | Status | Notes | +|----------------|--------|-------| +| Migration artifact review | {✅/⚠️/❌} | {notes} | +| Reference insights applied | {✅/⚠️/❌} | {notes} | +| Test coverage adequate | {✅/⚠️/❌} | {notes} | +| Test suite deterministic | {✅/⚠️/❌} | {notes} | +| Documentation complete | {✅/⚠️/❌} | {notes} | + +## Workflow Decision + +**Status:** {EXIT: SUCCESS | EXIT: CONDITIONAL | LOOP: RETRY | BLOCKED} +**Reason:** {brief explanation} + +### Next Steps +{Ordered list of actions based on decision} +``` + +## 6. 
Handoff Instructions +Return the following to the router: +- **Decision:** EXIT | LOOP | BLOCKED +- **Failed tests count:** {n} +- **Bug reports needed:** {yes/no} +- **Blocking issues:** {list if BLOCKED} + +The router will: +- EXIT → Invoke `generateApplicationMigrationReport` +- LOOP → Invoke `createBugReports` for failures, then prompt for fixes, then re-invoke `runIntegrationTests` +- BLOCKED → Halt and request user intervention diff --git a/skills/o2p-dbmigration/references/REFERENCE.md b/skills/o2p-dbmigration/references/REFERENCE.md new file mode 100644 index 000000000..b38d912f7 --- /dev/null +++ b/skills/o2p-dbmigration/references/REFERENCE.md @@ -0,0 +1,12 @@ +# Reference Index + +| File | Brief description | +| --- | --- | +| [empty-strings-handling.md](empty-strings-handling.md) | Oracle treats '' as NULL; PostgreSQL keeps empty strings distinct—patterns to align behavior in code, tests, and migrations. | +| [no-data-found-exceptions.md](no-data-found-exceptions.md) | Oracle SELECT INTO raises "no data found"; PostgreSQL doesn’t—add explicit NOT FOUND handling to mirror Oracle behavior. | +| [oracle-parentheses-from-clause.md](oracle-parentheses-from-clause.md) | Oracle allows `FROM(TABLE_NAME)` syntax; PostgreSQL requires `FROM TABLE_NAME`—remove unnecessary parentheses around table names. | +| [oracle-to-postgres-sorting.md](oracle-to-postgres-sorting.md) | How to preserve Oracle-like ordering in PostgreSQL using COLLATE "C" and DISTINCT wrapper patterns. | +| [oracle-to-postgres-to-char-numeric.md](oracle-to-postgres-to-char-numeric.md) | Oracle allows TO_CHAR(numeric) without format; PostgreSQL requires format string—use CAST(numeric AS TEXT) instead. | +| [oracle-to-postgres-type-coercion.md](oracle-to-postgres-type-coercion.md) | PostgreSQL strict type checks vs. Oracle implicit coercion—fix comparison errors by quoting or casting literals. 
| +| [postgres-concurrent-transactions.md](postgres-concurrent-transactions.md) | PostgreSQL allows only one active command per connection—materialize results or use separate connections to avoid concurrent operation errors. | +| [postgres-refcursor-handling.md](postgres-refcursor-handling.md) | Differences in refcursor handling; PostgreSQL requires fetching by cursor name—C# patterns to unwrap and read results. | \ No newline at end of file diff --git a/skills/o2p-dbmigration/references/closed-loop-testing-workflow.md b/skills/o2p-dbmigration/references/closed-loop-testing-workflow.md new file mode 100644 index 000000000..0aa5d5d97 --- /dev/null +++ b/skills/o2p-dbmigration/references/closed-loop-testing-workflow.md @@ -0,0 +1,112 @@ +# Closed-Loop Test Validation Workflow + +Read this reference when the user goal involves integration testing. It defines the sequencing, decision logic, and loop control for the test validation cycle. + +## Per-Project Scoping + +The closed-loop **targets one project at a time**. When a solution contains multiple projects, the router runs a complete closed-loop cycle for each project sequentially — finishing all iterations (including retries) for project A before starting the cycle for project B. + +Every testing subagent in the flow receives a `TARGET_PROJECT` parameter (absolute path to the single project under test) in its handoff payload. This ensures: +- **planIntegrationTesting** scopes the plan to artifacts from the target project only. +- **scaffoldTestProject** creates the test project for the target project only. +- **createIntegrationTests** generates tests for the target project's data access layer only. +- **runIntegrationTests** discovers and executes tests for the target project's test project only. +- **validateTestResults** analyzes results for the target project only. +- **createBugReports** scopes bug reports to the target project only. + +The loop state file is also per-project (see State Serialization below). 
+ +## Flow + +``` +planIntegrationTesting → scaffoldTestProject → createIntegrationTests → runIntegrationTests → validateTestResults + ↑ │ + │ ▼ + │ ┌─────────────────┐ + │ │ Decision? │ + │ └────────┬────────┘ + │ EXIT │ LOOP + │ ↓ │ ↓ + │ generateReport │ createBugReports + │ │ │ + └────────────────────┴─────────┘ + (fix issues, re-run) +``` + +## Validation Decision Logic + +- **EXIT: SUCCESS** (100% pass + skill checklist complete) → Invoke `generateApplicationMigrationReport`, summarize success, end workflow. +- **EXIT: CONDITIONAL** (>90% pass, minor gaps) → Document known issues, invoke `generateApplicationMigrationReport`, note limitations. +- **LOOP: RETRY** (<90% pass OR critical checklist failures) → Invoke `createBugReports` for failures → prompt user/agent to fix → re-invoke `runIntegrationTests` → `validateTestResults`. +- **BLOCKED** (infrastructure failures, no DB connection) → Halt workflow, report blocking issues, request user intervention. + +## Loop Control + +- Track iteration count; if >3 iterations without progress, escalate to user with summary of persistent failures. +- After each loop iteration, compare failed test count to previous iteration; if unchanged, escalate. +- Maintain loop state: `iteration: {n}, previous_failures: {count}, current_failures: {count}, blocking_issues: {list}`. + +## State Serialization + +After each loop iteration (after `validateTestResults` returns), write the current loop state to a **per-project** state file: +`{SOLUTION_ROOT}/.github/o2p-dbmigration/Reports/.loop-state-{ProjectName}.md` + +where `{ProjectName}` is derived from the `TARGET_PROJECT` folder name (e.g., `MIUS.API` → `.loop-state-MIUS.API.md`). This avoids conflicts when multiple projects are tested in sequence and allows each project's loop to be resumed independently. + +This file allows the loop to resume from last-known state if the conversation context is lost or trimmed. 
+ +State file format: +```markdown +# Loop State + +**Target Project:** {TARGET_PROJECT} +**Updated:** {timestamp} +**Iteration:** {n} +**Decision:** {EXIT: SUCCESS | EXIT: CONDITIONAL | LOOP: RETRY | BLOCKED} + +## Test Counts + +| Metric | Previous | Current | +|--------|----------|---------| +| Total | {n} | {n} | +| Passed | {n} | {n} | +| Failed | {n} | {n} | + +## Failed Tests + +| Test Name | Error Category | Matched Reference | +|-----------|----------------|-------------------| +| {FullyQualifiedTestName} | {category} | {reference filename} | + +## Bug Reports Created + +- {BUG_REPORT_*.md filename}: {status} + +## Blocking Issues + +- {issue description, or "None"} +``` + +Router behavior: +- **Before first handoff in a testing goal:** check if `.loop-state-{ProjectName}.md` exists for the current `TARGET_PROJECT`. If it does, read it and resume from the recorded iteration rather than starting from scratch. +- **After each `validateTestResults` return:** write/overwrite the per-project state file with current data. +- **On EXIT (SUCCESS or CONDITIONAL):** keep the state file for audit trail; do not delete it. + +## Reference Narrowing on Loop Iterations + +On the **first iteration**, `validateTestResults` should cross-reference all skill references to establish baseline failure categories. + +On **iteration 2+**, the router should narrow the context passed to `validateTestResults` and `createBugReports` by including only the references that matched failure categories in the previous iteration. Use the `relevant_references` field in the handoff payload. 
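The narrowing rule above can be sketched as a small function. This is an illustration only — `narrow_references` is a hypothetical helper name, and the category strings mirror the mapping table in this reference:

```python
# Category -> reference file, mirroring the mapping table in this reference.
CATEGORY_REFERENCES = {
    "NULL/empty string mismatch": "empty-strings-handling.md",
    "Missing NOT FOUND exception": "no-data-found-exceptions.md",
    "FROM clause syntax error": "oracle-parentheses-from-clause.md",
    "Sort order difference": "oracle-to-postgres-sorting.md",
    "TO_CHAR numeric format error": "oracle-to-postgres-to-char-numeric.md",
    "Type comparison mismatch": "oracle-to-postgres-type-coercion.md",
    "Cursor/result set issue": "postgres-refcursor-handling.md",
}

def narrow_references(iteration: int, previous_categories: set, current_categories: set) -> list:
    """Pick reference files for the `relevant_references` handoff field.

    Iteration 1 passes every reference to establish the baseline; later
    iterations keep only references matched so far, re-adding any category
    that newly appears.
    """
    if iteration <= 1:
        return sorted(set(CATEGORY_REFERENCES.values()))
    matched = previous_categories | current_categories
    return sorted({CATEGORY_REFERENCES[c] for c in matched})

# Iteration 2: one category carried over from iteration 1, one newly seen.
print(narrow_references(2, {"Sort order difference"}, {"Type comparison mismatch"}))
# ['oracle-to-postgres-sorting.md', 'oracle-to-postgres-type-coercion.md']
```

The router would place the returned list in the handoff payload's `relevant_references` field before invoking `validateTestResults` or `createBugReports`.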
+ +Reference-to-category mapping: +| Error Category | Reference File | +|----------------|----------------| +| NULL/empty string mismatch | `empty-strings-handling.md` | +| Missing NOT FOUND exception | `no-data-found-exceptions.md` | +| FROM clause syntax error | `oracle-parentheses-from-clause.md` | +| Sort order difference | `oracle-to-postgres-sorting.md` | +| TO_CHAR numeric format error | `oracle-to-postgres-to-char-numeric.md` | +| Type comparison mismatch | `oracle-to-postgres-type-coercion.md` | +| Cursor/result set issue | `postgres-refcursor-handling.md` | + +If a **new failure category** appears in a later iteration that was not present before, add its reference back into the narrowed list for subsequent passes. diff --git a/skills/o2p-dbmigration/references/empty-strings-handling.md b/skills/o2p-dbmigration/references/empty-strings-handling.md new file mode 100644 index 000000000..c6a821558 --- /dev/null +++ b/skills/o2p-dbmigration/references/empty-strings-handling.md @@ -0,0 +1,69 @@ +# Oracle to PostgreSQL: Empty String Handling Differences + +## Problem + +Oracle automatically converts empty strings (`''`) to `NULL` in VARCHAR2 columns. PostgreSQL preserves empty strings as distinct from `NULL`. This difference can cause application logic errors and test failures during migration. 
+ +## Behavior Comparison + +**Oracle:** +- Empty string (`''`) is **always** treated as `NULL` in VARCHAR2 columns +- `WHERE column = ''` never matches rows; use `WHERE column IS NULL` +- Cannot distinguish between explicit empty string and `NULL` + +**PostgreSQL:** +- Empty string (`''`) and `NULL` are **distinct** values +- `WHERE column = ''` matches empty strings +- `WHERE column IS NULL` matches `NULL` values + +## Code Example + +```sql +-- Oracle behavior +INSERT INTO table (varchar_column) VALUES (''); +SELECT * FROM table WHERE varchar_column IS NULL; -- Returns the row + +-- PostgreSQL behavior +INSERT INTO table (varchar_column) VALUES (''); +SELECT * FROM table WHERE varchar_column IS NULL; -- Returns nothing +SELECT * FROM table WHERE varchar_column = ''; -- Returns the row +``` + +## Migration Actions + +### 1. Stored Procedures +Update logic that assumes empty strings convert to `NULL`: + +```sql +-- Preserve Oracle behavior (convert empty to NULL): +column = NULLIF(param, '') + +-- Or accept PostgreSQL behavior (preserve empty string): +column = param +``` + +### 2. Application Code +Review code that checks for `NULL` and ensure it handles empty strings appropriately: + +```csharp +// Before (Oracle-specific) +if (value == null) { } + +// After (PostgreSQL-compatible) +if (string.IsNullOrEmpty(value)) { } +``` + +### 3. Tests +Update assertions to be compatible with both behaviors: + +```csharp +// Migration-compatible test pattern +var value = reader.IsDBNull(columnIndex) ? null : reader.GetString(columnIndex); +Assert.IsTrue(string.IsNullOrEmpty(value)); +``` + +### 4. 
Data Migration +Decide whether to: +- Convert existing `NULL` values to empty strings +- Convert empty strings to `NULL` using `NULLIF(column, '')` +- Leave values as-is and update application logic diff --git a/skills/o2p-dbmigration/references/no-data-found-exceptions.md b/skills/o2p-dbmigration/references/no-data-found-exceptions.md new file mode 100644 index 000000000..3b446dd17 --- /dev/null +++ b/skills/o2p-dbmigration/references/no-data-found-exceptions.md @@ -0,0 +1,93 @@ +# PostgreSQL Exception Handling: SELECT INTO No Data Found + +## Overview + +A common issue when migrating from Oracle to PostgreSQL involves `SELECT INTO` statements that expect to raise an exception when no rows are found. This pattern difference can cause integration tests to fail and application logic to behave incorrectly if not properly handled. + +--- + +## Problem Description + +### Scenario +A stored procedure performs a lookup operation using `SELECT INTO` to retrieve a required value: + +```sql +SELECT column_name +INTO variable_name +FROM table1, table2 +WHERE table1.id = table2.id AND table1.id = parameter_value; +``` + +### Oracle Behavior +When a `SELECT INTO` statement in Oracle does **not find any rows**, it automatically raises: +``` +ORA-01403: no data found +``` + +This exception is caught by the procedure's exception handler and re-raised to the calling application. + +### PostgreSQL Behavior (Pre-Fix) +When a `SELECT INTO` statement in PostgreSQL does **not find any rows**, it: +- Sets the `FOUND` variable to `false` +- **Silently continues** execution without raising an exception +- Leaves the target variable uninitialized + +This fundamental difference can cause tests to fail silently and logic errors in production code. + +--- + +## Root Cause Analysis + +The PostgreSQL version was missing explicit error handling for the `NOT FOUND` condition after the `SELECT INTO` statement. 
+
+**Original Code (Problematic):**
+```plpgsql
+SELECT column_name
+INTO variable_name
+FROM table1, table2
+WHERE table1.id = table2.id AND table1.id = parameter_value;
+
+IF variable_name = 'X' THEN
+    result_variable := 1;
+ELSE
+    result_variable := 2;
+END IF;
+```
+
+**Problem:** No check for `NOT FOUND` condition. When an invalid parameter is passed, the SELECT returns no rows, `FOUND` becomes `false`, and execution continues with an uninitialized variable.
+
+---
+
+## Solution
+
+Add explicit `NOT FOUND` error handling to match Oracle behavior.
+
+**Fixed Code:**
+```plpgsql
+SELECT column_name
+INTO variable_name
+FROM table1, table2
+WHERE table1.id = table2.id AND table1.id = parameter_value;
+
+-- Explicitly raise exception if no data found (matching Oracle behavior)
+IF NOT FOUND THEN
+    RAISE EXCEPTION 'no data found';
+END IF;
+
+IF variable_name = 'X' THEN
+    result_variable := 1;
+ELSE
+    result_variable := 2;
+END IF;
+```
+
+---
+
+## Migration Notes for Similar Issues
+
+When fixing this issue, verify:
+
+1. **Success path tests** - Confirm valid parameters still work correctly
+2. **Exception tests** - Verify exceptions are raised with invalid parameters
+3. **Transaction rollback** - Ensure proper cleanup on errors
+4.
**Data integrity** - Confirm all fields are populated correctly in success cases diff --git a/skills/o2p-dbmigration/references/oracle-parentheses-from-clause.md b/skills/o2p-dbmigration/references/oracle-parentheses-from-clause.md new file mode 100644 index 000000000..af3b2ba95 --- /dev/null +++ b/skills/o2p-dbmigration/references/oracle-parentheses-from-clause.md @@ -0,0 +1,174 @@ +# Oracle to PostgreSQL: Parentheses in FROM Clause + +## Problem + +Oracle allows optional parentheses around table names in the FROM clause: +```sql +-- Oracle: Both are valid +SELECT * FROM (TABLE_NAME) WHERE id = 1; +SELECT * FROM TABLE_NAME WHERE id = 1; +``` + +PostgreSQL does **not** allow extra parentheses around a single table name in the FROM clause without it being a derived table or subquery. Attempting to use this pattern results in: +``` +Npgsql.PostgresException: 42601: syntax error at or near ")" +``` + +## Root Cause + +- **Oracle**: Treats `FROM(TABLE_NAME)` as equivalent to `FROM TABLE_NAME` +- **PostgreSQL**: Parentheses in the FROM clause are only valid for: + - Subqueries: `FROM (SELECT * FROM table)` + - Explicit table references that are part of join syntax + - Common Table Expressions (CTEs) + - Without a valid SELECT or join context, PostgreSQL raises a syntax error + +## Solution Pattern + +Remove the unnecessary parentheses around the table name: + +```sql +-- Oracle (problematic in PostgreSQL) +SELECT col1, col2 +FROM (TABLE_NAME) +WHERE id = 1; + +-- PostgreSQL (correct) +SELECT col1, col2 +FROM TABLE_NAME +WHERE id = 1; +``` + +## Examples + +### Example 1: Simple Table Reference + +```sql +-- Oracle +SELECT employee_id, employee_name +FROM (EMPLOYEES) +WHERE department_id = 10; + +-- PostgreSQL (fixed) +SELECT employee_id, employee_name +FROM EMPLOYEES +WHERE department_id = 10; +``` + +### Example 2: Join with Parentheses + +```sql +-- Oracle (problematic) +SELECT e.employee_id, d.department_name +FROM (EMPLOYEES) e +JOIN (DEPARTMENTS) d ON 
e.department_id = d.department_id; + +-- PostgreSQL (fixed) +SELECT e.employee_id, d.department_name +FROM EMPLOYEES e +JOIN DEPARTMENTS d ON e.department_id = d.department_id; +``` + +### Example 3: Valid Subquery Parentheses (Works in Both) + +```sql +-- Both Oracle and PostgreSQL +SELECT * +FROM (SELECT employee_id, employee_name FROM EMPLOYEES WHERE department_id = 10) sub; +``` + +## Migration Checklist + +When fixing this issue, verify: + +1. **Identify all problematic FROM clauses**: + - Search for `FROM (` pattern in SQL + - Verify the opening parenthesis is immediately after `FROM` followed by a table name + - Confirm it's **not** a subquery (no SELECT keyword inside) + +2. **Distinguish valid parentheses**: + - ✅ `FROM (SELECT ...)` - Valid subquery + - ✅ `FROM (table_name` followed by a join - Check if JOIN keyword follows + - ❌ `FROM (TABLE_NAME)` - Invalid, remove parentheses + +3. **Apply the fix**: + - Remove the parentheses around the table name + - Keep parentheses for legitimate subqueries + +4. 
**Test thoroughly**: + - Execute the query in PostgreSQL + - Verify result set matches original Oracle query + - Include in integration tests + +## Common Locations + +Search for `FROM (` in: +- ✅ Stored procedures and functions (DDL scripts) +- ✅ Application data access layers (DAL classes) +- ✅ Dynamic SQL builders +- ✅ Reporting queries +- ✅ Views and materialized views +- ✅ Complex queries with multiple joins + +## Application Code Examples + +### VB.NET + +```vb +' Before (Oracle) +StrSQL = "SELECT employee_id, NAME " _ + & "FROM (EMPLOYEES) e " _ + & "WHERE e.department_id = 10" + +' After (PostgreSQL) +StrSQL = "SELECT employee_id, NAME " _ + & "FROM EMPLOYEES e " _ + & "WHERE e.department_id = 10" +``` + +### C# + +```csharp +// Before (Oracle) +var sql = "SELECT id, name FROM (USERS) WHERE status = @status"; + +// After (PostgreSQL) +var sql = "SELECT id, name FROM USERS WHERE status = @status"; +``` + +## Error Messages to Watch For + +``` +Npgsql.PostgresException: 42601: syntax error at or near ")" +ERROR: syntax error at or near ")" +LINE 1: SELECT * FROM (TABLE_NAME) WHERE ... + ^ +``` + +## Testing Recommendations + +1. **Syntax Verification**: Parse all migrated queries to ensure they run without syntax errors + ```csharp + [Fact] + public void GetEmployees_ExecutesWithoutSyntaxError() + { + // Should not throw PostgresException with error code 42601 + var employees = dal.GetEmployees(departmentId: 10); + Assert.NotEmpty(employees); + } + ``` + +2. **Result Comparison**: Verify that result sets are identical before and after migration +3. 
**Regex-based Search**: Use pattern `FROM\s*\(\s*[A-Za-z_][A-Za-z0-9_]*\s*\)` to identify candidates + +## Related Files + +- Reference: [oracle-to-postgres-type-coercion.md](oracle-to-postgres-type-coercion.md) - Other syntax differences +- PostgreSQL Documentation: [SELECT Statement](https://www.postgresql.org/docs/current/sql-select.html) + +## Migration Notes + +- This is a straightforward syntactic fix with no semantic implications +- No data conversion required +- Safe to apply automated find-and-replace, but manually verify complex queries +- Update integration tests to exercise the migrated queries diff --git a/skills/o2p-dbmigration/references/oracle-to-postgres-sorting.md b/skills/o2p-dbmigration/references/oracle-to-postgres-sorting.md new file mode 100644 index 000000000..d1622bc53 --- /dev/null +++ b/skills/o2p-dbmigration/references/oracle-to-postgres-sorting.md @@ -0,0 +1,51 @@ +# Oracle to PostgreSQL Sorting Migration Guide + +Purpose: Preserve Oracle-like sorting semantics when moving queries to PostgreSQL. + +## Key points +- Oracle typically sorts with its default BINARY collation, giving case-sensitive, byte-order sorting for ASCII (uppercase letters sort before lowercase). +- PostgreSQL databases usually default to a locale-aware collation with different ordering; to match Oracle behavior, use `COLLATE "C"` on sort expressions. + +## 1) Standard `SELECT … ORDER BY` +**Goal:** Keep Oracle-style ordering. + +**Pattern:** +```sql +SELECT col1 +FROM your_table +ORDER BY col1 COLLATE "C"; +``` + +**Notes:** +- Apply `COLLATE "C"` to each sort expression that must mimic Oracle. +- Works with ascending/descending and multi-column sorts, e.g. `ORDER BY col1 COLLATE "C", col2 COLLATE "C" DESC`. + +## 2) `SELECT DISTINCT … ORDER BY` +**Issue:** PostgreSQL enforces that `ORDER BY` expressions appear in the `SELECT` list for `DISTINCT`, raising: +`Npgsql.PostgresException: 42P10: for SELECT DISTINCT, ORDER BY expressions must appear in select list` + +**Oracle difference:** Oracle accepted these queries as written; in PostgreSQL the error commonly appears once `COLLATE "C"` is added, because `col COLLATE "C"` no longer matches the projected column exactly.
+ +**Recommended pattern (wrap and sort):** +```sql +SELECT * +FROM ( + SELECT DISTINCT col1, col2 + FROM your_table +) AS distinct_results +ORDER BY col2 COLLATE "C"; +``` + +**Why:** +- The inner query performs the `DISTINCT` projection. +- The outer query safely orders the result set and adds `COLLATE "C"` to align with Oracle sorting. + +**Tips:** +- Ensure any columns used in the outer `ORDER BY` are included in the inner projection. +- For multi-column sorts, collate each relevant expression: `ORDER BY col2 COLLATE "C", col3 COLLATE "C" DESC`. + +## Validation checklist +- [ ] Added `COLLATE "C"` to every `ORDER BY` that should follow Oracle sorting rules. +- [ ] For `DISTINCT` queries, wrapped the projection and sorted in the outer query. +- [ ] Confirmed ordered columns are present in the inner projection. +- [ ] Re-ran tests or representative queries to verify ordering matches Oracle outputs. diff --git a/skills/o2p-dbmigration/references/oracle-to-postgres-to-char-numeric.md b/skills/o2p-dbmigration/references/oracle-to-postgres-to-char-numeric.md new file mode 100644 index 000000000..1fcd7cc73 --- /dev/null +++ b/skills/o2p-dbmigration/references/oracle-to-postgres-to-char-numeric.md @@ -0,0 +1,142 @@ +# Oracle to PostgreSQL: TO_CHAR() Numeric Conversions + +## Problem + +Oracle allows `TO_CHAR()` to convert numeric types to strings without a format specifier: +```sql +-- Oracle: Works fine +SELECT TO_CHAR(vessel_id) FROM vessels; +SELECT TO_CHAR(fiscal_year) FROM certificates; +``` + +PostgreSQL requires a format string when using `TO_CHAR()` with numeric types, otherwise it raises: +``` +42883: function to_char(numeric) does not exist +``` + +## Root Cause + +- **Oracle**: `TO_CHAR(number)` without a format mask implicitly converts the number to a string using default formatting +- **PostgreSQL**: `TO_CHAR()` always requires an explicit format string for numeric types (e.g., `'999999'`, `'FM999999'`) + +## Solution Patterns + +### Pattern 1: Use CAST 
(Recommended) + +The cleanest migration approach is to replace `TO_CHAR(numeric_column)` with `CAST(numeric_column AS TEXT)`: + +```sql +-- Oracle +SELECT TO_CHAR(vessel_id) AS vessel_item FROM vessels; + +-- PostgreSQL (preferred) +SELECT CAST(vessel_id AS TEXT) AS vessel_item FROM vessels; +``` + +**Advantages:** +- More idiomatic in PostgreSQL +- Clearer intent +- No format string needed + +### Pattern 2: Provide Format String + +If you need specific numeric formatting, use an explicit format mask: + +```sql +-- PostgreSQL with format +SELECT TO_CHAR(vessel_id, 'FM999999') AS vessel_item FROM vessels; +SELECT TO_CHAR(amount, 'FM999999.00') AS amount_text FROM payments; +``` + +**Format masks:** +- `'FM999999'`: Integer with no padding (FM = Fill Mode, removes leading spaces) +- `'FM999999.00'`: Decimal with 2 places +- `'999,999.00'`: With thousand separators + +### Pattern 3: Cast Shorthand (`::TEXT`) + +PostgreSQL's `::` cast operator is a concise alternative wherever Oracle relied on implicit numeric-to-string conversion, for example in comparisons: + +```sql +-- Oracle +WHERE TO_CHAR(fiscal_year) = '2024' + +-- PostgreSQL (cast shorthand) +WHERE fiscal_year::TEXT = '2024' +-- or +WHERE CAST(fiscal_year AS TEXT) = '2024' +``` + +## Migration Checklist + +When migrating SQL containing `TO_CHAR()`: + +1. **Identify all TO_CHAR() calls**: Search for `TO_CHAR\(` in SQL strings, stored procedures, and application queries +2. **Check the argument type**: + - **DATE/TIMESTAMP**: Keep `TO_CHAR()` with format string (e.g., `TO_CHAR(date_col, 'YYYY-MM-DD')`) + - **NUMERIC/INTEGER**: Replace with `CAST(... AS TEXT)` or add format string +3. **Test the output**: Verify that the string representation matches expectations (no unexpected spaces, decimals, etc.) +4.
**Update comparison logic**: If comparing numeric-to-string, ensure consistent types on both sides + +## Application Code Review + +### VB.NET Example + +```vb +' Before (Oracle) +StrSQL = "SELECT DISTINCT TO_CHAR(VESSEL_ID) AS VESSEL_ITEM " _ + & "FROM UM001_CERTIFICATE_ISSUED " _ + & "WHERE TO_CHAR(FISCAL_YEAR_APP_NUM) = '" + strYear + "'" + +' After (PostgreSQL) +StrSQL = "SELECT DISTINCT CAST(VESSEL_ID AS TEXT) AS VESSEL_ITEM " _ + & "FROM UM001_CERTIFICATE_ISSUED " _ + & "WHERE CAST(FISCAL_YEAR_APP_NUM AS TEXT) = '" + strYear + "'" +``` + +### C# Example + +```csharp +// Before (Oracle) +var sql = "SELECT TO_CHAR(id) AS id_text FROM entities WHERE TO_CHAR(status) = @status"; + +// After (PostgreSQL) +var sql = "SELECT CAST(id AS TEXT) AS id_text FROM entities WHERE CAST(status AS TEXT) = @status"; +``` + +## Testing Recommendations + +1. **Unit Tests**: Verify numeric-to-string conversions return expected values + ```csharp + [Fact] + public void GetVesselNumbers_ReturnsVesselIdsAsStrings() + { + var results = dal.GetVesselNumbers(certificateType); + Assert.All(results, item => Assert.True(int.TryParse(item.DISPLAY_MEMBER, out _))); + } + ``` + +2. **Integration Tests**: Ensure queries with `CAST()` execute without errors +3. 
**Comparison Tests**: Verify WHERE clauses with numeric-to-string comparisons filter correctly + +## Common Locations + +Search for `TO_CHAR` in: +- ✅ Stored procedures and functions (DDL scripts) +- ✅ Application data access layers (DAL classes) +- ✅ Dynamic SQL builders +- ✅ Reporting queries +- ✅ ORM/Entity Framework raw SQL + +## Error Messages to Watch For + +``` +Npgsql.PostgresException: 42883: function to_char(numeric) does not exist +Npgsql.PostgresException: 42883: function to_char(integer) does not exist +Npgsql.PostgresException: 42883: function to_char(bigint) does not exist +``` + +## See Also + +- [oracle-to-postgres-type-coercion.md](oracle-to-postgres-type-coercion.md) - Related type conversion issues +- PostgreSQL Documentation: [Data Type Formatting Functions](https://www.postgresql.org/docs/current/functions-formatting.html) diff --git a/skills/o2p-dbmigration/references/oracle-to-postgres-type-coercion.md b/skills/o2p-dbmigration/references/oracle-to-postgres-type-coercion.md new file mode 100644 index 000000000..c2ba3547a --- /dev/null +++ b/skills/o2p-dbmigration/references/oracle-to-postgres-type-coercion.md @@ -0,0 +1,159 @@ +# Oracle to PostgreSQL Type Coercion Issues + +## Overview +This document describes a common migration issue encountered when porting SQL code from Oracle to PostgreSQL. The issue stems from fundamental differences in how these databases handle implicit type conversions in comparison operators. + +## The Problem + +### Symptom +When migrating SQL queries from Oracle to PostgreSQL, you may encounter the following error: + +``` +Npgsql.PostgresException: 42883: operator does not exist: character varying <> integer +POSITION: [line_number] +``` + +### Root Cause +PostgreSQL has **strict type enforcement** and does not perform implicit type coercion in comparison operators. Oracle, by contrast, automatically converts operands to compatible types during comparison operations. 
+ +#### Example Mismatch + +**Oracle SQL (works fine):** +```sql +AND physical_address.pcountry_cd <> 124 +``` +- `pcountry_cd` is a `VARCHAR2` +- `124` is an integer literal +- Oracle silently converts `124` to a string for comparison + +**PostgreSQL (fails):** +```sql +AND physical_address.pcountry_cd <> 124 +``` +``` +42883: operator does not exist: character varying <> integer +``` +- `pcountry_cd` is a `character varying` +- `124` is an integer literal +- PostgreSQL rejects the comparison because the types don't match + +## The Solution + +### Approach 1: Use String Literals (Recommended) +Convert integer literals to string literals: + +```sql +AND physical_address.pcountry_cd <> '124' +``` + +**Pros:** +- Semantically correct (country codes are typically stored as strings) +- Most efficient +- Clearest intent + +**Cons:** +- None + +### Approach 2: Explicit Type Casting +Explicitly cast the integer to a string type: + +```sql +AND physical_address.pcountry_cd <> CAST(124 AS VARCHAR) +``` + +**Pros:** +- Makes the conversion explicit and visible +- Useful if the value is a parameter or complex expression + +**Cons:** +- Slightly less efficient +- More verbose + +## Common Comparison Operators Affected + +All comparison operators can trigger this issue: +- `<>` (not equal) +- `=` (equal) +- `<` (less than) +- `>` (greater than) +- `<=` (less than or equal) +- `>=` (greater than or equal) + +## Detection Strategy + +When migrating from Oracle to PostgreSQL: + +1. **Search for numeric literals in WHERE clauses** comparing against string/varchar columns +2. **Look for patterns like:** + - `column_name <> 123` (where column is VARCHAR/CHAR) + - `column_name = 456` (where column is VARCHAR/CHAR) + - `column_name IN (1, 2, 3)` (where column is VARCHAR/CHAR) + +3. **Code review checklist:** + - Are all comparison values correctly typed? + - Do string columns always use string literals? + - Are numeric columns always compared against numeric values? 
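The checklist above can be given a first-pass automation. This is a minimal sketch (Python; the `VARCHAR_COLUMNS` set and the `find_type_mismatches` helper are illustrative, not part of any migration tooling) that flags bare numeric literals compared against columns known to hold strings:

```python
import re

# Hypothetical: columns known to be VARCHAR/CHAR in the target schema.
VARCHAR_COLUMNS = {"pcountry_cd", "status_cd"}

# Matches "<column> <comparison-op> <bare numeric literal>", e.g. "pcountry_cd <> 124".
PATTERN = re.compile(
    r"\b(?P<col>[A-Za-z_][A-Za-z0-9_.]*)\s*(?P<op><>|!=|<=|>=|=|<|>)\s*(?P<lit>\d+)\b"
)

def find_type_mismatches(sql: str) -> list[str]:
    """Return a warning per numeric literal compared to a known string column."""
    warnings = []
    for m in PATTERN.finditer(sql):
        # Strip any table/alias qualifier before the lookup.
        column = m.group("col").split(".")[-1].lower()
        if column in VARCHAR_COLUMNS:
            warnings.append(
                f"{m.group(0)!r}: replace {m.group('lit')} with '{m.group('lit')}'"
            )
    return warnings
```

Populating `VARCHAR_COLUMNS` from `information_schema.columns` would make the scan schema-driven rather than hand-maintained.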
+ +## Real-World Example + +**Original Oracle Query:** +```sql +SELECT ac040.stakeholder_id, + ac006.organization_etxt + FROM ac040_stakeholder ac040 + INNER JOIN ac006_organization ac006 ON ac040.stakeholder_id = ac006.organization_id + WHERE physical_address.pcountry_cd <> 124 + AND LOWER(ac006.organization_etxt) LIKE '%' || @orgtxt || '%' + ORDER BY UPPER(ac006.organization_etxt) +``` + +**Fixed PostgreSQL Query:** +```sql +SELECT ac040.stakeholder_id, + ac006.organization_etxt + FROM ac040_stakeholder ac040 + INNER JOIN ac006_organization ac006 ON ac040.stakeholder_id = ac006.organization_id + WHERE physical_address.pcountry_cd <> '124' + AND LOWER(ac006.organization_etxt) LIKE '%' || @orgtxt || '%' + ORDER BY UPPER(ac006.organization_etxt) +``` + +**Change:** `124` → `'124'` (both queries elide the join that introduces the `physical_address` alias) + +## Prevention Best Practices + +1. **Use Type-Consistent Literals:** + - For string columns: Always use string literals (`'value'`) + - For numeric columns: Always use numeric literals (`123`) + - For dates: Always use date literals (`DATE '2024-01-01'`) + +2. **Leverage Database Tools:** + - Use your IDE's SQL linter to catch type mismatches + - Run PostgreSQL syntax validation during code review + +3. **Test Early:** + - Execute migration queries against PostgreSQL before deployment + - Include integration tests that exercise all comparison operators + +4.
**Documentation:** + - Document any type coercions in comments + - Mark migrated code with revision history + +## References + +- [PostgreSQL Type Casting Documentation](https://www.postgresql.org/docs/current/sql-syntax.html) +- [Oracle Type Conversion Documentation](https://docs.oracle.com/database/121/SQLRF/sql_elements003.htm) +- [Npgsql Exception: Operator Does Not Exist](https://www.npgsql.org/doc/api/NpgsqlException.html) + +## Related Issues + +This issue is part of broader Oracle → PostgreSQL migration challenges: +- Implicit function conversions (e.g., `TO_CHAR`, `TO_DATE`) +- String concatenation operator differences (`||` works in both, but behavior differs) +- Numeric precision and rounding differences +- NULL handling in comparisons + +--- + +**Last Updated:** 2024 +**Affected Versions:** PostgreSQL 9.6+, Npgsql 4.0+ diff --git a/skills/o2p-dbmigration/references/postgres-concurrent-transactions.md b/skills/o2p-dbmigration/references/postgres-concurrent-transactions.md new file mode 100644 index 000000000..cdae38d12 --- /dev/null +++ b/skills/o2p-dbmigration/references/postgres-concurrent-transactions.md @@ -0,0 +1,248 @@ +# Oracle to PostgreSQL: Concurrent Transaction Handling + +## Overview + +When migrating from Oracle to PostgreSQL, a critical difference exists in how **concurrent operations on a single database connection** are handled. Oracle's ODP.NET driver allows multiple active commands and result sets on the same connection simultaneously, while PostgreSQL's Npgsql driver enforces a strict **one active command per connection** rule. Code that worked seamlessly in Oracle will throw runtime exceptions in PostgreSQL if concurrent operations share a connection. 
+ +## The Core Difference + +**Oracle Behavior:** +- A single connection can have multiple active commands executing concurrently +- Opening a second `DataReader` while another is still open is permitted +- Nested or overlapping database calls on the same connection work transparently + +**PostgreSQL Behavior:** +- A connection supports only **one active command at a time** +- Attempting to execute a second command while a `DataReader` is open throws an exception +- Lazy-loaded navigation properties or callback-driven reads that trigger additional queries on the same connection will fail + +## Common Error Symptoms + +When migrating Oracle code without accounting for this difference: + +``` +System.InvalidOperationException: An operation is already in progress. +``` + +``` +Npgsql.NpgsqlOperationInProgressException: A command is already in progress: <SQL text> +``` + +These occur when application code attempts to execute a new command on a connection that already has an active `DataReader` or uncommitted command in flight. 
+ +--- + +## Problem Scenarios + +### Scenario 1: Iterating a DataReader While Executing Another Command + +```csharp +using (var reader = command1.ExecuteReader()) +{ + while (reader.Read()) + { + // PROBLEM: executing a second command on the same connection + // while the reader is still open + using (var command2 = new NpgsqlCommand("SELECT ...", connection)) + { + var value = command2.ExecuteScalar(); // FAILS + } + } +} +``` + +### Scenario 2: Lazy Loading / Deferred Execution in Data Access Layers + +```csharp +// Oracle: works because ODP.NET supports concurrent readers +var items = repository.GetItems(); // returns IEnumerable backed by open DataReader +foreach (var item in items) +{ + // PROBLEM: triggers a second query on the same connection + var details = repository.GetDetails(item.Id); // FAILS on PostgreSQL +} +``` + +### Scenario 3: Nested Stored Procedure Calls via Application Code + +```csharp +// Oracle: ODP.NET handles multiple active commands +command1.ExecuteNonQuery(); // starts a long-running operation +command2.ExecuteScalar(); // FAILS on PostgreSQL — command1 still in progress +``` + +--- + +## Solutions + +### Solution 1: Materialize Results Before Issuing New Commands (Recommended) + +Close the first result set by loading it into memory before executing subsequent commands on the same connection. 
+ +```csharp +// Load all results into a list first +var items = new List<Item>(); +using (var reader = command1.ExecuteReader()) +{ + while (reader.Read()) + { + items.Add(MapItem(reader)); + } +} // reader is closed and disposed here + +// Now safe to execute another command on the same connection +foreach (var item in items) +{ + using (var command2 = new NpgsqlCommand("SELECT ...", connection)) + { + command2.Parameters.AddWithValue("id", item.Id); + var value = command2.ExecuteScalar(); // Works + } +} +``` + +For LINQ / EF Core scenarios, force materialization with `.ToList()`: + +```csharp +// Before (fails on PostgreSQL — deferred execution keeps connection busy) +var items = dbContext.Items.Where(i => i.Active); +foreach (var item in items) +{ + var details = dbContext.Details.FirstOrDefault(d => d.ItemId == item.Id); +} + +// After (materializes first query before issuing second) +var items = dbContext.Items.Where(i => i.Active).ToList(); +foreach (var item in items) +{ + var details = dbContext.Details.FirstOrDefault(d => d.ItemId == item.Id); +} +``` + +### Solution 2: Use Separate Connections for Concurrent Operations + +When operations genuinely need to run concurrently, open a dedicated connection for each. + +```csharp +using (var reader = command1.ExecuteReader()) +{ + while (reader.Read()) + { + // Use a separate connection for the nested query + using (var connection2 = new NpgsqlConnection(connectionString)) + { + connection2.Open(); + using (var command2 = new NpgsqlCommand("SELECT ...", connection2)) + { + var value = command2.ExecuteScalar(); // Works — different connection + } + } + } +} +``` + +### Solution 3: Restructure to a Single Query + +Where possible, combine nested lookups into a single query using JOINs or subqueries to eliminate the need for concurrent commands entirely. 
+ +```csharp +// Before: two sequential queries on the same connection +var order = GetOrder(orderId); // query 1 +var details = GetOrderDetails(orderId); // query 2 (fails if query 1 reader still open) + +// After: single query with JOIN +using (var command = new NpgsqlCommand( + "SELECT o.*, d.* FROM orders o JOIN order_details d ON o.id = d.order_id WHERE o.id = @id", + connection)) +{ + command.Parameters.AddWithValue("id", orderId); + using (var reader = command.ExecuteReader()) + { + // Process combined result set + } +} +``` + +--- + +## Detection Strategy + +### Code Review Checklist + +- [ ] Search for methods that open a `DataReader` and call other database methods before closing it +- [ ] Look for `IEnumerable` return types from data access methods that defer execution (indicate open readers) +- [ ] Identify EF Core queries without `.ToList()` / `.ToArray()` that are iterated while issuing further queries +- [ ] Check for nested stored procedure calls in application code that share a connection + +### Common Locations to Search + +- Data access layers and repository classes +- Service methods that orchestrate multiple repository calls +- Code paths that iterate query results and perform lookups per row +- Event handlers or callbacks triggered during data iteration + +### Search Patterns + +```regex +ExecuteReader\(.*\)[\s\S]*?Execute(Scalar|NonQuery|Reader)\( +``` + +```regex +\.Where\(.*\)[\s\S]*?foreach[\s\S]*?dbContext\. +``` + +--- + +## Error Messages to Watch For + +| Error Message | Likely Cause | +|---------------|--------------| +| `An operation is already in progress` | Second command executed while a `DataReader` is open on the same connection | +| `A command is already in progress: <SQL>` | Npgsql detected overlapping command execution on a single connection | +| `The connection is already in state 'Executing'` | Connection state conflict from concurrent usage | + +--- + +## Comparison Table: Oracle vs. 
PostgreSQL + +| Aspect | Oracle (ODP.NET) | PostgreSQL (Npgsql) | +|--------|------------------|---------------------| +| **Concurrent commands** | Multiple active commands per connection | One active command per connection | +| **Multiple open DataReaders** | Supported | Not supported — must close/materialize first | +| **Nested DB calls during iteration** | Transparent | Throws `InvalidOperationException` | +| **Deferred execution safety** | Safe to iterate and query | Must materialize (`.ToList()`) before issuing new queries | +| **Connection pooling impact** | Lower connection demand | May need more pooled connections if using Solution 2 | + +--- + +## Best Practices + +1. **Materialize early** — Call `.ToList()` or `.ToArray()` on query results before iterating and issuing further database calls. This is the simplest and most reliable fix. + +2. **Audit data access patterns** — Review all repository and data access methods for deferred-execution return types (`IEnumerable`, `IQueryable`) that callers iterate while issuing additional queries. + +3. **Prefer single queries** — Where feasible, combine nested lookups into JOINs or subqueries to eliminate the concurrent-command pattern entirely. + +4. **Isolate connections when necessary** — If concurrent operations are genuinely required, use separate connections rather than attempting to share one. + +5. **Test iterative workflows** — Integration tests should cover scenarios where code iterates result sets and performs additional database operations per row, as these are the most common failure points. 
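The search patterns listed under Detection Strategy can be scripted for a first-pass audit of a repository. This is a minimal sketch (Python; the bounded `{0,400}` window and the `.cs`/`.vb` extension filter are illustrative choices, not part of Npgsql or any standard tool):

```python
import re
from pathlib import Path

# Mirrors the first Detection Strategy pattern: an ExecuteReader(...) call
# followed, within a bounded window, by another Execute* call that could
# overlap it on the same connection. [\s\S] lets the match span newlines.
OVERLAP = re.compile(
    r"ExecuteReader\([^)]*\)[\s\S]{0,400}?Execute(?:Scalar|NonQuery|Reader)\("
)

def scan_sources(root: str) -> list[str]:
    """Return paths of C#/VB files containing a potential overlapping-command pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in {".cs", ".vb"} and OVERLAP.search(
            path.read_text(errors="ignore")
        ):
            hits.append(str(path))
    return sorted(hits)
```

Hits are candidates, not confirmed bugs: a reader that is closed before the second call still matches, so each flagged file needs manual review.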
+ +## Migration Checklist + +- [ ] Identify all code paths that execute multiple commands on a single connection concurrently +- [ ] Locate `IEnumerable`-backed data access methods that defer execution with open readers +- [ ] Add `.ToList()` / `.ToArray()` materialization where deferred results are iterated alongside further queries +- [ ] Refactor nested database calls to use separate connections or combined queries where appropriate +- [ ] Verify EF Core navigation properties and lazy loading do not trigger concurrent connection usage +- [ ] Update integration tests to cover iterative data access patterns +- [ ] Load-test connection pool sizing if Solution 2 (separate connections) is used extensively + +## References + +- [Npgsql Documentation: Basic Usage](https://www.npgsql.org/doc/basic-usage.html) +- [PostgreSQL Documentation: Concurrency Control](https://www.postgresql.org/docs/current/mvcc.html) +- [Npgsql GitHub: Multiple Active Result Sets Discussion](https://github.com/npgsql/npgsql/issues/462) + +--- + +*This document provides guidance for handling concurrent transaction and command differences when migrating from Oracle to PostgreSQL. Adapt the code examples to your specific application architecture and requirements.* diff --git a/skills/o2p-dbmigration/references/postgres-refcursor-handling.md b/skills/o2p-dbmigration/references/postgres-refcursor-handling.md new file mode 100644 index 000000000..5ae4e6183 --- /dev/null +++ b/skills/o2p-dbmigration/references/postgres-refcursor-handling.md @@ -0,0 +1,392 @@ +# Oracle to PostgreSQL: Refcursor Handling in Client Applications + +## Overview + +When migrating from Oracle to PostgreSQL, a critical difference exists in how **refcursor** (reference cursor) output parameters are handled by client applications. Oracle's driver automatically unwraps refcursors to expose result sets directly, while PostgreSQL's Npgsql driver returns a cursor name that must be explicitly fetched. 
This fundamental difference requires client code modifications to avoid runtime errors. + +## The Core Difference + +**Oracle Behavior:** + +- Refcursor output parameters automatically expose their result set to the data reader +- Client code can immediately access result columns +- No additional commands needed + +**PostgreSQL Behavior:** + +- Refcursor output parameters return a **cursor name** (e.g., `"<unnamed portal 1>"`) +- The cursor remains open in the database session, but only until the enclosing transaction ends +- Client must execute `FETCH ALL FROM "<cursor_name>"` to retrieve actual data +- Failing to fetch causes an `IndexOutOfRangeException` when the expected columns are accessed + +## Common Error Symptoms + +When migrating Oracle code without accounting for this difference: + +``` +System.IndexOutOfRangeException: Field not found in row: <column_name> +``` + +This occurs because the data reader contains only the refcursor parameter itself, not the actual query results. + +## Database Stored Procedure Pattern + +### Oracle Stored Procedure + +```sql +CREATE OR REPLACE PROCEDURE get_users( + p_department_id IN NUMBER, + cur_result OUT SYS_REFCURSOR +) AS +BEGIN + OPEN cur_result FOR + SELECT user_id, user_name, email + FROM users + WHERE department_id = p_department_id + ORDER BY user_name; +END; +``` + +### PostgreSQL Stored Procedure + +```sql +CREATE OR REPLACE PROCEDURE get_users( + IN p_department_id INTEGER, + INOUT cur_result refcursor +) +LANGUAGE plpgsql +AS $$ +BEGIN + OPEN cur_result FOR + SELECT user_id, user_name, email + FROM users + WHERE department_id = p_department_id + ORDER BY user_name; +END; +$$; +``` + +The cursor parameter is declared `INOUT` rather than `OUT`: this matches the `ParameterDirection.InputOutput` binding used by the client code, and plain `OUT` parameters in procedures require PostgreSQL 14 or later. + +**Key Difference:** PostgreSQL returns the cursor itself as an output parameter, not the result set, and that cursor is only valid inside the transaction that opened it.
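The two-step protocol is visible at the SQL level before any client code is involved. A psql sketch, assuming the procedure above is declared with an `INOUT refcursor` parameter so a cursor name can be passed in (`my_cursor` is an arbitrary name); the explicit transaction matters, because a refcursor is closed when its transaction ends:

```sql
BEGIN;
CALL get_users(10, 'my_cursor');  -- opens the cursor; the call returns only its name
FETCH ALL FROM "my_cursor";       -- second step: retrieve the actual rows
COMMIT;                           -- ending the transaction closes the cursor
```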
+ +## Client Code Solution (C#) + +### Problematic Oracle-Style Code + +This code works with Oracle but fails with PostgreSQL: + +```csharp +using Npgsql; +using NpgsqlTypes; + +public IEnumerable<User> GetUsers(int departmentId) +{ + var users = new List<User>(); + + using (var connection = new NpgsqlConnection(connectionString)) + { + connection.Open(); + + using (var command = new NpgsqlCommand("get_users", connection)) + { + command.CommandType = CommandType.StoredProcedure; + + command.Parameters.AddWithValue("p_department_id", departmentId); + + var refcursorParam = new NpgsqlParameter("cur_result", NpgsqlDbType.Refcursor); + refcursorParam.Direction = ParameterDirection.InputOutput; + command.Parameters.Add(refcursorParam); + + // This executes the procedure + using (var reader = command.ExecuteReader()) + { + // PROBLEM: reader only contains the cursor name, not the data + // Attempting to read "user_id" will throw IndexOutOfRangeException + while (reader.Read()) + { + users.Add(new User + { + UserId = reader.GetInt32(reader.GetOrdinal("user_id")), // FAILS HERE + UserName = reader.GetString(reader.GetOrdinal("user_name")), + Email = reader.GetString(reader.GetOrdinal("email")) + }); + } + } + } + } + + return users; +} +``` + +### Solution: Explicit Refcursor Unwrapping + +```csharp +using Npgsql; +using NpgsqlTypes; + +public IEnumerable<User> GetUsers(int departmentId) +{ + var users = new List<User>(); + + using (var connection = new NpgsqlConnection(connectionString)) + { + connection.Open(); + + using (var command = new NpgsqlCommand("get_users", connection)) + { + command.CommandType = CommandType.StoredProcedure; + + command.Parameters.AddWithValue("p_department_id", departmentId); + + var refcursorParam = new NpgsqlParameter("cur_result", NpgsqlDbType.Refcursor); + refcursorParam.Direction = ParameterDirection.InputOutput; + command.Parameters.Add(refcursorParam); + + // Execute procedure to open the cursor + command.ExecuteNonQuery(); + + // 
Extract the cursor name from the output parameter + string cursorName = (string)refcursorParam.Value; + + // Fetch the actual data from the cursor + using (var fetchCommand = new NpgsqlCommand($"FETCH ALL FROM \"{cursorName}\"", connection)) + { + fetchCommand.CommandType = CommandType.Text; + + using (var reader = fetchCommand.ExecuteReader()) + { + // Now reader contains the actual result set + while (reader.Read()) + { + users.Add(new User + { + UserId = reader.GetInt32(reader.GetOrdinal("user_id")), + UserName = reader.GetString(reader.GetOrdinal("user_name")), + Email = reader.GetString(reader.GetOrdinal("email")) + }); + } + } + } + } + } + + return users; +} +``` + +### Generic Helper Method + +For reusability across multiple procedures, create a generic helper: + +```csharp +public static class PostgresHelpers +{ + public static NpgsqlDataReader ExecuteRefcursorProcedure( + NpgsqlConnection connection, + string procedureName, + Dictionary<string, object> parameters, + string refcursorParameterName) + { + using (var command = new NpgsqlCommand(procedureName, connection)) + { + command.CommandType = CommandType.StoredProcedure; + + // Add input parameters + foreach (var param in parameters) + { + command.Parameters.AddWithValue(param.Key, param.Value); + } + + // Add refcursor output parameter + var refcursorParam = new NpgsqlParameter(refcursorParameterName, NpgsqlDbType.Refcursor); + refcursorParam.Direction = ParameterDirection.InputOutput; + command.Parameters.Add(refcursorParam); + + // Execute to open the cursor + command.ExecuteNonQuery(); + + // Get the cursor name + string cursorName = (string)refcursorParam.Value; + + if (string.IsNullOrEmpty(cursorName)) + { + return null; + } + + // Fetch and return the actual data + var fetchCommand = new NpgsqlCommand($"FETCH ALL FROM \"{cursorName}\"", connection); + fetchCommand.CommandType = CommandType.Text; + + // Note: Caller is responsible for disposing the reader + return fetchCommand.ExecuteReader(); + 
} + } +} + +// Usage example: +public IEnumerable<User> GetUsers(int departmentId) +{ + var users = new List<User>(); + + using (var connection = new NpgsqlConnection(connectionString)) + { + connection.Open(); + + var parameters = new Dictionary<string, object> + { + { "p_department_id", departmentId } + }; + + using (var reader = PostgresHelpers.ExecuteRefcursorProcedure( + connection, + "get_users", + parameters, + "cur_result")) + { + if (reader != null) + { + while (reader.Read()) + { + users.Add(new User + { + UserId = reader.GetInt32(reader.GetOrdinal("user_id")), + UserName = reader.GetString(reader.GetOrdinal("user_name")), + Email = reader.GetString(reader.GetOrdinal("email")) + }); + } + } + } + } + + return users; +} +``` + +## Transactional Context + +When working within transactions, ensure the `FETCH` command uses the same transaction: + +```csharp +public IEnumerable<User> GetUsersInTransaction( + NpgsqlConnection connection, + NpgsqlTransaction transaction, + int departmentId) +{ + var users = new List<User>(); + + using (var command = new NpgsqlCommand("get_users", connection, transaction)) + { + command.CommandType = CommandType.StoredProcedure; + + command.Parameters.AddWithValue("p_department_id", departmentId); + + var refcursorParam = new NpgsqlParameter("cur_result", NpgsqlDbType.Refcursor); + refcursorParam.Direction = ParameterDirection.InputOutput; + command.Parameters.Add(refcursorParam); + + command.ExecuteNonQuery(); + + string cursorName = (string)refcursorParam.Value; + + // Important: Use the same transaction for the FETCH command + using (var fetchCommand = new NpgsqlCommand($"FETCH ALL FROM \"{cursorName}\"", connection, transaction)) + { + fetchCommand.CommandType = CommandType.Text; + + using (var reader = fetchCommand.ExecuteReader()) + { + while (reader.Read()) + { + users.Add(new User + { + UserId = reader.GetInt32(reader.GetOrdinal("user_id")), + UserName = reader.GetString(reader.GetOrdinal("user_name")), + Email = 
reader.GetString(reader.GetOrdinal("email"))
+                    });
+                }
+            }
+        }
+    }
+
+    return users;
+}
+```
+
+## Debugging Tips
+
+### Before Fix - What You'll See
+
+When inspecting the data reader after executing a refcursor procedure without proper unwrapping:
+
+```csharp
+// Immediately after ExecuteReader() on the procedure
+Console.WriteLine($"Field Count: {reader.FieldCount}"); // Output: 1
+Console.WriteLine($"Field Name: {reader.GetName(0)}");  // Output: "cur_result"
+
+// Attempting to access expected columns throws an exception:
+var userId = reader.GetInt32(reader.GetOrdinal("user_id"));
+// Throws: IndexOutOfRangeException: Field not found in row: user_id
+```
+
+### After Fix - What You Should See
+
+After properly fetching from the cursor:
+
+```csharp
+// After FETCH ALL FROM cursor
+Console.WriteLine($"Field Count: {reader.FieldCount}"); // Output: 3
+Console.WriteLine($"Field 0: {reader.GetName(0)}"); // Output: "user_id"
+Console.WriteLine($"Field 1: {reader.GetName(1)}"); // Output: "user_name"
+Console.WriteLine($"Field 2: {reader.GetName(2)}"); // Output: "email"
+
+// Now columns are accessible
+var userId = reader.GetInt32(reader.GetOrdinal("user_id")); // Works correctly
+```
+
+## Comparison Table: Oracle vs. PostgreSQL Refcursor Handling
+
+| Aspect | Oracle (ODP.NET) | PostgreSQL (Npgsql) |
+|--------|------------------|---------------------|
+| **Cursor Return** | Result set directly accessible in data reader | Cursor name string returned in output parameter |
+| **Data Access** | Immediate via `ExecuteReader()` | Requires separate `FETCH ALL FROM` command |
+| **Code Changes** | `ExecuteReader()` returns data | `ExecuteNonQuery()` → get cursor name → `FETCH` |
+| **Multiple Cursors** | Multiple refcursors work automatically | Each requires separate `FETCH` command |
+| **Cursor Lifetime** | Managed automatically by driver | Remains open until the enclosing transaction ends; close earlier with `CLOSE` if needed |
+| **Transaction Handling** | Transparent | `FETCH` must use same transaction context |
+
+## Best Practices
+
+1. **Centralize refcursor handling** - Create a generic helper method to avoid duplicating unwrapping logic across your codebase
+
+2. **Handle null cursors** - Always check if the cursor name is null or empty before attempting to fetch
+
+3. **Transaction consistency** - Ensure `FETCH` commands use the same transaction as the procedure execution
+
+4. **Resource cleanup** - Properly dispose of both the initial command and fetch command resources
+
+5. **Error handling** - Wrap refcursor operations in try-catch blocks to handle potential cursor-related errors gracefully
+
+6. **Documentation** - Clearly document which procedures return refcursors and require special handling
+
+## Migration Checklist
+
+When migrating Oracle applications to PostgreSQL:
+
+- [ ] Identify all stored procedures that return `SYS_REFCURSOR` (Oracle) or `refcursor` (PostgreSQL)
+- [ ] Locate client code that calls these procedures
+- [ ] Update client code to use the two-step pattern: execute → fetch
+- [ ] Test each modified data access method
+- [ ] Consider creating a generic helper method for refcursor handling
+- [ ] Update unit and integration tests
+- [ ] Document the pattern for future development
+
+## References
+
+- [PostgreSQL Documentation: Cursors](https://www.postgresql.org/docs/current/plpgsql-cursors.html)
+- [PostgreSQL FETCH Command](https://www.postgresql.org/docs/current/sql-fetch.html)
+- [Npgsql Documentation: Basic Types](https://www.npgsql.org/doc/types/basic.html)
+- [Npgsql Refcursor Support](https://github.com/npgsql/npgsql/issues/1887)
+
+---
+
+*This document provides guidance for handling refcursor differences when migrating from Oracle to PostgreSQL. Adapt the code examples to your specific application architecture and requirements.*