Financial Services Plugin vs Hand-Built Model Comparison

Date: 2026-04-06
Plugin: anthropics/financial-services-plugins (v0.1.0/v0.2.0)
Hand-built: ADUS case study #2 (DCF + LBO from adus_10k_master.json)


Head-to-Head Comparison

| Dimension | Plugin Approach | Our Approach | Verdict |
|---|---|---|---|
| Excel engine | openpyxl + recalc.py post-processing (DCF skill line 21) | xlsxwriter (clean output, no recovery dialog) | Ours — plugin's openpyxl causes Excel recovery prompts |
| Data source | MCP providers + web search (ad hoc per model) | Master JSON from 10-K → all models reference it | Ours — prevents cross-model desync |
| Cross-model consistency | DCF and LBO built independently, no sync mechanism | JSON → DCF + LBO (parallel) → Phase 4 cross-audit | Ours — plugin has no equivalent |
| LBO template | Template-based (expects .xlsx file; lines 10-17 of LBO skill) | From-scratch with same JSON source | Trade-off — template faster but rigid |
| Color convention | 4 colors: blue=input, black=formula, purple=same-tab ref, green=cross-tab ref | 3 colors: blue=input, black=formula, green=cross-tab ref | Plugin — purple for same-tab is more precise |
| Scenarios | Bear/Base/Bull built in (DCF Step 3) | Single base + full sensitivity recalc | Trade-off — depends on use case |
| TLB amortization | Generic "paydown" — not specified flat vs declining | Explicitly flat 1% of original face, formula: `=$B$[orig]*$B$[rate]` | Ours — Medpace lesson encoded |
| Value creation bridge | Not included in LBO skill | Revenue growth + margin expansion + multiple expansion + debt paydown − fees; foots to the penny | Ours |
| Tax rate documentation | Not addressed | DCF = 24.75% effective, LBO = 21% statutory, both documented with rationale | Ours — Medpace lesson |
| Sensitivity architecture | Full recalc per cell — 3 tables × 25 cells = 75 formulas (DCF skill lines 62-64) | Full recalc per cell — same approach | Tie |
| Interactive checkpoints | Step-by-step user confirmation after each section (DCF skill lines 51-56) | End-to-end build + self-audit | Plugin — catches errors earlier |
| Cell comments/sourcing | Required on every blue input: "Source: [System], [Date], [Reference]" (DCF lines 67-70) | JSON has source notes, not per-cell comments | Plugin — better audit trail in the file itself |
| Share count control | Not specified | Single cell on Assumptions tab, referenced everywhere | Ours — prevents share-count drift |
| Formulas over hardcodes | Explicitly NON-NEGOTIABLE (DCF lines 44-49) | Same principle applied | Tie |
| Model layout planning | "Define ALL section row positions BEFORE writing formulas" (DCF lines 73-77) | Same approach (row index variables defined first) | Tie |
| Circular reference handling | "Use Beginning Balance to break circularity" (LBO lines 138-139) | Same approach (interest on opening balance) | Tie |

Critical Gaps in the Plugin

1. openpyxl Default (DCF Skill Line 21-23)

The DCF skill explicitly says: "If generating a standalone .xlsx file: Use Python/openpyxl, then run recalc.py before delivery."

Problem: openpyxl-generated files trigger Excel's "We found a problem with some content" recovery dialog. We proved this with our v1 DCF model. xlsxwriter produces clean files that open without prompts.

Evidence: Our adus_dcf_model.xlsx v1 (openpyxl) = recovery prompt. v2 (xlsxwriter) = clean open.

Fix: Change the skill to recommend xlsxwriter for standalone .xlsx generation, openpyxl only for reading/editing existing files.
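A minimal sketch of what the fixed skill would generate — real formulas written via xlsxwriter, with no recalc.py post-processing step. File and sheet names here are illustrative, not from either codebase:

```python
# Standalone .xlsx generation with xlsxwriter: the file opens cleanly,
# with no "We found a problem with some content" recovery dialog.
import xlsxwriter

path = "dcf_sketch.xlsx"          # hypothetical output path
wb = xlsxwriter.Workbook(path)
ws = wb.add_worksheet("DCF")

ws.write("A1", "Revenue")
ws.write_number("B1", 1000.0)
ws.write("A2", "EBITDA margin")
ws.write_number("B2", 0.25)
ws.write("A3", "EBITDA")
ws.write_formula("B3", "=B1*B2")  # live formula; no recalc step needed
wb.close()
```

openpyxl would remain appropriate for the read/edit path, since xlsxwriter is write-only.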

2. No Single Source of Truth Architecture

The plugin pulls data ad-hoc for each model via MCP providers and web searches. Running /dcf ADUS and /lbo ADUS separately provides no guarantee they use the same:

  • Revenue figures
  • EBITDA calculations
  • Share count
  • Tax rates
  • Debt balances

Problem: This is exactly the cross-model desync we documented in the Medpace case study, where the DCF used a 15% effective tax rate and the LBO used 21% statutory with no documentation of why.

Fix: Introduce a "master data extraction" step (like our Phase 1 JSON) that both models reference. Or at minimum, when /lbo is run after /dcf for the same company, the LBO should read the DCF's assumptions and flag differences.
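The master-extraction step can be sketched as a single loader both builders call. The JSON layout and field names below are illustrative assumptions, not the plugin's actual schema:

```python
# Single source of truth: one extraction, two consumers.
import json

MASTER = {
    "ticker": "ADUS",  # example company from the case study
    "shared_assumptions": {
        "tax_rate_statutory": 0.21,
        "diluted_shares": 18.2,   # illustrative figure
        "revenue_fy": 1060.0,     # illustrative figure
    },
}

def load_assumptions(blob: str) -> dict:
    """Both the DCF and LBO builders call this, so they cannot desync."""
    return json.loads(blob)["shared_assumptions"]

dcf_inputs = load_assumptions(json.dumps(MASTER))
lbo_inputs = load_assumptions(json.dumps(MASTER))
assert dcf_inputs["tax_rate_statutory"] == lbo_inputs["tax_rate_statutory"]
```

The point is architectural: any figure either model needs must pass through one extraction step, so a Medpace-style tax-rate split becomes impossible by construction.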

3. No Cross-Model Audit

The plugin suite has no equivalent of our Phase 4. There is no command or skill that:

  • Compares historical data across DCF and LBO for the same company
  • Verifies assumption consistency (tax rate, CapEx/Rev, D&A/Rev)
  • Checks structural integrity (BS balances, cash ties, S&U check)
  • Produces an audit trail with cell-level references

Fix: Add a /cross-audit command that reads DCF and LBO xlsx files and produces a consistency report.
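The core of such a command could look like the diff below. The assumption dicts are stand-ins; a real version would pull the values from the two workbooks with openpyxl's `load_workbook(..., data_only=True)`:

```python
# Hypothetical /cross-audit core: flag any shared assumption that
# differs between the DCF and LBO beyond a tolerance.
def cross_audit(dcf: dict, lbo: dict, tol: float = 1e-9) -> list[str]:
    issues = []
    for key in sorted(set(dcf) & set(lbo)):
        if abs(dcf[key] - lbo[key]) > tol:
            issues.append(f"{key}: DCF={dcf[key]} vs LBO={lbo[key]}")
    return issues

# The Medpace-style desync from section 2 gets flagged; matching
# assumptions pass silently.
report = cross_audit({"tax_rate": 0.15, "capex_pct_rev": 0.03},
                     {"tax_rate": 0.21, "capex_pct_rev": 0.03})
# report == ["tax_rate: DCF=0.15 vs LBO=0.21"]
```

Structural checks (BS balances, cash ties, S&U) would layer on top of this, reading computed cell values rather than assumptions.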

4. Template Dependency for LBO

The LBO skill (lines 10-17) says: "Always check for an attached template file first" and "NEVER build from scratch when a template is provided."

Problem: If no template is provided, the skill falls back to asking the user or using a generic examples/LBO_Model.xlsx. But the template path (/mnt/skills/public/xlsx/) may not exist in all environments. The from-scratch capability is underdeveloped.

5. Missing IB-Specific Checks

  • TLB mandatory amortization: not specified as flat (1% of original face) vs declining
  • Value creation bridge: not included in the LBO output
  • MOIC precision: not specified as 2 decimal places
  • Share count: no single-cell architecture enforced
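The flat-amortization convention is worth encoding explicitly, since it is the one the plugin leaves ambiguous. A sketch of the flat schedule the `=$B$[orig]*$B$[rate]` formula produces:

```python
# Flat TLB mandatory amortization: every year pays 1% of ORIGINAL face,
# mirroring =$B$[orig]*$B$[rate] -- not 1% of the declining balance.
def tlb_mandatory_amort(original_face: float, rate: float, years: int) -> list[float]:
    return [original_face * rate for _ in range(years)]

schedule = tlb_mandatory_amort(original_face=500.0, rate=0.01, years=5)
# Every payment is identical (5.0), unlike a declining-balance schedule
# where payments would shrink as the loan pays down.
```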

Ideas to Adopt from the Plugin

1. Purple Font for Same-Tab References

The plugin uses 4 colors instead of our 3:

  • Blue (#0000FF) = hardcoded inputs
  • Black (#000000) = formulas with calculations
  • Purple (#800080) = same-tab cell references (e.g., =B9)
  • Green (#008000) = cross-tab references (e.g., =Assumptions!B5)

This is more precise than our 3-color system. We should adopt purple for same-tab references.
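In xlsxwriter terms, adopting the convention is just four formats (hex codes from the list above; workbook and sheet names are illustrative):

```python
# The 4-color convention as xlsxwriter formats.
import xlsxwriter

wb = xlsxwriter.Workbook("color_demo.xlsx")
ws_a = wb.add_worksheet("Assumptions")
ws_d = wb.add_worksheet("DCF")

fmt_input    = wb.add_format({"font_color": "#0000FF"})  # blue: hardcoded input
fmt_formula  = wb.add_format({"font_color": "#000000"})  # black: calculation
fmt_same_tab = wb.add_format({"font_color": "#800080"})  # purple: same-tab ref
fmt_cross    = wb.add_format({"font_color": "#008000"})  # green: cross-tab ref

ws_a.write_number("B9", 100.0, fmt_input)
ws_a.write_formula("B10", "=B9*1.05", fmt_formula)
ws_a.write_formula("B11", "=B9", fmt_same_tab)            # pure same-tab reference
ws_d.write_formula("B5", "=Assumptions!B9", fmt_cross)    # cross-tab reference
wb.close()
```

The purple tier makes "this cell just mirrors another cell on this tab" visually distinct from a genuine calculation, which is the precision our 3-color scheme loses.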

2. Cell Comments on Every Input

The plugin requires: "Add cell comments AS each hardcoded value is created. Format: Source: [System/Document], [Date], [Reference], [URL if applicable]."

Our JSON has source notes but they don't flow into the Excel cells. Adding comments would improve the audit trail within the file itself.
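Flowing the notes into the file is straightforward with xlsxwriter's `write_comment`. The JSON field names and the note text below are illustrative:

```python
# Attach the JSON source note to the cell as it is written, so the
# audit trail travels inside the .xlsx itself.
import xlsxwriter

source_notes = {"B9": "Source: ADUS 10-K, 2025-02-25, Item 8"}  # hypothetical note

wb = xlsxwriter.Workbook("comments_demo.xlsx")
ws = wb.add_worksheet("Assumptions")
ws.write_number("B9", 1060.0)                 # illustrative revenue input
ws.write_comment("B9", source_notes["B9"])    # comment lives on the cell
wb.close()
```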

3. Interactive Section Checkpoints

The plugin stops after each major section for user verification. This catches errors earlier than our end-to-end build + post-hoc audit approach.

4. Sensitivity Table Dimensions

The plugin specifies ODD dimensions (5×5 or 7×7) to guarantee a true center cell for the base case. Good practice we should formalize.
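The reasoning is simple: a (2k+1)-cell axis stepped symmetrically around the base puts the base case in the exact center cell. A sketch with assumed WACC/growth values:

```python
# Odd-sized sensitivity axes guarantee a true center cell for the base case.
def axis(base: float, step: float, size: int) -> list[float]:
    assert size % 2 == 1, "sensitivity axes must be odd-sized"
    k = size // 2
    return [base + (i - k) * step for i in range(size)]

wacc_axis = axis(base=0.09, step=0.005, size=5)    # illustrative base WACC
g_axis    = axis(base=0.025, step=0.0025, size=5)  # illustrative terminal growth
center = (wacc_axis[len(wacc_axis) // 2], g_axis[len(g_axis) // 2])
# center == (0.09, 0.025): the base case sits exactly in the middle cell
```

With an even-sized axis there is no middle cell, so the base case ends up off-center or omitted entirely.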


Summary

The Anthropic financial services plugin provides a solid foundation for IB workflows, but has significant architectural gaps when used for real deal work:

  1. No data integrity layer — ad-hoc data sourcing without a single source of truth
  2. No cross-model consistency — DCF and LBO are independent silos
  3. openpyxl file quality issues — same bug we discovered and fixed
  4. Missing PE-specific features — TLB amort, value creation bridge, MOIC precision

Our hand-built pipeline demonstrates the architecture that would make these plugins production-ready: JSON extraction → parallel model builds → cross-model audit → presentation.