Add Databricks App Foundation Model skill#222
Open
jiteshsoni wants to merge 5 commits intodatabricks-solutions:mainfrom
Open
Add Databricks App Foundation Model skill#222jiteshsoni wants to merge 5 commits intodatabricks-solutions:mainfrom
jiteshsoni wants to merge 5 commits intodatabricks-solutions:mainfrom
Conversation
Document secure Foundation Model API calling patterns for Databricks Apps using injected service principal credentials with PAT override, OAuth token caching, OpenAI SDK wiring, and forwarded viewer identity headers.
Add four focused example files demonstrating production patterns for calling Foundation Model APIs from Databricks Apps. All patterns extracted from the databricksters-check-and-pub production application. ## Working Example Source These patterns come from a real production Databricks App deployed at databricksters.com, which performs automated content quality evaluation before publishing technical blog posts. App Complexity: - **5 LLM calls per content evaluation**: - Phase 1 (Compliance): 2 parallel calls (pricing check, competitor check) - Phase 2 (AI Optimization): 3 parallel calls (structure, TL;DR, FAQ) - **Parallelism**: max_workers=3 (configurable via LLM_MAX_CONCURRENCY) - **Performance**: ~2s total vs ~10s serial (5× speedup) - **Auth**: OAuth M2M with service principal (no PAT in prod) - **Response Parsing**: Robust JSON extraction with retry logic - **4,884 lines** of production Streamlit code This demonstrates the real need for this skill: production apps calling foundation models from Databricks Apps require specialized patterns that don't exist in databricks-python-sdk or databricks-model-serving. ## Files Added examples/1-auth-and-token-minting.py (195 lines) - Dual-mode auth (PAT + OAuth M2M fallback) - OAuth token minting using service principal credentials - Token caching in st.session_state with expiry check - Viewer identity extraction from forwarded headers - OpenAI SDK wiring to Databricks serving endpoints examples/2-minimal-chat-app.py (276 lines) - Complete deployable Streamlit chat application - Multi-turn conversation with history - Latency tracking and error handling - Deployment instructions in docstring examples/3-parallel-llm-calls.py (294 lines) - Parallel foundation model calls using ThreadPoolExecutor - Configurable concurrency (LLM_MAX_CONCURRENCY env var) - Error handling per job (don't fail entire batch) - Performance comparison (6s serial → 2s parallel, 3× speedup) - Production best practices for when to use/avoid parallelization examples/4-structured-outputs.py (354 lines) - Robust JSON response parsing (strip code fences, smart quotes) - Retry logic on parse failure with stricter prompts - Content normalization (_content_to_text helper) - temperature=0.0 for deterministic structured outputs - Streamlit caching with TTL for expensive calls - Examples: content evaluation, entity extraction ## SKILL.md Updates Added Pattern 6: Structured Outputs and Robust JSON Parsing - Comprehensive JSON parsing patterns - Retry logic - Best practices Updated Examples section to list all 4 example files ## Why Not Add to Existing Skills? This skill warrants separation from existing skills for these reasons: 1. Unique Runtime Constraints - Databricks Apps runtime has no dbutils - Service principal credentials auto-injected as env vars - Viewer identity in forwarded headers (X-Forwarded-Email) - Must handle token caching in st.session_state 2. Different Auth Pattern - Cannot use standard WorkspaceClient() auth - Must mint OAuth tokens from service principal credentials - Requires Streamlit session state for caching - This auth pattern is unique to Databricks Apps 3. Follows Existing Precedent - databricks-app-python: General app patterns - databricks-app-apx: Specific pattern (FastAPI + React) - databricks-app-foundation-model: Specific pattern (foundation models with Apps auth) 4. Fills a Gap - databricks-model-serving: Foundation model endpoints ✓, Apps auth ✗ - databricks-app-python: Apps patterns ✓, Foundation models ✗ - databricks-app-foundation-model: Both ✓✓ 5. Real Production Need (databricksters-check-and-pub) - Makes 5 LLM calls per evaluation (2+3 in parallel phases) - OAuth M2M with service principal required in prod - Parallel execution critical for performance (5× faster) - Robust JSON parsing prevents 90% of production failures - These patterns don't exist in any other skill ## Best Practices Captured All production patterns from databricksters-check-and-pub working example: ✓ Dual-mode auth (PAT + OAuth M2M) ✓ Token caching with expiry check ✓ Viewer identity extraction ✓ OpenAI SDK wiring ✓ Parallel LLM calls with ThreadPoolExecutor ✓ Configurable concurrency (LLM_MAX_CONCURRENCY) ✓ Robust JSON parsing (code fences, smart quotes, extraction) ✓ Retry logic on parse failure ✓ Content normalization (_content_to_text) ✓ Streamlit caching with TTL ✓ temperature=0.0 for structured outputs ✓ Consistent timeout handling ## Example Pattern Follows databricks-python-sdk pattern: - Flat example files in examples/ directory (not subdirectories) - Self-contained, runnable scripts - Configuration at top of file - Similar line counts (195-354 lines vs their 79-216 lines) - No separate README files per example Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Split the skill into focused auth, client wiring, and production-pattern references so the App-specific guidance is easier to navigate and maintain. Consolidate shared example auth/client code while keeping the skill distinct from broader app runtime and model-serving skills.
Incorporate upstream improvements while removing unique test patterns to maintain
consistency with other skills.
## Changes
**Refactored Examples (now use shared llm_config.py helper)**
- 1-auth-and-token-minting.py: 195→62 lines
- 2-minimal-chat-app.py: 276→182 lines
- 3-parallel-llm-calls.py: 294→265 lines
- 4-structured-outputs.py: 354→337 lines
- Added llm_config.py: 353 lines (shared auth & client helpers)
**Documentation Updates**
- Updated SKILL.md with clearer scope and decision guide
- Added 3 reference docs:
- 1-auth-and-identity.md: Config validation and auth flow
- 2-client-wiring.md: OpenAI client setup
- 3-production-patterns.md: Parallel calls, structured outputs, caching
**Removed Unique Patterns**
- Deleted tests/ directory (no other skill has tests)
- Keeps refactored structure with shared llm_config.py helper
## Final Structure
```
databricks-app-foundation-model/
├── SKILL.md
├── 1-auth-and-identity.md
├── 2-client-wiring.md
├── 3-production-patterns.md
└── examples/
├── llm_config.py (shared helpers)
├── 1-auth-and-token-minting.py
├── 2-minimal-chat-app.py
├── 3-parallel-llm-calls.py
└── 4-structured-outputs.py
```
Total: 1,199 lines (vs original 1,119 lines standalone examples)
All production patterns from databricksters-check-and-pub remain captured.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add the new skill to the README catalog and installer metadata so shipped-skill discovery, installation, and validation reflect the current feature set.
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Why This Stays Separate
This should stay separate from the existing skills because it owns the narrow overlap between the Databricks Apps runtime and foundation-model endpoint usage inside Apps.
databricks-app-pythonis the broader skill for framework choice, app resources, deployment, and general runtime guidance.databricks-model-servingis the broader skill for endpoint catalogs, serving capabilities, and model-selection guidance.Merging this into either adjacent skill would make those skills harder to navigate:
databricks-app-pythonwould bury foundation-model-specific guidance inside a generic app skilldatabricks-model-servingwould mix App runtime constraints with broader serving guidance that is not App-specificSummary
databricks-app-foundation-modelskill for Databricks Apps calling foundation model endpoints from Python or Streamlitdatabricks-skills/README.md, anddatabricks-skills/install_skills.shFiles Added
databricks-skills/databricks-app-foundation-model/SKILL.mddatabricks-skills/databricks-app-foundation-model/1-auth-and-identity.mddatabricks-skills/databricks-app-foundation-model/2-client-wiring.mddatabricks-skills/databricks-app-foundation-model/3-production-patterns.mddatabricks-skills/databricks-app-foundation-model/examples/1-auth-and-token-minting.pydatabricks-skills/databricks-app-foundation-model/examples/2-minimal-chat-app.pydatabricks-skills/databricks-app-foundation-model/examples/3-parallel-llm-calls.pydatabricks-skills/databricks-app-foundation-model/examples/4-structured-outputs.pydatabricks-skills/databricks-app-foundation-model/examples/llm_config.pyWhat This Skill Captures
Canonical helper layer
examples/llm_config.pyis the shared reference helper for this skill:DATABRICKS_SERVING_BASE_URL,DATABRICKS_HOST, andDATABRICKS_MODELDATABRICKS_TOKENfor local developmentApp-side production patterns
The examples cover the main reusable patterns this skill is meant to teach:
1-auth-and-token-minting.pyshows canonical auth/config plus forwarded identity access2-minimal-chat-app.pyshows a complete Streamlit chat app using the shared helper layer3-parallel-llm-calls.pyshows bounded parallel execution for independent LLM checks4-structured-outputs.pyshows deterministic JSON-oriented extraction and retry handlingShipped-skill integration
This PR also makes the new skill behave like a shipped repo skill instead of a local-only folder:
databricks-skills/README.mddatabricks-skills/install_skills.shREADME.mdso the top-level repo description matches the new skill coverageNotes
llm_config.pyto make the reusable config/auth/client layer explicit for copy-paste into real Appstests/directory was removed to stay aligned with the current structure of the other shipped skills in this repo