SkillForge is the enterprise-grade platform for Anthropic Agent Skills — custom instruction sets that extend Claude's capabilities for specific tasks.
Create, validate, test, secure, version, and deploy skills across multiple AI platforms.
```bash
# Generate a complete skill from a natural language description
skillforge generate "Review Python code for security vulnerabilities and best practices"

# Validate, test, and scan for security issues
skillforge validate ./skills/code-reviewer
skillforge test ./skills/code-reviewer
skillforge security scan ./skills/code-reviewer

# Deploy to Claude, OpenAI, or LangChain
skillforge publish ./skills/code-reviewer --platform claude
skillforge publish ./skills/code-reviewer --platform openai --mode gpt
```

Agent Skills are custom instructions that teach Claude how to handle specific tasks. When you upload a skill to Claude, it gains specialized knowledge and follows your defined workflows, examples, and guidelines.
Use cases:
- Code review with your team's standards
- Git commit message formatting
- API documentation generation
- Data analysis workflows
- Domain-specific assistants
SkillForge handles the entire skill development lifecycle: generate → validate → test → secure → version → deploy.
| Feature | Description |
|---|---|
| AI Generation | Generate production-ready skills from natural language descriptions |
| Testing Framework | Mock and live testing with assertions, regression baselines |
| Versioning | Semantic versioning, lock files, version constraints |
| MCP Integration | Expose skills as MCP tools, import tools from MCP servers |
| Security Scanning | Detect prompt injection, credential exposure, data exfiltration |
| Governance | Trust tiers, policies, audit trails for enterprise compliance |
| Multi-Platform | Deploy to Claude, OpenAI (Custom GPTs), LangChain Hub |
| Analytics | Usage tracking, cost analysis, ROI calculation |
| Enterprise Config | SSO, cloud storage, proxy support |
```bash
pip install ai-skillforge
```

For AI-powered skill generation, install the optional extras:

```bash
pip install ai-skillforge[ai]
```

This includes the Anthropic and OpenAI SDKs for AI-powered skill generation.
```bash
skillforge doctor     # Check installation health
skillforge providers  # Check available AI providers
```

Generate a complete, production-ready skill from a description:
```bash
# Set your API key
export ANTHROPIC_API_KEY=your-key-here

# Generate the skill
skillforge generate "Help users write clear, conventional git commit messages"
```

Output:
```text
✓ Generated skill: skills/git-commit-message-writer
  Provider: anthropic (claude-sonnet-4-20250514)
```
The generated skill includes:
- Proper YAML frontmatter
- Structured instructions
- Multiple realistic examples
- Edge case handling
- Best practices guidance
Use built-in templates for common use cases:
```bash
# List available templates
skillforge templates

# Create from template
skillforge new my-reviewer --template code-review

# Preview a template before using
skillforge templates show code-review
```

Available templates:
| Template | Description |
|---|---|
| `code-review` | Review code for best practices, bugs, security |
| `git-commit` | Write conventional commit messages |
| `git-pr` | Create comprehensive PR descriptions |
| `api-docs` | Generate API documentation |
| `debugging` | Systematic debugging assistance |
| `sql-helper` | Write and optimize SQL queries |
| `test-writer` | Generate unit tests |
| `explainer` | Explain complex code or concepts |
Create a skill scaffold and customize it yourself:
```bash
# Create scaffold
skillforge new commit-helper -d "Help write git commit messages"

# Edit skills/commit-helper/SKILL.md with your instructions

# Validate and bundle
skillforge validate ./skills/commit-helper
skillforge bundle ./skills/commit-helper
```

Here's the full workflow from idea to deployment:
```bash
# 1. Generate a skill
skillforge generate "Review Python code for security vulnerabilities" --name security-reviewer

# 2. Check the generated content
skillforge show ./skills/security-reviewer

# 3. Add tests (create tests.yml in the skill directory)
#    See "Skill Testing" section for format

# 4. Run tests to verify behavior
skillforge test ./skills/security-reviewer

# 5. Improve it with AI (optional)
skillforge improve ./skills/security-reviewer "Add examples for Django and FastAPI"

# 6. Validate against Anthropic requirements
skillforge validate ./skills/security-reviewer

# 7. Bundle for upload (for claude.ai)
skillforge bundle ./skills/security-reviewer

# OR: Install directly to Claude Code
skillforge install ./skills/security-reviewer
```

Deploy options:

- Claude Code: `skillforge install ./skills/my-skill` (skill is immediately available)
- claude.ai: Upload the `.zip` file at Settings → Features → Upload Skill
Generate a complete skill using AI.
```bash
# Basic generation
skillforge generate "Analyze CSV data and generate insights"

# With custom name
skillforge generate "Debug JavaScript errors" --name js-debugger

# With project context (AI analyzes your codebase for relevant details)
skillforge generate "Help with this project's API" --context ./my-project

# Specify provider and model
skillforge generate "Write unit tests" --provider anthropic --model claude-sonnet-4-20250514

# Output to specific directory
skillforge generate "Format SQL queries" --out ./my-skills
```

Options:
| Option | Description |
|---|---|
| `--name, -n` | Custom skill name (auto-generated if omitted) |
| `--out, -o` | Output directory (default: `./skills`) |
| `--context, -c` | Project directory to analyze for context |
| `--provider, -p` | AI provider: `anthropic`, `openai`, `ollama` |
| `--model, -m` | Specific model to use |
| `--force, -f` | Overwrite existing skill |
Enhance an existing skill with AI assistance.
```bash
# Add more examples
skillforge improve ./skills/my-skill "Add 3 examples for error handling"

# Expand coverage
skillforge improve ./skills/my-skill "Add guidance for monorepo and squash merge scenarios"

# Restructure
skillforge improve ./skills/my-skill "Reorganize into clearer sections with a quick reference"

# Preview changes without saving
skillforge improve ./skills/my-skill "Simplify the instructions" --dry-run
```

Check available AI providers and their status.

```bash
skillforge providers
```

Supported providers:
| Provider | Setup | Default Model |
|---|---|---|
| Anthropic | `export ANTHROPIC_API_KEY=...` | `claude-sonnet-4-20250514` |
| OpenAI | `export OPENAI_API_KEY=...` | `gpt-4o` |
| Ollama | `ollama serve` | `llama3.2` |
Create a new skill scaffold.
```bash
skillforge new my-skill -d "Description of what the skill does"
skillforge new my-skill --with-scripts          # Include scripts/ directory
skillforge new my-skill --template code-review  # Start from template
```

List and preview available skill templates.
```bash
# List all templates
skillforge templates

# Preview a specific template
skillforge templates show code-review
```

Validate a skill against Anthropic's requirements.
```bash
skillforge validate ./skills/my-skill
skillforge validate ./skills/my-skill --strict  # Treat warnings as errors
```

Validates:
- SKILL.md exists and has valid frontmatter
- Name format (lowercase, hyphens, no reserved words)
- Description length and content
- File size limits
- Required structure
Package a skill into a zip file for upload.
```bash
skillforge bundle ./skills/my-skill
skillforge bundle ./skills/my-skill -o custom-name.zip
```

Display skill metadata and content summary.

```bash
skillforge show ./skills/my-skill
```

Preview how Claude will see the skill.

```bash
skillforge preview ./skills/my-skill
```

List all skills in a directory.

```bash
skillforge list ./skills
```

Add reference documents or scripts to a skill.

```bash
# Add a reference document
skillforge add ./skills/my-skill doc REFERENCE

# Add a script
skillforge add ./skills/my-skill script helper --language python
```

Check installation health and dependencies.

```bash
skillforge doctor
```

Test your skills before deployment. Supports mock mode (fast, free) and live mode (real API calls).
```bash
# Run tests in mock mode (default)
skillforge test ./skills/my-skill

# Run tests with real API calls
skillforge test ./skills/my-skill --mode live

# Filter tests by tags
skillforge test ./skills/my-skill --tags smoke,critical

# Filter tests by name
skillforge test ./skills/my-skill --name "basic_*"

# Output formats for CI/CD
skillforge test ./skills/my-skill --format json -o results.json
skillforge test ./skills/my-skill --format junit -o results.xml

# Estimate cost before running live tests
skillforge test ./skills/my-skill --mode live --estimate-cost

# Stop at first failure
skillforge test ./skills/my-skill --stop

# Verbose output with responses
skillforge test ./skills/my-skill -v
```

Options:
| Option | Description |
|---|---|
| `--mode, -m` | Test mode: `mock` (default) or `live` |
| `--provider, -p` | AI provider for live mode |
| `--model` | Model to use for live mode |
| `--tags, -t` | Run only tests with these tags (comma-separated) |
| `--name, -n` | Run only tests matching these names |
| `--format, -f` | Output format: `human`, `json`, `junit` |
| `--output, -o` | Write results to file |
| `--verbose, -v` | Show detailed output including responses |
| `--estimate-cost` | Show cost estimate without running tests |
| `--stop, -s` | Stop at first failure |
| `--timeout` | Timeout per test in seconds (default: 30) |
Install and manage skills directly in Claude Code.
Install a skill to Claude Code's skills directory.
```bash
# Install to user directory (~/.claude/skills/) - available in all projects
skillforge install ./skills/code-reviewer

# Install to project directory (./.claude/skills/) - project-specific
skillforge install ./skills/project-helper --project

# Overwrite existing installation
skillforge install ./skills/code-reviewer --force
```

Remove a skill from Claude Code.

```bash
skillforge uninstall code-reviewer
skillforge uninstall project-helper --project
```

Install all skills from a directory.
```bash
# Sync all skills to user directory
skillforge sync ./skills

# Sync to project directory
skillforge sync ./skills --project

# Force overwrite existing
skillforge sync ./skills --force
```

List installed Claude Code skills.

```bash
# List all installed skills
skillforge installed

# Show only user-level skills
skillforge installed --user

# Show only project-level skills
skillforge installed --project

# Show installation paths
skillforge installed --paths
```

Manage skill versions with semantic versioning support.

Display the current version of a skill.

```bash
skillforge version show ./skills/my-skill
```

Increment the skill version.
```bash
# Bump patch version (1.0.0 -> 1.0.1)
skillforge version bump ./skills/my-skill

# Bump minor version (1.0.0 -> 1.1.0)
skillforge version bump ./skills/my-skill --minor

# Bump major version (1.0.0 -> 2.0.0)
skillforge version bump ./skills/my-skill --major

# Set a specific version
skillforge version bump ./skills/my-skill --set 2.0.0
```

List available versions from registries.

```bash
skillforge version list code-reviewer
```

Generate or verify lock files for reproducible installations.

```bash
# Generate lock file (skillforge.lock)
skillforge lock ./skills

# Verify installed skills match lock file
skillforge lock --check

# Install from lock file
skillforge pull code-reviewer --locked
```

Lock file format (`skillforge.lock`):
```yaml
version: "1"
skills:
  code-reviewer:
    version: "1.2.0"
    source: "https://github.com/skillforge/community-skills"
    checksum: "sha256:abc123..."
```

Integrate with the Model Context Protocol ecosystem.
Create a new MCP server project.
```bash
skillforge mcp init ./my-mcp-server
skillforge mcp init ./my-server --name "My Tools" --description "Custom tools"
```

Add a skill to an MCP server.

```bash
skillforge mcp add ./my-server ./skills/code-reviewer
```

Remove a tool from an MCP server.

```bash
skillforge mcp remove ./my-server code-reviewer
```

List tools in an MCP server.

```bash
skillforge mcp list ./my-server
```

Run an MCP server (stdio transport for Claude Desktop).

```bash
skillforge mcp serve ./my-server
```

Show Claude Desktop configuration snippet.

```bash
skillforge mcp config ./my-server
```

Discover tools from configured MCP servers.

```bash
# Auto-detect Claude Desktop config
skillforge mcp discover

# Use custom config file
skillforge mcp discover --config ./mcp-config.json
```

Import an MCP tool as a SkillForge skill.

```bash
skillforge mcp import my-tool-name
skillforge mcp import my-tool --output ./skills
```

Scan skills for security vulnerabilities.
Scan a skill for security issues.
```bash
# Basic scan
skillforge security scan ./skills/my-skill

# Filter by minimum severity
skillforge security scan ./skills/my-skill --min-severity medium

# JSON output for CI/CD
skillforge security scan ./skills/my-skill --format json

# Fail if issues found (for CI)
skillforge security scan ./skills/my-skill --fail-on-issues
```

Detects:
- Prompt injection attempts
- Jailbreak patterns
- Credential exposure (API keys, passwords, private keys)
- Data exfiltration patterns
- Unsafe URLs and code execution risks
List available security patterns.
```bash
# List all patterns
skillforge security patterns

# Filter by severity
skillforge security patterns --severity critical

# Filter by type
skillforge security patterns --type prompt_injection
```

Enterprise governance for skill management.
View or set skill trust tier.
```bash
# View trust status
skillforge governance trust ./skills/my-skill

# Set trust tier
skillforge governance trust ./skills/my-skill --set verified
skillforge governance trust ./skills/my-skill --set enterprise
```

Trust tiers: `untrusted`, `community`, `verified`, `enterprise`
Manage governance policies.
```bash
# List all policies
skillforge governance policy list

# Show policy details
skillforge governance policy show production

# Create custom policy
skillforge governance policy create my-policy

# Check skill against policy
skillforge governance check ./skills/my-skill --policy production
```

Built-in policies:
| Policy | Min Trust | Max Risk | Approval |
|---|---|---|---|
| `development` | untrusted | 100 | No |
| `staging` | community | 70 | No |
| `production` | verified | 30 | Yes |
| `enterprise` | enterprise | 10 | Yes |
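Conceptually, the gate a policy applies follows directly from the table above. The sketch below is a simplified illustration of that logic (not SkillForge's actual implementation), using the tier ordering and the table's thresholds:

```python
# Simplified illustration of a governance policy gate, derived from the
# built-in policy table. Not SkillForge's actual implementation.
TIERS = ["untrusted", "community", "verified", "enterprise"]

POLICIES = {
    "development": {"min_trust": "untrusted", "max_risk": 100, "approval": False},
    "staging": {"min_trust": "community", "max_risk": 70, "approval": False},
    "production": {"min_trust": "verified", "max_risk": 30, "approval": True},
    "enterprise": {"min_trust": "enterprise", "max_risk": 10, "approval": True},
}

def check_policy(policy: str, trust: str, risk_score: int, approved: bool) -> bool:
    p = POLICIES[policy]
    if TIERS.index(trust) < TIERS.index(p["min_trust"]):
        return False  # trust tier below the policy's minimum
    if risk_score > p["max_risk"]:
        return False  # security risk score exceeds the policy's ceiling
    if p["approval"] and not approved:
        return False  # formal approval required but missing
    return True

print(check_policy("production", "verified", 25, approved=True))   # True
print(check_policy("production", "community", 25, approved=True))  # False
```

A skill must clear all three checks — minimum trust tier, maximum risk score, and (for stricter policies) formal approval — before deployment is allowed.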
View audit trail of skill events.
```bash
# View recent events
skillforge governance audit

# Filter by skill
skillforge governance audit --skill my-skill

# Filter by date
skillforge governance audit --from 2026-01-01 --to 2026-01-31

# Filter by event type
skillforge governance audit --type security_scan

# Summary view
skillforge governance audit --summary
```

Formally approve a skill for deployment.

```bash
skillforge governance approve ./skills/my-skill --tier enterprise
skillforge governance approve ./skills/my-skill --tier verified --notes "Reviewed by security team"
```

Deploy skills to multiple AI platforms.
Publish a skill to an AI platform.
```bash
# Publish to Claude (Claude Code installation)
skillforge publish ./skills/my-skill --platform claude

# Publish to Claude API
skillforge publish ./skills/my-skill --platform claude --mode api

# Publish as OpenAI Custom GPT
skillforge publish ./skills/my-skill --platform openai --mode gpt

# Publish to OpenAI Assistants API
skillforge publish ./skills/my-skill --platform openai --mode assistant

# Publish to LangChain Hub
skillforge publish ./skills/my-skill --platform langchain --mode hub

# Export as LangChain Python module
skillforge publish ./skills/my-skill --platform langchain --mode module

# Dry run (validate without publishing)
skillforge publish ./skills/my-skill --platform claude --dry-run

# Publish to all platforms
skillforge publish ./skills/my-skill --all
```

List available platforms and their capabilities.

```bash
skillforge platforms
```

Track skill usage and calculate ROI.
View skill usage metrics.
```bash
skillforge analytics show my-skill
skillforge analytics show my-skill --period 30d
```

Calculate return on investment.

```bash
skillforge analytics roi my-skill
skillforge analytics roi my-skill --hourly-rate 75 --time-saved 5
```

Generate comprehensive usage report.

```bash
skillforge analytics report
skillforge analytics report --period 30d --format json
```

View cost breakdown by model.

```bash
skillforge analytics cost my-skill
skillforge analytics cost my-skill --period 7d
```

Project future costs.

```bash
skillforge analytics estimate my-skill --daily 10
skillforge analytics estimate my-skill --monthly 500
```

Manage SkillForge configuration.
Display current configuration.
```bash
skillforge config show
skillforge config show --section auth
```

Set a configuration value.

```bash
skillforge config set default_model gpt-4o
skillforge config set log_level debug
skillforge config set proxy.http_proxy http://proxy:8080
```

Show configuration file locations.

```bash
skillforge config path
```

Create a configuration file.

```bash
# Create user config (~/.config/skillforge/config.yml)
skillforge config init

# Create project config (./.skillforge.yml)
skillforge config init --project
```

Environment variable overrides: all config values can be overridden with the `SKILLFORGE_` prefix:

```bash
export SKILLFORGE_DEFAULT_MODEL=gpt-4o
export SKILLFORGE_LOG_LEVEL=debug
export SKILLFORGE_COLOR_OUTPUT=false
```

Upgrade skills from older formats.
List skills needing migration.
```bash
skillforge migrate check ./skills
skillforge migrate check ./skills --recursive
```

Migrate a skill to v1.0 format.

```bash
# Migrate single skill (creates backup automatically)
skillforge migrate run ./skills/my-skill

# Migrate without backup
skillforge migrate run ./skills/my-skill --no-backup

# Migrate entire directory
skillforge migrate run ./skills --recursive
```

Preview migration changes without applying.

```bash
skillforge migrate preview ./skills/my-skill
```

Skill format versions:
| Version | Characteristics |
|---|---|
| v0.1 | No version field, basic frontmatter |
| v0.9 | Has version field, no schema_version |
| v1.0 | Has schema_version, min_skillforge_version |
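The table above implies a simple detection rule based on which frontmatter fields are present. The sketch below illustrates that rule (an illustration only, not SkillForge's migration code):

```python
# Illustrative detection of a skill's format version from its frontmatter
# fields, following the version table above. Not SkillForge's actual code.
def detect_format_version(frontmatter: dict) -> str:
    if "schema_version" in frontmatter:
        return "v1.0"  # has schema_version (and min_skillforge_version)
    if "version" in frontmatter:
        return "v0.9"  # version field present, but no schema_version
    return "v0.1"      # no version field: basic frontmatter only

print(detect_format_version({"name": "x"}))                           # v0.1
print(detect_format_version({"name": "x", "version": "1.0.0"}))       # v0.9
print(detect_format_version({"name": "x", "schema_version": "1.0"}))  # v1.0
```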
Check installation health and environment.
```bash
skillforge doctor
```

Show detailed SkillForge information.

```bash
skillforge info
```

Test your skills before deployment to catch issues early.
Tests are defined in YAML files within your skill directory:
```text
my-skill/
├── SKILL.md
├── tests.yml          # Option 1: Single test file
└── tests/             # Option 2: Multiple test files
    ├── smoke.test.yml
    └── full.test.yml
```
```yaml
version: "1.0"
defaults:
  timeout: 30
tests:
  - name: "basic_commit_request"
    description: "Tests basic commit message generation"
    input: "Help me write a commit message for adding user auth"
    assertions:
      - type: contains
        value: "feat"
      - type: regex
        pattern: "(feat|fix|docs):"
      - type: length
        min: 10
        max: 200
    trigger:
      should_trigger: true
    mock:
      response: |
        feat: add user authentication
        - Implement login/logout
        - Add session management
    tags: ["smoke", "basic"]

  - name: "handles_empty_input"
    description: "Should ask for clarification"
    input: "commit message"
    assertions:
      - type: contains
        value: "what changes"
        case_sensitive: false
    trigger:
      should_trigger: false
    mock:
      response: "What changes would you like me to describe?"
    tags: ["edge-case"]
```

| Type | Description | Example |
|---|---|---|
| `contains` | Text is present | `value: "error"` |
| `not_contains` | Text is absent | `value: "exception"` |
| `regex` | Pattern matches | `pattern: "(feat\|fix):"` |
| `starts_with` | Response starts with | `value: "Here's"` |
| `ends_with` | Response ends with | `value: "."` |
| `length` | Length within bounds | `min: 10, max: 500` |
| `json_valid` | Valid JSON | (no params) |
| `json_path` | JSONPath value | `path: "$.status", value: "ok"` |
| `equals` | Exact match | `value: "expected text"` |
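To make the semantics concrete, here is a minimal sketch of how a mock-mode runner might evaluate a few of the assertion types above. This is an illustration only, not SkillForge's actual assertion engine:

```python
# Minimal sketch of evaluating a few assertion types against a response.
# Illustrative only; not SkillForge's implementation.
import re

def check(response: str, assertion: dict) -> bool:
    kind = assertion["type"]
    if kind == "contains":
        needle, hay = assertion["value"], response
        if not assertion.get("case_sensitive", True):
            needle, hay = needle.lower(), hay.lower()
        return needle in hay
    if kind == "regex":
        return re.search(assertion["pattern"], response) is not None
    if kind == "length":
        lo = assertion.get("min", 0)
        hi = assertion.get("max", float("inf"))
        return lo <= len(response) <= hi
    raise ValueError(f"unsupported assertion type: {kind}")

resp = "feat: add user authentication"
print(check(resp, {"type": "contains", "value": "feat"}))        # True
print(check(resp, {"type": "regex", "pattern": "(feat|fix):"}))  # True
print(check(resp, {"type": "length", "min": 10, "max": 200}))    # True
```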
Mock Mode (default):
- Fast and free - no API calls
- Uses pattern matching for trigger detection
- Validates assertions against mock responses
- Great for rapid development and CI
Live Mode (--mode live):
- Makes real API calls
- Tests actual skill behavior
- Tracks token usage and cost
- Use for final validation before deployment
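As a rough illustration of what `--estimate-cost` involves, a live-mode estimate multiplies expected token counts by per-token prices. The prices below are placeholder values for the sketch, not SkillForge's or any provider's actual rates:

```python
# Back-of-envelope cost arithmetic for a live test run.
# Prices are placeholder values, not real model pricing.
def estimate_cost(n_tests: int, in_tokens: int, out_tokens: int,
                  in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Estimated USD cost: tokens per test, priced per million tokens."""
    per_test = (in_tokens * in_price_per_mtok
                + out_tokens * out_price_per_mtok) / 1_000_000
    return n_tests * per_test

# 20 tests, ~800 input / ~400 output tokens each, placeholder prices
print(round(estimate_cost(20, 800, 400, 3.0, 15.0), 4))  # 0.168
```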
```bash
# Check cost before running
skillforge test ./skills/my-skill --mode live --estimate-cost

# Run live tests
skillforge test ./skills/my-skill --mode live
```

Compare responses against recorded baselines to catch unintended changes.
```bash
# Record baseline responses
skillforge test ./skills/my-skill --record-baselines

# Run regression tests (compare against baselines)
skillforge test ./skills/my-skill --regression

# Adjust similarity threshold (default: 80%)
skillforge test ./skills/my-skill --regression --threshold 0.9
```

Regression assertion type:
```yaml
tests:
  - name: "consistent_output"
    input: "Review this code"
    assertions:
      - type: similar_to
        baseline: "baseline_response_name"
        threshold: 0.8
```

Use JUnit XML output for CI systems:

```bash
skillforge test ./skills/my-skill --format junit -o test-results.xml
```

GitHub Actions example:
```yaml
- name: Test skills
  run: |
    skillforge test ./skills/my-skill --format junit -o results.xml
- name: Upload test results
  uses: actions/upload-artifact@v3
  with:
    name: test-results
    path: results.xml
```

```python
from pathlib import Path

from skillforge import (
    load_test_suite,
    run_test_suite,
    TestSuiteResult,
)

# Load skill and tests
skill, suite = load_test_suite(Path("./skills/my-skill"))

# Run in mock mode
result: TestSuiteResult = run_test_suite(skill, suite, mode="mock")

print(f"Passed: {result.passed_tests}/{result.total_tests}")
print(f"Duration: {result.duration_ms:.1f}ms")

if not result.success:
    for test_result in result.test_results:
        if not test_result.passed:
            print(f"FAILED: {test_result.test_case.name}")
            for assertion in test_result.failed_assertions:
                print(f"  - {assertion.message}")
```

```text
my-skill/
├── SKILL.md       # Required: Main skill definition
├── REFERENCE.md   # Optional: Additional context/documentation
├── GUIDELINES.md  # Optional: Style guides, standards
└── scripts/       # Optional: Executable scripts
    ├── helper.py
    └── utils.sh
```

Every skill requires a SKILL.md file with YAML frontmatter:
---
name: code-reviewer
description: Use when asked to review code for bugs, security issues, or improvements. Provides structured feedback with severity levels and actionable suggestions.
---
# Code Reviewer
You are an expert code reviewer. Analyze code for bugs, security vulnerabilities, performance issues, and style improvements.
## Instructions
1. **Read the code** carefully and understand its purpose
2. **Identify issues** categorized by severity (Critical/High/Medium/Low)
3. **Provide fixes** with corrected code snippets
4. **Suggest improvements** for maintainability and performance
## Response Format
Structure your review as:
- **Summary**: One-line overview
- **Issues**: Categorized findings with severity
- **Recommendations**: Prioritized action items
## Examples
### Example 1: SQL Injection
**User request**: "Review this database function"
```python
def get_user(id):
    query = f"SELECT * FROM users WHERE id = {id}"
    return db.execute(query)
```

**Response**: Summary: Critical SQL injection vulnerability found.

Issues:
- **Critical**: SQL injection via string interpolation

Recommendation:

```python
def get_user(id: int):
    query = "SELECT * FROM users WHERE id = %s"
    return db.execute(query, (id,))
```
### Frontmatter Requirements
| Field | Requirements |
|-------|-------------|
| `name` | Max 64 characters. Lowercase letters, numbers, hyphens only. Cannot contain "anthropic" or "claude". |
| `description` | Max 1024 characters. Explain **when** Claude should use this skill. Cannot contain XML tags. |
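The frontmatter rules above can be expressed as a short checklist. The sketch below illustrates them (SkillForge's own validator may check more than this):

```python
# Sketch of the frontmatter rules from the table above. Illustrative;
# SkillForge's validator may enforce additional checks.
import re

def check_frontmatter(name: str, description: str) -> list[str]:
    errors = []
    if len(name) > 64:
        errors.append("name exceeds 64 characters")
    if not re.fullmatch(r"[a-z0-9-]+", name):
        errors.append("name may use only lowercase letters, numbers, hyphens")
    if "anthropic" in name.lower() or "claude" in name.lower():
        errors.append('name cannot contain "anthropic" or "claude"')
    if len(description) > 1024:
        errors.append("description exceeds 1024 characters")
    if re.search(r"<[^>]+>", description):
        errors.append("description cannot contain XML tags")
    return errors

print(check_frontmatter("code-reviewer", "Use when asked to review code."))  # []
print(check_frontmatter("claude-helper", "ok"))
```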
### Writing Effective Skills
**Do:**
- Be specific about when the skill applies
- Include realistic examples with input/output
- Cover edge cases and error scenarios
- Use consistent formatting
- Provide actionable instructions
**Don't:**
- Write vague or generic instructions
- Skip examples
- Assume context Claude won't have
- Include sensitive data or secrets
---
## Programmatic Usage
Use SkillForge as a Python library. For API stability, import from `skillforge.api`:
```python
from pathlib import Path

from skillforge.api import (
    # Core
    Skill,
    validate_skill,
    bundle_skill,
    # Testing
    run_tests,
    TestResult,
    # Security
    scan_skill,
    ScanResult,
    Severity,
    # Governance
    TrustTier,
    check_policy,
    # Platforms
    publish_skill,
    Platform,
    # Analytics
    record_success,
    get_skill_metrics,
    # Versioning
    SkillVersion,
    parse_version,
    bump_version,
    # MCP
    skill_to_mcp_tool,
    create_mcp_server,
    # Config
    get_config,
    SkillForgeConfig,
)
```

```python
from pathlib import Path

from skillforge import (
    generate_skill,
    validate_skill_directory,
    bundle_skill,
)

# Generate a skill
result = generate_skill(
    description="Help users write SQL queries",
    name="sql-helper",
    provider="anthropic",
)
if result.success:
    print(f"Generated: {result.skill.name}")

# Validate a skill directory
skill_path = Path("./skills/sql-helper")
validation = validate_skill_directory(skill_path)
if validation.valid:
    print("Skill is valid!")

# Bundle for deployment
bundle_result = bundle_skill(skill_path)
print(f"Bundle created: {bundle_result.output_path}")
```

```python
from pathlib import Path

from skillforge.api import scan_skill, Severity

result = scan_skill(Path("./skills/my-skill"))
print(f"Risk score: {result.risk_score}/100")
print(f"Passed: {result.passed}")

for finding in result.findings:
    if finding.severity >= Severity.HIGH:
        print(f"[{finding.severity.name}] {finding.message}")
```

```python
from pathlib import Path

from skillforge.api import publish_skill, Platform

# Publish to Claude
result = publish_skill(
    Path("./skills/my-skill"),
    platform=Platform.CLAUDE,
    mode="code",
)

# Publish as OpenAI Custom GPT
result = publish_skill(
    Path("./skills/my-skill"),
    platform=Platform.OPENAI,
    mode="gpt",
)
```

```python
from skillforge.api import record_success, get_skill_metrics, calculate_roi

# Record a successful invocation
record_success(
    skill_name="code-reviewer",
    latency_ms=1500,
    input_tokens=100,
    output_tokens=500,
    model="claude-sonnet-4-20250514",
)

# Get metrics
metrics = get_skill_metrics("code-reviewer")
print(f"Total invocations: {metrics.total_invocations}")
print(f"Success rate: {metrics.success_rate:.1%}")

# Calculate ROI
roi = calculate_roi("code-reviewer", hourly_rate=75, minutes_saved=5)
print(f"ROI: {roi.roi_percentage:.0f}%")
```

```python
from pathlib import Path

from skillforge.api import (
    skill_to_mcp_tool,
    create_mcp_server,
    Skill,
)

# Convert skill to MCP tool
skill = Skill.from_directory(Path("./skills/code-reviewer"))
mcp_tool = skill_to_mcp_tool(skill)

# Create MCP server
server = create_mcp_server(
    path=Path("./my-mcp-server"),
    name="My Tools",
)
```

Note: Functions that work with files require `Path` objects, not strings.
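The versioning helpers in the import list (`parse_version`, `bump_version`) follow semantic versioning, but their exact signatures are not shown in this document. The sketch below is a generic semver illustration of the behavior the CLI examples describe, not SkillForge's API:

```python
# Generic semantic-versioning sketch. SkillForge's parse_version /
# bump_version may differ in signature and behavior.
def parse(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def bump(version: str, part: str = "patch") -> str:
    major, minor, patch = parse(version)
    if part == "major":
        return f"{major + 1}.0.0"   # reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # reset patch
    return f"{major}.{minor}.{patch + 1}"

print(bump("1.0.0"))           # 1.0.1
print(bump("1.0.0", "minor"))  # 1.1.0
print(bump("1.0.0", "major"))  # 2.0.0
```

This matches the `skillforge version bump` examples earlier: patch by default, `--minor` and `--major` resetting the lower components.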
```bash
# Check provider status
skillforge providers

# Ensure API key is set
export ANTHROPIC_API_KEY=your-key-here

# Or use OpenAI
export OPENAI_API_KEY=your-key-here

# Or start Ollama
ollama serve
```

```bash
pip install ai-skillforge[ai]
# or
pip install anthropic
```

| Error | Solution |
|---|---|
| "Name contains invalid characters" | Use only lowercase letters, numbers, and hyphens |
| "Name contains reserved word" | Remove "anthropic" or "claude" from name |
| "Description too long" | Keep under 1024 characters |
| "Missing SKILL.md" | Ensure SKILL.md exists in skill directory |
- Store API keys in environment variables, not in code or skill files
- Never commit `.env` files or API keys to version control
- Use separate API keys for development and production

```bash
# Set API key for current session only
export ANTHROPIC_API_KEY=your-key

# Or use a .env file (add to .gitignore)
echo "ANTHROPIC_API_KEY=your-key" >> .env
```

- Review generated skills before deploying — AI-generated content should be verified
- Never include secrets in SKILL.md files (passwords, tokens, internal URLs)
- Scripts are user-provided — SkillForge does not execute scripts, but review them before use
- Skills uploaded to Claude have access to your conversations
- SkillForge validates zip files to prevent path traversal attacks
- Symlinks are excluded from bundles for security
- Maximum recommended bundle size: 10MB
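The path-traversal check mentioned above rejects zip entries that would escape the extraction directory. The sketch below illustrates the idea with the standard `zipfile` module (an illustration, not SkillForge's implementation):

```python
# Illustrative zip safety check, not SkillForge's implementation:
# reject entries with absolute paths or ".." components.
import io
import zipfile
from pathlib import PurePosixPath

def is_safe(zf: zipfile.ZipFile) -> bool:
    for name in zf.namelist():
        p = PurePosixPath(name)
        if p.is_absolute() or ".." in p.parts:
            return False  # would escape the extraction directory
    return True

# Build a well-formed bundle in memory and check it
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("my-skill/SKILL.md", "---\nname: demo\n---\n")
with zipfile.ZipFile(buf) as zf:
    print(is_safe(zf))  # True
```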
```bash
git clone https://github.com/lhassa8/skillforge.git
cd skillforge
pip install -e ".[dev]"
```

```bash
pytest tests/ -v
pytest tests/ -v --cov=skillforge  # With coverage
```

```bash
ruff check skillforge/
mypy skillforge/
```

The SkillForge Hub is a community repository of pre-built skills. Browse, install, and share skills with the community.
```bash
# List all available skills (48+ and growing)
skillforge hub list

# Search for skills
skillforge hub search "code review"
skillforge hub search react
skillforge hub search docker

# Get skill details
skillforge hub info code-reviewer
```

```bash
# Install to user directory (~/.claude/skills/) - available everywhere
skillforge hub install code-reviewer

# Install to project directory (./.claude/skills/) - project-specific
skillforge hub install code-reviewer --project
```

Share your skills with the community:
```bash
# Publish to the hub (requires GitHub CLI)
skillforge hub publish ./skills/my-awesome-skill

# Add a message to your pull request
skillforge hub publish ./skills/my-skill -m "My first contribution!"
```

Requirements:

- GitHub CLI (`gh`) installed and authenticated
- Skill passes validation (`skillforge validate`)
| Skill | Description |
|---|---|
| `code-reviewer` | Review code for bugs, security, and style |
| `git-commit` | Write conventional commit messages |
| `test-writer` | Generate unit tests |
| `api-documenter` | Create API documentation |
| `docker-helper` | Write Dockerfiles and compose configs |
| `sql-helper` | Write and optimize SQL queries |
| `react-helper` | Build React components and hooks |
| `debugger` | Systematic debugging assistance |
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
MIT License - see LICENSE for details.
Get started quickly with our examples and step-by-step tutorials:
- Examples - Sample skills demonstrating various features
- Tutorial 1: Getting Started - Create your first skill
- Tutorial 2: Testing Skills - Write and run tests
- Tutorial 3: Security & Governance - Secure your skills
- Tutorial 4: Multi-Platform Publishing - Deploy everywhere
- Tutorial 5: MCP Integration - Expose skills as MCP tools
Documentation:
External:
Community:
- SkillForge Hub - Browse and install community skills
- GitHub Discussions
- Issue Tracker
- Security Policy
- Code of Conduct
Current version: 1.0.0
SkillForge follows Semantic Versioning. The public API (exported from `skillforge.api`) is stable and will not have breaking changes in minor versions.
See our Roadmap for upcoming features.
The enterprise platform for AI capability management.