Introduction
SwissArmyHammer transforms AI prompt and workflow management by treating them as simple markdown files. It provides a unified, file-based approach that integrates seamlessly with your development workflow and Claude Code.
The Problem
Working with AI assistants typically involves repetitive prompt crafting, lost context, inconsistent results, limited automation, and prompts scattered across different tools with no shared organization.
The Solution
SwissArmyHammer provides three integrated components that work together to solve these problems:
Command Line Application
A powerful CLI that executes prompts and workflows, with comprehensive diagnostics, validation, and shell completions.
MCP Server
Seamless integration with Claude Code via the Model Context Protocol, providing a comprehensive tool suite for AI-powered development.
Rust Library
A flexible library for building prompt-based applications with comprehensive APIs for custom integrations.
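For the library component, here is a minimal sketch of loading and rendering a prompt. It uses the `PromptLibrary` API shown in the library usage section later in this book; the prompt name matches the code-review example below.

```rust
use swissarmyhammer::prelude::*;
use std::collections::HashMap;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load prompts from the default user directory.
    let mut library = PromptLibrary::new();
    library.add_directory("~/.swissarmyhammer/prompts")?;

    // Render a prompt with template arguments.
    let prompt = library.get("code-review")?;
    let mut args = HashMap::new();
    args.insert("language".to_string(), "rust".to_string());
    println!("{}", prompt.render(&args)?);
    Ok(())
}
```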
Core Architecture
SwissArmyHammer uses a hierarchical file system approach:
File-Based Management
- Store prompts and workflows as markdown files with YAML front matter
- No databases or complex configuration required
- Everything is version-controlled and easily shared
- Live reloading with automatic change detection
Organized Hierarchy
Clear precedence rules across three locations:
- Builtin - Pre-installed prompts and workflows embedded in the binary
- User - Personal collection in `~/.swissarmyhammer/`
- Local - Project-specific files in `./.swissarmyhammer/`
Liquid Template Engine
- Dynamic content with variables, conditionals, and loops
- Custom filters for domain-specific operations
- Environment integration and system context access
- Extensible plugin architecture
Key Features
Workflow Management
- State-based workflow execution with Mermaid diagrams
- Parallel and sequential action execution
- Built-in error handling and recovery mechanisms
Development Integration
- Git-integrated issue tracking with automatic branch management
- Semantic search using vector embeddings and TreeSitter parsing
- Note-taking system with full-text search capabilities
Built-in Resources
- 20+ production-ready prompts for common development tasks
- Example workflows demonstrating best practices
- Comprehensive MCP tool suite for Claude Code integration
Quick Examples
Simple Prompt
---
title: Code Review Helper
description: Assists with code review tasks
arguments:
- name: language
description: Programming language
required: true
---
Review this {{language}} code for:
- Quality and style
- Potential bugs
- Performance issues
- Best practices
Basic Workflow
---
name: feature-development
description: Complete feature development process
initial_state: plan
---
### plan
Plan the feature implementation
**Next**: implement
### implement
Write the feature code
**Next**: review
### review
Review the implementation
**Next**: complete
Command Line Usage
# Diagnose setup
sah doctor
# Test a prompt
sah prompt test code-review --var language=rust
# Run a workflow
sah flow run feature-development
# Configure Claude Code integration
claude mcp add --scope user sah sah serve
Next Steps
- Installation - Get SwissArmyHammer installed
- Quick Start - Your first prompt in 5 minutes
- Configuration - Customize your setup
- Architecture Overview - Understand the system design
Installation
Install SwissArmyHammer and configure it for use with Claude Code.
Prerequisites
- Rust 1.70+ - Required for building from source
- Claude Code - For MCP integration (recommended)
- Git - For issue management features
Install from Git
Currently the only supported installation method:
cargo install --git https://github.com/swissarmyhammer/swissarmyhammer swissarmyhammer-cli
Verify Installation
Check that everything is working:
sah --version
sah doctor
The `doctor` command checks your installation and configuration.
Claude Code Integration
Configure SwissArmyHammer as an MCP server for Claude Code:
# Add SwissArmyHammer as an MCP server
claude mcp add --scope user sah sah serve
# Verify the connection
claude mcp list
Once configured, SwissArmyHammer tools will be available in Claude Code automatically.
Directory Setup
SwissArmyHammer creates directories as needed, but you can set them up manually:
User Directory (Optional)
# Personal prompts and workflows
mkdir -p ~/.swissarmyhammer/prompts
mkdir -p ~/.swissarmyhammer/workflows
Project Directory (Optional)
# Project-specific prompts and workflows
mkdir -p .swissarmyhammer/prompts
mkdir -p .swissarmyhammer/workflows
Built-in prompts and workflows are embedded in the binary and available immediately.
Shell Completions (Optional)
Add shell completions for better CLI experience:
# Bash
sah completions bash > ~/.bash_completion.d/sah
# Zsh
sah completions zsh > ~/.zfunc/_sah
# Fish
sah completions fish > ~/.config/fish/completions/sah.fish
Configuration (Optional)
SwissArmyHammer works with sensible defaults. Optionally create `~/.swissarmyhammer/sah.toml`:
[general]
auto_reload = true
[logging]
level = "info"
[mcp]
timeout_ms = 30000
Quick Test
Test your installation:
# List built-in prompts
sah prompt list
# Test a simple workflow
sah flow run hello-world
# Check everything is working
sah doctor
Common Issues
Command not found
If you see `sah: command not found`, ensure Cargo’s bin directory is in your PATH:
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Build failures
Update Rust and install dependencies:
rustup update stable
# On Ubuntu/Debian:
sudo apt-get install build-essential pkg-config libssl-dev
MCP connection issues
Verify Claude Code can find the binary:
which sah
claude mcp restart sah
Next Steps
- Quick Start - Create your first prompt
- Configuration - Customize your setup
- CLI Reference - Learn all available commands
Quick Start
Get up and running with SwissArmyHammer in 5 minutes. This guide will walk you through creating your first prompt and using it with Claude Code.
Step 1: Verify Installation
First, make sure SwissArmyHammer is properly installed:
sah --version
sah doctor
The `doctor` command will check your installation and suggest any needed fixes.
Step 2: Create Your First Prompt
Create a personal prompts directory and your first prompt:
# Create the directory structure
mkdir -p ~/.swissarmyhammer/prompts
# Create a simple helper prompt
cat > ~/.swissarmyhammer/prompts/task-helper.md << 'EOF'
---
title: Task Helper
description: Helps with various programming tasks
arguments:
- name: task
description: What you need help with
required: true
- name: context
description: Additional context (optional)
required: false
default: "general programming"
---
I need help with: **{{task}}**
Context: {{context}}
Please provide:
1. Clear, step-by-step guidance
2. Code examples if applicable
3. Best practices to follow
4. Common pitfalls to avoid
Make your response practical and actionable.
EOF
Step 3: Test Your Prompt
Test the prompt using the CLI:
# Test with required argument
sah prompt test task-helper --var task="debugging a Rust application"
# Test with both arguments
sah prompt test task-helper \
--var task="implementing error handling" \
--var context="web API development"
You should see the rendered prompt with your variables substituted.
Step 4: Configure Claude Code Integration
Add SwissArmyHammer as an MCP server for Claude Code:
# Add the MCP server
claude mcp add --scope user sah sah serve
# Verify it's working
claude mcp list
claude mcp status sah
Step 5: Use in Claude Code
Now you can use your prompt directly in Claude Code. Start a conversation and use:
/task-helper task="setting up CI/CD pipeline" context="GitHub Actions for Rust project"
Claude will use your prompt template and provide structured assistance.
Step 6: Explore Built-in Prompts
SwissArmyHammer comes with 20+ built-in prompts. List them:
sah prompt list --source builtin
Try some useful ones:
# Code review helper
sah prompt test code --var task="review this function for performance issues"
# Documentation generator
sah prompt test documentation --var task="document this API endpoint"
# Debug helper
sah prompt test debug --var error="segmentation fault in C program"
Step 7: Create a Simple Workflow
Workflows allow you to chain multiple prompts and actions. Create your first workflow:
mkdir -p ~/.swissarmyhammer/workflows
cat > ~/.swissarmyhammer/workflows/code-review.md << 'EOF'
---
name: code-review
description: Complete code review workflow
initial_state: analyze
---
## States
### analyze
Analyze the code for issues and improvements.
**Actions:**
- prompt: Use the 'code' prompt to analyze the code
- shell: Run any necessary tests
**Next**: report
### report
Generate a comprehensive review report.
**Actions:**
- prompt: Use the 'documentation' prompt to suggest documentation improvements
**Next**: complete
### complete
Review workflow completed.
EOF
Run the workflow:
sah flow run code-review
Step 8: Set Up Issue Management
SwissArmyHammer includes git-integrated issue tracking:
# Create an issue (in a git repository)
sah issue create --name "feature-auth" --content "# User Authentication
Implement JWT-based user authentication system with:
- Login/logout endpoints
- Token validation middleware
- User session management"
# List issues
sah issue list
# Work on an issue (creates/switches to branch)
sah issue work feature-auth
# Complete the issue
sah issue complete feature-auth
Step 9: Try Memoranda (Notes)
SwissArmyHammer includes a note-taking system:
# Create a memo
sah memo create --title "Project Notes" --content "# Meeting Notes
## Action Items
- [ ] Set up database schema
- [ ] Implement user API
- [ ] Write integration tests"
# List memos
sah memo list
# Search memos
sah memo search "database"
Step 10: Set Up Semantic Search
Index your codebase for AI-powered semantic search:
# Index Rust files
sah search index "**/*.rs"
# Search for specific concepts
sah search query "error handling patterns"
# Search for specific functionality
sah search query "database connection management"
Common Patterns
Project-Specific Prompts
Create prompts specific to your project:
# In your project directory
mkdir -p .swissarmyhammer/prompts
# Create a project-specific prompt
cat > .swissarmyhammer/prompts/api-docs.md << 'EOF'
---
title: API Documentation
description: Generate API documentation for this project
arguments:
- name: endpoint
description: API endpoint to document
required: true
---
Generate comprehensive API documentation for the {{endpoint}} endpoint.
Include:
- Request/response schemas
- Example requests
- Error responses
- Authentication requirements
Use our project's documentation style and format.
EOF
Template Variables
Use liquid template features for dynamic prompts:
---
title: Conditional Helper
arguments:
- name: difficulty
description: Task difficulty level
required: false
default: "medium"
---
{% if difficulty == "beginner" %}
Let's start with the basics:
{% elsif difficulty == "advanced" %}
Here's an advanced approach:
{% else %}
Here's a practical solution:
{% endif %}
[Rest of your prompt...]
Environment Integration
Use environment variables in prompts:
---
title: Project Context
---
Working on project: {{PROJECT_NAME | default: "unknown project"}}
Environment: {{NODE_ENV | default: "development"}}
[Your prompt content...]
Next Steps
Now that you have SwissArmyHammer working:
- Explore Features: Read about Prompts, Workflows, and Templates
- Advanced Usage: Check out the CLI Reference for all commands
- Integration: Learn about MCP Integration for deeper Claude Code integration
- Examples: Browse Examples for inspiration
- Customize: Set up Configuration to match your workflow
Getting Help
- Run `sah --help` for command help
- Use `sah doctor` to diagnose issues
- Check Troubleshooting for common problems
- Visit the GitHub repository for issues and discussions
Configuration
SwissArmyHammer provides flexible configuration options to customize behavior, directory locations, and integration settings.
Quick Start Configuration
For most users, SwissArmyHammer works out of the box with minimal configuration. Here are the most common settings you might want to customize:
1. Essential 5-Minute Setup
Create a basic configuration file at `~/.swissarmyhammer/sah.toml`:
[general]
# Enable automatic reloading when files change (recommended for development)
auto_reload = true
[logging]
# Set to "debug" for troubleshooting, "info" for normal use
level = "info"
[mcp]
# Enable the tools you want to use with Claude Code
enable_tools = ["issues", "memoranda", "search"]
[search]
# Use the code-optimized embedding model
embedding_model = "nomic-embed-code"
2. Common Use Cases
For Individual Developers
[general]
auto_reload = true
[logging]
level = "info"
[mcp]
enable_tools = ["issues", "memoranda", "search", "outline"]
[issues]
auto_create_branches = true
branch_pattern = "issue/{{name}}"
[search]
# Index common development file types
languages = ["rust", "python", "javascript", "typescript"]
For Teams
[directories]
# Add shared team prompts directory
prompt_paths = ["/shared/team-prompts"]
workflow_paths = ["/shared/team-workflows"]
[git]
# Consistent commit messages
commit_template = "{{action}}: {{issue_name}}\n\nCo-authored-by: {{author}}"
[workflow]
# Higher parallel execution for team workflows
max_parallel_actions = 8
For CI/CD Integration
[logging]
level = "info"
format = "json" # Better for log aggregation
[mcp]
# Minimal tools for CI environment
enable_tools = ["search", "outline"]
[security]
# Restrict commands in CI
allowed_commands = ["git", "npm", "cargo"]
allow_network = false
[workflow]
max_workflow_time_ms = 600000 # 10 minutes max
3. Apply Your Configuration
After creating your config file:
# Validate the configuration
sah config validate
# Test that everything works
sah doctor
# Apply changes (restart Claude Code if using MCP)
claude mcp restart sah
4. Configuration Priorities
Settings are applied in this order (later overrides earlier):
- Built-in defaults (always safe)
- User config (`~/.swissarmyhammer/sah.toml`)
- Project config (`./.swissarmyhammer/sah.toml`)
- Environment variables (`SAH_LOG_LEVEL=debug`)
- Command flags (`sah --debug`)
The sketch after this list shows how this layering can be resolved in code.
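To make the override order concrete, here is an illustrative sketch of resolving a single setting from layered sources, checking the highest-priority source first. The function and parameter names are hypothetical; this is not SwissArmyHammer's internal resolution code.

```rust
/// Hypothetical layered lookup: later (higher-priority) sources win.
fn resolve_log_level(
    cli_flag: Option<&str>,    // e.g. `sah --debug`
    env_var: Option<&str>,     // e.g. SAH_LOG_LEVEL=debug
    project_cfg: Option<&str>, // ./.swissarmyhammer/sah.toml
    user_cfg: Option<&str>,    // ~/.swissarmyhammer/sah.toml
) -> String {
    cli_flag
        .or(env_var)
        .or(project_cfg)
        .or(user_cfg)
        .unwrap_or("info") // built-in default
        .to_string()
}

fn main() {
    // Project config overrides user config, but a CLI flag wins over everything.
    let level = resolve_log_level(Some("debug"), None, Some("warn"), Some("info"));
    assert_eq!(level, "debug");
}
```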
5. Quick Customizations
Change log level temporarily:
SAH_LOG_LEVEL=debug sah doctor
Override MCP timeout:
SAH_MCP_TIMEOUT=60000 sah serve
Use custom directory:
SAH_HOME="/custom/path" sah doctor
For advanced configuration options, see the sections below.
Complete Configuration Reference
Configuration File
The main configuration file is `sah.toml`, located in:
- User config: `~/.swissarmyhammer/sah.toml` (applies to all projects)
- Project config: `./.swissarmyhammer/sah.toml` (project-specific overrides)
Example Configuration
# ~/.swissarmyhammer/sah.toml
[general]
# Default template engine (liquid is the only supported engine)
default_template_engine = "liquid"
# Automatically reload prompts when files change
auto_reload = true
# Default timeout for prompt operations (milliseconds)
default_timeout_ms = 30000
[directories]
# Custom user directory (default: ~/.swissarmyhammer)
user_dir = "~/.swissarmyhammer"
# Additional prompt search paths
prompt_paths = [
"~/my-custom-prompts",
"/shared/team-prompts"
]
# Additional workflow search paths
workflow_paths = [
"~/my-workflows"
]
[logging]
# Log level: trace, debug, info, warn, error
level = "info"
# Log format: json, compact, pretty
format = "compact"
# Log file location (optional, defaults to stderr)
file = "~/.swissarmyhammer/sah.log"
[mcp]
# Enable specific MCP tools
enable_tools = ["issues", "memoranda", "search", "abort", "outline"]
# MCP request timeout (milliseconds)
timeout_ms = 30000
# Maximum concurrent MCP requests
max_concurrent_requests = 10
[template]
# Custom liquid filters directory
custom_filters_dir = "~/.swissarmyhammer/filters"
# Template compilation cache size
cache_size = 1000
# Allow unsafe template features
allow_unsafe = false
[search]
# Embedding model for semantic search
embedding_model = "nomic-embed-code"
# Vector database location
index_path = "~/.swissarmyhammer/search.db"
# Maximum file size to index (bytes)
max_file_size = 1048576 # 1MB
# Languages to index
languages = ["rust", "python", "javascript", "typescript", "dart"]
[workflow]
# Maximum parallel actions in workflows
max_parallel_actions = 4
# Default workflow timeout (milliseconds)
default_timeout_ms = 300000 # 5 minutes
# Enable workflow visualization
enable_visualization = true
# Workflow cache directory
cache_dir = "~/.swissarmyhammer/workflow_cache"
[issues]
# Default issue template
default_template = "standard"
# Auto-create git branches for issues
auto_create_branches = true
# Branch name pattern (supports {{name}}, {{id}})
branch_pattern = "issue/{{name}}"
# Auto-commit issue changes
auto_commit = true
[memoranda]
# Full-text search engine: tantivy, simple
search_engine = "tantivy"
# Maximum memo size (bytes)
max_memo_size = 1048576 # 1MB
# Auto-backup interval (hours, 0 to disable)
backup_interval = 24
[git]
# Default commit message template for issues
commit_template = "{{action}}: {{issue_name}}\n\n{{description}}"
# GPG signing for commits
sign_commits = false
# Default branch name for new repositories
default_branch = "main" # Note: Issue operations use git merge-base, not this setting
[security]
# Allowed shell commands for workflow actions
allowed_commands = [
"git", "npm", "cargo", "python", "node", "make"
]
# Maximum shell command timeout (milliseconds)
shell_timeout_ms = 60000
# Allow network access in workflows
allow_network = true
# Resource limits
max_memory_mb = 512
max_disk_usage_mb = 1024
Environment Variables
Override configuration with environment variables:
# General settings
export SAH_HOME="$HOME/.swissarmyhammer"
export SAH_LOG_LEVEL="debug"
export SAH_AUTO_RELOAD="true"
# MCP settings
export SAH_MCP_TIMEOUT="30000"
export SAH_MCP_ENABLE_TOOLS="issues,memoranda,search"
# Search settings
export SAH_SEARCH_MODEL="nomic-embed-code"
export SAH_SEARCH_INDEX="$HOME/.sah-search.db"
# Workflow settings
export SAH_WORKFLOW_MAX_PARALLEL="4"
export SAH_WORKFLOW_TIMEOUT="300000"
# Security settings
export SAH_SHELL_TIMEOUT="60000"
export SAH_ALLOW_NETWORK="true"
Directory Structure Configuration
Built-in Directories
These are embedded in the binary and always available:
builtin/
├── prompts/ # Pre-installed prompts
└── workflows/ # Pre-installed workflows
User Directories
Configurable via `directories.user_dir`:
~/.swissarmyhammer/ # Default user directory
├── prompts/ # Personal prompts
├── workflows/ # Personal workflows
├── memoranda/ # Personal notes
├── issues/ # Global issues
├── search.db # Search index
├── sah.toml # Configuration
└── logs/ # Log files
Local Directories
Project-specific, searched in current directory and parents:
./.swissarmyhammer/ # Project directory
├── prompts/ # Project prompts
├── workflows/ # Project workflows
├── memoranda/ # Project notes
├── issues/ # Project issues
└── sah.toml # Project config
Precedence Rules
Configuration values are resolved in this order (later values override earlier ones):
- Built-in defaults
- User configuration (`~/.swissarmyhammer/sah.toml`)
- Project configuration (`./.swissarmyhammer/sah.toml`)
- Environment variables
- Command-line arguments
Template Configuration
Custom Liquid Filters
Create custom filters for templates:
// ~/.swissarmyhammer/filters/my_filters.rs
use swissarmyhammer::prelude::*;
pub struct ProjectNameFilter;
impl CustomLiquidFilter for ProjectNameFilter {
fn name(&self) -> &str {
"project_name"
}
fn filter(&self, input: &str) -> Result<String> {
// Extract project name from path
Ok(std::path::Path::new(input)
.file_name()
.unwrap_or_default()
.to_string_lossy()
.to_string())
}
}
Register in configuration:
[template]
custom_filters_dir = "~/.swissarmyhammer/filters"
Template Variables
Set global template variables:
[template.variables]
author = "Your Name"
organization = "Your Company"
default_license = "MIT"
Use in prompts:
---
title: New Project
---
Creating project for {{author}} at {{organization}}.
License: {{default_license}}
MCP Integration Configuration
Tool Selection
Enable/disable specific MCP tools:
[mcp]
enable_tools = [
"issues", # Issue management
"memoranda", # Note-taking
"search", # Semantic search
"abort", # Workflow control
"outline" # Code outline generation
]
Claude Code Integration
Configure Claude Code MCP settings:
// ~/.config/claude-code/mcp.json
{
"servers": {
"sah": {
"command": "sah",
"args": ["serve"],
"env": {
"SAH_LOG_LEVEL": "info",
"SAH_MCP_TIMEOUT": "30000",
"SAH_HOME": "/path/to/custom/sah/dir"
}
}
}
}
Search Configuration
Embedding Models
Configure the embedding model for semantic search:
[search]
# Available models:
# - nomic-embed-code (recommended for code)
# - all-MiniLM-L6-v2 (general purpose)
embedding_model = "nomic-embed-code"
# Model cache directory
model_cache_dir = "~/.swissarmyhammer/models"
# Download timeout (milliseconds)
model_download_timeout = 300000
Indexing Options
Control what gets indexed:
[search]
# File patterns to index
include_patterns = [
"**/*.rs", "**/*.py", "**/*.js", "**/*.ts",
"**/*.md", "**/*.txt", "**/*.json"
]
# File patterns to exclude
exclude_patterns = [
"**/target/**", "**/node_modules/**",
"**/.git/**", "**/build/**"
]
# Maximum file size (bytes)
max_file_size = 1048576
# Languages for TreeSitter parsing
languages = ["rust", "python", "javascript", "typescript", "dart"]
Workflow Configuration
Execution Limits
[workflow]
# Maximum parallel actions
max_parallel_actions = 4
# Default timeout per action (milliseconds)
action_timeout_ms = 60000
# Maximum workflow runtime (milliseconds)
max_workflow_time_ms = 1800000 # 30 minutes
# Enable detailed execution logging
debug_execution = false
Action Configuration
[workflow.actions]
# Shell action settings
[workflow.actions.shell]
allowed_commands = ["git", "npm", "cargo", "python"]
timeout_ms = 60000
working_directory = "."
# Prompt action settings
[workflow.actions.prompt]
timeout_ms = 30000
max_retries = 3
Validation
Validate your configuration:
# Validate configuration file
sah validate --config
# Check all configuration sources
sah doctor --verbose
# Test configuration with specific settings
SAH_LOG_LEVEL=debug sah doctor
Security Considerations
Safe Defaults
SwissArmyHammer uses secure defaults:
- Limited shell command execution
- Path traversal protection
- Resource usage limits
- Network access controls
Hardening
For production use, consider:
[security]
# Restrict allowed commands
allowed_commands = ["git"]
# Disable network access
allow_network = false
# Lower resource limits
max_memory_mb = 256
max_disk_usage_mb = 512
# Enable additional validation
strict_validation = true
File Permissions
Set appropriate permissions:
# Secure configuration directory
chmod 755 ~/.swissarmyhammer
chmod 600 ~/.swissarmyhammer/sah.toml
# Secure search database
chmod 600 ~/.swissarmyhammer/search.db
Migration
Upgrading Configuration
When upgrading SwissArmyHammer, check for configuration changes:
# Check for configuration issues
sah doctor --config
# Validate against new schema
sah validate --config --strict
Backup Configuration
Regular backups of important configuration:
# Backup entire user directory
tar -czf sah-backup-$(date +%Y%m%d).tar.gz ~/.swissarmyhammer
# Or just configuration
cp ~/.swissarmyhammer/sah.toml ~/.swissarmyhammer/sah.toml.backup
This configuration system provides fine-grained control over SwissArmyHammer’s behavior while maintaining sensible defaults for common use cases.
Architecture Overview
SwissArmyHammer is designed as a modular, extensible system with clear separation of concerns and multiple integration points.
System Architecture
┌─────────────────────────────────────────────────────────────┐
│ User Interfaces │
├─────────────────┬─────────────────┬─────────────────────────┤
│ CLI Application│ MCP Server │ Rust Library API │
│ (sah command) │ (Claude Code) │ (Direct Integration) │
└─────────────────┴─────────────────┴─────────────────────────┘
│
┌─────────────────────────┼─────────────────────────────────────┐
│ Core Library │
│ (swissarmyhammer) │
├──────────────────────────────────────────────────────────────┤
│ Prompt System │ Workflow Engine │ Storage & Search │
│ ┌─────────────┐│ ┌──────────────┐│ ┌──────────────────┐ │
│ │PromptLibrary││ │ State Machine││ │ Issue Management │ │
│ │Template Eng.││ │ Transitions ││ │ Memoranda System │ │
│ │Liquid Support│ │ Actions ││ │ Semantic Search │ │
│ └─────────────┘│ └──────────────┘│ └──────────────────┘ │
└──────────────────────────────────────────────────────────────┘
│
┌─────────────────────────┼─────────────────────────────────────┐
│ Infrastructure │
├──────────────────────────────────────────────────────────────┤
│ File System │ Git Integration │ Vector Database │
│ ┌─────────────┐│ ┌─────────────┐│ ┌─────────────────┐ │
│ │ FileLoader ││ │ Branch Mgmt ││ │ DuckDB Storage │ │
│ │ FileWatcher ││ │ Issue Hooks ││ │ TreeSitter │ │
│ │ VFS Support ││ │ Auto Commit ││ │ Embedding Model │ │
│ └─────────────┘│ └─────────────┘│ └─────────────────┘ │
└──────────────────────────────────────────────────────────────┘
Core Components
1. SwissArmyHammer Library (`swissarmyhammer`)
The core library provides the fundamental functionality:
Prompt Management
- PromptLibrary: Central registry for all prompts
- PromptLoader: File system integration and loading
- Template Engine: Liquid template processing with custom filters
- PromptResolver: Multi-source resolution with precedence
Workflow System
- State Machine: Finite state automaton for workflow execution
- Action System: Pluggable actions (shell, prompt, conditional, etc.)
- Transition Engine: State transitions with validation
- Execution Engine: Parallel and sequential execution support
Storage Systems
- Issue Management: Git-integrated issue tracking
- Memoranda: Note-taking and knowledge management
- Semantic Search: Vector-based search with TreeSitter parsing
- File System Abstraction: Virtual file system for testing
2. CLI Application (`swissarmyhammer-cli`)
Command-line interface providing:
- Command Processing: Clap-based argument parsing
- Interactive Features: Fuzzy selection, confirmation prompts
- Output Formatting: Table, JSON, and human-readable formats
- Shell Integration: Completions and signal handling
- Configuration Management: TOML-based configuration
3. MCP Tools (`swissarmyhammer-tools`)
Model Context Protocol server providing:
- Tool Registry: Dynamic tool registration and discovery (see the sketch after this list)
- Request Handling: Structured request/response processing
- Error Management: Comprehensive error reporting
- Type Safety: Full JSON schema validation
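The registry pattern behind dynamic tool registration can be sketched as follows. This is a simplified illustration with hypothetical trait and type names, not the actual `swissarmyhammer-tools` API.

```rust
use std::collections::HashMap;

// Hypothetical trait: each MCP tool handles a named request with JSON-like arguments.
trait McpTool {
    fn name(&self) -> &'static str;
    fn call(&self, args: &str) -> Result<String, String>;
}

struct MemoCreate;
impl McpTool for MemoCreate {
    fn name(&self) -> &'static str { "memo_create" }
    fn call(&self, args: &str) -> Result<String, String> {
        Ok(format!("created memo with args: {args}"))
    }
}

// Registry: tools register themselves by name and are dispatched dynamically.
#[derive(Default)]
struct ToolRegistry {
    tools: HashMap<&'static str, Box<dyn McpTool>>,
}

impl ToolRegistry {
    fn register(&mut self, tool: Box<dyn McpTool>) {
        self.tools.insert(tool.name(), tool);
    }
    fn dispatch(&self, name: &str, args: &str) -> Result<String, String> {
        self.tools
            .get(name)
            .ok_or_else(|| format!("unknown tool: {name}"))?
            .call(args)
    }
}

fn main() {
    let mut registry = ToolRegistry::default();
    registry.register(Box::new(MemoCreate));
    println!("{:?}", registry.dispatch("memo_create", r#"{"title":"Notes"}"#));
}
```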
Data Flow
Prompt Execution Flow
graph TD
A[User Request] --> B[CLI/MCP Parser]
B --> C[Prompt Resolution]
C --> D[Template Rendering]
D --> E[Variable Substitution]
E --> F[Output Generation]
C --> G[File Discovery]
G --> H[Precedence Rules]
H --> I[Prompt Selection]
I --> D
Workflow Execution Flow
graph TD
A[Workflow Request] --> B[Definition Loading]
B --> C[State Machine Init]
C --> D[Current State]
D --> E[Action Execution]
E --> F[Transition Logic]
F --> G{Complete?}
G -->|No| D
G -->|Yes| H[Final State]
E --> I[Shell Actions]
E --> J[Prompt Actions]
E --> K[Conditional Actions]
Issue Management Flow
graph TD
A[Issue Create] --> B[Generate ID]
B --> C[Create Branch]
C --> D[File Creation]
D --> E[Git Add/Commit]
F[Issue Work] --> G[Switch Branch]
G --> H[Update Status]
I[Issue Complete] --> J[Merge Branch]
J --> K[Move to Complete]
K --> L[Cleanup Branch]
File System Organization
Directory Structure
~/.swissarmyhammer/ # User directory
├── prompts/ # User prompts
├── workflows/ # User workflows
├── memoranda/ # Personal notes
├── issues/ # Issue tracking
│ ├── active/ # Active issues
│ └── complete/ # Completed issues
├── search.db # Semantic search index
├── sah.toml # Configuration
└── cache/ # Temporary files
./.swissarmyhammer/ # Project directory
├── prompts/ # Project prompts
├── workflows/ # Project workflows
├── memoranda/ # Project notes
└── issues/ # Project issues
File Types
Prompts (`*.md`)
---
title: Example Prompt
description: Description of what this prompt does
arguments:
- name: arg1
description: First argument
required: true
default: "default_value"
---
Prompt content with {{arg1}} substitution.
Workflows (`*.md`)
---
name: example-workflow
description: Example workflow
initial_state: start
states:
start:
description: Starting state
transitions:
- to: end
condition: "success"
---
## State Definitions
[Detailed state descriptions...]
Integration Points
Claude Code MCP Integration
SwissArmyHammer integrates with Claude Code through the Model Context Protocol:
{
"servers": {
"sah": {
"command": "sah",
"args": ["serve"],
"env": {
"SAH_LOG_LEVEL": "info"
}
}
}
}
Available Tools:
- `issue_*` - Issue management tools (issue_create, issue_list, issue_work, etc.)
- `memo_*` - Memoranda tools (memo_create, memo_list, memo_get, etc.)
- `search_*` - Semantic search tools (search_index, search_query)
- `outline_*` - Code outline tools (outline_generate)
- `abort_*` - Workflow control tools (abort_create)
Git Integration
Issue management integrates deeply with Git:
- Branch Management: Automatic branch creation and switching (sketched below)
- Commit Integration: Automatic commits for issue lifecycle
- Merge Handling: Safe merging with conflict detection
- Status Tracking: Branch-based status determination
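As an illustration of automatic branch creation and switching, here is a sketch using the `git2` crate with the `issue/{{name}}` branch pattern from the configuration chapter. SwissArmyHammer's own implementation may differ.

```rust
use git2::{build::CheckoutBuilder, BranchType, Repository};

fn work_on_issue(repo_path: &str, issue_name: &str) -> Result<(), git2::Error> {
    let repo = Repository::open(repo_path)?;

    // Create the issue branch from the current HEAD commit if it does not exist yet.
    let branch_name = format!("issue/{issue_name}");
    let head_commit = repo.head()?.peel_to_commit()?;
    if repo.find_branch(&branch_name, BranchType::Local).is_err() {
        repo.branch(&branch_name, &head_commit, false)?;
    }

    // Switch to the branch (roughly what `sah issue work <name>` does).
    repo.set_head(&format!("refs/heads/{branch_name}"))?;
    repo.checkout_head(Some(CheckoutBuilder::default().safe()))?;
    Ok(())
}

fn main() -> Result<(), git2::Error> {
    work_on_issue(".", "feature-auth")
}
```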
Vector Search Integration
Semantic search uses modern AI techniques (a conceptual similarity sketch follows the list):
- TreeSitter Parsing: Language-aware code parsing
- Embedding Models: `nomic-embed-code` for semantic similarity
- Vector Storage: DuckDB for efficient similarity search
- Indexing Pipeline: Incremental indexing with change detection
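Conceptually, the query stage compares the query embedding against stored code-chunk embeddings by cosine similarity. A minimal, self-contained sketch of that scoring step; the real pipeline delegates storage and nearest-neighbour lookup to DuckDB.

```rust
/// Cosine similarity between two embedding vectors (higher = more similar).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

fn main() {
    // Toy 4-dimensional embeddings; real models such as nomic-embed-code emit hundreds of dimensions.
    let query = [0.9_f32, 0.1, 0.0, 0.3];
    let chunks = [
        ("fn handle_error(...)", [0.8_f32, 0.2, 0.1, 0.4]),
        ("struct Config {...}", [0.1_f32, 0.9, 0.3, 0.0]),
    ];
    for (snippet, embedding) in &chunks {
        println!("{:.3}  {snippet}", cosine_similarity(&query, embedding));
    }
}
```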
Plugin Architecture
Custom Liquid Filters
use swissarmyhammer::prelude::*;
struct UppercaseFilter;
impl CustomLiquidFilter for UppercaseFilter {
fn name(&self) -> &str { "uppercase" }
fn filter(&self, input: &str) -> Result<String> {
Ok(input.to_uppercase())
}
}
// Register the filter
let mut registry = PluginRegistry::new();
registry.register_filter(Box::new(UppercaseFilter))?;
Custom Workflow Actions
use swissarmyhammer::workflow::{Action, ActionResult, State};
struct CustomAction {
command: String,
}
impl Action for CustomAction {
async fn execute(&self, state: &State) -> ActionResult {
// Custom action implementation
ActionResult::Success
}
}
Security Model
Path Validation
- All file paths are validated and canonicalized
- Directory traversal attacks are prevented
- Symlink resolution is controlled
Resource Limits
- Configurable timeouts for all operations
- Memory usage limits for large files
- Process execution limits for shell actions
Permissions
- Read-only access to prompt directories by default
- Write access only to designated areas
- Git operations are sandboxed to project directory
Performance Characteristics
Memory Usage
- Lazy loading of prompts and workflows
- Streaming file processing for large files
- Configurable caching with LRU eviction
Disk I/O
- File watching with efficient change detection
- Incremental indexing for search
- Batch operations for better throughput
Network I/O
- Async/await throughout for non-blocking operations
- Connection pooling for MCP servers
- Timeout handling for all network operations
Testing Strategy
Unit Tests
- Component isolation with dependency injection
- Mock implementations for external dependencies
- Property-based testing for critical algorithms
Integration Tests
- End-to-end CLI command testing
- MCP protocol compliance testing
- File system integration testing
Performance Tests
- Benchmarking for critical paths
- Memory usage profiling
- Regression test suite with baselines
Complete Plugin Implementation Example
Here’s a complete example of implementing a custom plugin with all necessary components:
use swissarmyhammer::prelude::*;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
// 1. Plugin Configuration
#[derive(Debug, Deserialize, Serialize)]
pub struct GitLogConfig {
pub max_commits: usize,
pub format: String,
pub include_merges: bool,
}
impl Default for GitLogConfig {
fn default() -> Self {
Self {
max_commits: 10,
format: "--oneline".to_string(),
include_merges: false,
}
}
}
// 2. Plugin Implementation
#[derive(Debug)]
pub struct GitLogPlugin {
config: GitLogConfig,
}
impl GitLogPlugin {
pub fn new(config: GitLogConfig) -> Self {
Self { config }
}
}
impl Plugin for GitLogPlugin {
fn name(&self) -> &str {
"git_log"
}
fn description(&self) -> &str {
"Retrieves git commit history with configurable formatting"
}
fn process(&self, _input: &str, context: &PluginContext) -> PluginResult<String> {
let mut args = vec!["log".to_string()];
// Add format
args.push(self.config.format.clone());
// Add max commits
args.push(format!("-{}", self.config.max_commits));
// Handle merges
if !self.config.include_merges {
args.push("--no-merges".to_string());
}
// Execute git command
match std::process::Command::new("git")
.args(&args)
.current_dir(context.working_directory.as_deref().unwrap_or("."))
.output() {
Ok(output) => {
if output.status.success() {
Ok(String::from_utf8_lossy(&output.stdout).to_string())
} else {
Err(PluginError::ProcessingError {
message: format!("Git command failed: {}",
String::from_utf8_lossy(&output.stderr)),
source: None,
})
}
}
Err(e) => Err(PluginError::ProcessingError {
message: format!("Failed to execute git: {}", e),
source: Some(Box::new(e)),
}),
}
}
}
// 3. Plugin Registration and Usage
fn setup_custom_environment() -> Result<PromptLibrary, Box<dyn std::error::Error>> {
// Create plugin registry
let mut plugin_registry = PluginRegistry::with_builtin_plugins();
// Configure and register custom plugin
let git_config = GitLogConfig {
max_commits: 5,
format: "--pretty=format:'%h %s'".to_string(),
include_merges: false,
};
plugin_registry.register(Box::new(GitLogPlugin::new(git_config)))?;
// Create prompt library with plugins
let library = PromptLibrary::new()
.with_plugin_registry(plugin_registry)
.add_directory("./prompts")?
.add_directory("~/.swissarmyhammer/prompts")?;
Ok(library)
}
// 4. Template Usage
fn create_commit_summary_prompt() -> Result<(), Box<dyn std::error::Error>> {
let template_content = r#"
Recent Commits
{{ "" | git_log }}
# Analysis
Based on the recent commit history, I can see...
"#;
std::fs::write("./prompts/commit-summary.md", template_content)?;
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Setup environment
let library = setup_custom_environment()?;
// Create example prompt
create_commit_summary_prompt()?;
// Use the prompt
let prompt = library.get("commit-summary")?;
let context = HashMap::new();
let result = prompt.render(&context)?;
println!("{}", result);
Ok(())
}
Performance Benchmarks and Optimization
Typical Performance Characteristics
Prompt Loading:
- Small prompts (<10KB): ~1-5ms
- Large prompts (>100KB): ~10-50ms
- Directory scanning (100 files): ~50-200ms
Template Rendering:
- Simple templates: ~0.1-1ms
- Complex templates with loops: ~1-10ms
- Templates with file operations: ~10-100ms
Semantic Search:
- First query (model loading): ~1-3 seconds
- Subsequent queries: ~50-300ms
- Large codebases (10k+ files): ~200-500ms
Memory Usage:
- Base library: ~10-20MB
- With search index: +50-200MB depending on codebase size
- During heavy template rendering: +10-50MB temporary usage
Performance Optimization Examples
use swissarmyhammer::config::PerformanceConfig;
use std::time::Duration;
// 1. Configure resource limits
let perf_config = PerformanceConfig {
max_template_size: 1024 * 1024, // 1MB
max_render_time: Duration::from_secs(30),
max_file_watches: 1000,
enable_template_caching: true,
cache_ttl: Duration::from_secs(3600),
max_concurrent_renders: 10,
..Default::default()
};
// 2. Optimized library setup for large projects
let library = PromptLibrary::with_config(Config {
performance: perf_config,
..Default::default()
})
.with_lazy_loading(true) // Don't load all prompts at startup
.with_file_watching(false) // Disable for production if not needed
.with_template_cache(1000) // Cache 1000 rendered templates
.add_directory_filtered("./prompts", |path| {
// Only load .md files, skip hidden files
path.extension().map_or(false, |ext| ext == "md")
&& !path.file_name().unwrap().to_str().unwrap().starts_with('.')
})?;
// 3. Batch operations for better performance
let mut batch_results = Vec::new();
let prompts = ["template1", "template2", "template3"];
// Process in parallel
let handles: Vec<_> = prompts.iter().map(|name| {
let library = library.clone();
let context = context.clone();
tokio::spawn(async move {
library.get(name)?.render(&context)
})
}).collect();
for handle in handles {
batch_results.push(handle.await??);
}
Troubleshooting Guide
Common Architecture Issues
Plugin Loading Problems
Symptoms:
- “Plugin not found” errors
- “Failed to register plugin” messages
- Unexpected plugin behavior
Diagnosis:
// Enable debug logging to see plugin registration
use swissarmyhammer::plugins::PluginRegistry;
use log::LevelFilter;
env_logger::Builder::from_default_env()
.filter_level(LevelFilter::Debug)
.init();
let registry = PluginRegistry::new();
println!("Available plugins: {:?}", registry.list_plugins());
Solutions:
- Verify plugin is properly registered before use
- Check plugin name matches exactly (case-sensitive)
- Ensure plugin dependencies are available
- Validate plugin configuration is correct
Memory Issues
Symptoms:
- Out of memory errors during template rendering
- Slow performance with large prompt collections
- High memory usage that doesn’t decrease
Diagnosis:
# Monitor memory usage
top -p $(pgrep sah)
# Enable memory profiling (requires build with profiling)
MALLOC_CONF="prof:true,prof_leak:true" sah prompt render large-template
Solutions:
// 1. Configure memory limits
let config = Config {
security: SecurityConfig {
max_template_size: 512 * 1024, // 512KB limit
max_memory_usage: 100 * 1024 * 1024, // 100MB limit
..Default::default()
},
..Default::default()
};
// 2. Use streaming for large files
use swissarmyhammer::template::StreamingTemplate;
let template = StreamingTemplate::from_file("large-template.md")?;
let mut output = std::fs::File::create("output.txt")?;
template.render_to_writer(&context, &mut output)?;
// 3. Manual cleanup for long-running processes
library.clear_cache();
library.collect_garbage();
File System Integration Issues
Symptoms:
- File watching doesn’t trigger updates
- Permission denied errors
- Symlinks not resolved correctly
Diagnosis:
# Check file permissions
ls -la ~/.swissarmyhammer/
ls -la ./.swissarmyhammer/
# Test file watching manually
inotifywait -m -r ~/.swissarmyhammer/
# Verify symlink resolution
readlink -f ~/.swissarmyhammer/prompts/my-prompt.md
Solutions:
// 1. Configure file system options
let fs_config = FileSystemConfig {
follow_symlinks: true,
case_sensitive: cfg!(target_os = "linux"),
max_file_size: 10 * 1024 * 1024, // 10MB
allowed_extensions: vec!["md".to_string(), "yaml".to_string()],
..Default::default()
};
// 2. Handle permission errors gracefully
match library.add_directory("./prompts") {
Ok(_) => println!("Directory added successfully"),
Err(e) if e.kind() == ErrorKind::PermissionDenied => {
eprintln!("Permission denied - check directory permissions");
std::process::exit(1);
}
Err(e) => return Err(e.into()),
}
// 3. Validate paths before use
use swissarmyhammer::security::PathValidator;
let validator = PathValidator::new()
.allow_directory("./prompts")
.allow_directory("~/.swissarmyhammer")
.deny_patterns(&["*.exe", "*.dll"]);
if validator.validate(path)? {
library.add_file(path)?;
}
Performance Troubleshooting
Slow Template Rendering
Diagnosis Steps:
- Enable timing logs: `RUST_LOG=swissarmyhammer::template=debug`
- Profile template complexity
- Check for infinite loops in Liquid templates
- Monitor file I/O during rendering
Solutions:
// 1. Template complexity analysis
use swissarmyhammer::template::TemplateAnalyzer;
let analyzer = TemplateAnalyzer::new();
let metrics = analyzer.analyze_file("complex-template.md")?;
println!("Variables: {}", metrics.variable_count);
println!("Loops: {}", metrics.loop_count);
println!("Includes: {}", metrics.include_count);
println!("Estimated complexity: {}", metrics.complexity_score);
// 2. Implement timeouts
use std::time::Duration;
use tokio::time::timeout;
let render_future = prompt.render_async(&context);
let result = timeout(Duration::from_secs(30), render_future).await?;
Search Performance Issues
Common Issues:
- First search very slow (model loading)
- Large result sets cause timeout
- Index corruption or outdated data
Solutions:
// 1. Warm up search model at startup
use swissarmyhammer::search::SearchEngine;
let engine = SearchEngine::new(config)?;
engine.warmup().await?; // Pre-load embedding model
// 2. Implement result pagination
let query = SearchQuery {
text: "error handling".to_string(),
offset: 0,
limit: 25, // Smaller result sets
..Default::default()
};
// 3. Rebuild index if corrupted
if engine.verify_index().await.is_err() {
println!("Index corrupted, rebuilding...");
engine.rebuild_index(&["**/*.rs"]).await?;
}
Error Recovery Patterns
use swissarmyhammer::error::{SwissArmyHammerError, ErrorRecovery};
// Implement comprehensive error recovery
async fn robust_prompt_rendering(
library: &PromptLibrary,
prompt_name: &str,
context: &HashMap<String, String>,
) -> Result<String, SwissArmyHammerError> {
// Attempt 1: Normal rendering
match library.get(prompt_name)?.render(context) {
Ok(result) => return Ok(result),
Err(SwissArmyHammerError::TemplateError(_)) => {
// Template error - try fallback template
if let Ok(fallback) = library.get(&format!("{}-fallback", prompt_name)) {
return fallback.render(context);
}
}
Err(SwissArmyHammerError::IoError(_)) => {
// File I/O error - try reloading directory
library.reload_directory("./prompts")?;
return library.get(prompt_name)?.render(context);
}
Err(e) => return Err(e),
}
// All recovery attempts failed
Err(SwissArmyHammerError::ProcessingError {
message: format!("Failed to render prompt '{}' after recovery attempts", prompt_name),
source: None,
})
}
This architecture provides a solid foundation for extensibility while maintaining clear separation of concerns and robust error handling throughout the system. The comprehensive examples and troubleshooting guide help developers understand both the design principles and practical implementation details.
Features Overview
SwissArmyHammer provides a comprehensive platform for AI-powered development workflows. This overview covers all major capabilities with practical examples.
File-Based Architecture
Markdown-First Approach
Store prompts and workflows as simple markdown files with YAML front matter:
---
title: API Documentation Generator
description: Generate API documentation from code
arguments:
- name: service_name
description: Name of the service
required: true
- name: language
description: Programming language
choices: ["rust", "python", "typescript"]
default: "rust"
---
# {{service_name}} API Documentation
Generate comprehensive API documentation for {{service_name}} written in {{language}}.
Focus on:
{% if language == "rust" %}
- Struct and enum definitions
- Trait implementations
- Error handling patterns
{% elsif language == "python" %}
- Class definitions and methods
- Type hints and annotations
- Exception handling
{% else %}
- Interface definitions
- Type definitions
- Error handling patterns
{% endif %}
Hierarchical Organization
Three-tier system with clear precedence:
- Builtin (embedded) - 20+ production-ready prompts and workflows
- User (`~/.swissarmyhammer/`) - Your personal collection
- Local (`./.swissarmyhammer/`) - Project-specific customizations
Live Reloading
Changes are automatically detected and applied without restart:
- File system watching using native events (see the sketch below)
- Instant prompt updates in running applications
- Hot reload during development and testing
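A minimal sketch of this watching pattern using the `notify` crate, which surfaces native file-system events; SwissArmyHammer's internal implementation is not shown here and may differ. The watched path is illustrative.

```rust
use notify::{recommended_watcher, Event, RecursiveMode, Watcher};
use std::path::Path;

fn main() -> notify::Result<()> {
    // React to change events under the user prompt directory.
    let mut watcher = recommended_watcher(|res: notify::Result<Event>| match res {
        Ok(event) => println!("reload prompts, changed paths: {:?}", event.paths),
        Err(err) => eprintln!("watch error: {err}"),
    })?;

    watcher.watch(
        Path::new("/home/user/.swissarmyhammer/prompts"),
        RecursiveMode::Recursive,
    )?;

    // A real reloader would re-parse the changed files here instead of just parking.
    std::thread::park();
    Ok(())
}
```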
Template Engine
Liquid Templating
Powerful template processing with variables, logic, and filters:
# Code Review for {{author}}
{% assign critical_files = files | where: "importance", "critical" %}
{% if critical_files.size > 0 %}
## Critical Files Requiring Extra Attention
{% for file in critical_files %}
- {{file.path}} - {{file.reason}}
{% endfor %}
{% endif %}
{% case review_type %}
{% when "security" %}
Focus on security vulnerabilities and data handling.
{% when "performance" %}
Focus on performance bottlenecks and optimization opportunities.
{% else %}
Standard code review focusing on quality and maintainability.
{% endcase %}
Custom Filters
Extensible filter system for domain-specific operations:
- String manipulation and formatting
- Date and time operations
- File system operations
- Custom business logic
Developer Experience
Command Line Interface
Comprehensive CLI with intuitive subcommands:
# Prompt management
sah prompt list # List available prompts
sah prompt test my-prompt # Test prompt rendering
sah prompt validate my-prompt # Validate prompt syntax
# Workflow execution
sah flow run deployment # Execute workflow
sah flow status # Check running workflows
sah flow history # View execution history
# Issue tracking
sah issue create "Fix login bug" # Create new issue
sah issue work ISSUE-123 # Switch to issue branch
sah issue merge ISSUE-123 # Merge completed issue
# System diagnostics
sah doctor # Health check
sah validate # Validate configuration
Shell Integration
Full shell completion support:
- Bash: Complete commands, options, and file paths
- Zsh: Advanced completions with descriptions
- Fish: Interactive completions with context
- PowerShell: Windows-native completion experience
Workflow Engine
State-Based Execution
Workflows use finite state machines with Mermaid diagram visualization:
stateDiagram-v2
[*] --> planning
planning --> implementation
implementation --> testing
testing --> review
review --> deployment
review --> implementation : fixes needed
deployment --> [*]
Workflow Definition
---
name: feature-deployment
description: Complete feature deployment workflow
initial_state: planning
variables:
- name: feature_name
required: true
- name: environment
choices: ["staging", "production"]
default: "staging"
---
### planning
**Description**: Plan feature deployment strategy
**Actions**:
- prompt: deployment-plan feature="{{feature_name}}" env="{{environment}}"
- memo: Record deployment plan
**Next**: implementation
### implementation
**Description**: Deploy the feature
**Actions**:
- shell: `deploy-feature --name {{feature_name}} --env {{environment}}`
- conditional: Check deployment success
**Transitions**:
- If successful → testing
- If failed → rollback
### testing
**Description**: Validate deployment
**Actions**:
- shell: `run-smoke-tests --env {{environment}}`
- wait: 5 minutes
**Next**: review
Action Types
- Prompt Actions: Execute prompts with context
- Shell Actions: Run system commands with output capture
- Conditional Actions: Branch based on conditions
- Wait Actions: Pause execution for specified duration
- Memo Actions: Create notes and documentation
Issue Management
Git-Integrated Workflow
Automatic branch management with issue lifecycle tracking:
# Create issue and branch
sah issue create "Fix authentication bug"
# → Creates issue-ISSUE-123-fix-authentication-bug branch
# Start working
sah issue work ISSUE-123
# → Switches to issue branch automatically
# Complete and merge
sah issue complete ISSUE-123
# → Merges branch and moves issue to completed
Issue Templates
Structured issue creation with templates:
---
title: Bug Report Template
type: bug
priority: medium
---
## Bug Description
Brief description of the issue.
## Steps to Reproduce
1. Step one
2. Step two
3. Step three
## Expected Behavior
What should happen.
## Actual Behavior
What actually happens.
## Environment
- OS: {{os}}
- Version: {{version}}
Memoranda System
Knowledge Management
Markdown-based note-taking with full-text search:
# Create meeting notes
sah memo create "Sprint Planning" --content "# Sprint Planning Meeting..."
# Search notes
sah memo search "authentication requirements"
# Update existing notes
sah memo update MEMO-456 --content "Updated requirements..."
Organized Storage
- Automatic timestamping and ULID generation
- Full-text search across all memos
- Markdown formatting with syntax highlighting
- Integration with workflows and issue tracking
Semantic Search
AI-Powered Code Search
Vector embedding-based search using TreeSitter parsing:
# Index codebase
sah search index "**/*.rs" "**/*.py" "**/*.ts"
# Semantic search
sah search query "error handling patterns"
sah search query "async function implementation"
sah search query "database connection management"
Language Support
- Rust: Structs, enums, functions, traits, modules
- Python: Classes, functions, methods, imports
- TypeScript/JavaScript: Classes, interfaces, functions, types
- Dart: Classes, functions, methods, constructors
Intelligent Parsing
TreeSitter provides language-aware code analysis:
- Function signatures and documentation
- Type definitions and relationships
- Module boundaries and exports
- Comment extraction and indexing
MCP Integration
Claude Code Tools
Complete MCP tool suite for seamless AI development:
# Configure Claude Code
claude mcp add --scope user sah sah serve
Available Tools:
- `issue_*` - Complete issue lifecycle management
- `memo_*` - Note-taking and knowledge management
- `search_*` - Semantic code search capabilities
- `outline_*` - Code structure generation
- `abort_*` - Workflow control and termination
Type-Safe Communication
- Structured JSON-RPC with full schema validation
- Comprehensive error handling and recovery
- Real-time status updates and progress tracking
- Automatic tool discovery and registration
Built-in Resources
Production-Ready Prompts
20+ built-in prompts for common development tasks:
Code Quality
- `code` - General code analysis and suggestions
- `review/code` - Comprehensive code review
- `review/security` - Security-focused code review
- `test` - Test generation and strategies
Documentation
- `docs/readme` - README file generation
- `docs/comments` - Inline code documentation
- `docs/project` - Project documentation
- `documentation` - General documentation tasks
Development Process
- `plan` - Project and feature planning
- `standards` - Coding standards enforcement
- `principals` - Development principles guidance
- `debug/error` - Error analysis and debugging
Example Workflows
Built-in workflows demonstrating best practices:
Feature Development
- `hello-world` - Basic workflow example
- `tdd` - Test-driven development process
- `implement` - Feature implementation workflow
Process Automation
- `code_issue` - End-to-end issue resolution
- `review_docs` - Documentation quality review
- `complete_issue` - Issue completion workflow
Quick Access
All built-in resources are immediately available:
# List built-in prompts
sah prompt list --builtin
# Test built-in prompt
sah prompt test code --var language=rust
# Run built-in workflow
sah flow run hello-world
Performance & Security
Scalability
- Handles codebases with 10,000+ files efficiently
- Lazy loading and configurable caching for memory efficiency
- Parallel processing for better throughput
- Incremental updates for changed files only
Security
- Directory traversal protection and path validation
- Controlled symlink handling and permission checking
- Configurable resource limits and timeout controls
- Input validation and output sanitization
Reliability
- Graceful error handling with recovery options
- Automatic cleanup of temporary resources
- Atomic operations with rollback capabilities
- Cross-platform consistency (Windows, macOS, Linux)
This comprehensive platform combines the simplicity of markdown files with powerful AI integration, providing everything needed for modern development workflows.
Prompts
Prompts are the core building blocks of SwissArmyHammer - reusable templates that structure interactions with AI assistants.
Prompt Structure
Every prompt is a markdown file with YAML front matter:
---
title: Code Review Assistant
description: Helps review code for quality, style, and best practices
version: "1.0"
tags: ["code", "review", "quality"]
arguments:
- name: language
description: Programming language being reviewed
required: true
type: string
- name: file_path
description: Path to the file being reviewed
required: false
type: string
- name: focus_areas
description: Specific areas to focus on
required: false
type: array
default: ["style", "performance", "bugs"]
---
# Code Review: {{language}} Code
I need you to review this {{language}} code{% if file_path %} from `{{file_path}}`{% endif %}.
## Focus Areas
{% for area in focus_areas %}
- {{area | capitalize}}
{% endfor %}
Please provide:
1. **Overall Assessment** - Code quality rating and summary
2. **Specific Issues** - Line-by-line feedback on problems
3. **Improvements** - Concrete suggestions for enhancement
4. **Best Practices** - Recommendations following {{language}} conventions
Make your feedback constructive and actionable.
Front Matter Reference
Required Fields
| Field | Description | Example |
|---|---|---|
| `title` | Human-readable prompt name | `"Code Review Assistant"` |
| `description` | What the prompt does | `"Helps review code quality"` |
Optional Fields
| Field | Description | Example |
|---|---|---|
| `version` | Prompt version | `"1.2.0"` |
| `tags` | Categorization tags | `["code", "review"]` |
| `author` | Prompt creator | `"Jane Developer"` |
| `created` | Creation date | `"2024-01-15"` |
| `updated` | Last update | `"2024-01-20"` |
| `license` | Usage license | `"MIT"` |
Arguments
Arguments define the variables that can be passed to the prompt:
arguments:
- name: variable_name # Required: variable name
description: "What it does" # Required: human description
required: true # Optional: is it required? (default: false)
type: string # Optional: data type (string, number, boolean, array)
default: "default_value" # Optional: default if not provided
choices: ["a", "b", "c"] # Optional: allowed values
pattern: "^[a-z]+$" # Optional: regex validation
min_length: 1 # Optional: minimum string length
max_length: 100 # Optional: maximum string length
Argument Types
- string: Text values
- number: Numeric values
- boolean: true/false values
- array: Lists of values
Example with all types:
arguments:
- name: title
description: "Document title"
required: true
type: string
min_length: 1
max_length: 100
- name: priority
description: "Priority level"
type: number
default: 5
choices: [1, 2, 3, 4, 5]
- name: include_examples
description: "Include code examples"
type: boolean
default: true
- name: sections
description: "Sections to include"
type: array
default: ["introduction", "usage", "examples"]
Template System
SwissArmyHammer uses the Liquid template engine for dynamic content.
Variable Substitution
Basic variable substitution:
Hello {{name}}, welcome to {{project}}!
With arguments:
name: "Alice"
project: "SwissArmyHammer"
Renders as:
Hello Alice, welcome to SwissArmyHammer!
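The same substitution can be reproduced programmatically. This is a standalone sketch using the `liquid` crate (the Rust implementation of the Liquid engine) directly; SwissArmyHammer exposes its own rendering API on top of Liquid templating.

```rust
use liquid::ParserBuilder;

fn main() -> Result<(), liquid::Error> {
    // Parse the template with the standard Liquid filters enabled.
    let template = ParserBuilder::with_stdlib()
        .build()?
        .parse("Hello {{name}}, welcome to {{project}}!")?;

    // Provide the argument values and render.
    let globals = liquid::object!({
        "name": "Alice",
        "project": "SwissArmyHammer",
    });
    println!("{}", template.render(&globals)?);
    Ok(())
}
```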
Conditionals
{% if language == "rust" %}
Use `cargo test` to run tests.
{% elsif language == "python" %}
Use `pytest` to run tests.
{% else %}
Refer to your language's testing framework.
{% endif %}
Loops
## Requirements
{% for req in requirements %}
- {{req}}
{% endfor %}
## Steps
{% for step in steps %}
{{forloop.index}}. {{step.title}}
{{step.description}}
{% endfor %}
Filters
Liquid filters transform values:
{{name | capitalize}} <!-- "john" → "John" -->
{{text | truncate: 50}} <!-- Limit to 50 characters -->
{{items | join: ", "}} <!-- Array to comma-separated -->
{{code | escape}} <!-- HTML-safe escaping -->
{{date | date: "%Y-%m-%d"}} <!-- Format date -->
Built-in Filters
| Filter | Description | Example |
|---|---|---|
| `capitalize` | Capitalize first letter | `{{name \| capitalize}}` |
| `downcase` | Convert to lowercase | `{{text \| downcase}}` |
| `upcase` | Convert to uppercase | `{{text \| upcase}}` |
| `truncate` | Limit string length | `{{text \| truncate: 100}}` |
| `strip` | Remove whitespace | `{{text \| strip}}` |
| `escape` | HTML escape | `{{html \| escape}}` |
| `join` | Join array elements | `{{items \| join: ", "}}` |
| `split` | Split string | `{{text \| split: ","}}` |
| `size` | Get length | `{{array \| size}}` |
| `first` | Get first element | `{{array \| first}}` |
| `last` | Get last element | `{{array \| last}}` |
| `sort` | Sort array | `{{items \| sort}}` |
| `uniq` | Remove duplicates | `{{items \| uniq}}` |
| `reverse` | Reverse array | `{{items \| reverse}}` |
| `default` | Default if nil | `{{value \| default: "none"}}` |
Custom Filters
SwissArmyHammer includes custom filters for development (a sample implementation follows the table):
| Filter | Description | Example |
|---|---|---|
| `snake_case` | Convert to snake_case | `{{text \| snake_case}}` |
| `kebab_case` | Convert to kebab-case | `{{text \| kebab_case}}` |
| `pascal_case` | Convert to PascalCase | `{{text \| pascal_case}}` |
| `camel_case` | Convert to camelCase | `{{text \| camel_case}}` |
| `pluralize` | Make plural | `{{word \| pluralize}}` |
| `singularize` | Make singular | `{{word \| singularize}}` |
| `markdown_escape` | Escape markdown | `{{text \| markdown_escape}}` |
| `code_block` | Wrap in code block | `{{code \| code_block: "rust"}}` |
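As a sketch of how such a filter could be written, the example below follows the `CustomLiquidFilter` trait shown in the Architecture chapter; the built-in `kebab_case` filter itself may be implemented differently.

```rust
use swissarmyhammer::prelude::*;

// Illustrative re-implementation of a kebab-case filter.
struct KebabCaseFilter;

impl CustomLiquidFilter for KebabCaseFilter {
    fn name(&self) -> &str {
        "kebab_case"
    }

    fn filter(&self, input: &str) -> Result<String> {
        // Lowercase everything and replace whitespace/underscores with hyphens.
        let kebab = input
            .trim()
            .to_lowercase()
            .split(|c: char| c.is_whitespace() || c == '_')
            .filter(|part| !part.is_empty())
            .collect::<Vec<_>>()
            .join("-");
        Ok(kebab)
    }
}
```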
Environment Variables
Access environment variables in templates:
Project: {{PROJECT_NAME | default: "Unknown"}}
Environment: {{NODE_ENV | default: "development"}}
User: {{USER | default: "unknown"}}
Home: {{HOME}}
Advanced Features
Include Other Files
{% include "common/header.md" %}
Main content here...
{% include "common/footer.md" %}
Assign Variables
{% assign formatted_name = name | capitalize | strip %}
{% assign item_count = items | size %}
Hello {{formatted_name}}, you have {{item_count}} items.
Capture Content
{% capture error_message %}
Error in {{file}}:{{line}}: {{message}}
{% endcapture %}
{% if show_errors %}
{{error_message}}
{% endif %}
Prompt Discovery
SwissArmyHammer discovers prompts from multiple locations with precedence:
1. Built-in Prompts
Embedded prompts always available:
sah prompt list --source builtin
Common built-in prompts:
- `code` - Code review and analysis
- `documentation` - Generate documentation
- `debug` - Debug assistance
- `test` - Test writing guidance
- `refactor` - Refactoring suggestions
2. User Prompts
Personal prompts in `~/.swissarmyhammer/prompts/`:
# List user prompts
sah prompt list --source user
# Create a user prompt
mkdir -p ~/.swissarmyhammer/prompts
editor ~/.swissarmyhammer/prompts/my-prompt.md
3. Local Prompts
Project-specific prompts in `./.swissarmyhammer/prompts/`:
# Create project prompt
mkdir -p .swissarmyhammer/prompts
editor .swissarmyhammer/prompts/project-specific.md
Precedence Rules
When prompts have the same name:
- Local (`./.swissarmyhammer/prompts/`) - highest precedence
- User (`~/.swissarmyhammer/prompts/`) - medium precedence
- Built-in (embedded) - lowest precedence
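To check which copy wins for a given name, you can list each source and then inspect the prompt that actually resolves. A rough sketch using the commands shown in this section (whether `show` reports the winning source is an assumption):

```bash
# List the same prompt name across sources
sah prompt list --source builtin | grep my-prompt
sah prompt list --source user | grep my-prompt

# The local copy, if present, is the one that resolves
sah prompt show my-prompt
```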
Using Prompts
CLI Usage
# Test a prompt
sah prompt test my-prompt --var name="value"
# Render a prompt to file
sah prompt render my-prompt --var name="value" --output result.md
# List available prompts
sah prompt list
# Show prompt details
sah prompt show my-prompt
# Validate prompt syntax
sah prompt validate my-prompt
MCP Usage (Claude Code)
/my-prompt name="value" other_arg="value2"
Library Usage
use swissarmyhammer::prelude::*;
use std::collections::HashMap;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a prompt library and load prompts from the user directory
    let mut library = PromptLibrary::new();
    library.add_directory("~/.swissarmyhammer/prompts")?;

    // Look up a prompt by name and render it with arguments
    let prompt = library.get("my-prompt")?;
    let mut args = HashMap::new();
    args.insert("name".to_string(), "Alice".to_string());

    let rendered = prompt.render(&args)?;
    println!("{}", rendered);
    Ok(())
}
Best Practices
Prompt Design
- Clear Purpose - Each prompt should have a single, well-defined purpose
- Good Documentation - Use descriptive titles and detailed descriptions
- Flexible Arguments - Support optional arguments with sensible defaults
- Structured Output - Guide the AI to provide well-formatted responses
- Error Handling - Handle missing or invalid arguments gracefully
Argument Design
arguments:
# Good: Clear, descriptive, with defaults
- name: programming_language
description: "The programming language being used"
required: true
type: string
choices: ["rust", "python", "javascript", "typescript"]
- name: include_examples
description: "Whether to include code examples in the response"
type: boolean
default: true
# Avoid: Vague names and descriptions
- name: thing
description: "A thing"
required: true
Template Best Practices
- Escape User Input - Use the `| escape` filter for untrusted content
- Provide Defaults - Use `| default: "fallback"` for optional values
- Validate Conditionally - Check if variables exist before using them
- Format Consistently - Use filters to ensure consistent formatting
Example of good template practices:
# {{title | default: "Untitled" | capitalize}}
{% if description %}
**Description:** {{description | strip}}
{% endif %}
{% if tags and tags.size > 0 %}
**Tags:** {{tags | join: ", " | downcase}}
{% endif %}
{% assign lang = language | default: "text" | downcase %}
{% if lang == "rust" %}
This is Rust-specific guidance...
{% endif %}
{% for item in items | default: array %}
- {{item | escape | capitalize}}
{% endfor %}
Organization
- Use Tags - Categorize prompts with meaningful tags
- Version Control - Track prompt changes with version numbers
- Modular Design - Break complex prompts into reusable components
- Consistent Naming - Use clear, descriptive filenames
Testing
# Test with various inputs
sah prompt test my-prompt --var lang="rust"
sah prompt test my-prompt --var lang="python"
sah prompt test my-prompt --var lang="invalid"
# Test required arguments
sah prompt test my-prompt
sah prompt test my-prompt --var required_arg="value"
# Validate syntax
sah validate --prompts
Advanced Features
Prompt Inheritance
Create base prompts that others can extend:
<!-- base-review.md -->
---
title: Base Review Template
description: Common review structure
arguments:
- name: type
description: Type of review
required: true
---
# {{type | capitalize}} Review
## Analysis
[Analysis goes here]
## Recommendations
[Recommendations go here]
<!-- code-review.md -->
---
title: Code Review
description: Code-specific review
extends: base-review
arguments:
- name: language
description: Programming language
required: true
---
{% assign type = "code" %}
{% include "base-review" %}
## Code Quality Metrics
- Language: {{language}}
- [Additional code-specific content]
Dynamic Argument Loading
Load arguments from files or environment:
arguments:
- name: config
description: "Configuration file path"
type: string
default: "config.json"
load_from_file: true
- name: api_key
description: "API key for service"
type: string
load_from_env: "API_KEY"
required: false
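In the prompt body, loaded arguments behave like any other variable, so it helps to guard against the case where nothing was loaded. A minimal sketch against the front matter above:

```liquid
{% if config %}
## Configuration
{{config | truncate: 500}}
{% else %}
No configuration file was loaded.
{% endif %}

{% if api_key %}
An API key is available from the environment; use authenticated endpoints.
{% else %}
No API key configured; skip authenticated steps.
{% endif %}
```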
Prompt Libraries
Create shareable prompt collections:
my-prompt-library/
├── README.md
├── package.toml
├── prompts/
│ ├── web-dev/
│ │ ├── react-component.md
│ │ └── api-endpoint.md
│ └── data-science/
│ ├── analysis.md
│ └── visualization.md
└── templates/
├── common/
│ ├── header.liquid
│ └── footer.liquid
└── helpers/
└── formatting.liquid
Real-World Prompt Examples
### 1. Pull Request Review Prompt
---
title: Pull Request Reviewer
description: Comprehensive PR review focusing on code quality and maintainability
version: "2.1"
tags: ["pr", "review", "git", "collaboration"]
arguments:
- name: pr_url
description: GitHub PR URL for context
required: true
type: string
- name: changed_files
description: List of files changed in the PR
required: true
type: array
- name: review_depth
description: Level of review detail
type: string
default: "standard"
choices: ["quick", "standard", "thorough"]
- name: team_standards
description: Team-specific coding standards
type: string
load_from_file: "./.swissarmyhammer/team-standards.md"
---
# Pull Request Review
Reviewing PR: {{pr_url}}
## Changed Files
{% for file in changed_files %}
- **{{file}}**{% if file contains "test" %} (Test file){% endif %}
{% endfor %}
## Review Criteria
{% if review_depth == "quick" %}
**Quick Review Focus:**
- Compilation and basic functionality
- Critical security issues
- Breaking changes
{% elsif review_depth == "thorough" %}
**Thorough Review Focus:**
- Code architecture and design patterns
- Performance implications
- Test coverage and quality
- Documentation completeness
- Accessibility considerations
- Security best practices
{% else %}
**Standard Review Focus:**
- Code quality and readability
- Logic correctness
- Error handling
- Testing adequacy
{% endif %}
{% if team_standards %}
## Team Standards
{{team_standards}}
{% endif %}
Please provide:
1. **Summary**: Overall assessment and recommendation (approve/request changes/comment)
2. **Critical Issues**: Must-fix problems that block merging
3. **Suggestions**: Improvements that would enhance code quality
4. **Praise**: What was done well in this PR
5. **Learning**: Any new patterns or approaches worth noting
Format your response with specific line numbers and file references where applicable.
### 2. API Documentation Generator
---
title: API Documentation Generator
description: Generates comprehensive API documentation from code analysis
version: "1.5"
tags: ["api", "documentation", "openapi"]
arguments:
- name: api_type
description: Type of API being documented
type: string
default: "REST"
choices: ["REST", "GraphQL", "gRPC"]
- name: language
description: Programming language of the API
required: true
type: string
- name: base_url
description: Base URL for the API
type: string
default: "https://api.example.com"
- name: authentication
description: Authentication method used
type: string
default: "Bearer Token"
- name: include_examples
description: Include usage examples
type: boolean
default: true
---
# {{api_type}} API Documentation
## Overview
This documentation covers the {{language}} {{api_type}} API.
**Base URL**: `{{base_url}}`
**Authentication**: {{authentication}}
## Getting Started
### Authentication
{% if authentication contains "Bearer" %}
Include your API token in the Authorization header:
Authorization: Bearer YOUR_TOKEN_HERE
{% elsif authentication contains "API Key" %}
Include your API key in the request headers:
X-API-Key: YOUR_API_KEY_HERE
{% endif %}
### Rate Limits
- Authenticated requests: 1000 requests per hour
- Unauthenticated requests: 100 requests per hour
## Endpoints
{% if api_type == "REST" %}
*[SwissArmyHammer will analyze your code and generate endpoint documentation here]*
For each endpoint, please include:
- HTTP method and path
- Description and purpose
- Request parameters (path, query, body)
- Response format and examples
- Possible error codes and meanings
{% elsif api_type == "GraphQL" %}
*[SwissArmyHammer will analyze your schema and generate query documentation here]*
Please document:
- Available queries and mutations
- Input types and validation rules
- Response types and nested relationships
- Example queries with variables
{% endif %}
{% if include_examples %}
## Code Examples
### Python
```python
import requests
# Basic request example
response = requests.get(
"{{base_url}}/endpoint",
headers={"Authorization": "Bearer YOUR_TOKEN"}
)
print(response.json())
### JavaScript
const response = await fetch("{{base_url}}/endpoint", {
headers: {
"Authorization": "Bearer YOUR_TOKEN",
"Content-Type": "application/json"
}
});
const data = await response.json();
### cURL
curl -X GET "{{base_url}}/endpoint" \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json"
{% endif %}
## Error Handling
All errors follow a consistent format:
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable description",
"details": {}
}
}
## Changelog
Document API version changes and migration notes here.
### 3. Testing Strategy Prompt
```markdown
---
title: Test Strategy Generator
description: Creates comprehensive testing strategies for software projects
version: "1.3"
tags: ["testing", "qa", "strategy"]
arguments:
- name: project_type
description: Type of project being tested
required: true
type: string
choices: ["web-app", "api", "mobile", "desktop", "library"]
- name: tech_stack
description: Main technologies used
required: true
type: array
- name: team_size
description: Size of the development team
type: number
default: 5
- name: release_frequency
description: How often releases are made
type: string
default: "bi-weekly"
choices: ["daily", "weekly", "bi-weekly", "monthly"]
- name: critical_features
description: Features that require extra testing attention
type: array
default: []
---
# Testing Strategy for {{project_type | title}} Project
## Project Context
- **Technology Stack**: {% for tech in tech_stack %}{{tech}}{% unless forloop.last %}, {% endunless %}{% endfor %}
- **Team Size**: {{team_size}} developers
- **Release Cycle**: {{release_frequency}}
- **Critical Features**: {% if critical_features.size > 0 %}{% for feature in critical_features %}{{feature}}{% unless forloop.last %}, {% endunless %}{% endfor %}{% else %}None specified{% endif %}
## Testing Pyramid
### Unit Tests (70% of tests)
**Goal**: Fast feedback on individual components
{% if project_type == "web-app" %}
- Component unit tests
- Utility function tests
- State management tests
- Hook/composable tests
{% elsif project_type == "api" %}
- Service layer tests
- Database model tests
- Utility function tests
- Middleware tests
{% elsif project_type == "mobile" %}
- Business logic tests
- State management tests
- Utility function tests
- Platform-specific component tests
{% endif %}
**Coverage Target**: 90%+ for business logic
### Integration Tests (20% of tests)
**Goal**: Verify component interactions
{% if project_type == "web-app" %}
- API integration tests
- Database integration tests
- Third-party service integration tests
- Feature workflow tests
{% elsif project_type == "api" %}
- Database integration tests
- External service integration tests
- Authentication flow tests
- API contract tests
{% endif %}
### End-to-End Tests (10% of tests)
**Goal**: Validate critical user journeys
{% for feature in critical_features %}
- {{feature}} complete workflow
{% endfor %}
- Happy path scenarios
- Error recovery scenarios
## Test Automation Strategy
### Continuous Integration
```yaml
# GitHub Actions example
name: Test Suite
on: [push, pull_request]
jobs:
test:
steps:
- uses: actions/checkout@v3
- name: Run Unit Tests
run: npm test
- name: Run Integration Tests
run: npm run test:integration
- name: E2E Tests
run: npm run test:e2e
Test Data Management
- Use factories/fixtures for consistent test data
- Database seeding for integration tests
- Mock external services in unit tests
- Use test-specific data in staging environment
Performance Testing
{% if release_frequency == "daily" or release_frequency == "weekly" %}
- Automated performance regression tests
- Load testing on every release
{% else %}
- Monthly performance validation
- Load testing before major releases
{% endif %}
Quality Gates
Pre-commit Hooks
- Lint and format checks
- Unit test execution
- Type checking (if applicable)
Pull Request Requirements
- All tests passing
- Code coverage maintained above 80%
- Performance benchmarks within acceptable range
Release Criteria
- Full test suite passes
- Manual testing of critical features completed
- Performance benchmarks validated
- Security scans passed
Testing Tools and Frameworks
Recommended Stack
{% if tech_stack contains "JavaScript" or tech_stack contains "TypeScript" %}
- Unit Testing: Jest/Vitest
- Integration Testing: Supertest (APIs), Testing Library (components)
- E2E Testing: Playwright or Cypress
- Performance: Lighthouse CI
{% elsif tech_stack contains "Python" %}
- Unit Testing: pytest
- Integration Testing: pytest with fixtures
- E2E Testing: Selenium or Playwright
- Performance: locust
{% elsif tech_stack contains "Rust" %}
- Unit Testing: Built-in test framework
- Integration Testing: Custom integration test crates
- E2E Testing: Selenium or custom tooling
- Performance: Criterion benchmarks
{% endif %}
Monitoring and Reporting
- Test result dashboards
- Coverage reporting with trends
- Flaky test identification and tracking
- Performance regression alerts
Risk Assessment
High-Risk Areas Requiring Extra Testing
{% for feature in critical_features %}
- {{feature}}: Critical to business operations
{% endfor %}
- Authentication and authorization
- Data persistence and integrity
- Payment processing (if applicable)
- Third-party integrations
Testing Schedule
{% if release_frequency == "daily" %}
- Unit tests: Every commit
- Integration tests: Every commit
- E2E tests: Nightly
- Performance tests: Weekly
{% else %}
- Unit tests: Every commit
- Integration tests: Every PR
- E2E tests: Before release
- Performance tests: Monthly
{% endif %}
This strategy balances thorough testing with development velocity for your {{release_frequency}} release cycle.
### 4. Architecture Decision Record (ADR) Template
```markdown
---
title: Architecture Decision Record Template
description: Template for documenting architectural decisions with context and rationale
version: "2.0"
tags: ["architecture", "documentation", "decision"]
arguments:
- name: decision_title
description: Brief title for the decision
required: true
type: string
- name: decision_date
description: Date of the decision
type: string
default: "{{ 'now' | date: '%Y-%m-%d' }}"
- name: status
description: Status of the decision
type: string
default: "Proposed"
choices: ["Proposed", "Accepted", "Deprecated", "Superseded"]
- name: stakeholders
description: People involved in or affected by the decision
type: array
default: []
---
# ADR-XXX: {{decision_title}}
**Status**: {{status}}
**Date**: {{decision_date}}
{% if stakeholders.size > 0 %}**Stakeholders**: {% for person in stakeholders %}{{person}}{% unless forloop.last %}, {% endunless %}{% endfor %}{% endif %}
## Context
*What is the situation that requires a decision? Include relevant background information, constraints, and requirements.*
## Decision
*What is the change we're making? State the decision clearly and concisely.*
## Rationale
*Why are we making this decision? What factors influenced this choice?*
### Options Considered
1. **Option 1**: [Description]
- Pros: [Benefits]
- Cons: [Drawbacks]
2. **Option 2**: [Description]
- Pros: [Benefits]
- Cons: [Drawbacks]
3. **Selected Option**: [Description]
- Pros: [Benefits]
- Cons: [Drawbacks]
## Consequences
### Positive
- *What benefits do we expect?*
- *What capabilities does this enable?*
### Negative
- *What trade-offs are we making?*
- *What challenges might this create?*
### Neutral
- *What else changes as a result of this decision?*
## Implementation
### Action Items
- [ ] Task 1
- [ ] Task 2
- [ ] Task 3
### Timeline
- **Phase 1** (Week 1-2): [Description]
- **Phase 2** (Week 3-4): [Description]
- **Complete by**: [Date]
### Success Metrics
- Metric 1: [How to measure]
- Metric 2: [How to measure]
## References
- [Link to relevant documentation]
- [Related ADRs or decisions]
- [External resources that influenced the decision]
## Follow-up
*When should this decision be revisited? What might trigger a reconsideration?*
Advanced Prompt Techniques
Dynamic Content Loading
Load content from files during prompt rendering:
---
title: Context-Aware Code Review
description: Code review with dynamic project context
arguments:
- name: file_path
required: true
type: string
- name: project_readme
type: string
load_from_file: "./README.md"
- name: coding_standards
type: string
load_from_file: "./CODING_STANDARDS.md"
---
# Code Review with Project Context
## Project Overview
{{project_readme | truncate: 500}}
## Coding Standards
{{coding_standards}}
## File to Review
File: {{file_path}}
*[File content would be loaded by your AI assistant]*
Please review this file considering the project context and coding standards above.
Conditional Logic and Complex Templates
---
title: Multi-Language Documentation
description: Generates documentation based on detected language
arguments:
- name: language
required: true
type: string
- name: complexity
type: string
default: "medium"
choices: ["simple", "medium", "complex"]
---
# {{language | title}} Documentation
{% case language %}
{% when "rust" %}
## Rust-Specific Guidelines
- Use `cargo doc` for documentation
- Follow Rust naming conventions
- Include examples in doc comments
{% when "python" %}
## Python-Specific Guidelines
- Use docstrings for all public functions
- Follow PEP 8 style guidelines
- Include type hints where appropriate
{% when "javascript" %}
## JavaScript-Specific Guidelines
- Use JSDoc for function documentation
- Follow ESLint recommended rules
- Include usage examples
{% else %}
## General Guidelines
- Clear and concise documentation
- Include usage examples
- Follow language-specific conventions
{% endcase %}
{% if complexity == "complex" %}
## Advanced Topics
Please include:
- Architecture diagrams
- Performance considerations
- Security implications
- Integration patterns
{% elsif complexity == "simple" %}
## Basic Documentation
Focus on:
- Basic usage instructions
- Simple examples
- Getting started guide
{% endif %}
Environment-Specific Prompts
---
title: Environment-Aware Deploy Guide
description: Deployment instructions that vary by environment
arguments:
- name: environment
required: true
type: string
choices: ["development", "staging", "production"]
- name: app_name
required: true
type: string
- name: db_host
type: string
load_from_env: "DATABASE_HOST"
- name: deploy_key
type: string
load_from_env: "DEPLOY_KEY"
required: false
---
# Deploy {{app_name}} to {{environment | title}}
{% if environment == "production" %}
## ⚠️ Production Deployment Checklist
**CRITICAL**: This is a production deployment. Ensure:
- [ ] All tests are passing
- [ ] Database migrations are reviewed
- [ ] Rollback plan is prepared
- [ ] Team is notified
- [ ] Monitoring is active
{% elsif environment == "staging" %}
## Staging Deployment
This deployment will:
- Update the staging environment
- Run integration tests
- Validate changes before production
{% else %}
## Development Deployment
Quick development deployment for testing changes.
{% endif %}
## Configuration
**Database**: {% if db_host %}{{db_host}}{% else %}*Configure DATABASE_HOST environment variable*{% endif %}
{% if deploy_key %}**Deploy Key**: Configured from environment{% else %}**Deploy Key**: *Set DEPLOY_KEY environment variable*{% endif %}
## Commands
```bash
# Set environment
export NODE_ENV={{environment}}
{% if environment == "production" %}
# Production-specific setup
npm ci --only=production
npm run build:production
npm run migrate:production
{% else %}
# Development/staging setup
npm install
npm run build
npm run migrate
{% endif %}
# Deploy
npm run deploy:{{environment}}
{% if environment == "production" %}
Post-Deploy Verification
- Check application health:
curl https://{{app_name}}.com/health
- Verify database connectivity
- Check error logs for any issues
- Validate key user journeys
- Monitor performance metrics
Rollback Plan
If issues are detected:
npm run rollback:production
{% endif %}
## Prompt Debugging Guide
### Common Template Issues
#### 1. Variable Not Rendered
**Problem**: Variables show as `{{variable_name}}` in output
```markdown
Hello {{user_name}}!
Output: Hello {{user_name}}!
Solutions:
# Check variable is defined
sah prompt test my-prompt --var user_name="Alice" --debug
# Verify variable name spelling
sah prompt validate my-prompt.md
# Check front matter argument definition
---
arguments:
- name: user_name # Must match exactly
required: true
---
2. Conditional Logic Not Working
Problem: Conditions always evaluate incorrectly
{% if user_role == "admin" %}
Admin content
{% endif %}
Debug Steps:
# Test with debug output
sah prompt test my-prompt --var user_role="admin" --debug
# Check variable type - strings need quotes
{% if user_role == "admin" %} # Correct
{% if user_role == admin %} # Wrong - no quotes
Common Issues:
- Missing quotes around string values
- Case sensitivity: `"Admin" != "admin"`
- Type mismatches: `"5" != 5`
3. Loop Not Iterating
Problem: For loops don’t execute
{% for item in items %}
- {{item}}
{% endfor %}
Solutions:
# Ensure items is an array
sah prompt test my-prompt --var 'items=["a","b","c"]' --debug
# Check array syntax in CLI
--var 'items=["item1", "item2"]' # JSON format
--var items.0="item1" --var items.1="item2" # Individual items
4. File Loading Fails
Problem: `load_from_file` doesn't work
arguments:
- name: config
load_from_file: "./config.yaml"
Debug:
# Check file exists and is readable
ls -la ./config.yaml
cat ./config.yaml
# Use absolute paths if relative paths fail
load_from_file: "/full/path/to/config.yaml"
# Verify file format matches expected type
# YAML files are parsed, text files loaded as strings
Debugging Tools and Techniques
Enable Debug Mode
# Verbose output shows variable resolution
sah prompt test my-prompt --debug
# Show template parsing steps
RUST_LOG=swissarmyhammer::template=debug sah prompt test my-prompt
# Validate prompt syntax
sah prompt validate my-prompt.md
Template Testing Workflow
# 1. Validate syntax first
sah prompt validate my-prompt.md
# 2. Test with minimal variables
sah prompt test my-prompt --var required_var="test"
# 3. Add variables incrementally
sah prompt test my-prompt --var var1="value1" --var var2="value2"
# 4. Test edge cases
sah prompt test my-prompt --var empty_string="" --var zero_number=0
Variable Inspection
---
title: Debug Template
---
# Variable Debugging
## All Variables
{% for variable in __variables__ %}
- {{variable[0]}}: {{variable[1]}} ({{variable[1] | type}})
{% endfor %}
## Specific Variable Analysis
- user_name: "{{user_name}}" (length: {{user_name | size}})
- is_admin: {{is_admin}} (type: {{is_admin | type}})
- items count: {{items | size}}
Common Filter Issues
# String filters on non-strings
{{number | upcase}} # Error - upcase only works on strings
{{number | string | upcase}} # Fixed - convert to string first
# Array filters on non-arrays
{{string | first}} # Error - first only works on arrays
{{string | split: "," | first}} # Fixed - split creates array
# Chaining incompatible filters
{{text | split: "," | upcase}} # Error - upcase doesn't work on arrays
{{text | split: "," | map: "upcase"}} # Fixed - map applies upcase to each item
Performance Debugging
Slow Template Rendering
# Profile template complexity
sah prompt profile my-prompt.md
# Identify bottlenecks
RUST_LOG=swissarmyhammer::template=debug sah prompt test my-prompt 2>&1 | grep "duration"
Common Performance Issues:
- Large file loading:
# Slow - loads entire file
load_from_file: "./huge-file.json"
# Better - load and truncate
load_from_file: "./huge-file.json"
filter: "truncate:1000"
- Complex loops:
# Slow - nested loops with complex logic
{% for user in users %}
{% for role in user.roles %}
{% if role.permissions contains "admin" %}
Complex processing...
{% endif %}
{% endfor %}
{% endfor %}
# Better - pre-filter data or use simpler logic
{% assign admin_users = users | where: "roles", "admin" %}
{% for user in admin_users %}
Simple processing...
{% endfor %}
Error Recovery Strategies
---
title: Robust Template with Error Handling
arguments:
- name: optional_data
required: false
type: string
---
# Robust Content
## Safe Variable Access
{% if optional_data and optional_data != "" %}
Data: {{optional_data}}
{% else %}
No data provided
{% endif %}
## Array Safety
{% if items and items.size > 0 %}
Items:
{% for item in items %}
- {{item | default: "Unknown item"}}
{% endfor %}
{% else %}
No items available
{% endif %}
## File Loading with Fallback
{% assign config = config_file | default: "No configuration loaded" %}
Configuration: {{config}}
## Division Safety
{% assign rate = total | divided_by: count | default: 0 %}
Success rate: {{rate}}%
This comprehensive prompt system provides the foundation for consistent, reusable AI interactions across all your projects, with robust debugging capabilities and real-world examples to guide implementation.
Workflows
Workflows enable complex, multi-step AI interactions with state management, conditional logic, and parallel execution.
Overview
SwissArmyHammer workflows are state machines defined in markdown files that can:
- Execute sequences of prompts and shell commands
- Handle conditional branching based on results
- Run actions in parallel for efficiency
- Manage state transitions with validation
- Integrate with git for automated development workflows
Workflow Structure
Workflows are markdown files with YAML front matter:
---
name: code-review-workflow
description: Complete code review process with automated checks
version: "1.0"
initial_state: setup
timeout_ms: 300000
variables:
- name: project_type
description: Type of project being reviewed
default: "web"
- name: strict_mode
description: Enable strict review mode
type: boolean
default: false
---
# Code Review Workflow
This workflow performs a comprehensive code review process.
## States
### setup
**Description**: Initialize the review process
**Actions:**
- shell: `git status`
- prompt: Use 'code' prompt to get initial assessment
- conditional: Check if tests exist
**Transitions:**
- If tests found → `run-tests`
- If no tests → `static-analysis`
### run-tests
**Description**: Execute the test suite
**Actions:**
- shell: `npm test` (parallel with coverage)
- shell: `cargo test` (if Rust project)
**Transitions:**
- If tests pass → `static-analysis`
- If tests fail → `fix-tests`
### static-analysis
**Description**: Perform static code analysis
**Actions:**
- shell: `cargo clippy` (if Rust)
- shell: `eslint .` (if JavaScript/TypeScript)
- prompt: Use 'review' prompt for manual analysis
**Transitions:**
- Always → `generate-report`
### generate-report
**Description**: Create comprehensive review report
**Actions:**
- prompt: Use 'documentation' prompt to generate report
- shell: `git add review-report.md`
**Transitions:**
- Always → `complete`
### complete
**Description**: Review process completed
Front Matter Reference
Required Fields
Field | Description | Example |
---|---|---|
name | Unique workflow identifier | "deploy-process" |
description | What the workflow does | "Deploy application to production" |
initial_state | Starting state name | "validate" |
Optional Fields
Field | Description | Default |
---|---|---|
version | Workflow version | "1.0" |
timeout_ms | Overall timeout | 300000 (5 min) |
max_parallel | Max parallel actions | 4 |
on_error | Error handling state | "error" |
on_timeout | Timeout handling state | "timeout" |
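Putting the required and optional fields together, a front matter block might look like the following sketch (the state names and values are illustrative):

```yaml
---
name: deploy-process
description: Deploy application to production
initial_state: validate
version: "1.0"
timeout_ms: 600000
max_parallel: 2
on_error: error
on_timeout: timeout
---
```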
Variables
Define reusable variables:
variables:
- name: environment
description: Deployment environment
type: string
choices: ["dev", "staging", "prod"]
default: "dev"
- name: skip_tests
description: Skip test execution
type: boolean
default: false
- name: components
description: Components to deploy
type: array
default: ["api", "web", "worker"]
State Definitions
States define the workflow steps and transitions.
State Structure
### state-name
**Description**: What this state does
**Actions:**
- action-type: action-specification
- action-type: action-specification (parallel)
**Transitions:**
- condition → target-state
- condition → target-state
- Always → default-state
**Error Handling:**
- On error → error-state
- On timeout → timeout-state
Action Types
Prompt Actions
Execute prompts with variables:
**Actions:**
- prompt: code-review language={{project_language}} file={{current_file}}
- prompt: documentation task="generate API docs" format="markdown"
Shell Actions
Run shell commands:
**Actions:**
- shell: `git status`
- shell: `npm test -- --coverage` (timeout: 120s)
- shell: `cargo build --release` (parallel)
Conditional Actions
Make decisions based on conditions:
**Actions:**
- conditional: Check if file exists
condition: file_exists("package.json")
true_action: shell: `npm install`
false_action: skip
Sub-workflow Actions
Call other workflows:
**Actions:**
- workflow: testing-workflow
variables:
environment: "{{environment}}"
strict_mode: true
Wait Actions
Add delays or wait for conditions:
**Actions:**
- wait: 5s
- wait: until file_exists("build/complete.flag")
- wait: until process_finished("background-job")
Transition Conditions
Simple Conditions
**Transitions:**
- Always → next-state
- On success → success-state
- On failure → error-state
- On timeout → timeout-state
Variable-Based Conditions
**Transitions:**
- If environment == "prod" → production-deploy
- If skip_tests == true → deploy
- If test_results.failed > 0 → fix-issues
Command Result Conditions
**Transitions:**
- If last_command.exit_code == 0 → success
- If last_command.output contains "error" → handle-error
- If file_exists("target/release/app") → deploy
Complex Conditions
**Transitions:**
- If (environment == "prod" AND test_results.passed == true) → deploy
- If (file_changed("Cargo.toml") OR dependencies_updated) → rebuild
Execution Model
Sequential Execution
Default behavior - actions run one after another:
**Actions:**
- shell: `cargo build`
- shell: `cargo test`
- prompt: code-review
Parallel Execution
Mark actions for parallel execution:
**Actions:**
- shell: `cargo build` (parallel)
- shell: `npm install` (parallel)
- shell: `python -m pytest` (parallel)
- prompt: code-review (wait for above)
Fork-Join Pattern
### parallel-work
**Actions:**
- fork: frontend-build
actions:
- shell: `npm run build`
- shell: `npm run test`
- fork: backend-build
actions:
- shell: `cargo build --release`
- shell: `cargo test`
**Transitions:**
- When all forks complete → deploy
### deploy
**Actions:**
- shell: `docker build -t app:latest .`
Built-in Variables
SwissArmyHammer provides built-in variables:
Variable | Description | Example |
---|---|---|
workflow.name | Current workflow name | "deploy-process" |
workflow.version | Workflow version | "1.0" |
state.current | Current state name | "build" |
state.previous | Previous state name | "test" |
execution.start_time | Workflow start time | "2024-01-15T10:30:00Z" |
execution.elapsed_ms | Elapsed time | 45000 |
git.branch | Current git branch | "feature/auth" |
git.commit | Current commit hash | "a1b2c3d" |
env.* | Environment variables | env.NODE_ENV |
last_command.exit_code | Last shell command exit code | 0 |
last_command.output | Last shell command output | "Tests passed" |
last_prompt.result | Last prompt result | "Code looks good" |
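These variables can be interpolated into any action. For example, a reporting state might look like the following sketch (the state and its actions are illustrative, using the notation from this page):

```markdown
### report
**Description**: Summarize the run

**Actions:**
- shell: `echo "{{workflow.name}} on {{git.branch}} ({{git.commit}}) left {{state.previous}} with exit code {{last_command.exit_code}}"`
- prompt: documentation task="summarize run" elapsed="{{execution.elapsed_ms}}"

**Transitions:**
- Always → complete
```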
Error Handling
Error States
Define dedicated error handling states:
### error
**Description**: Handle errors and cleanup
**Actions:**
- prompt: debug error="{{error.message}}" context="{{state.current}}"
- shell: `git checkout main` (ignore errors)
- shell: `rm -rf temp/` (ignore errors)
**Transitions:**
- If error.recoverable → retry-state
- Always → failed
Retry Logic
### flaky-operation
**Actions:**
- shell: `network-dependent-command`
**Transitions:**
- On success → next-state
- On failure (retry < 3) → flaky-operation
- On failure (retry >= 3) → error
**Retry:**
- max_attempts: 3
- delay_ms: 1000
- backoff: exponential
Cleanup Actions
### cleanup
**Description**: Cleanup resources
**Actions:**
- shell: `docker stop $(docker ps -q)` (ignore errors)
- shell: `rm -rf temp/` (ignore errors)
- prompt: cleanup-report
**Always Execute**: true # Run even if workflow is cancelled
Advanced Features
Conditional Workflows
---
name: conditional-deploy
initial_state: check-environment
---
### check-environment
**Actions:**
- conditional: Environment check
condition: environment == "prod"
true_workflow: production-deploy
false_workflow: development-deploy
Dynamic State Selection
### router
**Actions:**
- dynamic: Choose next state based on file type
condition: file_extension(target_file)
cases:
".rs": rust-build
".js": javascript-build
".py": python-build
default: generic-build
Loop Constructs
### process-files
**Actions:**
- loop: Process each file
items: "{{file_list}}"
state: process-single-file
parallel: 2
**Transitions:**
- When loop complete → finalize
Resource Management
---
name: resource-workflow
resources:
- name: database
type: docker-container
spec: "postgres:13"
cleanup: true
- name: temp-dir
type: directory
path: "/tmp/workflow-{{execution.id}}"
cleanup: true
---
Integration Patterns
Git Integration
### git-workflow
**Actions:**
- shell: `git checkout -b feature/{{issue_name}}`
- shell: `git add -A`
- shell: `git commit -m "{{commit_message}}"`
**Transitions:**
- On success → push-branch
Issue Management Integration
### issue-workflow
**Actions:**
- issue: create
name: "bug-{{bug_id}}"
content: "{{bug_description}}"
- issue: work bug-{{bug_id}}
**Transitions:**
- Always → investigate
CI/CD Integration
### ci-workflow
**Actions:**
- shell: `docker build -t app:{{git.commit}} .`
- shell: `docker push app:{{git.commit}}`
- shell: `kubectl set image deployment/app app=app:{{git.commit}}`
**Environment:**
- DOCKER_REGISTRY: "{{registry_url}}"
- KUBE_CONFIG: "{{kube_config_path}}"
Testing Workflows
Validation
# Validate workflow syntax
sah flow validate my-workflow
# Check for cycles
sah flow validate my-workflow --check-cycles
# Validate all workflows
sah validate --workflows
Dry Run
# See execution plan without running
sah flow run my-workflow --dry-run
# Show state diagram
sah flow show my-workflow --diagram
Unit Testing
---
name: test-workflow
description: Test the main workflow
test_mode: true
---
### test-setup
**Actions:**
- shell: `mkdir -p test-temp`
- shell: `cp test-data/* test-temp/`
### run-main-workflow
**Actions:**
- workflow: main-workflow
variables:
working_dir: "test-temp"
### verify-results
**Actions:**
- shell: `test -f test-temp/output.json`
- conditional: Check output format
condition: valid_json("test-temp/output.json")
### cleanup
**Actions:**
- shell: `rm -rf test-temp`
Best Practices
Design Principles
- Single Responsibility: Each state should have one clear purpose
- Idempotent Actions: Actions should be safe to retry
- Error Recovery: Always include error handling paths
- Resource Cleanup: Clean up resources in error cases
- Clear Transitions: Make state transitions obvious and documented
Performance Optimization
# Use parallel execution where possible
max_parallel: 4
# Set appropriate timeouts
timeout_ms: 300000
# Minimize state transitions
# Combine related actions in single states
# Cache expensive operations
variables:
- name: build_cache_key
value: "{{git.commit}}-{{file_hash('Cargo.toml')}}"
Security Considerations
# Limit allowed commands
allowed_commands: ["git", "cargo", "npm", "docker"]
# Validate inputs
variables:
- name: branch_name
pattern: "^[a-zA-Z0-9/_-]+$"
# Use secure credential handling
environment:
- name: API_TOKEN
from_env: true
required: false
Documentation
# Workflow Title
**Purpose**: Clear description of what this workflow accomplishes
**Prerequisites**:
- Git repository
- Node.js installed
- Docker available
**Usage**:
```bash
sah flow run my-workflow --var environment=prod
Variables:
- `environment`: Target environment (dev/staging/prod)
- `skip_tests`: Skip test execution (default: false)
States Overview:
- setup: Initialize and validate prerequisites
- build: Compile and build artifacts
- test: Run test suites
- deploy: Deploy to target environment
- verify: Verify deployment success
## Step-by-Step Workflow Tutorials
### Tutorial 1: Creating Your First Workflow
Let's create a simple issue management workflow from scratch.
#### Step 1: Create the Workflow File
Create `./workflows/issue-workflow.md`:
```bash
mkdir -p ./.swissarmyhammer/workflows
cd ./.swissarmyhammer/workflows
Step 2: Define Basic Structure
---
name: issue-workflow
description: Simple issue creation and tracking workflow
version: "1.0"
initial_state: create_issue
timeout_ms: 60000
variables:
- name: issue_title
description: Title for the new issue
required: true
type: string
- name: issue_type
description: Type of issue
default: "FEATURE"
choices: ["FEATURE", "BUG", "TASK"]
---
# Issue Management Workflow
This workflow creates and tracks an issue through completion.
## States
### create_issue
**Description**: Create a new issue
**Actions**:
- Create issue with title and type
**Next**: work_on_issue
### work_on_issue
**Description**: Switch to working on the issue
**Actions**:
- Start work on the created issue
- Create git branch
**Next**: complete_issue
### complete_issue
**Description**: Mark issue as complete
**Actions**:
- Mark issue as completed
- Merge git branch
**Next**: END
Step 3: Test the Workflow
# Run the workflow
sah flow run issue-workflow --var issue_title="Add user authentication" --var issue_type="FEATURE"
# Check workflow status
sah flow status <run_id>
# View workflow logs
sah flow logs <run_id>
Step 4: Understanding the Output
The workflow will:
- Create a new issue: `FEATURE_001_add-user-authentication`
- Switch to a git branch: `issue/FEATURE_001_add-user-authentication`
- Mark the issue as completed when done
- Merge the branch back to the source branch
Step 5: Customize for Your Needs
Add variables for more control:
variables:
- name: issue_title
required: true
type: string
- name: issue_type
default: "FEATURE"
choices: ["FEATURE", "BUG", "TASK", "REFACTOR"]
- name: assignee
description: Person assigned to work on this issue
type: string
default: "unassigned"
- name: priority
description: Issue priority level
default: "medium"
choices: ["low", "medium", "high", "urgent"]
Tutorial 2: Advanced Workflow with Conditional Logic
Let’s create a comprehensive development workflow with branching logic.
Step 1: Create Development Workflow
Create `./workflows/development-workflow.md`:
---
name: development-workflow
description: Complete development workflow with testing and deployment
version: "2.0"
initial_state: analyze_changes
timeout_ms: 1800000 # 30 minutes
variables:
- name: feature_name
description: Name of feature being developed
required: true
type: string
- name: environment
description: Target environment
default: "staging"
choices: ["development", "staging", "production"]
- name: run_tests
description: Whether to run automated tests
type: boolean
default: true
- name: auto_deploy
description: Automatically deploy if tests pass
type: boolean
default: false
---
# Development Workflow
Comprehensive development workflow with conditional logic.
## States
### analyze_changes
**Description**: Analyze what needs to be done
**Actions**:
- search: Find existing implementations
- memo: Create analysis memo
**Transitions**:
- If existing code found → design_enhancement
- If no existing code → create_from_scratch
### create_from_scratch
**Description**: Create new feature from scratch
**Actions**:
- issue: Create comprehensive issue
- branch: Create feature branch
- memo: Document approach
**Next**: implement
### design_enhancement
**Description**: Design enhancement to existing code
**Actions**:
- memo: Document enhancement plan
- issue: Create focused issue
- branch: Create enhancement branch
**Next**: implement
### implement
**Description**: Implement the feature
**Actions**:
- memo: Update with implementation notes
- Conditional: If run_tests → run_tests, else → manual_review
### run_tests
**Description**: Execute automated test suite
**Actions**:
- shell: Run test commands
- Conditional: If tests pass AND auto_deploy → deploy, else → manual_review
### manual_review
**Description**: Manual review and decision point
**Actions**:
- memo: Create review checklist
- Wait for manual decision
**Transitions**:
- If approved → deploy
- If needs_changes → implement
- If rejected → cleanup
### deploy
**Description**: Deploy to target environment
**Actions**:
- shell: Deploy commands based on environment
- memo: Record deployment details
**Next**: verify_deployment
### verify_deployment
**Description**: Verify deployment worked correctly
**Actions**:
- shell: Health checks
- memo: Record verification results
**Transitions**:
- If verification successful → complete
- If verification failed → rollback
### rollback
**Description**: Rollback failed deployment
**Actions**:
- shell: Rollback commands
- memo: Record rollback details
**Next**: manual_review
### complete
**Description**: Mark workflow as complete
**Actions**:
- issue: Mark as complete
- memo: Final summary
- branch: Merge and cleanup
**Next**: END
### cleanup
**Description**: Clean up after rejection
**Actions**:
- branch: Delete feature branch
- memo: Record cleanup actions
**Next**: END
Step 2: Understanding Advanced Features
Conditional Transitions: Based on previous action results
**Transitions**:
- If tests pass → deploy
- If tests fail → fix_issues
- If no tests → manual_review
Environment-Specific Actions: Different behavior per environment
**Actions**:
- If environment == "production" → production_deploy_actions
- If environment == "staging" → staging_deploy_actions
- else → development_deploy_actions
Parallel Actions: Run multiple actions simultaneously
**Actions**:
- parallel:
- shell: Run unit tests
- shell: Run integration tests
- shell: Run security scans
Step 3: Running the Advanced Workflow
# Development environment
sah flow run development-workflow \
--var feature_name="user-profile-page" \
--var environment="development" \
--var run_tests=true \
--var auto_deploy=false
# Production deployment (with confirmation)
sah flow run development-workflow \
--var feature_name="user-profile-page" \
--var environment="production" \
--var run_tests=true \
--var auto_deploy=false \
--interactive
Tutorial 3: Team Collaboration Workflow
Create a workflow that coordinates multiple team members.
Step 1: Create Team Workflow
Create `./workflows/team-collaboration.md`:
---
name: team-collaboration-workflow
description: Workflow for coordinated team development
version: "1.5"
initial_state: plan_sprint
timeout_ms: 604800000 # 1 week
variables:
- name: sprint_name
description: Name of the sprint
required: true
type: string
- name: team_members
description: List of team members
required: true
type: array
- name: features
description: Features to implement this sprint
required: true
type: array
- name: sprint_duration
description: Sprint duration in days
type: number
default: 14
---
# Team Collaboration Workflow
Coordinates development across multiple team members.
## States
### plan_sprint
**Description**: Plan the sprint with the team
**Actions**:
- memo: Create sprint planning document
- For each feature in features:
- issue: Create feature issue
- assign: Auto-assign to team members (round-robin)
**Next**: daily_standups
### daily_standups
**Description**: Track daily progress
**Actions**:
- Every day for sprint_duration:
- memo: Update daily standup notes
- For each team member:
- check: Issue progress
- update: Status tracking
**Transitions**:
- If all issues complete → sprint_review
- If sprint_duration reached → sprint_review
- Continue → daily_standups
### sprint_review
**Description**: Review sprint results
**Actions**:
- memo: Create sprint review document
- For each completed issue:
- review: Code review process
- merge: Merge completed work
- For each incomplete issue:
- analyze: Why not completed
- decide: Move to next sprint or close
**Next**: retrospective
### retrospective
**Description**: Team retrospective
**Actions**:
- memo: Create retrospective document with:
- What went well
- What could be improved
- Action items for next sprint
**Next**: END
Step 2: Workflow Execution with Team Coordination
# Start team workflow
sah flow run team-collaboration-workflow \
--var sprint_name="Q1-Sprint-3" \
--var 'team_members=["alice", "bob", "charlie"]' \
--var 'features=["user-auth", "payment-gateway", "dashboard"]' \
--var sprint_duration=10
# Monitor progress
sah flow status <run_id> --watch
# Check team progress
sah issue list --format table
sah memo search "sprint standup"
Tutorial 4: CI/CD Integration Workflow
Integrate workflows with CI/CD systems.
Step 1: Create CI/CD Workflow
Create `./workflows/cicd-integration.md`:
---
name: cicd-integration-workflow
description: Integrates with CI/CD pipeline for automated deployments
version: "2.1"
initial_state: trigger_build
timeout_ms: 2700000 # 45 minutes
variables:
- name: git_branch
description: Git branch to build
required: true
type: string
- name: build_type
description: Type of build to create
default: "release"
choices: ["debug", "release", "test"]
- name: deploy_targets
description: Deployment targets
type: array
default: ["staging"]
- name: slack_webhook
description: Slack webhook for notifications
type: string
load_from_env: "SLACK_WEBHOOK_URL"
---
# CI/CD Integration Workflow
Coordinates with external CI/CD systems.
## States
### trigger_build
**Description**: Trigger the CI/CD build
**Actions**:
- shell: `git checkout {{git_branch}}`
- shell: `git pull origin {{git_branch}}`
- api_call: Trigger CI build via API
- notification: Send build start notification
**Transitions**:
- If build triggered → wait_for_build
- If trigger failed → build_failed
### wait_for_build
**Description**: Wait for build completion
**Actions**:
- poll: Check build status every 30 seconds
- timeout: 1800 seconds (30 minutes)
**Transitions**:
- If build successful → run_tests
- If build failed → build_failed
- If timeout → build_timeout
### run_tests
**Description**: Execute test suite
**Actions**:
- parallel:
- shell: `npm run test:unit`
- shell: `npm run test:integration`
- shell: `npm run test:e2e`
- shell: `npm run test:security`
- collect: Test results and coverage
**Transitions**:
- If all tests pass → deploy_staging
- If any test fails → test_failed
### deploy_staging
**Description**: Deploy to staging environment
**Actions**:
- shell: Deploy to staging
- shell: Run smoke tests
- notification: Notify team of staging deployment
**Transitions**:
- If "production" in deploy_targets → deploy_production
- else → deployment_complete
### deploy_production
**Description**: Deploy to production (with approval)
**Actions**:
- require_approval: Manual approval required
- shell: Deploy to production with blue-green deployment
- shell: Run production smoke tests
- notification: Notify team of production deployment
**Next**: deployment_complete
### deployment_complete
**Description**: Finalize deployment
**Actions**:
- memo: Record deployment details
- notification: Send success notification
- cleanup: Clean up temporary resources
**Next**: END
### build_failed
**Description**: Handle build failure
**Actions**:
- memo: Record build failure details
- notification: Send failure notification to team
- analysis: Analyze build logs for common issues
**Next**: END
### test_failed
**Description**: Handle test failure
**Actions**:
- memo: Record test failure details
- notification: Send test failure notification
- shell: Generate test report
**Next**: END
### build_timeout
**Description**: Handle build timeout
**Actions**:
- memo: Record timeout details
- notification: Send timeout notification
- shell: Cancel running build
**Next**: END
Step 2: Integration with External Systems
GitHub Actions Integration:
# .github/workflows/swissarmyhammer.yml
name: SwissArmyHammer Workflow
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
run-workflow:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: cargo install swissarmyhammer
- name: Run CI/CD Workflow
run: |
sah flow run cicd-integration-workflow \
--var git_branch="${GITHUB_REF_NAME}" \
--var build_type="release" \
--var 'deploy_targets=["staging"]'
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Tutorial 5: Debugging and Troubleshooting Workflows
Learn to debug workflows effectively.
Step 1: Enable Debug Mode
# Run workflow with debug output
sah flow run my-workflow --debug --var param="value"
# Enable trace logging
RUST_LOG=swissarmyhammer::workflow=trace sah flow run my-workflow
# Step through workflow interactively
sah flow run my-workflow --interactive --var param="value"
Step 2: Common Issues and Solutions
Issue: Workflow gets stuck in a state
# Check current state
sah flow status <run_id>
# View detailed logs
sah flow logs <run_id> --follow
# Manual state transition (emergency)
sah flow transition <run_id> --to-state "next_state"
Issue: Variables not resolving correctly
# Add debug actions to your workflow
**Actions**:
- debug: Print all variables
- debug: Print specific variable values
- conditional: if debug_mode → detailed_logging
Issue: Actions failing silently
# Check action output
sah flow logs <run_id> --filter "action_result"
# Test actions individually
sah flow test my-workflow --dry-run --var param="value"
Step 3: Workflow Validation
# Validate workflow syntax
sah flow validate ./workflows/my-workflow.md
# Check for common issues
sah flow lint ./workflows/my-workflow.md
# Test workflow without execution
sah flow test ./workflows/my-workflow.md --dry-run
Best Practices for Workflow Development
1. Start Simple
- Begin with linear workflows
- Add complexity incrementally
- Test each addition thoroughly
2. State Design Patterns
# Good: Single responsibility states
### validate_input
**Description**: Validate all input parameters
**Actions**: validation_logic_only
### process_data
**Description**: Process the validated data
**Actions**: processing_logic_only
# Avoid: Multi-purpose states
### validate_and_process
**Description**: Validate input and process data
**Actions**: too_much_complexity
3. Error Handling
# Every state should handle errors
### normal_operation
**Actions**:
- main_action
- on_error → error_handling_state
### error_handling_state
**Actions**:
- log_error
- notify_team
- cleanup_resources
**Transitions**:
- if recoverable → retry_state
- else → failed_state
4. Resource Management
# Always include cleanup
### resource_intensive_task
**Actions**:
- acquire_resources
- main_processing
- cleanup_resources (always runs)
### cleanup_state
**Actions**:
- release_connections
- delete_temporary_files
- update_status
5. Testing Strategies
Unit Test Individual States:
# Test specific workflow state
sah flow test-state my-workflow setup --var param="value"
Integration Testing:
# Test complete workflow path
sah flow test my-workflow --path "setup→process→complete"
Load Testing:
# Run multiple workflow instances
for i in {1..10}; do
sah flow run my-workflow --var id="$i" &
done
wait
Workflow Monitoring and Metrics
Set up Monitoring
# Monitor all running workflows
sah flow monitor --dashboard
# Get workflow metrics
sah flow metrics --workflow my-workflow --period "last-week"
# Set up alerts
sah flow alert --on-failure --on-timeout --webhook "https://hooks.slack.com/..."
Performance Optimization
# Profile workflow execution
---
enable_profiling: true
collect_metrics: true
---
# Optimize slow states
### slow_state
**Actions**:
- parallel: # Run actions in parallel where possible
- action_1
- action_2
- action_3
- cache: # Cache expensive computations
key: "computation_result"
action: expensive_computation
Workflows provide powerful automation capabilities while maintaining clarity and maintainability through their state machine design. These tutorials provide a solid foundation for creating sophisticated development automation.
Template System
SwissArmyHammer uses the Liquid template engine to create dynamic, data-driven prompts and workflows.
Overview
The template system provides:
- Variable Substitution: Replace placeholders with dynamic values
- Conditional Logic: Show/hide content based on conditions
- Loops and Iteration: Process lists and collections
- Filters: Transform and format data
- Environment Integration: Access environment variables and system info
- Custom Extensions: Add domain-specific functionality
Basic Syntax
Variables
Variables are enclosed in double curly braces:
Hello {{name}}!
Welcome to {{project_name}}.
With arguments:
name: "Alice"
project_name: "SwissArmyHammer"
Renders as:
Hello Alice!
Welcome to SwissArmyHammer.
Comments
{% comment %}
This is a comment that won't appear in output
{% endcomment %}
Note that standard Liquid does not support `{# ... #}`-style comments; use `{% comment %}` blocks as shown above.
Control Structures
Conditionals
If/Else
{% if user.premium %}
Welcome to Premium features!
{% else %}
Consider upgrading to Premium.
{% endif %}
Multiple Conditions
{% if language == "rust" %}
Use `cargo build` to compile.
{% elsif language == "python" %}
Use `python script.py` to run.
{% elsif language == "javascript" %}
Use `node script.js` to run.
{% else %}
Check your language documentation.
{% endif %}
Complex Conditions
{% if user.premium and feature.enabled %}
Premium feature available!
{% endif %}
{% if environment == "prod" or environment == "staging" %}
⚠️ Production environment detected!
{% endif %}
Loops
For Loops
## Requirements
{% for req in requirements %}
- {{req}}
{% endfor %}
Loop Variables
{% for item in items %}
{{forloop.index}}. {{item.name}}
{% if forloop.first %}(First item){% endif %}
{% if forloop.last %}(Last item){% endif %}
{% endfor %}
Available loop variables:
- `forloop.index` - Current iteration (1-based)
- `forloop.index0` - Current iteration (0-based)
- `forloop.rindex` - Remaining iterations
- `forloop.rindex0` - Remaining iterations (0-based)
- `forloop.first` - True if first iteration
- `forloop.last` - True if last iteration
- `forloop.length` - Total loop length
Filtering in Loops
{% for user in users %}
{% if user.active %}
- {{user.name}} ({{user.role}})
{% endif %}
{% endfor %}
Limiting Loops
{% for item in items limit:5 %}
- {{item}}
{% endfor %}
{% for item in items offset:2 limit:3 %}
- {{item}}
{% endfor %}
Case Statements
{% case language %}
{% when "rust" %}
Rust detected - using Cargo
{% when "python" %}
Python detected - using pip
{% when "javascript", "typescript" %}
Node.js detected - using npm
{% else %}
Unknown language
{% endcase %}
Filters
Filters transform values using the pipe (`|`) operator:
{{name | upcase}}
{{description | truncate: 50}}
{{items | size}}
String Filters
Filter | Description | Example |
---|---|---|
capitalize | Capitalize first letter | `{{text | capitalize}}` |
downcase | Convert to lowercase | `{{text | downcase}}` |
upcase | Convert to uppercase | `{{text | upcase}}` |
strip | Remove whitespace | `{{text | strip}}` |
lstrip | Remove left whitespace | `{{text | lstrip}}` |
rstrip | Remove right whitespace | `{{text | rstrip}}` |
truncate | Limit length | `{{text | truncate: 50}}` |
truncatewords | Limit words | `{{text | truncatewords: 10}}` |
prepend | Add prefix | `{{text | prepend: "prefix: "}}` |
append | Add suffix | `{{text | append: ".md"}}` |
replace | Replace text | `{{text | replace: "old", "new"}}` |
remove | Remove text | `{{text | remove: "unwanted"}}` |
split | Split into array | `{{text | split: ","}}` |
slice | Extract substring | `{{text | slice: 0, 10}}` |
Array Filters
| Filter | Description | Example |
|---|---|---|
| `join` | Join elements | `{{array \| join: ", "}}` |
| `first` | Get first element | `{{array \| first}}` |
| `last` | Get last element | `{{array \| last}}` |
| `size` | Get length | `{{array \| size}}` |
| `sort` | Sort elements | `{{array \| sort}}` |
| `reverse` | Reverse order | `{{array \| reverse}}` |
| `uniq` | Remove duplicates | `{{array \| uniq}}` |
| `compact` | Remove nil values | `{{array \| compact}}` |
| `map` | Extract property | `{{users \| map: "name"}}` |
| `where` | Filter by property | `{{users \| where: "active", true}}` |
Numeric Filters
| Filter | Description | Example |
|---|---|---|
| `plus` | Addition | `{{num \| plus: 5}}` |
| `minus` | Subtraction | `{{num \| minus: 2}}` |
| `times` | Multiplication | `{{num \| times: 3}}` |
| `divided_by` | Division | `{{num \| divided_by: 4}}` |
| `modulo` | Remainder | `{{num \| modulo: 2}}` |
| `round` | Round number | `{{num \| round}}` |
| `ceil` | Round up | `{{num \| ceil}}` |
| `floor` | Round down | `{{num \| floor}}` |
| `abs` | Absolute value | `{{num \| abs}}` |
Date Filters
{{date | date: "%Y-%m-%d"}}
{{date | date: "%B %d, %Y"}}
{{now | date: "%H:%M:%S"}}
Format strings:
- `%Y` - 4-digit year
- `%m` - Month (01-12)
- `%d` - Day (01-31)
- `%H` - Hour (00-23)
- `%M` - Minute (00-59)
- `%S` - Second (00-59)
- `%B` - Full month name
- `%A` - Full weekday name
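For example, assuming the template renders on January 15, 2024, the line:
```liquid
{{now | date: "%B %d, %Y"}}
```
would produce `January 15, 2024`.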
Utility Filters
| Filter | Description | Example |
|---|---|---|
| `default` | Default value | `{{value \| default: "N/A"}}` |
| `escape` | HTML escape | `{{html \| escape}}` |
| `escape_once` | Escape unescaped | `{{html \| escape_once}}` |
| `url_encode` | URL encoding | `{{text \| url_encode}}` |
| `strip_html` | Remove HTML tags | `{{html \| strip_html}}` |
| `newline_to_br` | Convert `\n` to `<br>` | `{{text \| newline_to_br}}` |
Custom Filters
SwissArmyHammer includes programming-specific filters:
Case Conversion Filters
{{variable_name | snake_case}} <!-- becomes snake_case -->
{{variable_name | kebab_case}} <!-- becomes kebab-case -->
{{variable_name | pascal_case}} <!-- becomes PascalCase -->
{{variable_name | camel_case}} <!-- becomes camelCase -->
Code Formatting Filters
{{code | code_block: "rust"}} <!-- Wraps in ```rust block -->
{{text | markdown_escape}} <!-- Escapes markdown chars -->
{{number | pluralize: "item"}} <!-- "1 item" or "2 items" -->
Path Filters
{{file_path | dirname}} <!-- Get directory -->
{{file_path | basename}} <!-- Get filename -->
{{file_path | extname}} <!-- Get extension -->
Advanced Features
Variable Assignment
{% assign full_name = first_name | append: " " | append: last_name %}
{% assign item_count = items | size %}
{% assign formatted_date = now | date: "%Y-%m-%d" %}
Hello {{full_name}}!
You have {{item_count}} items.
Today is {{formatted_date}}.
Capture Blocks
{% capture error_message %}
Error in {{file_name}} at line {{line_number}}: {{error_description}}
{% endcapture %}
{% if show_errors %}
**Error**: {{error_message}}
{% endif %}
Include Templates
Create reusable template fragments:
File: templates/header.liquid
# {{title | default: "Untitled"}}
**Generated**: {{now | date: "%Y-%m-%d %H:%M"}}
---
Usage:
{% include "header" %}
Main content goes here...
Template Inheritance
Base template (`base.liquid`):
# {{title}}
{% block content %}
Default content
{% endblock %}
---
Generated by SwissArmyHammer
Child template:
{% extends "base" %}
{% block content %}
Custom content for this template
{% endblock %}
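With `title` set to, say, `Release Notes` (an assumed value), the child template renders roughly as:
```markdown
# Release Notes
Custom content for this template
---
Generated by SwissArmyHammer
```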
Environment Integration
Environment Variables
Project: {{PROJECT_NAME | default: "Unknown"}}
Environment: {{NODE_ENV | default: "development"}}
User: {{USER}}
Home: {{HOME}}
Current Directory: {{PWD}}
System Information
OS: {{OSTYPE | default: "unknown"}}
Shell: {{SHELL | default: "unknown"}}
Path: {{PATH | truncate: 100}}
Git Information
Branch: {{GIT_BRANCH | default: "main"}}
Commit: {{GIT_COMMIT | slice: 0, 8}}
Repository: {{GIT_REMOTE_URL | replace: ".git", ""}}
Context Variables
SwissArmyHammer provides built-in context variables:
File Context
Current file: {{file.path}}
File size: {{file.size}} bytes
Modified: {{file.modified | date: "%Y-%m-%d"}}
Extension: {{file.extension}}
Workflow Context
Workflow: {{workflow.name}}
State: {{workflow.current_state}}
Started: {{workflow.start_time | date: "%H:%M:%S"}}
Elapsed: {{workflow.elapsed_ms}}ms
Issue Context
Issue: {{issue.name}}
Status: {{issue.status}}
Branch: {{issue.branch}}
Created: {{issue.created | date: "%B %d, %Y"}}
Error Handling
Graceful Degradation
{% if user.name %}
Hello {{user.name}}!
{% else %}
Hello there!
{% endif %}
Files: {{files | size | default: 0}}
Debugging Templates
{% comment %}Debug: {{variable | inspect}}{% endcomment %}
{% if debug_mode %}
**Debug Info**:
- Variable: {{variable}}
- Type: {{variable | type}}
- Size: {{variable | size}}
{% endif %}
Performance Considerations
Efficient Loops
{% comment %}Good: Filter before looping{% endcomment %}
{% assign active_users = users | where: "active", true %}
{% for user in active_users %}
- {{user.name}}
{% endfor %}
{% comment %}Avoid: Filtering inside loop{% endcomment %}
{% for user in users %}
{% if user.active %}
- {{user.name}}
{% endif %}
{% endfor %}
Variable Reuse
{% assign processed_data = raw_data | sort | uniq | slice: 0, 10 %}
Count: {{processed_data | size}}
Items: {{processed_data | join: ", "}}
Conditional Computation
{% if expensive_operation_needed %}
{% assign result = data | expensive_filter %}
Result: {{result}}
{% endif %}
Best Practices
Template Organization
- Keep templates focused - One purpose per template
- Use meaningful names - Clear, descriptive template names
- Document complex logic - Use comments for complex conditionals
- Validate inputs - Check for required variables
- Provide defaults - Use the `default` filter for optional values (see the sketch below)
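A minimal sketch of the last two practices, assuming a template that takes a required `component` argument and an optional `audience` argument (both names are illustrative):
```liquid
{% if component and component != "" %}
Review the {{component}} component for the {{audience | default: "general"}} audience.
{% else %}
**Error**: the `component` argument is required.
{% endif %}
```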
Code Style
{% comment %}Good: Clean, readable formatting{% endcomment %}
{% if environment == "production" %}
{% assign base_url = "https://api.example.com" %}
{% else %}
{% assign base_url = "http://localhost:3000" %}
{% endif %}
API Endpoint: {{base_url}}/{{endpoint | default: "status"}}
{% comment %}Avoid: Cramped, hard to read{% endcomment %}
API: {% if environment=="production"%}https://api.example.com{%else%}http://localhost:3000{%endif%}/{{endpoint|default:"status"}}
Security
{% comment %}Always escape user input{% endcomment %}
User: {{user_input | escape}}
{% comment %}Validate before use{% endcomment %}
{% if branch_name and branch_name != "" %}
Branch: {{branch_name | escape}}
{% endif %}
{% comment %}Use safe defaults{% endcomment %}
Environment: {{environment | default: "development" | escape}}
This template system provides powerful capabilities for creating dynamic, context-aware prompts and workflows while maintaining readability and maintainability.
Issue Management
SwissArmyHammer provides a powerful issue tracking system that integrates directly with your Git workflow. Issues are stored as Markdown files in your repository, creating a self-contained, version-controlled task management system.
Overview
The issue management system allows you to:
- Create and track work items as Markdown files
- Automatically generate unique issue identifiers
- Create feature branches for issue work
- Merge completed issues back to their source branch
- Track issue lifecycle and status
- Search and organize issues efficiently
Core Concepts
Issue Structure
Issues are stored as Markdown files in the `./issues/` directory with the following structure:
project/
├── issues/
│ ├── FEATURE_001_user-authentication.md
│ ├── BUG_002_login-validation.md
│ └── complete/
│ └── REFACTOR_003_code-cleanup.md
Issue Naming
Issues follow a structured naming convention:
- `TYPE_NUMBER_description` (e.g., `FEATURE_001_user-auth`)
- Or an auto-generated ULID for unnamed issues: `01K0Q4V1N0V35TQEDPXPE1HF7Z.md`
Supported issue types:
- `FEATURE` - New functionality
- `BUG` - Bug fixes
- `REFACTOR` - Code improvements
- `DOCS` - Documentation updates
- Custom types as needed
Basic Usage
Creating Issues
Create a new issue with a name:
sah issue create --name "feature_user_auth" --content "# User Authentication
Implement user login and registration system.
## Requirements
- Email/password login
- Session management
- Password reset functionality
"
Create a quick unnamed issue:
echo "# Quick Bug Fix
Fix the validation error in login form" | sah issue create
Create from file:
sah issue create --file issue_template.md
Listing Issues
List all active issues:
sah issue list
Include completed issues:
sah issue list --completed --active
Output in different formats:
sah issue list --format json
sah issue list --format table
Viewing Issues
Show a specific issue:
sah issue show FEATURE_001_user-auth
Show the current issue (based on branch):
sah issue show current
Show the next pending issue:
sah issue show next
Raw content only:
sah issue show FEATURE_001_user-auth --raw
Git Integration
Working on Issues
Start work on an issue (creates/switches to branch):
sah issue work FEATURE_001_user-auth
This creates and switches to a branch named `issue/FEATURE_001_user-auth`.
Current Issue Status
Check which issue you’re currently working on:
sah issue current
Show overall status:
sah issue status
Completing Issues
Mark an issue as complete:
sah issue complete FEATURE_001_user-auth
This moves the issue file to `./issues/complete/`.
Merging Issue Work
Merge completed issue work back to source branch:
sah issue merge FEATURE_001_user-auth
Delete the branch after merging:
sah issue merge FEATURE_001_user-auth --delete-branch
Advanced Features
Updating Issues
Add content to existing issues:
sah issue update FEATURE_001_user-auth --content "
## Additional Context
Added OAuth integration requirements.
" --append
Replace entire content:
sah issue update FEATURE_001_user-auth --file updated_requirements.md
Issue Templates
Create template files for consistent issue creation:
# {ISSUE_TYPE}: {TITLE}
## Description
Brief description of the issue.
## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
## Technical Notes
Implementation details and considerations.
## Testing
Testing approach and requirements.
Branch Management Strategies
Feature Branches: Each issue gets its own branch
sah issue work FEATURE_001_user-auth
# Work on feature
git commit -m "implement user authentication"
sah issue complete FEATURE_001_user-auth
sah issue merge FEATURE_001_user-auth --delete-branch
Long-running Features: Keep branches for complex features
sah issue work FEATURE_001_user-auth
# Multiple commits over time
git commit -m "add login form"
git commit -m "add validation"
git commit -m "add tests"
sah issue merge FEATURE_001_user-auth # Keep branch for future work
Organization and Search
Issue Organization
Organize issues using:
- Directory structure: Group related issues in subdirectories
- Naming conventions: Use consistent prefixes and descriptions
- Tags: Add tags within issue content for categorization
- Labels: Use Markdown headers and lists for status tracking
Searching Issues
Search issue content:
sah search query "authentication login"
Use grep for specific patterns:
grep -r "TODO" issues/
List issues by type:
ls issues/FEATURE_*
ls issues/BUG_*
Best Practices
Issue Creation
- Use descriptive names that clearly identify the work
- Include clear acceptance criteria
- Add relevant context and background
- Link to related issues or documentation
Content Structure
# Issue Title
## Summary
Brief overview of what needs to be done.
## Details
Detailed requirements and specifications.
## Acceptance Criteria
- [ ] Specific, measurable criteria
- [ ] Each criterion should be testable
- [ ] Include both functional and non-functional requirements
## Technical Notes
Implementation approach, architecture decisions, dependencies.
## Resources
- Links to relevant documentation
- Related issues or PRs
- Design mockups or specifications
Workflow Integration
- Create issues before starting work
- Use descriptive commit messages that reference issues
- Review and update issues as work progresses
- Mark issues complete only when fully tested
- Clean up branches regularly
Team Collaboration
- Use consistent naming conventions
- Include team members in issue discussions
- Document decisions and changes in issue comments
- Use issue references in commit messages and PRs
Integration with Workflows
Issues integrate seamlessly with SwissArmyHammer workflows:
# example_workflow.md
## Workflow: Issue Development
1. Create issue: `sah issue create --name "feature_name"`
2. Start work: `sah issue work {issue_name}`
3. Development cycle:
- Code changes
- Commit with issue reference
- Update issue with progress
4. Complete: `sah issue complete {issue_name}`
5. Merge: `sah issue merge {issue_name} --delete-branch`
Troubleshooting
Common Issues
Issue not found:
- Check issue name spelling
- Verify the issue exists: `sah issue list`
- Check if the issue was completed: `sah issue list --completed`
Branch conflicts:
- Ensure working directory is clean before switching branches
- Resolve merge conflicts before completing issues
- Use `git status` to check the current state
Git integration problems:
- Verify Git repository is initialized
- Check branch permissions and remote access
- Ensure working directory is within a Git repository
Error Messages
“Issue already exists”: Issue name conflicts with existing issue
“Branch already exists”: Git branch name conflicts - use different issue name or clean up branches
“No current issue”: Not currently on an issue branch - use `sah issue current` to check status
Migration and Maintenance
Migrating from Other Systems
Convert from linear issue numbers:
# Convert JIRA-style issues
for issue in PROJ-*.md; do
  mv "$issue" "FEATURE_$(basename "$issue" .md | sed 's/PROJ-//')_$(head -1 "$issue" | tr ' ' '-').md"
done
Cleanup and Maintenance
Regularly clean up completed issues:
# Archive old completed issues
mkdir -p issues/archive/$(date +%Y)
mv issues/complete/*.md issues/archive/$(date +%Y)/
Remove stale branches:
# List issue branches
git branch | grep "issue/"
# Clean up merged branches
git branch -d issue/FEATURE_001_user-auth
API Reference
For programmatic access to issue management, see the Rust API documentation.
Key types and functions:
- `IssueName` - Type-safe issue name handling
- `IssueStorage` - Issue persistence interface
- `create_issue()` - Create new issues programmatically
- `list_issues()` - Query and filter issues
- `issue_lifecycle()` - Manage issue state transitions
Memoranda System
The memoranda system in SwissArmyHammer provides a powerful note-taking and knowledge management solution designed for developers and teams. It stores notes as structured documents with full-text search capabilities and seamless integration with your development workflow.
Overview
The memoranda system enables you to:
- Create and organize notes with unique identifiers
- Full-text search across all memo content
- Export and import memo collections
- Integration with issues and workflows
- Version-controlled knowledge base
- Collaborative note sharing
Core Concepts
Memo Structure
Memoranda are stored with the following structure:
- Title: Human-readable memo title
- Content: Markdown-formatted memo body
- ID: Unique ULID identifier (e.g., `01ARZ3NDEKTSV4RRFFQ69G5FAV`)
- Timestamp: Creation and modification times
- Metadata: Additional structured data
Storage Format
Memos are stored in a structured format that supports:
- Efficient querying and indexing
- Full-text search capabilities
- Metadata extraction and filtering
- Import/export operations
- Version tracking
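As an illustration only, a single memo record exported via `sah memo list --format json` might look roughly like this (the field names are assumptions based on the structure above, not the tool's exact schema):
```json
{
  "id": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
  "title": "Project Architecture Notes",
  "content": "# System Architecture\n\n## Overview\n...",
  "created": "2024-01-15T10:30:00Z",
  "updated": "2024-01-15T10:30:00Z"
}
```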
Basic Operations
Creating Memos
Create a new memo:
sah memo create --title "Project Architecture Notes" --content "
# System Architecture
## Overview
The system follows a modular architecture with clear separation of concerns.
## Components
- API Gateway: Handles external requests
- Service Layer: Business logic processing
- Data Layer: Persistence and caching
## Design Decisions
- Microservices for scalability
- Event-driven communication
- CQRS pattern for complex queries
"
Create from file:
sah memo create --title "Meeting Notes" --file meeting_2024_01_15.md
Interactive creation:
echo "Quick note about bug in login validation" | sah memo create --title "Login Bug"
Listing Memos
List all memos with previews:
sah memo list
Example output:
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Title: Project Architecture Notes
Created: 2024-01-15T10:30:00Z
Preview: # System Architecture\n\n## Overview\nThe system follows...
ID: 01BSZ4OFDLTSV5SSGGQ70H6GBW
Title: API Design Guidelines
Created: 2024-01-14T15:45:00Z
Preview: # API Standards\n\n## REST Conventions\nAll endpoints should...
Viewing Memos
Get a specific memo by ID:
sah memo get 01ARZ3NDEKTSV4RRFFQ69G5FAV
Get all memo content for AI context:
sah memo get-all-context
This returns all memos sorted by most recent first, formatted for AI consumption.
Searching Memos
Search across memo titles and content:
sah memo search "architecture microservices"
Search results include:
- Matching memo titles and IDs
- Content excerpts with highlighted terms
- Relevance ranking
- Creation timestamps
Example output:
Found 3 matching memos:
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Title: Project Architecture Notes
Match: ...system follows a modular **architecture** with clear separation...
...chose **microservices** for scalability and maintainability...
ID: 01BSZ4OFDLTSV5SSGGQ70H6GBW
Title: Service Design Patterns
Match: ...**microservices** communication patterns include...
Advanced Features
Updating Memos
Update memo content:
sah memo update 01ARZ3NDEKTSV4RRFFQ69G5FAV --content "
# Updated System Architecture
## New Requirements
Added real-time messaging capabilities.
## Implementation Notes
- WebSocket connections for live updates
- Message queuing for reliability
- Load balancing for scalability
"
The title remains unchanged when updating content.
Deleting Memos
Remove a memo permanently:
sah memo delete 01ARZ3NDEKTSV4RRFFQ69G5FAV
Warning: This action cannot be undone.
Organization Strategies
Categorization by Title
Use consistent title patterns:
# Project documentation
sah memo create --title "[PROJECT] Architecture Overview"
sah memo create --title "[PROJECT] API Documentation"
# Meeting notes
sah memo create --title "[MEETING] Team Standup 2024-01-15"
sah memo create --title "[MEETING] Architecture Review"
# Learning notes
sah memo create --title "[LEARN] Rust Async Programming"
sah memo create --title "[LEARN] Database Optimization"
Content Structure
Organize memo content with consistent structure:
# Topic Title
## Summary
Brief overview of the topic.
## Key Points
- Main concept 1
- Main concept 2
- Main concept 3
## Details
Comprehensive information, code examples, and explanations.
## References
- [Link 1](https://example.com)
- Related memos: 01ARZ3NDEKTSV4RRFFQ69G5FAV
## Action Items
- [ ] Task 1
- [ ] Task 2
## Tags
#architecture #microservices #design-patterns
Linking Related Content
Reference other memos by ID:
See also:
- Architecture overview: 01ARZ3NDEKTSV4RRFFQ69G5FAV
- API guidelines: 01BSZ4OFDLTSV5SSGGQ70H6GBW
Cross-reference with issues:
Related to issue: FEATURE_001_user-authentication
Search Capabilities
Full-Text Search
The search system provides:
- Term matching: Find exact words and phrases
- Partial matching: Match word stems and variations
- Boolean logic: Combine terms with AND/OR operations
- Phrase search: Use quotes for exact phrase matching
Search examples:
# Single term
sah memo search "microservices"
# Multiple terms (AND)
sah memo search "architecture design patterns"
# Phrase search
sah memo search "\"API gateway pattern\""
# Technical terms
sah memo search "async await"
Search Tips
Effective search strategies:
- Use specific technical terms
- Combine related concepts
- Search by project or component names
- Use tag-like terms for categorization
Search optimization:
- Include synonyms and variations
- Use both technical and common terms
- Search by date patterns in titles
- Combine title and content terms
Integration with Development Workflow
With Issues
Link memos to issues for comprehensive documentation:
# Issue Research: FEATURE_001_user-auth
## Background Research
Created memo: 01ARZ3NDEKTSV4RRFFQ69G5FAV - "OAuth Implementation Patterns"
## Design Decisions
Documented in memo: 01BSZ4OFDLTSV5SSGGQ70H6GBW - "Authentication Architecture"
## Implementation Notes
See memo: 01CSZ5PGEMT7V6TTHHQ81I7HCX - "User Session Management"
With Workflows
Incorporate memo creation into workflows:
# Development Workflow
1. Research phase:
- `sah memo create --title "[RESEARCH] {topic}"`
- Document findings and decisions
2. Design phase:
- `sah memo create --title "[DESIGN] {component}"`
- Architecture and interface documentation
3. Implementation phase:
- `sah memo create --title "[IMPL] {feature}"`
- Implementation notes and gotchas
4. Review phase:
- `sah memo search "{project} implementation"`
- Review and consolidate learnings
Knowledge Sharing
Use memos for team knowledge sharing:
# Team Knowledge Base
## Onboarding
- System Overview: 01ARZ3NDEKTSV4RRFFQ69G5FAV
- Development Setup: 01BSZ4OFDLTSV5SSGGQ70H6GBW
- Code Standards: 01CSZ5PGEMT7V6TTHHQ81I7HCX
## Architecture
- Service Architecture: 01DSZ6QHFNU8W7UUIIR92J8IDY
- Database Schema: 01ESZ7RIGOV9X8VVJJS03K9JEZ
- API Design: 01FSZ8SJHPWAZ9WWKKTP4LAKFA
Export and Import
Exporting Memos
Export all memos for backup or sharing:
# Export to JSON
sah memo list --format json > memos_backup.json
# Export individual memo
sah memo get 01ARZ3NDEKTSV4RRFFQ69G5FAV --format markdown > memo_export.md
Importing Memos
Import from external systems:
# Convert from other formats
cat external_notes.md | sah memo create --title "Imported Notes"
# Bulk import from directory
for file in notes/*.md; do
sah memo create --title "$(basename "$file" .md)" --file "$file"
done
Best Practices
Content Creation
Write clear, searchable content:
- Use descriptive titles with keywords
- Include technical terms and concepts
- Add context and background information
- Structure content with headers and lists
Make content discoverable:
- Add relevant tags and keywords
- Include synonyms for technical terms
- Reference related memos and issues
- Use consistent naming conventions
Organization
Develop a taxonomy:
[CATEGORY] Specific Topic
[PROJECT-NAME] Component/Feature
[MEETING] Date and Participants
[RESEARCH] Technology/Approach
[DECISION] What was decided
[HOW-TO] Step-by-step guides
Maintain memo hygiene:
- Regularly review and update content
- Remove outdated or duplicate information
- Consolidate related memos when appropriate
- Archive historical memos that are no longer relevant
Collaborative Use
Team conventions:
- Agree on title formatting standards
- Define categories and tags
- Establish update and ownership policies
- Create shared memo indexes for important topics
Knowledge management:
- Regular knowledge sharing sessions
- Memo review and consolidation processes
- Cross-team memo sharing mechanisms
- Documentation of team decisions and rationales
Troubleshooting
Common Issues
Memo not found:
- Verify the ULID is correct
- Check if memo was deleted
- Use `sah memo list` to browse available memos
Search returns no results:
- Check spelling and terminology
- Try broader or more specific terms
- Search for partial words or phrases
- Verify memos exist with expected content
Performance issues:
- Large memo collections may have slower search
- Consider archiving old memos
- Use more specific search terms
Error Messages
“Invalid ULID”: Check that the memo ID is a valid ULID format
“Memo not found”: The specified memo ID doesn’t exist
“Content too large”: Memo content exceeds size limits
API Integration
For programmatic access to the memoranda system, see the Rust API documentation.
Key operations:
- `memo_create()` - Create new memos
- `memo_search()` - Search memo content
- `memo_get()` - Retrieve specific memos
- `memo_list()` - List all memos with metadata
- `memo_update()` - Modify existing memo content
- `memo_delete()` - Remove memos permanently
The memoranda system provides a foundation for building institutional knowledge and supporting effective development workflows through organized, searchable documentation.
Semantic Search
SwissArmyHammer’s semantic search system provides intelligent code search capabilities using vector embeddings and AI-powered similarity matching. Unlike traditional text-based search, semantic search understands the meaning and context of code, enabling more accurate and relevant results.
Overview
The semantic search system offers:
- Vector-based search: Uses embeddings to understand code semantics
- Multi-language support: Rust, Python, TypeScript, JavaScript, Dart
- Code-aware parsing: TreeSitter integration for structured code analysis
- Local processing: All embeddings computed locally with no external API calls
- Performance optimization: Efficient indexing and caching
- Incremental updates: Only re-index changed files
Core Concepts
Semantic Understanding
Traditional text search finds exact matches:
grep "function login" *.js # Finds only exact phrase
Semantic search understands meaning:
sah search query "user authentication" # Finds related concepts:
# - login functions
# - auth middleware
# - session management
# - password validation
Vector Embeddings
Code is converted to high-dimensional vectors that capture semantic meaning:
- Similar code produces similar vectors
- Related concepts cluster together in vector space
- Similarity measured by vector distance
- AI model trained specifically for code understanding
Code Structure Awareness
TreeSitter parsing provides structured understanding:
- Function definitions and implementations
- Class hierarchies and relationships
- Module dependencies and imports
- Documentation and comments
- Type information and signatures
Getting Started
Indexing Files
Before searching, index your codebase:
# Index all Rust files
sah search index "**/*.rs"
# Index multiple file types
sah search index "**/*.rs" "**/*.py" "**/*.ts"
# Index specific directories
sah search index "src/**/*.rs" "lib/**/*.rs"
# Force re-index all files
sah search index "**/*.rs" --force
Basic Search
Search indexed code:
# Basic search
sah search query "error handling"
# Limit results
sah search query "async function implementation" --limit 5
# Search specific concepts
sah search query "database connection pooling"
Search Results
Results include:
{
"results": [
{
"file_path": "src/auth.rs",
"chunk_text": "fn handle_auth_error(e: AuthError) -> Result<Response> { ... }",
"line_start": 42,
"line_end": 48,
"similarity_score": 0.87,
"language": "rust",
"chunk_type": "Function",
"excerpt": "...fn handle_auth_error(e: AuthError) -> Result<Response> {..."
}
],
"query": "error handling",
"total_results": 1,
"execution_time_ms": 123
}
Supported Languages
Rust (.rs)
- Functions and methods
- Structs and enums
- Traits and implementations
- Modules and use statements
- Type definitions
- Documentation comments
Python (.py)
- Functions and methods
- Classes and inheritance
- Decorators and properties
- Import statements
- Type hints
- Docstrings
TypeScript (.ts)
- Functions and arrow functions
- Classes and interfaces
- Type definitions
- Import/export statements
- Generics and constraints
- JSDoc comments
JavaScript (.js)
- Functions (regular and arrow)
- Classes and prototypes
- Module imports/exports
- Object methods
- Closure patterns
- Comments
Dart (.dart)
- Functions and methods
- Classes and mixins
- Constructors
- Type definitions
- Library imports
- Documentation comments
Plain Text Fallback
Files that cannot be parsed with TreeSitter are indexed as plain text with basic symbol extraction.
Advanced Usage
Indexing Strategies
Project-wide indexing:
# Index entire codebase
sah search index "**/*.{rs,py,ts,js,dart}"
Selective indexing:
# Index only source directories
sah search index "src/**/*" "lib/**/*" "crates/**/*"
# Exclude test files
sah search index "**/*.rs" --exclude "**/*test*.rs" "**/*spec*.rs"
Incremental updates:
# Only re-index changed files (default behavior)
sah search index "**/*.rs"
# Force complete re-indexing
sah search index "**/*.rs" --force
Search Query Optimization
Effective queries:
# Specific concepts
sah search query "error handling patterns"
sah search query "async database operations"
sah search query "HTTP request middleware"
# Implementation details
sah search query "trait implementation for serialization"
sah search query "React component lifecycle hooks"
sah search query "memory management and cleanup"
Query strategies:
- Use domain-specific terminology
- Combine related concepts
- Include both high-level and specific terms
- Search for patterns and implementations
Result Filtering
Limit results:
sah search query "authentication" --limit 10
Similarity thresholds: Results are automatically filtered by similarity score (typically > 0.5).
File type filtering: Search specific languages by indexing only those files:
sah search index "**/*.rs" # Index only Rust
sah search query "memory safety" # Will only search Rust files
Performance Optimization
Indexing Performance
First-time setup:
- Initial embedding model download (~100MB)
- TreeSitter parser compilation
- Complete codebase analysis
- Can take several minutes for large projects
Subsequent runs:
- Model cached locally
- Only changed files re-indexed
- Incremental updates are fast
- Vector database optimized for queries
Optimization tips:
# Index incrementally
sah search index "src/**/*.rs" # Start with core source
sah search index "lib/**/*.rs" # Add libraries
sah search index "tests/**/*.rs" # Add tests last
# Use specific patterns
sah search index "src/main.rs" "src/lib.rs" # Critical files first
Query Performance
Fast queries:
- Embedding model loaded once
- Vector similarity computed efficiently
- Results cached for repeated queries
- Logarithmic scaling with index size
Performance characteristics:
- First query: ~1-2 seconds (model loading)
- Subsequent queries: ~100-300ms
- Scales well to large codebases (10k+ files)
- Memory usage scales with index size
Storage
Index location:
- Stored in `.swissarmyhammer/search.db`
- DuckDB database for efficient storage
- Automatically added to `.gitignore`
- Portable across machines
Storage size:
- ~1-5MB per 1000 source files
- Compressed vector representations
- Metadata and text chunks
- Grows linearly with codebase size
Integration with Development Workflow
Code Exploration
Understanding new codebases:
# Find authentication systems
sah search query "user authentication login"
# Locate error handling patterns
sah search query "error handling Result Option"
# Find database interactions
sah search query "database query connection"
# Discover API endpoints
sah search query "HTTP route handler endpoint"
Architecture analysis:
# Find design patterns
sah search query "factory pattern builder"
# Locate configuration management
sah search query "config settings environment"
# Find testing utilities
sah search query "test helper mock fixture"
Refactoring Support
Before refactoring:
# Find all usages of a concept
sah search query "user session management"
# Locate similar implementations
sah search query "validation input sanitization"
# Find related error types
sah search query "CustomError DatabaseError"
Impact analysis:
# Find dependencies
sah search query "imports {module_name}"
# Locate similar patterns
sah search query "{old_pattern}" --limit 20
Code Review
Review preparation:
# Understand changed areas
sah search query "{feature_area} implementation"
# Find related code
sah search query "{component} {functionality}"
# Check for similar patterns
sah search query "{new_pattern} {approach}"
Documentation and Learning
Knowledge discovery:
# Learn from existing code
sah search query "async streaming data processing"
# Find implementation examples
sah search query "trait object dynamic dispatch"
# Discover best practices
sah search query "error propagation handling"
Use Cases
Code Discovery
Finding functionality:
- “How is logging implemented?”
- “Where are HTTP requests handled?”
- “How is database connection managed?”
- “What validation logic exists?”
Pattern recognition:
- “Find all factory patterns”
- “Locate builder implementations”
- “Show async processing examples”
- “Find error handling approaches”
Maintenance and Debugging
Issue investigation:
- “Find error handling for network timeouts”
- “Locate memory leak prevention code”
- “Show panic handling strategies”
- “Find resource cleanup patterns”
Code quality analysis:
- “Find duplicate logic patterns”
- “Locate complex functions”
- “Show outdated API usage”
- “Find security-sensitive code”
Learning and Onboarding
New team members:
- “Show authentication flow”
- “Find configuration examples”
- “Locate test patterns”
- “Show deployment procedures”
Technology adoption:
- “Find async/await usage”
- “Show generic implementations”
- “Locate macro usage”
- “Find trait implementations”
Best Practices
Indexing Strategy
Comprehensive coverage:
# Include all source languages
sah search index "**/*.{rs,py,ts,js,dart,go,java,cpp,h}"
# Exclude generated and vendor code
sah search index "src/**/*" "lib/**/*" --exclude "target/**/*" "node_modules/**/*"
Regular maintenance:
# Re-index after major changes
git pull && sah search index "**/*.rs" --force
# Update index with new files
sah search index "**/*.rs" # Incremental by default
Query Techniques
Start broad, then narrow:
sah search query "authentication" # Broad overview
sah search query "JWT token validation" # Specific implementation
sah search query "auth middleware setup" # Particular aspect
Use domain terminology:
# Good: specific terms
sah search query "HTTP request serialization"
sah search query "database transaction rollback"
sah search query "async stream processing"
# Less effective: generic terms
sah search query "data processing"
sah search query "network code"
Result Analysis
Evaluate relevance:
- Higher similarity scores (>0.8) indicate close matches
- Review context around matching code chunks
- Consider file paths and locations
- Examine related functions and types
Follow-up searches:
- Use findings to refine queries
- Search for related patterns
- Explore connected functionality
- Verify implementations across codebase
Troubleshooting
Indexing Issues
Model download fails:
- Check internet connectivity
- Verify disk space (need ~200MB)
- Try again - downloads resume automatically
Parsing errors:
- Most files will parse successfully
- Unparseable files indexed as plain text
- Check TreeSitter language support
Performance problems:
# Check index size
ls -la .swissarmyhammer/search.db
# Re-index with smaller scope
sah search index "src/**/*.rs" # Just source code
Search Issues
No results found:
- Verify files are indexed: check that `.swissarmyhammer/search.db` exists
- Check if search terms match code language/style
- Re-index if codebase has changed significantly
Irrelevant results:
- Use more specific terminology
- Combine multiple concepts in query
- Consider different phrasing
- Try exact technical terms from your domain
Slow queries:
- First query loads model (normal delay)
- Large result sets take longer to return
- Reduce result limit for faster response
- Check available memory for large indices
Common Errors
“Index not found”: Run `sah search index` first
“Model initialization failed”: Check disk space and permissions
“No matching files”: Verify glob patterns and file paths
“Query too short”: Use queries with at least 2-3 meaningful words
Integration with Other Tools
With Issue Management
Link search results to issues:
# Find related code for issue
sah search query "user authentication session" > issue_research.md
sah issue update FEATURE_001_auth --file issue_research.md --append
With Workflows
Incorporate search into development workflows:
# Research Workflow
1. Search for existing implementations:
`sah search query "{feature_concept}"`
2. Analyze patterns and approaches:
Review results for design patterns
3. Document findings:
`sah memo create --title "[RESEARCH] {feature}"`
4. Plan implementation:
Use findings to inform architecture decisions
With External Tools
IDE Integration:
- Export search results to files for IDE viewing
- Use results to navigate to specific code locations
- Integrate with editor plugins for seamless workflow
Documentation Generation:
- Use search results to find code examples
- Extract patterns for documentation
- Generate API usage examples from search results
The semantic search system transforms how you explore, understand, and work with code, providing intelligent discovery capabilities that go far beyond traditional text matching.
MCP Integration
SwissArmyHammer provides comprehensive Model Context Protocol (MCP) integration, allowing AI language models to interact directly with your development tools and workflows. This creates a seamless bridge between AI assistance and your development environment.
Overview
MCP integration enables:
- Direct tool access: AI models can use SwissArmyHammer tools directly
- Workflow automation: AI can execute complex development workflows
- Context-aware assistance: AI has access to your project state and history
- Bidirectional communication: Tools can provide feedback and results to AI
- Secure operation: Controlled access to development resources
MCP Architecture
Protocol Foundation
MCP (Model Context Protocol) is a standard for AI-tool integration:
- Server-Client Architecture: SwissArmyHammer runs as MCP server
- Tool Registry: Exposes capabilities to AI clients
- Request-Response Pattern: Structured communication protocol
- Type Safety: Strongly typed interface definitions
- Error Handling: Comprehensive error propagation and reporting
Tool Categories
SwissArmyHammer exposes several categories of MCP tools:
Issue Management Tools:
- `issue_create` - Create new issues
- `issue_list` - List and filter issues
- `issue_show` - Display issue details
- `issue_update` - Modify issue content
- `issue_complete` - Mark issues complete
- `issue_work` - Start work on issues
- `issue_merge` - Merge completed work
Memoranda Tools:
- `memo_create` - Create new memos
- `memo_list` - List all memos
- `memo_search` - Search memo content
- `memo_get` - Retrieve specific memos
- `memo_update` - Modify memo content
- `memo_delete` - Remove memos
Search Tools:
- `search_index` - Index files for semantic search
- `search_query` - Perform semantic searches
- `outline_generate` - Generate code outlines
Workflow Control:
- `abort_create` - Signal workflow termination
- `issue_all_complete` - Check completion status
Getting Started
Server Setup
SwissArmyHammer automatically runs as an MCP server when used with compatible AI clients:
# Server starts automatically with compatible clients
# No manual configuration required
Client Configuration
Configure your AI client to use SwissArmyHammer as an MCP server. Example configuration:
{
"mcpServers": {
"swissarmyhammer": {
"command": "sah",
"args": ["--mcp"],
"env": {
"SAH_PROJECT_ROOT": "/path/to/your/project"
}
}
}
}
Verification
Test MCP connectivity:
# Check available tools
sah --mcp list-tools
# Verify server status
sah --mcp status
Tool Reference
Issue Management
Create Issue:
{
"tool": "issue_create",
"parameters": {
"name": "feature_user_auth",
"content": "# User Authentication\n\nImplement login system..."
}
}
List Issues:
{
"tool": "issue_list",
"parameters": {
"show_completed": false,
"show_active": true,
"format": "table"
}
}
Show Issue:
{
"tool": "issue_show",
"parameters": {
"name": "current" // or specific issue name
}
}
Work on Issue:
{
"tool": "issue_work",
"parameters": {
"name": "FEATURE_001_user-auth"
}
}
Memoranda Operations
Create Memo:
{
"tool": "memo_create",
"parameters": {
"title": "API Design Decisions",
"content": "# REST API Guidelines\n\n## Authentication\n..."
}
}
Search Memos:
{
"tool": "memo_search",
"parameters": {
"query": "authentication patterns OAuth"
}
}
Get All Context:
{
"tool": "memo_get_all_context",
"parameters": {}
}
Semantic Search
Index Files:
{
"tool": "search_index",
"parameters": {
"patterns": ["**/*.rs", "**/*.py"],
"force": false
}
}
Search Query:
{
"tool": "search_query",
"parameters": {
"query": "error handling patterns",
"limit": 10
}
}
Generate Outline:
{
"tool": "outline_generate",
"parameters": {
"patterns": ["src/**/*.rs"],
"output_format": "yaml"
}
}
Advanced Usage
Workflow Integration
AI can execute complex workflows using MCP tools:
1. Research Phase:
- search_query("existing authentication systems")
- memo_create("Research Findings", content)
2. Planning Phase:
- issue_create("implement OAuth integration")
- issue_work("FEATURE_001_oauth")
3. Development Phase:
- search_index(["**/*.rs"])
- outline_generate(["src/auth/**/*.rs"])
4. Completion Phase:
- issue_complete("FEATURE_001_oauth")
- memo_create("Implementation Notes", lessons_learned)
Context Management
AI maintains context across tool calls:
- Project state: Current branch, active issues
- Search history: Previous queries and results
- Memo database: Accumulated knowledge and decisions
- Issue tracking: Work progress and relationships
Error Handling
MCP tools provide structured error responses:
{
"error": {
"code": "ISSUE_NOT_FOUND",
"message": "Issue 'FEATURE_999' does not exist",
"details": {
"available_issues": ["FEATURE_001", "FEATURE_002"],
"suggestions": ["Check issue name spelling", "Use issue_list to see available issues"]
}
}
}
Security Considerations
Access Control
MCP integration operates within defined boundaries:
- File system access: Limited to project directories
- Git operations: Only standard development commands
- Network access: No external API calls required
- Process isolation: Runs in controlled environment
Data Privacy
All operations are local:
- No external services: All processing happens locally
- No data transmission: Project data stays on your machine
- No logging: Sensitive information not logged remotely
- Full control: Complete visibility into all operations
Safe Operations
Tools designed for safe automated use:
- Non-destructive defaults: Safe operations by default
- Confirmation patterns: Critical operations require explicit confirmation
- Rollback capability: Git integration enables easy rollback
- Audit trail: All operations tracked in Git history
Integration Examples
AI-Assisted Development
Feature Development Flow:
AI: "I'll help implement user authentication. Let me start by researching existing patterns."
1. search_query("authentication patterns JWT session")
2. memo_create("Auth Research", findings)
3. issue_create("implement_user_auth", requirements)
4. issue_work("FEATURE_001_user_auth")
5. outline_generate(["src/auth/**/*.rs"])
6. [Development work with other tools]
7. issue_complete("FEATURE_001_user_auth")
Code Review Assistance:
AI: "Let me review the recent changes and provide feedback."
1. search_query("error handling in authentication")
2. issue_show("current")
3. outline_generate(["src/**/*.rs"])
4. memo_create("Code Review Notes", analysis)
Knowledge Management:
AI: "I'll help organize the team's knowledge about the authentication system."
1. memo_search("authentication login OAuth")
2. memo_get_all_context()
3. search_query("auth implementation patterns")
4. memo_create("Auth System Overview", consolidated_knowledge)
Custom Workflows
AI can execute custom workflows defined in SwissArmyHammer:
# AI Development Assistant Workflow
## Research Phase
- Use semantic search to understand existing code
- Create memos with findings and decisions
- Reference related issues and documentation
## Implementation Phase
- Create focused issues for development tasks
- Switch to appropriate Git branches
- Generate code outlines for understanding structure
## Review Phase
- Search for related implementations
- Check issue completion status
- Create summary memos with lessons learned
Troubleshooting
Connection Issues
MCP server not responding:
# Check server status
sah --mcp status
# Restart server
pkill sah && sah --mcp
Tool registration problems:
# Verify tool availability
sah --mcp list-tools
# Check client configuration
# Ensure correct command and arguments
Authentication and Permissions
File access denied:
- Verify project directory permissions
- Check that SAH_PROJECT_ROOT is set correctly
- Ensure Git repository is accessible
Git operation failures:
- Verify Git repository status
- Check for uncommitted changes
- Ensure branch switching is possible
Performance Issues
Slow tool responses:
- Large search indices may be slow initially
- First semantic search loads model (normal delay)
- Check available memory for large operations
High memory usage:
- Semantic search models use significant memory
- Close unused AI sessions
- Restart MCP server if needed
Common Errors
“Project root not found”: Set SAH_PROJECT_ROOT environment variable
“Git repository not initialized”: Run `git init` in the project directory
“Search index not found”: Run `search_index` before querying
“Invalid issue name”: Check issue naming conventions and existing issues
Best Practices
Tool Usage
Efficient workflows:
- Use search_index before multiple queries
- Batch related operations together
- Cache frequently accessed memo content
- Leverage Git branching for issue work
Error recovery:
- Handle tool errors gracefully
- Provide fallback strategies
- Validate inputs before tool calls
- Use issue_all_complete for status checks
AI Integration
Context management:
- Use memo_get_all_context for comprehensive background
- Search existing knowledge before creating new content
- Reference related issues and memos in new content
- Maintain consistent naming and tagging
Workflow design:
- Break complex tasks into discrete tool operations
- Provide clear success/failure indicators
- Enable easy rollback and recovery
- Document decisions and rationale
Future Enhancements
The MCP integration is designed for extensibility:
Planned tool additions:
- Configuration management tools
- Test execution and reporting tools
- Deployment and environment tools
- Code generation and refactoring tools
Protocol improvements:
- Enhanced error reporting and recovery
- Streaming responses for long operations
- Progress reporting for complex workflows
- Enhanced security and access controls
AI capabilities:
- Multi-step workflow execution
- Context-aware decision making
- Learning from project patterns
- Proactive assistance and suggestions
MCP integration transforms SwissArmyHammer into a powerful AI development assistant, enabling sophisticated automation while maintaining full control over your development environment and data.
Built-in Resources
SwissArmyHammer includes production-ready prompts and workflows embedded in the binary. These are immediately available after installation.
Built-in Prompts
Code Quality
code
General code analysis and suggestions.
sah prompt test code --var language=rust --var context="authentication module"
review/code
Comprehensive code review with quality checklist.
sah prompt test review/code --var author="developer" --var files="src/auth.rs"
review/security
Security-focused code review.
sah prompt test review/security --var component="payment processing"
review/accessibility
Accessibility review for user interfaces.
sah prompt test review/accessibility --var interface="login form"
review/patterns
Review code patterns and architectural decisions.
sah prompt test review/patterns --var pattern="repository pattern"
test
Test generation and validation strategies.
sah prompt test test --var function="user_authentication" --var language=rust
coverage
Code coverage analysis and improvement suggestions.
sah prompt test coverage --var module="user_service"
Documentation
documentation
General documentation generation with Liquid templating.
sah prompt test documentation --var project="MyApp" --var type="API"
docs/readme
Generate README files for projects.
sah prompt test docs/readme --var project="SwissArmyHammer"
docs/comments
Generate inline code documentation.
sah prompt test docs/comments --var language=rust --var function="process_user_input"
docs/project
Comprehensive project documentation.
sah prompt test docs/project --var name="MyProject" --var language=python
docs/review
Review and improve existing documentation.
sah prompt test docs/review --var document="API documentation"
docs/correct
Fix documentation errors and inconsistencies.
sah prompt test docs/correct --var section="installation guide"
Development Process
plan
Project and feature planning assistance.
sah prompt test plan --var feature="user dashboard" --var scope="MVP"
principals
Development principles and best practices guidance.
sah prompt test principals --var language=rust --var domain="web backend"
standards
Coding standards enforcement and guidance.
sah prompt test standards --var team_size="5" --var language=typescript
coding_standards
Liquid-templated coding standards.
sah prompt test coding_standards --var language=python --var framework=django
review_format
Structured review format templates.
sah prompt test review_format --var type="architecture" --var scope="microservices"
Debugging and Analysis
debug/error
Error analysis and debugging assistance.
sah prompt test debug/error --var error_message="connection timeout" --var context="database"
debug/logs
Log analysis and interpretation.
sah prompt test debug/logs --var log_level="ERROR" --var service="payment_service"
Issue Management
issue/code
Code-related issue analysis and resolution.
sah prompt test issue/code --var issue="memory leak" --var language=rust
issue/code_review
Code review issue handling.
sah prompt test issue/code_review --var reviewer="senior_dev" --var priority="high"
issue/review
General issue review and triage.
sah prompt test issue/review --var type="bug" --var severity="critical"
issue/complete
Issue completion and closure procedures.
sah prompt test issue/complete --var issue_id="PROJ-123" --var resolution="fixed"
issue/merge
Issue merge and integration procedures.
sah prompt test issue/merge --var branch="feature/auth" --var target="develop"
issue/on_worktree
Issue workflow for worktree-based development.
sah prompt test issue/on_worktree --var worktree="feature-branch"
Workflow Management
todo
TODO list generation and task management.
sah prompt test todo --var project="web_app" --var milestone="v1.0"
commit
Commit message generation and formatting.
sah prompt test commit --var changes="authentication fixes" --var type="bugfix"
empty
Empty template for custom prompts.
sah prompt test empty --var context="custom_use_case"
Utility Prompts
help
General help and guidance.
sah prompt test help --var topic="workflow setup"
example
Example prompt demonstrating basic usage.
sah prompt test example --var name="test_prompt"
say-hello
Simple greeting prompt for testing.
sah prompt test say-hello --var name="World"
abort
Workflow abort and termination procedures.
sah prompt test abort --var reason="user_requested" --var workflow="deployment"
Status Check Prompts
are_issues_complete
Check if all issues are completed.
sah prompt test are_issues_complete --var project="current"
are_reviews_done
Verify all reviews are completed.
sah prompt test are_reviews_done --var milestone="release_1.0"
are_tests_passing
Check test suite status.
sah prompt test are_tests_passing --var suite="integration"
Meta-Prompts
prompts/create
Create new prompts programmatically.
sah prompt test prompts/create --var purpose="API documentation" --var domain="fintech"
prompts/improve
Improve existing prompts.
sah prompt test prompts/improve --var prompt_name="code_review" --var issue="too_verbose"
Built-in Workflows
Basic Examples
hello-world
Simple workflow demonstrating basic state transitions.
sah flow run hello-world
States: greeting → farewell → complete
greeting
Interactive greeting workflow.
sah flow run greeting --var name="Developer"
States: welcome → personalize → complete
example-actions
Demonstrates different action types (shell, prompt, conditional).
sah flow run example-actions
States: setup → execute → validate → complete
Development Workflows
tdd
Test-driven development workflow.
sah flow run tdd --var feature="user_login" --var language="rust"
States: write_test → run_test → implement → refactor → complete
implement
General feature implementation workflow.
sah flow run implement --var feature="payment_processing"
States: plan → code → test → review → complete
plan
Planning and design workflow.
sah flow run plan --var scope="user_dashboard" --var timeline="2_weeks"
States: requirements → architecture → tasks → review → complete
Issue Management Workflows
code_issue
End-to-end issue resolution workflow.
sah flow run code_issue --var issue_type="bug" --var priority="high"
States: triage → investigate → fix → test → review → complete
do_issue
Execute work on an existing issue.
sah flow run do_issue --var issue_id="PROJ-123"
States: start_work → implement → test → submit → complete
complete_issue
Issue completion and cleanup workflow.
sah flow run complete_issue --var issue_id="PROJ-456"
States: final_review → merge → cleanup → document → complete
review_issue
Issue review and validation workflow.
sah flow run review_issue --var issue_id="PROJ-789" --var reviewer="tech_lead"
States: review_code → test_changes → approve → complete
Documentation Workflows
document
Documentation generation workflow.
sah flow run document --var type="API" --var format="markdown"
States: outline → draft → review → publish → complete
review_docs
Documentation review and quality check.
sah flow run review_docs --var document="user_guide"
States: content_review → format_check → accuracy_check → approve → complete
Using Built-in Resources
List Available Resources
# List all prompts (including built-in)
sah prompt list
# List all workflows
sah flow list
# Filter for built-in only
sah prompt list --builtin
sah flow list --builtin
Test Before Using
# Test a prompt with variables
sah prompt test code --var language=rust --var context="auth module"
# Validate prompt syntax
sah prompt validate code
# Render without executing
sah prompt render documentation --var project=MyApp
Workflow Execution
# Run a workflow
sah flow run tdd --var feature=login --var language=python
# Check workflow status
sah flow status
# View workflow history
sah flow history
# Stop a running workflow
sah flow stop workflow_id
Customization
You can override built-in resources by creating files with the same name in your user or local directories:
# Override the built-in 'code' prompt by creating a file with the same name
mkdir -p ~/.swissarmyhammer/prompts
touch ~/.swissarmyhammer/prompts/code.md
# Edit the file to customize
# Create a project-specific override, starting from the user-level copy
mkdir -p .swissarmyhammer/prompts
cp ~/.swissarmyhammer/prompts/code.md .swissarmyhammer/prompts/code.md
# Customize for project needs
Precedence Order:
- Local directory (`.swissarmyhammer/`)
- User directory (`~/.swissarmyhammer/`)
- Built-in resources (embedded)
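For example, if the same prompt name exists in more than one location, the highest-precedence copy wins:
```
./.swissarmyhammer/prompts/code.md    <- local override, used if present
~/.swissarmyhammer/prompts/code.md    <- user override, used when no local copy exists
(built-in 'code' prompt)              <- fallback when neither file exists
```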
Integration Examples
Claude Code Usage
Built-in prompts are automatically available in Claude Code:
# Configure MCP
claude mcp add --scope user sah sah serve
# Use in Claude Code
/code language="typescript" context="React component"
/plan feature="user authentication" scope="MVP"
/review/security component="payment processing"
Workflow Automation
# Chain workflows together
sah flow run plan --var project=MyApp && \
sah flow run tdd --var feature=auth && \
sah flow run document --var type=API
Custom Integration
# Use prompts in scripts
REVIEW_OUTPUT=$(sah prompt render review/code --var author="$USER" --var files="$CHANGED_FILES")
echo "$REVIEW_OUTPUT" | mail -s "Code Review" team@company.com
# Integrate with CI/CD
sah flow run code_issue --var issue_type=ci_failure --var build_id="$BUILD_ID"
These built-in resources provide a solid foundation for development workflows. You can use them as-is or customize them for your specific needs.
Use Cases and Examples
SwissArmyHammer adapts to various development workflows. Here are practical scenarios showing how to use it effectively.
Individual Developer
Personal Prompt Library
Create reusable prompts for consistent code quality:
# Set up personal collection
mkdir -p ~/.swissarmyhammer/prompts
# Code review checklist
cat > ~/.swissarmyhammer/prompts/self-review.md << 'EOF'
---
title: Self Code Review
description: Personal code review checklist
arguments:
- name: language
description: Programming language
required: true
- name: feature
description: Feature being developed
required: true
---
# Self Review: {{feature}} ({{language}})
Review this {{feature}} implementation:
## Quality Check
- [ ] Code follows language conventions
- [ ] Functions are documented
- [ ] Error handling is comprehensive
- [ ] Tests cover main scenarios
## Security Review
- [ ] Input validation implemented
- [ ] No hardcoded secrets
- [ ] Authentication/authorization proper
Please analyze my {{feature}} code and provide specific feedback.
EOF
# Use in Claude Code
/self-review language="rust" feature="authentication"
Development Workflow
Automate your development process:
---
name: feature-workflow
description: Complete feature development process
initial_state: plan
---
### plan
Plan feature implementation
**Actions**: prompt planning, memo documentation
**Next**: implement
### implement
Write feature code
**Actions**: issue creation, branch switching
**Next**: review
### review
Self-review implementation
**Actions**: self-review prompt, test execution
**Transitions**: If issues → implement, If good → complete
Small Team (3-5 developers)
Team Standards
Shared prompts in project repository:
# Team code review standard
mkdir -p .swissarmyhammer/prompts
cat > .swissarmyhammer/prompts/team-review.md << 'EOF'
---
title: Team Code Review
description: Standardized review process
arguments:
- name: author
required: true
- name: urgency
choices: ["low", "medium", "high", "critical"]
default: "medium"
---
# Code Review - {{author}}
**Priority**: {{urgency | upcase}}
## Team Checklist
- [ ] Follows linting rules
- [ ] Unit tests included
- [ ] Documentation updated
- [ ] No security issues
{% if urgency == "critical" %}
## CRITICAL REVIEW
Focus on correctness and no regressions.
{% endif %}
Provide specific feedback on each item.
EOF
Pull Request Workflow
---
name: pr-workflow
description: Team PR process
initial_state: create_pr
---
### create_pr
Create pull request
**Actions**: git push, PR creation, reviewer assignment
**Next**: ci_checks
### ci_checks
Run automated checks
**Transitions**: Pass → review, Fail → fix_tests
### review
Team code review
**Actions**: team-review prompt execution
**Transitions**: Approved → merge, Changes needed → fix_feedback
Large Team/Enterprise
Compliance Review
Enterprise-grade architecture review:
---
title: Enterprise Architecture Review
description: Compliance and security review
arguments:
- name: system_name
required: true
- name: compliance_level
choices: ["standard", "regulated", "highly-regulated"]
default: "standard"
---
# Architecture Review: {{system_name}}
**Compliance**: {{compliance_level | title}}
## Security Requirements
{% if compliance_level == "highly-regulated" %}
- [ ] End-to-end encryption
- [ ] Multi-factor authentication
- [ ] Zero-trust architecture
{% else %}
- [ ] Basic authentication
- [ ] HTTPS enforcement
- [ ] Input validation
{% endif %}
## Data Protection
- [ ] PII handling procedures
- [ ] Data retention policies
- [ ] Backup and recovery tested
Release Workflow
---
name: enterprise-release
description: Enterprise release process
initial_state: architecture_review
---
### architecture_review
Enterprise architecture review
**Actions**: compliance review, security scan
**Transitions**: Pass → approvals, Fail → remediation
### approvals
Required stakeholder approvals
**Actions**: business approval, security approval, compliance approval
**Next**: production_deploy
### production_deploy
Deploy with monitoring
**Actions**: deployment, health checks, notification
Industry-Specific Examples
Financial Services
---
title: Financial Compliance Review
description: Financial regulatory compliance
arguments:
- name: regulation
choices: ["PCI-DSS", "SOX", "GDPR"]
required: true
---
# {{regulation}} Compliance Review
{% case regulation %}
{% when "PCI-DSS" %}
## Payment Card Industry Requirements
- [ ] Cardholder data encryption
- [ ] Network segmentation
- [ ] Access controls
{% when "SOX" %}
## Sarbanes-Oxley Requirements
- [ ] Internal controls documented
- [ ] Change management enforced
- [ ] Data integrity controls
{% endcase %}
Please assess compliance and provide remediation steps.
Healthcare (HIPAA)
---
title: HIPAA Compliance Review
description: Healthcare data protection review
arguments:
- name: phi_types
description: Types of PHI handled
type: array
required: true
---
# HIPAA Compliance Review
**PHI Types**: {{phi_types | join: ", "}}
## Technical Safeguards
- [ ] Unique user identification
- [ ] Access controls implemented
- [ ] Audit logs comprehensive
- [ ] Transmission security
## Physical Safeguards
- [ ] Facility access controls
- [ ] Workstation security
- [ ] Device controls
Provide detailed compliance assessment.
Open Source Project
Project Health Assessment
---
title: OSS Project Health
description: Open source project assessment
arguments:
- name: project_name
required: true
- name: contributors
type: number
required: true
---
# Project Health: {{project_name}}
**Contributors**: {{contributors}}
## Community Health
- [ ] Contributor onboarding documented
- [ ] Code of conduct present
- [ ] Good first issues available
{% if contributors > 20 %}
## Large Project Requirements
- [ ] Governance structure defined
- [ ] Decision-making transparent
- [ ] Regular maintainer meetings
{% else %}
## Small Project Requirements
- [ ] Primary maintainer identified
- [ ] Basic contribution workflow
- [ ] Backup maintainer designated
{% endif %}
## Technical Health
- [ ] Automated testing (>80% coverage)
- [ ] CI/CD configured
- [ ] Security scanning enabled
Assess current state and provide improvement recommendations.
Release Process
---
name: oss-release
description: Open source release workflow
initial_state: health_check
---
### health_check
Assess project health
**Actions**: health assessment, community metrics
**Next**: version_prep
### version_prep
Prepare version bump
**Actions**: changelog generation, version update
**Next**: testing
### testing
Comprehensive test suite
**Actions**: test matrix, integration tests, security scan
**Next**: community_review
### community_review
Community feedback period
**Actions**: release PR, announcement, 48-hour wait
**Next**: release
### release
Publish release
**Actions**: artifact building, package publishing, announcement
Development Team Scenarios
Code Review Automation
# Automated code review workflow
sah flow run code-review-workflow \
--var author="john.doe" \
--var files="src/auth.rs,src/db.rs" \
--var priority="high"
Issue Resolution
# Complete issue workflow
sah issue create "Fix login timeout"
sah issue work ISSUE-456
# ... implement fix ...
sah flow run fix-validation
sah issue complete ISSUE-456
Documentation Generation
# Generate project documentation
sah prompt test docs/project \
--var project_name="MyApp" \
--var language="rust"
# Run documentation review workflow
sah flow run doc-review-workflow
These examples show SwissArmyHammer’s flexibility across different team sizes, industries, and use cases while maintaining consistency and quality.
CLI Reference
Complete reference for all SwissArmyHammer command-line interface commands.
Global Options
Available for all commands:
sah [GLOBAL_OPTIONS] <COMMAND> [COMMAND_OPTIONS]
| Option | Description | Default |
|---|---|---|
| --help, -h | Show help information | |
| --version, -V | Show version information | |
| --config, -c <FILE> | Configuration file path | Auto-detected |
| --log-level <LEVEL> | Log level (trace, debug, info, warn, error) | info |
| --quiet, -q | Suppress output | false |
| --verbose, -v | Verbose output | false |
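For example, global options go before the subcommand (the subcommands shown are just placeholders for any command):
# Verbose logging for any command
sah --log-level debug prompt list
# Use an explicit configuration file
sah --config ./sah.toml doctor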
Main Commands
sah serve
Run SwissArmyHammer as an MCP server for Claude Code integration.
sah serve [OPTIONS]
Options:
- --stdio - Use stdin/stdout transport (default when run by Claude Code)
- --port <PORT> - TCP port to bind to
- --host <HOST> - Host address to bind to (default: localhost)
- --timeout <MS> - Request timeout in milliseconds (default: 30000)
Examples:
# Run as MCP server (typical usage)
sah serve
# Run on specific port
sah serve --port 8080 --host 0.0.0.0
# Run with custom timeout
sah serve --timeout 60000
sah doctor
Diagnose installation and configuration issues.
sah doctor [OPTIONS]
Options:
- --fix - Automatically fix detected issues
- --check <CHECK> - Run specific check only
- --format <FORMAT> - Output format (table, json, markdown)
Examples:
# Run all diagnostic checks
sah doctor
# Fix issues automatically
sah doctor --fix
# Check only MCP integration
sah doctor --check mcp
# Output as JSON
sah doctor --format json
Prompt Commands
sah prompt list
List available prompts from all sources.
sah prompt list [OPTIONS]
Options:
- --source <SOURCE> - Filter by source (builtin, user, local)
- --tag <TAG> - Filter by tag
- --format <FORMAT> - Output format (table, json, list)
- --search <TERM> - Search prompt titles and descriptions
Examples:
# List all prompts
sah prompt list
# List built-in prompts only
sah prompt list --source builtin
# Search for code-related prompts
sah prompt list --search "code"
# List prompts with specific tag
sah prompt list --tag "review"
sah prompt show
Show detailed information about a prompt.
sah prompt show <PROMPT_NAME> [OPTIONS]
Options:
- --raw - Show raw markdown content
- --format <FORMAT> - Output format (yaml, json, markdown)
Examples:
# Show prompt details
sah prompt show code-review
# Show raw markdown
sah prompt show code-review --raw
# Output as JSON
sah prompt show code-review --format json
sah prompt test
Test a prompt by rendering it with variables.
sah prompt test <PROMPT_NAME> [OPTIONS]
Options:
- --var <KEY=VALUE> - Set template variable (can be repeated)
- --vars-file <FILE> - Load variables from JSON/YAML file
- --output <FILE> - Write output to file instead of stdout
- --format <FORMAT> - Output format (text, markdown, html)
Examples:
# Test with inline variables
sah prompt test code-review --var language=rust --var file=main.rs
# Load variables from file
sah prompt test code-review --vars-file variables.json
# Save output to file
sah prompt test code-review --var language=rust --output review.md
sah prompt render
Render a prompt and output the result (alias for sah prompt test).
sah prompt render <PROMPT_NAME> [OPTIONS]
Same options as sah prompt test.
sah prompt validate
Validate prompt syntax and structure.
sah prompt validate [PROMPT_NAME] [OPTIONS]
Options:
- --strict - Enable strict validation mode
- --format <FORMAT> - Output format (table, json)
- --fix - Attempt to fix validation issues
Examples:
# Validate specific prompt
sah prompt validate my-prompt
# Validate all prompts
sah prompt validate
# Strict validation with fixes
sah prompt validate --strict --fix
Flow (Workflow) Commands
sah flow list
List available workflows.
sah flow list [OPTIONS]
Options:
- --source <SOURCE> - Filter by source (builtin, user, local)
- --format <FORMAT> - Output format (table, json, list)
- --search <TERM> - Search workflow names and descriptions
sah flow show
Show workflow details and structure.
sah flow show <WORKFLOW_NAME> [OPTIONS]
Options:
- --diagram - Generate Mermaid diagram
- --format <FORMAT> - Output format (yaml, json, markdown)
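For example (my-workflow is a placeholder name):
# Show workflow structure
sah flow show my-workflow
# Render the states and transitions as a Mermaid diagram
sah flow show my-workflow --diagram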
sah flow run
Execute a workflow.
sah flow run <WORKFLOW_NAME> [OPTIONS]
Options:
- --var <KEY=VALUE> - Set workflow variable
- --vars-file <FILE> - Load variables from file
- --start-state <STATE> - Start from specific state
- --dry-run - Show execution plan without running
- --parallel - Enable parallel execution where possible
- --timeout <MS> - Workflow timeout in milliseconds
Examples:
# Run workflow
sah flow run my-workflow
# Run with variables
sah flow run my-workflow --var project=myapp --var env=prod
# Dry run to see execution plan
sah flow run my-workflow --dry-run
# Start from specific state
sah flow run my-workflow --start-state deploy
sah flow validate
Validate workflow syntax and logic.
sah flow validate [WORKFLOW_NAME] [OPTIONS]
Options:
- --strict - Enable strict validation
- --check-cycles - Check for circular dependencies
- --format <FORMAT> - Output format (table, json)
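For example (again using a placeholder workflow name):
# Validate a single workflow
sah flow validate my-workflow
# Validate all workflows and check for circular transitions
sah flow validate --strict --check-cycles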
Issue Management Commands
sah issue list
List issues with their status.
sah issue list [OPTIONS]
Options:
- --status <STATUS> - Filter by status (active, complete, all)
- --format <FORMAT> - Output format (table, json, markdown)
- --sort <FIELD> - Sort by field (name, created, status)
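For example:
# Show only active issues
sah issue list --status active
# All issues as JSON, sorted by creation time
sah issue list --status all --format json --sort created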
sah issue create
Create a new issue.
sah issue create [OPTIONS]
Options:
- --name <NAME> - Issue name (will be used in branch name)
- --content <TEXT> - Issue content as text
- --file <FILE> - Load content from file
- --template <TEMPLATE> - Use issue template
- --editor - Open editor for content
Examples:
# Create named issue
sah issue create --name "feature-auth" --content "# Authentication Feature\n\nImplement JWT auth"
# Create from file
sah issue create --name "bugfix" --file issue-template.md
# Create with editor
sah issue create --name "refactor" --editor
sah issue show
Show issue details.
sah issue show <ISSUE_NAME> [OPTIONS]
Options:
- --raw - Show raw markdown content
- --format <FORMAT> - Output format (markdown, json)
Special issue names:
- current - Show issue for current git branch
- next - Show next pending issue
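For example, while on an issue branch:
# Show the issue associated with the current branch
sah issue show current
# Preview the next pending issue
sah issue show next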
sah issue work
Start working on an issue (creates/switches to branch).
sah issue work <ISSUE_NAME> [OPTIONS]
Options:
- --create-branch - Force branch creation even if it already exists
- --base <BRANCH> - Base branch for new branch (default: current)
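For example (feature-auth is a placeholder issue name):
# Create or switch to the branch for an issue
sah issue work feature-auth
# Branch off main instead of the current branch
sah issue work feature-auth --base main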
sah issue complete
Mark an issue as complete.
sah issue complete <ISSUE_NAME> [OPTIONS]
Options:
- --merge - Merge branch back to source branch
- --delete-branch - Delete the issue branch after completion
- --message <MSG> - Completion commit message
sah issue update
Update issue content.
sah issue update <ISSUE_NAME> [OPTIONS]
Options:
- --content <TEXT> - New content as text
- --file <FILE> - Load content from file
- --append - Append to existing content
- --editor - Open editor for content
sah issue merge
Merge issue branch back to source branch using git merge-base.
sah issue merge <ISSUE_NAME> [OPTIONS]
Options:
- --delete-branch - Delete branch after merge
- --squash - Squash commits when merging
- --message <MSG> - Merge commit message
Memoranda (Notes) Commands
sah memo list
List all memos.
sah memo list [OPTIONS]
Options:
- --format <FORMAT> - Output format (table, json, list)
- --sort <FIELD> - Sort by field (title, created, updated)
- --limit <N> - Limit number of results
sah memo create
Create a new memo.
sah memo create [OPTIONS]
Options:
- --title <TITLE> - Memo title
- --content <TEXT> - Memo content as text
- --file <FILE> - Load content from file
- --editor - Open editor for content
Examples:
# Create memo with inline content
sah memo create --title "Meeting Notes" --content "# Team Meeting\n\nDiscussed project timeline"
# Create from file
sah memo create --title "Architecture" --file architecture-notes.md
# Create with editor
sah memo create --title "Ideas" --editor
sah memo show
Show memo content.
sah memo show <MEMO_ID> [OPTIONS]
Options:
- --raw - Show raw markdown content
- --format <FORMAT> - Output format (markdown, json)
sah memo update
Update memo content.
sah memo update <MEMO_ID> [OPTIONS]
Options:
- --content <TEXT> - New content as text
- --file <FILE> - Load content from file
- --editor - Open editor for content
sah memo delete
Delete a memo.
sah memo delete <MEMO_ID> [OPTIONS]
Options:
- --confirm - Skip confirmation prompt
sah memo search
Search memos by content.
sah memo search <QUERY> [OPTIONS]
Options:
- --limit <N> - Limit number of results
- --format <FORMAT> - Output format (table, json, list)
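For example:
# Find memos that mention architecture decisions
sah memo search "architecture"
# Limit and format the results
sah memo search "meeting notes" --limit 5 --format json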
Search Commands
sah search index
Index files for semantic search.
sah search index <PATTERN> [OPTIONS]
Options:
- --force - Force re-indexing of all files
- --language <LANG> - Limit to specific language
- --exclude <PATTERN> - Exclude files matching pattern
- --max-size <BYTES> - Maximum file size to index
Examples:
# Index Rust files
sah search index "**/*.rs"
# Index multiple languages
sah search index "**/*.{rs,py,js,ts}"
# Force re-index
sah search index "**/*.rs" --force
# Index with exclusions
sah search index "**/*.py" --exclude "**/test_*.py"
sah search query
Perform semantic search query.
sah search query <QUERY> [OPTIONS]
Options:
- --limit <N> - Number of results to return (default: 10)
- --format <FORMAT> - Output format (table, json, detailed)
- --threshold <SCORE> - Minimum similarity score (0.0-1.0)
Examples:
# Basic search
sah search query "error handling"
# Limit results
sah search query "async functions" --limit 5
# Detailed output
sah search query "database connection" --format detailed
# High threshold for exact matches
sah search query "specific function name" --threshold 0.8
Validation Commands
sah validate
Validate configurations, prompts, and workflows.
sah validate [OPTIONS]
Options:
- --config - Validate configuration files only
- --prompts - Validate prompts only
- --workflows - Validate workflows only
- --strict - Enable strict validation
- --format <FORMAT> - Output format (table, json)
- --fix - Attempt to fix validation issues
Examples:
# Validate everything
sah validate
# Validate only configuration
sah validate --config
# Strict validation with fixes
sah validate --strict --fix
Configuration Commands
sah config show
Show current configuration.
sah config show [OPTIONS]
Options:
- --format <FORMAT> - Output format (toml, json, yaml)
- --section <SECTION> - Show specific section only
sah config set
Set configuration value.
sah config set <KEY> <VALUE> [OPTIONS]
Options:
- --user - Set in user configuration
- --local - Set in local project configuration
- --type <TYPE> - Value type (string, number, boolean)
Examples:
# Set log level
sah config set logging.level debug
# Set user-level setting
sah config set general.auto_reload true --user
# Set project-level setting
sah config set workflow.max_parallel_actions 8 --local
sah config get
Get configuration value.
sah config get <KEY> [OPTIONS]
Options:
- --default - Show default value if not set
- --source - Show which config file the value comes from
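For example (the keys mirror those used with sah config set above):
# Read a single value
sah config get logging.level
# Show which configuration file a value comes from
sah config get workflow.max_parallel_actions --source
# Fall back to the default when the key is unset
sah config get general.auto_reload --default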
Utility Commands
sah completions
Generate shell completions.
sah completions <SHELL>
Supported shells:
- bash
- zsh
- fish
- powershell
Examples:
# Generate bash completions
sah completions bash > ~/.bash_completion.d/sah
# Generate zsh completions
sah completions zsh > ~/.zfunc/_sah
sah version
Show version information.
sah version [OPTIONS]
Options:
- --short - Show version number only
- --build - Include build information
sah help
Show help information.
sah help [COMMAND]
Show help for specific command or general help.
Exit Codes
SwissArmyHammer uses standard exit codes:
- 0 - Success
- 1 - General error
- 2 - Misuse of shell command
- 3 - Configuration error
- 4 - Validation error
- 5 - Network error
- 6 - Permission error
- 7 - Not found error
- 8 - Timeout error
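Because these codes are stable, scripts can branch on them; here is a minimal sketch (the messages and the choice of sah validate --strict are illustrative):
#!/usr/bin/env bash
# Gate a CI step on sah's exit code
sah validate --strict
status=$?
case "$status" in
  0) echo "validation passed" ;;
  3) echo "configuration error - check your sah config" >&2; exit "$status" ;;
  4) echo "validation error - fix the reported prompts/workflows" >&2; exit "$status" ;;
  *) echo "sah exited with code $status" >&2; exit "$status" ;;
esac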
Examples
Common Workflows
# Set up new project
sah doctor
sah config set workflow.max_parallel_actions 4 --local
sah search index "**/*.rs"
# Daily development workflow
sah issue create --name "feature-api" --editor
sah issue work feature-api
# ... do development work ...
sah issue complete feature-api --merge --delete-branch
# Code review workflow
sah prompt test code-review --var file=src/main.rs --var language=rust
sah memo create --title "Review Notes" --editor
# Search and discovery
sah search query "authentication middleware"
sah prompt list --search "test"
sah memo search "architecture"
Integration with Other Tools
# Use with git hooks
#!/bin/bash
# .git/hooks/pre-commit
sah validate --strict --fix
# Use in CI/CD
sah validate --config --format json
sah search index "**/*.rs" --force
sah prompt validate --strict
# Use with editors (VS Code task example)
{
"label": "Test Prompt",
"type": "shell",
"command": "sah",
"args": ["prompt", "test", "${input:promptName}", "--var", "file=${file}"]
}
This comprehensive CLI reference covers all SwissArmyHammer commands and options for efficient prompt and workflow management.
Rust API Reference
SwissArmyHammer provides a comprehensive Rust API for building custom tools, integrations, and extensions. The library is designed with modularity and flexibility in mind, offering both async and sync interfaces for different use cases.
Overview
The SwissArmyHammer crate provides:
- Prompt Management: Load, store, and organize prompts from various sources
- Template Engine: Powerful Liquid-based template processing with custom filters
- Semantic Search: Vector-based code search with TreeSitter parsing
- Issue Tracking: Git-integrated issue management system
- Memoranda: Note-taking and knowledge management
- Workflow System: State-based execution engine
- Plugin Architecture: Extensible filter and processing system
Quick Start
Add SwissArmyHammer to your Cargo.toml:
[dependencies]
swissarmyhammer = "0.1.0"
Basic usage example:
use swissarmyhammer::PromptLibrary;
use std::collections::HashMap;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a new prompt library
let mut library = PromptLibrary::new();
// Add prompts from a directory
library.add_directory("./.swissarmyhammer/prompts")?;
// Get and render a prompt
let prompt = library.get("code-review")?;
let mut args = HashMap::new();
args.insert("language".to_string(), "rust".to_string());
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Core Modules
Prompt Management (prompts)
The prompts module provides the core functionality for managing and organizing prompts.
Key Types
PromptLibrary: Main interface for prompt management
use swissarmyhammer::prompts::PromptLibrary;
let mut library = PromptLibrary::new();
library.add_directory("./prompts")?;
library.add_file("./custom_prompt.md")?;
// Access prompts
let prompt = library.get("my-prompt")?;
let all_prompts = library.list();
Prompt: Represents a single prompt with metadata
use swissarmyhammer::prompts::Prompt;
let prompt = Prompt {
name: "code-review".to_string(),
content: "Review this code: {{ code }}".to_string(),
metadata: HashMap::new(),
};
let rendered = prompt.render(&context)?;
PromptMetadata: Metadata associated with prompts
use swissarmyhammer::prompts::PromptMetadata;
let metadata = PromptMetadata {
description: Some("Code review prompt".to_string()),
tags: vec!["review".to_string(), "code".to_string()],
author: Some("team@example.com".to_string()),
version: Some("1.0.0".to_string()),
..Default::default()
};
Prompt Loading
use swissarmyhammer::prompt_resolver::PromptResolver;
let resolver = PromptResolver::new();
// Load from directory
resolver.load_directory("./prompts")?;
// Load single file
resolver.load_file("./prompt.md")?;
// Load from memory
resolver.load_from_content("name", "content", metadata)?;
Template Engine (template)
Liquid-based template engine with custom filters and extensions.
Basic Templating
use swissarmyhammer::template::Template;
use std::collections::HashMap;
let template = Template::from_string("Hello {{ name }}!")?;
let mut context = HashMap::new();
context.insert("name".to_string(), "World".to_string());
let rendered = template.render(&context)?;
assert_eq!(rendered, "Hello World!");
Advanced Features
use swissarmyhammer::template::{Template, TemplateEngine};
let engine = TemplateEngine::new()
.with_custom_filters()
.with_security_limits();
let template = engine.parse("{{ code | highlight: 'rust' | trim }}")?;
let result = template.render(&context)?;
Custom Filters
use swissarmyhammer::prompt_filter::PromptFilter;
// Built-in filters
let filters = vec![
PromptFilter::Trim,
PromptFilter::Uppercase,
PromptFilter::CodeHighlight { language: "rust".to_string() },
PromptFilter::FileRead { path: "./example.rs".to_string() },
];
// Apply filters to content
let processed = filters.apply("content")?;
Semantic Search (search)
Vector-based semantic search with TreeSitter integration for code understanding.
Indexing
use swissarmyhammer::search::{SearchEngine, IndexConfig};
let config = IndexConfig {
model_path: "./models/code-embeddings".to_string(),
index_path: "./search_index".to_string(),
..Default::default()
};
let engine = SearchEngine::new(config)?;
// Index files
engine.index_files(&["**/*.rs", "**/*.py"]).await?;
// Index specific content
engine.index_content("file.rs", content, language).await?;
Querying
use swissarmyhammer::search::SearchQuery;
let query = SearchQuery {
text: "error handling patterns".to_string(),
limit: 10,
similarity_threshold: 0.5,
..Default::default()
};
let results = engine.search(&query).await?;
for result in results {
println!("File: {} (score: {:.2})", result.file_path, result.similarity_score);
println!("Content: {}", result.excerpt);
}
Issue Management (issues)
Git-integrated issue tracking system.
Core Types
use swissarmyhammer::issues::{Issue, IssueName, IssueStorage};
// Create issue
let issue = Issue {
name: IssueName::new("FEATURE_001_user-auth")?,
content: "# User Authentication\n\nImplement login system".to_string(),
status: IssueStatus::Active,
created_at: chrono::Utc::now(),
..Default::default()
};
// Storage operations
let storage = IssueStorage::new("./issues")?;
storage.create(&issue).await?;
storage.complete(&issue.name).await?;
Git Integration
use swissarmyhammer::issues::{IssueManager, GitIntegration};
let manager = IssueManager::new("./issues")?
.with_git_integration();
// Start work (creates branch)
manager.start_work(&issue_name).await?;
// Complete work (merges branch)
manager.complete_work(&issue_name).await?;
// Get current issue from branch
let current = manager.current_issue().await?;
Memoranda (memoranda)
Note-taking and knowledge management system.
use swissarmyhammer::memoranda::{MemoStorage, Memo};
let storage = MemoStorage::new("./memos")?;
// Create memo
let memo = storage.create(
"Architecture Notes",
"# System Design\n\nKey decisions and rationale"
).await?;
// Search memos
let results = storage.search("architecture design").await?;
// Get all memos
let all_memos = storage.list().await?;
Workflow System (workflow)
State-based execution engine for complex automation.
Workflow Definition
use swissarmyhammer::workflow::{Workflow, WorkflowState, Action};
let workflow = Workflow {
name: "development-cycle".to_string(),
initial_state: "research".to_string(),
states: HashMap::from([
("research".to_string(), WorkflowState {
actions: vec![
Action::SearchCode { query: "{{ feature }}" },
Action::CreateMemo { title: "Research: {{ feature }}" },
],
transitions: HashMap::from([
("complete".to_string(), "design".to_string()),
]),
}),
("design".to_string(), WorkflowState {
actions: vec![
Action::CreateIssue {
name: "{{ feature }}",
content: "{{ design_spec }}"
},
],
transitions: HashMap::from([
("approved".to_string(), "implement".to_string()),
]),
}),
]),
};
Workflow Execution
use swissarmyhammer::workflow::{WorkflowEngine, ExecutionContext};
let engine = WorkflowEngine::new();
let context = ExecutionContext::new()
.with_variable("feature", "user-authentication")
.with_variable("design_spec", "OAuth 2.0 implementation");
let execution = engine.execute(&workflow, context).await?;
// Check execution status
match execution.status {
ExecutionStatus::Running => println!("Workflow in progress"),
ExecutionStatus::Completed => println!("Workflow completed successfully"),
ExecutionStatus::Failed(error) => println!("Workflow failed: {}", error),
}
Plugin System (plugins)
Extensible architecture for custom functionality.
Plugin Development
use swissarmyhammer::plugins::{Plugin, PluginContext, PluginResult};
#[derive(Debug)]
pub struct CustomCodeFormatter;
impl Plugin for CustomCodeFormatter {
fn name(&self) -> &str {
"custom-formatter"
}
fn process(&self, input: &str, context: &PluginContext) -> PluginResult<String> {
// Custom formatting logic
let formatted = format_code(input, &context.language)?;
Ok(formatted)
}
}
// Register plugin
let mut registry = PluginRegistry::new();
registry.register(Box::new(CustomCodeFormatter))?;
Using Plugins
use swissarmyhammer::plugins::{PluginRegistry, PluginContext};
let registry = PluginRegistry::with_builtin_plugins();
let context = PluginContext {
language: Some("rust".to_string()),
file_path: Some("./src/main.rs".to_string()),
..Default::default()
};
let result = registry.apply("custom-formatter", input, &context)?;
Configuration
Library Configuration
use swissarmyhammer::config::{Config, SearchConfig, IssueConfig};
let config = Config {
search: SearchConfig {
model_path: "./models".to_string(),
index_path: "./.sah/search.db".to_string(),
embedding_dimension: 768,
..Default::default()
},
issues: IssueConfig {
storage_path: "./issues".to_string(),
git_integration: true,
branch_prefix: "issue/".to_string(),
..Default::default()
},
..Default::default()
};
// Initialize with config
let library = PromptLibrary::with_config(config)?;
Environment Integration
use swissarmyhammer::config::ConfigBuilder;
let config = ConfigBuilder::new()
.from_env() // Load from environment variables
.from_file("./sah.toml")? // Override with file config
.from_args(args)? // Override with CLI args
.build()?;
Error Handling
SwissArmyHammer uses a comprehensive error system:
use swissarmyhammer::error::{SwissArmyHammerError, Result};
fn example_function() -> Result<String> {
match some_operation() {
Ok(value) => Ok(value),
Err(e) => Err(SwissArmyHammerError::ProcessingError {
message: "Operation failed".to_string(),
source: Some(Box::new(e)),
}),
}
}
// Error types
pub enum SwissArmyHammerError {
IoError(std::io::Error),
TemplateError(String),
SearchError(String),
IssueError(String),
ValidationError(String),
ConfigError(String),
NetworkError(String),
ProcessingError { message: String, source: Option<Box<dyn std::error::Error + Send + Sync>> },
}
Async and Sync APIs
Most functionality is available in both async and sync variants:
// Async API (preferred for I/O operations)
use swissarmyhammer::async_api::*;
let results = search_engine.search(&query).await?;
let memo = memo_storage.create(title, content).await?;
// Sync API (for simple cases)
use swissarmyhammer::sync_api::*;
let results = search_engine.search_blocking(&query)?;
let memo = memo_storage.create_blocking(title, content)?;
Testing Utilities
SwissArmyHammer provides testing utilities for integration tests:
use swissarmyhammer::test_utils::*;
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_prompt_rendering() {
let temp_dir = create_temp_directory()?;
let library = create_test_library(&temp_dir)?;
let prompt = library.get("test-prompt")?;
let result = prompt.render(&test_context())?;
assert_eq!(result, "Expected output");
}
#[test]
fn test_search_indexing() {
let temp_index = create_temp_search_index()?;
let engine = SearchEngine::new(temp_index.config())?;
// Test operations
}
}
Performance Considerations
Memory Management
use swissarmyhammer::config::PerformanceConfig;
let config = PerformanceConfig {
max_prompt_size: 1024 * 1024, // 1MB
max_search_results: 100,
cache_size: 1000,
enable_lazy_loading: true,
..Default::default()
};
Caching
use swissarmyhammer::cache::{Cache, CacheConfig};
let cache = Cache::new(CacheConfig {
max_entries: 1000,
ttl_seconds: 3600,
enable_persistence: true,
})?;
// Cached operations
let result = cache.get_or_compute("key", || expensive_operation())?;
Resource Limits
use swissarmyhammer::security::{ResourceLimits, SecurityContext};
let limits = ResourceLimits {
max_file_size: 10 * 1024 * 1024, // 10MB
max_processing_time: Duration::from_secs(30),
allowed_directories: vec!["/safe/path".to_string()],
..Default::default()
};
let security = SecurityContext::new(limits);
Integration Examples
Custom CLI Tool
use swissarmyhammer::prelude::*;
use clap::{Parser, Subcommand};
#[derive(Parser)]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand)]
enum Commands {
Search { query: String },
CreateMemo { title: String, content: String },
}
#[tokio::main]
async fn main() -> Result<()> {
let cli = Cli::parse();
let library = PromptLibrary::new();
match cli.command {
Commands::Search { query } => {
let results = library.search(&query).await?;
for result in results {
println!("{}: {}", result.name, result.excerpt);
}
}
Commands::CreateMemo { title, content } => {
let memo = library.create_memo(&title, &content).await?;
println!("Created memo: {}", memo.id);
}
}
Ok(())
}
Web Service Integration
use swissarmyhammer::prelude::*;
use axum::{Json, extract::Query, routing::get, Router};
#[derive(serde::Deserialize)]
struct SearchParams {
q: String,
limit: Option<usize>,
}
async fn search(Query(params): Query<SearchParams>) -> Json<SearchResults> {
let library = get_library().await;
let results = library.search(&params.q)
.limit(params.limit.unwrap_or(10))
.execute().await
.unwrap();
Json(results)
}
fn app() -> Router {
Router::new()
.route("/search", get(search))
}
Migration Guide
From 0.x to 1.0
Key breaking changes and migration strategies:
// Old API
let library = PromptLibrary::from_directory("./prompts")?;
// New API
let mut library = PromptLibrary::new();
library.add_directory("./prompts")?;
// Old search API
let results = search("query")?;
// New search API
let engine = SearchEngine::new(config)?;
let results = engine.search(&SearchQuery::new("query")).await?;
This API reference provides comprehensive coverage of SwissArmyHammer’s Rust API. For additional examples and detailed documentation, see the generated rustdoc documentation and the examples directory in the repository.
Plugin Development
SwissArmyHammer features a flexible plugin architecture that allows you to extend functionality through custom filters, processors, and integrations. The plugin system enables seamless integration of external tools and custom processing logic.
Overview
The plugin system provides:
- Custom Filters: Transform and process prompt content
- Processing Plugins: Add new data processing capabilities
- Template Extensions: Extend the Liquid template engine
- Integration Plugins: Connect with external tools and services
- Workflow Actions: Create custom workflow step implementations
Plugin Architecture
Core Components
Plugin Interface: All plugins implement a common interface
pub trait Plugin: Send + Sync {
fn name(&self) -> &str;
fn description(&self) -> &str;
fn process(&self, input: &str, context: &PluginContext) -> PluginResult<String>;
}
Plugin Registry: Central management of available plugins
pub struct PluginRegistry {
plugins: HashMap<String, Box<dyn Plugin>>,
}
Plugin Context: Provides contextual information to plugins
pub struct PluginContext {
pub file_path: Option<String>,
pub language: Option<String>,
pub metadata: HashMap<String, String>,
pub environment: HashMap<String, String>,
}
Plugin Types
Filter Plugins: Transform text content
- Input processing and validation
- Output formatting and styling
- Content transformation and encoding
Data Plugins: Process structured data
- File format conversions
- Data extraction and parsing
- External API integrations
Workflow Plugins: Custom workflow actions
- External tool execution
- Conditional logic and branching
- State management and persistence
Built-in Plugins
Text Processing Filters
Trim Filter: Remove whitespace
use swissarmyhammer::prompt_filter::PromptFilter;
let filter = PromptFilter::Trim;
let result = filter.apply(" hello world ")?;
// Result: "hello world"
Case Transformation:
let uppercase = PromptFilter::Uppercase;
let lowercase = PromptFilter::Lowercase;
let titlecase = PromptFilter::TitleCase;
let result = uppercase.apply("hello world")?;
// Result: "HELLO WORLD"
String Manipulation:
let replace = PromptFilter::Replace {
pattern: "old".to_string(),
replacement: "new".to_string(),
};
let result = replace.apply("old text with old words")?;
// Result: "new text with new words"
Code Processing Filters
Syntax Highlighting:
let highlight = PromptFilter::CodeHighlight {
language: "rust".to_string(),
};
let code = r#"
fn main() {
println!("Hello, world!");
}
"#;
let result = highlight.apply(code)?;
// Result: HTML with syntax highlighting
Code Formatting:
let format = PromptFilter::CodeFormat {
language: "rust".to_string(),
style: FormatStyle::Standard,
};
let result = format.apply(unformatted_code)?;
File System Filters
File Reading:
let read_file = PromptFilter::FileRead {
path: "./src/main.rs".to_string(),
};
let content = read_file.apply("")?;
// Result: File contents
Directory Listing:
let list_files = PromptFilter::ListFiles {
path: "./src".to_string(),
pattern: Some("*.rs".to_string()),
};
let files = list_files.apply("")?;
// Result: Newline-separated file paths
External Tool Integration
Shell Command Execution:
let shell = PromptFilter::Shell {
command: "git log --oneline -5".to_string(),
timeout: Some(10),
};
let output = shell.apply("")?;
// Result: Command output
HTTP Requests:
let http = PromptFilter::HttpGet {
url: "https://api.example.com/data".to_string(),
headers: HashMap::new(),
timeout: Some(30),
};
let response = http.apply("")?;
// Result: HTTP response body
Creating Custom Plugins
Basic Plugin Implementation
use swissarmyhammer::plugins::{Plugin, PluginContext, PluginResult};
#[derive(Debug)]
pub struct ReverseStringPlugin;
impl Plugin for ReverseStringPlugin {
fn name(&self) -> &str {
"reverse"
}
fn description(&self) -> &str {
"Reverses the input string"
}
fn process(&self, input: &str, _context: &PluginContext) -> PluginResult<String> {
Ok(input.chars().rev().collect())
}
}
// Usage in templates:
// {{ content | reverse }}
Advanced Plugin with Context
use swissarmyhammer::plugins::*;
use std::fs;
#[derive(Debug)]
pub struct ProjectInfoPlugin;
impl Plugin for ProjectInfoPlugin {
fn name(&self) -> &str {
"project_info"
}
fn description(&self) -> &str {
"Extracts project information from context"
}
fn process(&self, _input: &str, context: &PluginContext) -> PluginResult<String> {
let mut info = Vec::new();
// Get language from context
if let Some(lang) = &context.language {
info.push(format!("Language: {}", lang));
}
// Get file path info
if let Some(path) = &context.file_path {
if let Some(name) = std::path::Path::new(path).file_name() {
info.push(format!("File: {}", name.to_string_lossy()));
}
}
// Check for project files
if std::path::Path::new("Cargo.toml").exists() {
info.push("Project: Rust".to_string());
} else if std::path::Path::new("package.json").exists() {
info.push("Project: Node.js".to_string());
}
Ok(info.join("\n"))
}
}
Error Handling in Plugins
use swissarmyhammer::plugins::*;
#[derive(Debug)]
pub struct ValidatingPlugin;
impl Plugin for ValidatingPlugin {
fn name(&self) -> &str {
"validate_json"
}
fn description(&self) -> &str {
"Validates and formats JSON content"
}
fn process(&self, input: &str, _context: &PluginContext) -> PluginResult<String> {
match serde_json::from_str::<serde_json::Value>(input) {
Ok(value) => {
// Format with indentation
match serde_json::to_string_pretty(&value) {
Ok(formatted) => Ok(formatted),
Err(e) => Err(PluginError::ProcessingError {
message: format!("Failed to format JSON: {}", e),
source: Some(Box::new(e)),
}),
}
}
Err(e) => Err(PluginError::ValidationError {
message: format!("Invalid JSON: {}", e),
input_excerpt: input.chars().take(100).collect(),
}),
}
}
}
Plugin Registration and Usage
Registering Plugins
use swissarmyhammer::plugins::PluginRegistry;
// Create registry with built-in plugins
let mut registry = PluginRegistry::with_builtin_plugins();
// Register custom plugins
registry.register(Box::new(ReverseStringPlugin))?;
registry.register(Box::new(ProjectInfoPlugin))?;
registry.register(Box::new(ValidatingPlugin))?;
// Use in prompt library
let library = PromptLibrary::new()
.with_plugin_registry(registry);
Using Plugins in Templates
<!-- Basic usage -->
{{ content | reverse }}
<!-- Chaining filters -->
{{ code | trim | code_highlight: "rust" | reverse }}
<!-- With parameters -->
{{ json_data | validate_json }}
<!-- Conditional usage -->
{% if language == "rust" %}
{{ code | code_format: "rust" }}
{% else %}
{{ code | trim }}
{% endif %}
<!-- Complex processing -->
{{ file_path | file_read | code_highlight: language | trim }}
Dynamic Plugin Loading
use swissarmyhammer::plugins::{PluginLoader, PluginConfig};
// Load plugins from directory
let loader = PluginLoader::new();
let plugins = loader.load_from_directory("./plugins")?;
// Load with configuration
let config = PluginConfig {
allow_unsafe: false,
timeout: Some(30),
memory_limit: Some(100 * 1024 * 1024), // 100MB
..Default::default()
};
let plugins = loader.load_with_config("./plugins", config)?;
// Register loaded plugins
for plugin in plugins {
registry.register(plugin)?;
}
Advanced Plugin Patterns
Stateful Plugins
use std::sync::{Arc, Mutex};
use std::collections::HashMap;
#[derive(Debug)]
pub struct CachingPlugin {
cache: Arc<Mutex<HashMap<String, String>>>,
}
impl CachingPlugin {
pub fn new() -> Self {
Self {
cache: Arc::new(Mutex::new(HashMap::new())),
}
}
}
impl Plugin for CachingPlugin {
fn name(&self) -> &str {
"cache"
}
fn description(&self) -> &str {
"Caches expensive computations"
}
fn process(&self, input: &str, context: &PluginContext) -> PluginResult<String> {
let cache_key = format!("{}:{}", input, context.file_path.as_deref().unwrap_or(""));
// Check cache first
{
let cache = self.cache.lock().unwrap();
if let Some(cached) = cache.get(&cache_key) {
return Ok(cached.clone());
}
}
// Expensive computation
let result = expensive_computation(input)?;
// Store in cache
{
let mut cache = self.cache.lock().unwrap();
cache.insert(cache_key, result.clone());
}
Ok(result)
}
}
Async Plugin Processing
use tokio::runtime::Runtime;
#[derive(Debug)]
pub struct AsyncPlugin {
runtime: Runtime,
}
impl AsyncPlugin {
pub fn new() -> PluginResult<Self> {
let runtime = Runtime::new()
.map_err(|e| PluginError::InitializationError {
message: format!("Failed to create async runtime: {}", e),
source: Some(Box::new(e)),
})?;
Ok(Self { runtime })
}
async fn async_process(&self, input: &str) -> Result<String, Box<dyn std::error::Error>> {
// Async operations like HTTP requests, database queries, etc.
let client = reqwest::Client::new();
let response = client
.post("https://api.example.com/process")
.body(input.to_string())
.send()
.await?
.text()
.await?;
Ok(response)
}
}
impl Plugin for AsyncPlugin {
fn name(&self) -> &str {
"async_processor"
}
fn description(&self) -> &str {
"Processes input asynchronously"
}
fn process(&self, input: &str, _context: &PluginContext) -> PluginResult<String> {
self.runtime.block_on(self.async_process(input))
.map_err(|e| PluginError::ProcessingError {
message: format!("Async processing failed: {}", e),
source: Some(e),
})
}
}
Configuration-Based Plugins
use serde::{Deserialize, Serialize};
#[derive(Debug, Deserialize, Serialize)]
pub struct DatabaseConfig {
pub connection_string: String,
pub timeout: u64,
pub pool_size: u32,
}
#[derive(Debug)]
pub struct DatabasePlugin {
config: DatabaseConfig,
// connection pool, etc.
}
impl DatabasePlugin {
pub fn new(config: DatabaseConfig) -> PluginResult<Self> {
// Initialize database connection
Ok(Self { config })
}
pub fn from_config_file<P: AsRef<std::path::Path>>(path: P) -> PluginResult<Self> {
let content = std::fs::read_to_string(path)
.map_err(|e| PluginError::ConfigError {
message: format!("Failed to read config file: {}", e),
source: Some(Box::new(e)),
})?;
let config: DatabaseConfig = toml::from_str(&content)
.map_err(|e| PluginError::ConfigError {
message: format!("Failed to parse config: {}", e),
source: Some(Box::new(e)),
})?;
Self::new(config)
}
}
impl Plugin for DatabasePlugin {
fn name(&self) -> &str {
"database_query"
}
fn description(&self) -> &str {
"Executes database queries"
}
fn process(&self, input: &str, _context: &PluginContext) -> PluginResult<String> {
// Execute SQL query and return results
// Implementation depends on database driver
todo!("Implement database query execution")
}
}
Testing Plugins
Unit Testing
#[cfg(test)]
mod tests {
use super::*;
use swissarmyhammer::plugins::PluginContext;
#[test]
fn test_reverse_plugin() {
let plugin = ReverseStringPlugin;
let context = PluginContext::default();
let result = plugin.process("hello", &context).unwrap();
assert_eq!(result, "olleh");
}
#[test]
fn test_plugin_with_context() {
let plugin = ProjectInfoPlugin;
let context = PluginContext {
language: Some("rust".to_string()),
file_path: Some("src/main.rs".to_string()),
..Default::default()
};
let result = plugin.process("", &context).unwrap();
assert!(result.contains("Language: rust"));
assert!(result.contains("File: main.rs"));
}
#[test]
fn test_error_handling() {
let plugin = ValidatingPlugin;
let context = PluginContext::default();
// Valid JSON
let valid_json = r#"{"name": "test"}"#;
let result = plugin.process(valid_json, &context);
assert!(result.is_ok());
// Invalid JSON
let invalid_json = r#"{"name": "test""#;
let result = plugin.process(invalid_json, &context);
assert!(result.is_err());
}
}
Integration Testing
#[cfg(test)]
mod integration_tests {
use super::*;
use swissarmyhammer::prelude::*;
#[test]
fn test_plugin_in_template() {
let mut registry = PluginRegistry::new();
registry.register(Box::new(ReverseStringPlugin)).unwrap();
let library = PromptLibrary::new()
.with_plugin_registry(registry);
let template = "{{ content | reverse }}";
let context = HashMap::from([
("content".to_string(), "hello world".to_string()),
]);
let result = library.render_template(template, &context).unwrap();
assert_eq!(result, "dlrow olleh");
}
}
Best Practices
Plugin Development
Error Handling:
- Use descriptive error messages
- Provide context about what went wrong
- Include suggestions for fixing issues
- Handle edge cases gracefully
Performance:
- Cache expensive computations
- Use appropriate data structures
- Implement timeout mechanisms
- Monitor memory usage
Security:
- Validate all inputs
- Sanitize file paths
- Limit resource usage
- Avoid executing arbitrary code
Plugin Distribution
Documentation:
- Provide clear usage examples
- Document configuration options
- Include troubleshooting guides
- Maintain API compatibility
Packaging:
# Cargo.toml for plugin crate
[package]
name = "sah-plugin-example"
version = "0.1.0"
[dependencies]
swissarmyhammer = "0.1"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.0", optional = true }
[features]
default = []
async = ["tokio"]
Plugin Manifest:
# plugin.toml
[plugin]
name = "example-plugin"
version = "0.1.0"
description = "Example plugin for SwissArmyHammer"
author = "Your Name <email@example.com>"
[plugin.capabilities]
filters = ["reverse", "project_info"]
processors = ["validate_json"]
[plugin.requirements]
min_sah_version = "0.1.0"
features = ["async"]
Plugin Ecosystem
Community Plugins
Popular community-developed plugins:
Development Tools:
- Code formatters and linters
- Git integration plugins
- CI/CD workflow helpers
- Documentation generators
External Integrations:
- API clients for popular services
- Database connectors
- Cloud platform integrations
- Monitoring and logging tools
Content Processing:
- Markdown processors
- Image manipulation tools
- Data format converters
- Template engines
Plugin Registry
# Install plugins from registry
sah plugin install reverse-string
sah plugin install database-query
# List installed plugins
sah plugin list
# Update plugins
sah plugin update
# Remove plugins
sah plugin remove reverse-string
Troubleshooting
Common Issues
Plugin Not Found:
- Verify plugin is registered in registry
- Check plugin name spelling
- Ensure plugin is loaded before use
Processing Errors:
- Check plugin logs for error details
- Validate input data format
- Verify plugin configuration
- Test plugin in isolation
Performance Issues:
- Profile plugin execution time
- Check for memory leaks
- Optimize expensive operations
- Implement caching where appropriate
Debug Mode
use swissarmyhammer::plugins::{PluginRegistry, DebugConfig};
let debug_config = DebugConfig {
enable_logging: true,
log_level: LogLevel::Debug,
trace_execution: true,
dump_context: true,
};
let registry = PluginRegistry::with_debug(debug_config);
The plugin system enables unlimited extensibility of SwissArmyHammer, allowing you to integrate with any tool, service, or processing pipeline while maintaining type safety and performance.
Custom Workflows
SwissArmyHammer’s workflow system enables you to create sophisticated, state-driven automation sequences that combine multiple tools and actions. Custom workflows allow you to encode complex development processes, automate repetitive tasks, and ensure consistent execution of multi-step procedures.
Overview
Custom workflows provide:
- State-based execution: Define workflows as state machines with transitions
- Action composition: Combine multiple actions in sequence or parallel
- Conditional logic: Branch execution based on conditions and variables
- Error handling: Robust error recovery and rollback mechanisms
- Template integration: Use Liquid templates throughout workflow definitions
- External integrations: Execute shell commands, API calls, and tool invocations
Workflow Fundamentals
Workflow Structure
Workflows are defined as Markdown files with YAML front matter:
---
name: feature-development
description: Complete feature development workflow
initial_state: research
variables:
feature_name: ""
complexity: "medium"
---
# Feature Development Workflow
This workflow guides through the complete feature development process.
## States
### research
**Description**: Research existing implementations and patterns
**Actions**:
- Search codebase for related functionality
- Create research memo with findings
- Identify dependencies and requirements
**Transitions**:
- `found_examples` → `design`
- `no_examples` → `architecture_review`
### design
**Description**: Create detailed design specifications
**Actions**:
- Create design document
- Review with team
- Update issue with design details
**Transitions**:
- `approved` → `implement`
- `needs_revision` → `design`
### implement
**Description**: Implement the feature
**Actions**:
- Create feature branch
- Implement core functionality
- Add tests and documentation
**Transitions**:
- `complete` → `review`
- `blocked` → `research`
Core Components
States: Discrete phases of workflow execution
states:
research:
description: "Initial research and planning"
actions:
- type: search_query
params:
query: "{{ feature_name }} implementation"
- type: create_memo
params:
title: "Research: {{ feature_name }}"
transitions:
complete: "design"
blocked: "help_needed"
Actions: Individual operations within states
actions:
- type: shell_command
params:
command: "git checkout -b feature/{{ feature_name }}"
- type: create_issue
params:
name: "{{ feature_name }}"
content: "{{ issue_template }}"
Transitions: Movement between states based on conditions
transitions:
success: "next_state"
failure: "error_handling"
timeout: "manual_review"
Built-in Actions
Issue Management Actions
Create Issue:
- type: create_issue
params:
name: "FEATURE_{{ timestamp }}_{{ feature_name | slugify }}"
content: |
# {{ feature_name }}
## Description
{{ description }}
## Requirements
{{ requirements }}
Update Issue:
- type: update_issue
params:
name: "{{ current_issue }}"
content: |
## Progress Update
{{ progress_summary }}
append: true
Work on Issue:
- type: issue_work
params:
name: "{{ issue_name }}"
Memoranda Actions
Create Memo:
- type: create_memo
params:
title: "{{ memo_title }}"
content: |
# {{ topic }}
## Key Points
{{ key_points }}
## Action Items
{{ action_items }}
Search Memos:
- type: search_memos
params:
query: "{{ search_terms }}"
limit: 5
output_var: "related_memos"
Search Actions
Semantic Search:
- type: search_query
params:
query: "{{ feature_name }} {{ technology_stack }}"
limit: 10
output_var: "search_results"
Index Files:
- type: search_index
params:
patterns:
- "src/**/*.rs"
- "lib/**/*.rs"
force: false
Shell Actions
Execute Commands:
- type: shell_command
params:
command: "cargo test {{ test_pattern }}"
timeout: 300
working_directory: "{{ project_root }}"
output_var: "test_results"
Conditional Execution:
- type: shell_command
params:
command: "git push origin {{ branch_name }}"
condition: "{{ auto_push == true }}"
File Operations
Read File:
- type: read_file
params:
path: "{{ config_file }}"
output_var: "config_content"
Write File:
- type: write_file
params:
path: "{{ output_file }}"
content: |
# Generated Configuration
{{ config_template | render }}
Template Actions
Render Template:
- type: render_template
params:
template: "{{ template_name }}"
context:
project: "{{ project_name }}"
author: "{{ author_name }}"
output_var: "rendered_content"
Advanced Workflow Patterns
Parallel Execution
Execute multiple actions simultaneously:
states:
setup:
description: "Parallel initialization tasks"
actions:
- type: parallel
actions:
- type: search_index
params:
patterns: ["**/*.rs"]
- type: shell_command
params:
command: "cargo check"
- type: create_memo
params:
title: "Setup Started"
content: "Beginning feature setup..."
transitions:
complete: "development"
Conditional Branching
Branch execution based on conditions:
states:
analysis:
description: "Analyze codebase complexity"
actions:
- type: search_query
params:
query: "{{ feature_type }}"
output_var: "existing_implementations"
- type: conditional
condition: "{{ existing_implementations | size > 5 }}"
then:
- type: set_variable
params:
complexity: "high"
else:
- type: set_variable
params:
complexity: "low"
transitions:
high_complexity: "architecture_review"
low_complexity: "direct_implementation"
Error Handling and Retry
Handle failures gracefully:
states:
build_and_test:
description: "Build and run tests"
actions:
- type: shell_command
params:
command: "cargo build"
retry:
attempts: 3
delay: 5
on_error: "build_failed"
- type: shell_command
params:
command: "cargo test"
on_error: "test_failed"
transitions:
success: "deployment"
build_failed: "dependency_check"
test_failed: "fix_tests"
Loop Constructs
Iterate over collections or repeat actions:
states:
process_files:
description: "Process each source file"
actions:
- type: for_each
collection: "{{ source_files }}"
item_var: "current_file"
actions:
- type: shell_command
params:
command: "rustfmt {{ current_file }}"
- type: create_memo
params:
title: "Processed {{ current_file }}"
transitions:
complete: "review_changes"
Variable Management
Variable Declaration
Define workflow variables with types and defaults:
variables:
feature_name:
type: string
required: true
description: "Name of the feature to implement"
complexity:
type: enum
values: ["low", "medium", "high"]
default: "medium"
auto_deploy:
type: boolean
default: false
team_members:
type: array
default: []
config:
type: object
default:
timeout: 300
retries: 3
Variable Scoping
Variables have different scopes:
# Global variables (available throughout workflow)
variables:
project_name: "my-project"
# State-local variables
states:
research:
variables:
search_terms: "{{ feature_name }} implementation"
actions:
- type: search_query
params:
query: "{{ search_terms }}" # Uses local variable
Dynamic Variable Assignment
Set variables during execution:
actions:
- type: shell_command
params:
command: "git rev-parse --short HEAD"
output_var: "commit_hash"
- type: set_variable
params:
branch_name: "feature/{{ feature_name }}-{{ commit_hash }}"
- type: calculate
expression: "{{ file_count * complexity_multiplier }}"
output_var: "estimated_time"
Template Integration
Liquid Template Usage
Use Liquid templates throughout workflow definitions:
actions:
- type: create_issue
params:
name: "{{ issue_type | upcase }}_{{ '%03d' | sprintf: issue_number }}_{{ feature_name | slugify }}"
content: |
# {{ feature_name | title }}
**Type**: {{ issue_type | capitalize }}
**Priority**: {{ priority | default: "normal" }}
**Assigned**: {{ assignee | default: "unassigned" }}
## Description
{{ description | strip | default: "No description provided" }}
## Acceptance Criteria
{% for criterion in acceptance_criteria %}
- [ ] {{ criterion }}
{% endfor %}
{% if related_issues %}
## Related Issues
{% for issue in related_issues %}
- {{ issue }}
{% endfor %}
{% endif %}
Custom Filters
Define workflow-specific filters:
filters:
slugify:
description: "Convert string to URL-safe slug"
implementation: |
{{ input | downcase | replace: ' ', '-' | replace: '[^a-z0-9\-]', '' }}
format_duration:
description: "Format seconds as human-readable duration"
implementation: |
{% assign hours = input | divided_by: 3600 %}
{% assign minutes = input | modulo: 3600 | divided_by: 60 %}
{% if hours > 0 %}{{ hours }}h {% endif %}{{ minutes }}m
Workflow Composition
Subworkflows
Break complex workflows into reusable components:
# main-workflow.md
states:
setup:
actions:
- type: execute_workflow
params:
workflow: "setup-environment"
variables:
project_type: "rust"
transitions:
complete: "development"
# setup-environment.md
---
name: setup-environment
description: Environment setup workflow
---
states:
initialize:
actions:
- type: shell_command
params:
command: "rustup update"
- type: shell_command
params:
command: "cargo install cargo-edit"
Workflow Inheritance
Extend base workflows with specific customizations:
# base-development.md
---
name: base-development
description: Base development workflow
---
states:
setup:
actions:
- type: create_issue
- type: issue_work
implement:
actions:
- type: placeholder # To be overridden
finalize:
actions:
- type: issue_complete
# rust-development.md
---
name: rust-development
extends: base-development
description: Rust-specific development workflow
---
states:
implement:
actions:
- type: shell_command
params:
command: "cargo check"
- type: shell_command
params:
command: "cargo test"
- type: shell_command
params:
command: "cargo fmt"
Testing and Debugging
Workflow Testing
Test workflows in isolation:
# test-workflow.md
---
name: test-feature-workflow
description: Test version of feature workflow
test_mode: true
---
# Override external dependencies for testing
test_overrides:
shell_command:
mock_output: "Success"
create_issue:
mock_response: { "name": "TEST_001", "id": "12345" }
states:
test_setup:
actions:
- type: assert_variable
params:
variable: "feature_name"
expected: "test-feature"
Debug Mode
Enable detailed execution logging:
# Run workflow with debug output
sah workflow execute feature-development --debug --variables feature_name=user-auth
# Step through workflow interactively
sah workflow step feature-development --interactive
Validation
Validate workflow definitions:
# Validate syntax and structure
sah workflow validate feature-development.md
# Check variable dependencies
sah workflow check --variables feature_name=test complexity=high
Real-World Examples
Complete Feature Development
---
name: full-feature-development
description: Complete feature development lifecycle
initial_state: planning
variables:
feature_name: { type: string, required: true }
estimated_hours: { type: number, default: 8 }
---
states:
planning:
description: "Research and planning phase"
actions:
- type: search_query
params:
query: "{{ feature_name }} existing implementation"
output_var: "existing_code"
- type: create_memo
params:
title: "Feature Planning: {{ feature_name }}"
content: |
# {{ feature_name | title }} Planning
## Research Findings
{{ existing_code | format_search_results }}
## Estimated Effort
{{ estimated_hours }} hours
## Next Steps
- Create detailed design
- Break down into tasks
- Begin implementation
- type: create_issue
params:
name: "FEATURE_{{ '%03d' | sprintf: feature_counter }}_{{ feature_name | slugify }}"
content: |
# {{ feature_name | title }}
## Planning Complete
See memo: {{ memo_id }}
## Estimated Effort
{{ estimated_hours }} hours
output_var: "feature_issue"
transitions:
ready: "implementation"
needs_research: "deep_research"
implementation:
description: "Core implementation phase"
actions:
- type: issue_work
params:
name: "{{ feature_issue.name }}"
- type: shell_command
params:
command: "cargo check"
output_var: "check_result"
- type: conditional
condition: "{{ check_result.exit_code != 0 }}"
then:
- type: update_issue
params:
name: "{{ feature_issue.name }}"
content: "\n## Build Issues\n{{ check_result.stderr }}"
append: true
else:
- type: shell_command
params:
command: "cargo test"
output_var: "test_result"
transitions:
tests_pass: "documentation"
tests_fail: "fix_tests"
build_fail: "fix_build"
documentation:
description: "Add documentation and examples"
actions:
- type: create_memo
params:
title: "{{ feature_name }} Documentation"
content: |
# {{ feature_name | title }} Implementation
## Overview
{{ feature_description }}
## API Documentation
{{ api_docs | generate_from_code }}
## Usage Examples
{{ usage_examples }}
- type: shell_command
params:
command: "cargo doc"
transitions:
complete: "review"
review:
description: "Code review and finalization"
actions:
- type: shell_command
params:
command: "git add -A && git commit -m 'Complete {{ feature_name }} implementation'"
- type: update_issue
params:
name: "{{ feature_issue.name }}"
content: |
## Implementation Complete
- ✅ Core functionality implemented
- ✅ Tests passing
- ✅ Documentation added
- ✅ Ready for review
append: true
- type: issue_complete
params:
name: "{{ feature_issue.name }}"
transitions:
approved: "merge"
needs_changes: "implementation"
merge:
description: "Merge completed feature"
actions:
- type: issue_merge
params:
name: "{{ feature_issue.name }}"
delete_branch: true
- type: create_memo
params:
title: "Feature Complete: {{ feature_name }}"
content: |
# {{ feature_name | title }} - Complete
**Status**: ✅ Merged and deployed
**Time Spent**: {{ actual_hours }} hours
**Commits**: {{ commit_count }}
## Lessons Learned
{{ lessons_learned }}
Bug Fix Workflow
---
name: bug-fix-workflow
description: Systematic bug fixing process
initial_state: reproduce
variables:
bug_description: { type: string, required: true }
severity: { type: enum, values: ["low", "medium", "high", "critical"], default: "medium" }
---
states:
reproduce:
description: "Reproduce and understand the bug"
actions:
- type: create_issue
params:
name: "BUG_{{ timestamp }}_{{ bug_description | slugify }}"
content: |
# Bug: {{ bug_description }}
**Severity**: {{ severity }}
**Reported**: {{ timestamp | date: '%Y-%m-%d %H:%M' }}
## Description
{{ bug_description }}
## Reproduction Steps
- [ ] Step 1
- [ ] Step 2
- [ ] Step 3
## Expected Behavior
## Actual Behavior
## Environment
- OS: {{ os_info }}
- Version: {{ app_version }}
output_var: "bug_issue"
- type: search_query
params:
query: "{{ bug_description }} error handling"
output_var: "related_code"
transitions:
reproduced: "investigate"
cannot_reproduce: "needs_info"
investigate:
description: "Investigate root cause"
actions:
- type: create_memo
params:
title: "Bug Investigation: {{ bug_description }}"
content: |
# Bug Analysis
## Related Code
{{ related_code | format_search_results }}
## Hypothesis
{{ investigation_notes }}
## Potential Fixes
{{ fix_options }}
- type: issue_work
params:
name: "{{ bug_issue.name }}"
transitions:
root_cause_found: "fix"
need_more_info: "reproduce"
fix:
description: "Implement and test fix"
actions:
- type: shell_command
params:
command: "cargo test {{ test_pattern }}"
output_var: "pre_fix_tests"
- type: conditional
condition: "{{ pre_fix_tests.exit_code == 0 }}"
then:
- type: update_issue
params:
name: "{{ bug_issue.name }}"
content: "\n## Fix Applied\n{{ fix_description }}\n"
append: true
- type: shell_command
params:
command: "cargo test"
output_var: "post_fix_tests"
transitions:
tests_pass: "validate"
tests_fail: "investigate"
validate:
description: "Validate fix and prepare for merge"
actions:
- type: shell_command
params:
command: "cargo build --release"
- type: update_issue
params:
name: "{{ bug_issue.name }}"
content: |
## Fix Validated
- ✅ Tests passing
- ✅ Build successful
- ✅ Manual testing complete
append: true
- type: issue_complete
params:
name: "{{ bug_issue.name }}"
transitions:
validated: "deploy"
deploy:
description: "Deploy fix"
actions:
- type: issue_merge
params:
name: "{{ bug_issue.name }}"
delete_branch: true
- type: create_memo
params:
title: "Bug Fixed: {{ bug_description }}"
content: |
# Bug Fix Complete
**Issue**: {{ bug_issue.name }}
**Resolution Time**: {{ resolution_time }}
## Root Cause
{{ root_cause }}
## Fix Description
{{ fix_description }}
## Prevention
{{ prevention_measures }}
Best Practices
Workflow Design
State Granularity:
- Keep states focused on single responsibilities
- Use meaningful state names that describe the current phase
- Avoid overly complex states with too many actions
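For example, rather than one state that builds, tests, and reports in a single step, smaller focused states keep every transition meaningful. A minimal sketch using the state syntax from the examples above (state and transition names are illustrative):
```yaml
states:
  build:
    description: "Compile the project"
    actions:
      - type: shell_command
        params:
          command: "cargo build"
    transitions:
      success: "test"          # illustrative transition names
      failure: "build_failed"
  test:
    description: "Run the test suite"
    actions:
      - type: shell_command
        params:
          command: "cargo test"
    transitions:
      success: "report"
      failure: "test_failed"
```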
Error Handling:
- Define clear error transitions for each state
- Implement retry logic for transient failures
- Provide rollback mechanisms for critical operations
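A sketch of these ideas using the transition syntax from the examples above; the `transient_failure`/`fatal_failure` transition names and the self-referencing retry are illustrative, not built-in keywords:
```yaml
states:
  deploy:
    description: "Deploy the build artifact"
    actions:
      - type: shell_command
        params:
          command: "helm upgrade --install app ./charts/app"
        output_var: "deploy_result"
    transitions:
      success: "verify"
      transient_failure: "deploy"   # re-enter the state to retry flaky failures
      fatal_failure: "rollback"     # critical operations get an explicit rollback path
  rollback:
    description: "Roll back to the previous release"
    actions:
      - type: shell_command
        params:
          command: "helm rollback app"
    transitions:
      complete: "failed"
```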
Variable Management:
- Use descriptive variable names
- Provide sensible defaults where possible
- Document variable purposes and types
- Validate inputs early in the workflow
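Using the inline variable syntax from the real-world examples above, a well-documented declaration block might look like this (names and defaults are illustrative):
```yaml
variables:
  feature_name: { type: string, required: true }   # documented and validated before the first state runs
  estimated_hours: { type: number, default: 8 }    # sensible default
  severity: { type: enum, values: ["low", "medium", "high", "critical"], default: "medium" }
```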
Testing Strategy
Unit Testing:
- Test individual actions in isolation
- Mock external dependencies
- Verify variable transformations
- Test error conditions
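Combining the `test_mode`, `test_overrides`, and `assert_variable` constructs shown under Workflow Testing above, an isolated test for a single variable transformation might look like this (file name, state name, and values are illustrative); run it with `feature_name=login` supplied on the command line:
```yaml
# test-branch-name.md
---
name: test-branch-name
description: Unit test for the branch naming step
test_mode: true
---
test_overrides:
  shell_command:
    mock_output: "abc1234"   # stands in for the real `git rev-parse --short HEAD`
states:
  verify:
    actions:
      - type: shell_command
        params:
          command: "git rev-parse --short HEAD"   # mocked in test_mode
        output_var: "commit_hash"
      - type: set_variable
        params:
          branch_name: "feature/{{ feature_name }}-{{ commit_hash }}"
      - type: assert_variable
        params:
          variable: "branch_name"
          expected: "feature/login-abc1234"
```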
Integration Testing:
- Test complete workflow execution
- Verify state transitions work correctly
- Test with realistic data and scenarios
- Validate external integrations
Documentation
Inline Documentation:
- Document the purpose of each state
- Explain complex conditions and transitions
- Provide examples of variable usage
- Include troubleshooting notes
User Guides:
- Create step-by-step execution guides
- Document required variables and setup
- Provide examples of different execution scenarios
- Include troubleshooting common issues
Custom workflows transform SwissArmyHammer into a powerful automation platform, enabling you to codify complex development processes and ensure consistent execution across your team.
Basic Examples
Simple, practical examples to get you started with SwissArmyHammer prompts and workflows.
Simple Prompts
Task Helper
A basic prompt for general assistance:
File: ~/.swissarmyhammer/prompts/helper.md
---
title: Task Helper
description: General purpose task assistance
arguments:
- name: task
description: What you need help with
required: true
- name: detail_level
description: Level of detail needed
choices: ["brief", "detailed", "comprehensive"]
default: "detailed"
---
I need help with: **{{task}}**
{% if detail_level == "brief" %}
Please provide a concise answer with key points only.
{% elsif detail_level == "comprehensive" %}
Please provide a thorough explanation with examples, alternatives, and best practices.
{% else %}
Please provide a detailed explanation with practical steps.
{% endif %}
Focus on actionable advice and practical solutions.
Usage:
sah prompt test helper --var task="setting up a Rust project"
sah prompt test helper --var task="debugging memory leaks" --var detail_level="comprehensive"
Code Reviewer
A prompt for reviewing code:
File: ~/.swissarmyhammer/prompts/code-reviewer.md
---
title: Code Reviewer
description: Review code for quality and best practices
arguments:
- name: language
description: Programming language
required: true
choices: ["rust", "python", "javascript", "typescript", "go"]
- name: code
description: Code to review
required: true
- name: focus
description: Review focus areas
type: array
default: ["bugs", "performance", "style"]
---
## Code Review: {{language | capitalize}}
Please review this {{language}} code:
```{{language}}
{{code}}
```
Focus Areas
{% for area in focus %}
- {{area | capitalize}} {% endfor %}
Please provide:
- Overall Assessment - Quality rating and summary
- Specific Issues - Line-by-line feedback
- Improvements - Concrete suggestions
- Best Practices - {{language}} conventions
Make feedback constructive and specific.
**Usage**:
```bash
sah prompt test code-reviewer \
--var language="rust" \
--var code="fn main() { println!(\"Hello\"); }" \
--var focus='["bugs", "style"]'
```
Documentation Generator
Generate documentation for code:
File: ~/.swissarmyhammer/prompts/doc-gen.md
---
title: Documentation Generator
description: Generate documentation for code or APIs
arguments:
- name: type
description: Type of documentation
choices: ["api", "function", "class", "module"]
required: true
- name: name
description: Name of the item to document
required: true
- name: code
description: Code to document
required: false
- name: format
description: Output format
choices: ["markdown", "html", "rst"]
default: "markdown"
---
# Documentation for {{type | capitalize}}: {{name}}
{% if code %}
## Code
{{code}}
{% endif %}
Please generate comprehensive {{format}} documentation including:
{% if type == "api" %}
- Endpoint description
- Request/response schemas
- Example requests
- Error codes
- Authentication requirements
{% elsif type == "function" %}
- Purpose and behavior
- Parameters and types
- Return value
- Examples
- Edge cases
{% elsif type == "class" %}
- Class purpose
- Constructor parameters
- Public methods
- Properties
- Usage examples
{% else %}
- Module overview
- Key functions/classes
- Usage examples
- Dependencies
- Installation notes
{% endif %}
Use clear, professional language suitable for developers.
Simple Workflows
Code Review Workflow
A basic workflow for reviewing code changes:
File: ~/.swissarmyhammer/workflows/code-review.md
---
name: code-review
description: Simple code review workflow
initial_state: analyze
variables:
- name: language
description: Programming language
default: "rust"
---
## Code Review Workflow
### analyze
**Description**: Analyze the code for issues
**Actions:**
- prompt: code-reviewer language={{language}} code="$(cat src/main.rs)" focus='["bugs", "performance"]'
**Transitions:**
- Always → report
### report
**Description**: Generate review report
**Actions:**
- prompt: doc-gen type="function" name="review_summary" format="markdown"
**Transitions:**
- Always → complete
### complete
**Description**: Review completed
Usage:
sah flow run code-review --var language="rust"
Test and Build Workflow
Simple CI-like workflow:
File: ~/.swissarmyhammer/workflows/test-build.md
---
name: test-build
description: Run tests and build if they pass
initial_state: test
---
## Test and Build Workflow
### test
**Description**: Run test suite
**Actions:**
- shell: `cargo test`
**Transitions:**
- On success → build
- On failure → test-failed
### build
**Description**: Build the project
**Actions:**
- shell: `cargo build --release`
**Transitions:**
- On success → complete
- On failure → build-failed
### test-failed
**Description**: Handle test failures
**Actions:**
- prompt: helper task="debugging failed tests" detail_level="detailed"
**Transitions:**
- Always → failed
### build-failed
**Description**: Handle build failures
**Actions:**
- prompt: helper task="fixing build errors" detail_level="detailed"
**Transitions:**
- Always → failed
### failed
**Description**: Workflow failed
### complete
**Description**: All steps completed successfully
Issue Management Examples
Creating Issues
# Simple bug report
sah issue create --name "fix-memory-leak" --content "
# Memory Leak in Parser
## Description
Memory usage grows continuously when parsing large files.
## Steps to Reproduce
1. Parse file > 100MB
2. Monitor memory usage
3. Memory never gets freed
## Expected
Memory should be freed after parsing.
"
# Feature request
sah issue create --name "add-json-output" --content "
# Add JSON Output Format
## Description
Add --format json flag to all commands for machine-readable output.
## Acceptance Criteria
- [ ] All list commands support JSON
- [ ] All show commands support JSON
- [ ] JSON schema is documented
- [ ] Tests cover JSON output
"
Working with Issues
# Start working on an issue
sah issue work fix-memory-leak
# Update issue with progress
sah issue update fix-memory-leak --append --content "
## Progress Update
- Identified leak in tokenizer
- Need to add Drop implementation
"
# Complete the issue
sah issue complete fix-memory-leak --merge --delete-branch
Memoranda Examples
Meeting Notes
sah memo create --title "Team Standup 2024-01-15" --content "
# Team Standup - January 15, 2024
## Attendees
- Alice (Lead)
- Bob (Backend)
- Carol (Frontend)
## Progress
- Alice: Working on authentication system
- Bob: Database migration almost complete
- Carol: New UI components ready for review
## Blockers
- Need staging environment for testing
- Waiting for design approval on checkout flow
## Action Items
- [ ] Alice: Set up staging environment
- [ ] Bob: Review Carol's UI components
- [ ] Carol: Follow up with design team
"
# Search meeting notes
sah memo search "staging environment"
Technical Notes
sah memo create --title "Architecture Decision: Database Choice" --content "
# Database Choice for User Service
## Context
Need to choose database for new user management service.
## Options Considered
### PostgreSQL
**Pros**: ACID compliance, mature ecosystem, good performance
**Cons**: More complex setup, overkill for simple use cases
### SQLite
**Pros**: Simple setup, embedded, good for development
**Cons**: Not suitable for high concurrency
### MongoDB
**Pros**: Flexible schema, good for rapid prototyping
**Cons**: Eventual consistency, learning curve
## Decision
PostgreSQL - provides reliability and performance we need.
## Consequences
- Need to set up database infrastructure
- Team needs PostgreSQL training
- Migration strategy required for existing data
"
Search Examples
Indexing Code
# Index Rust project
sah search index "**/*.rs" --exclude "**/target/**"
# Index multiple languages
sah search index "**/*.{rs,py,js,ts}" --exclude "{**/target/**,**/node_modules/**,**/__pycache__/**}"
# Force re-index after major changes
sah search index "src/**/*.rs" --force
Searching Code
# Find error handling patterns
sah search query "error handling patterns"
# Find async/await usage
sah search query "async await implementation"
# Find database connection code
sah search query "database connection setup"
# Find specific API patterns
sah search query "REST API endpoint handlers" --limit 5
Configuration Examples
Basic Configuration
File: ~/.swissarmyhammer/sah.toml
[general]
auto_reload = true
default_timeout_ms = 30000
[logging]
level = "info"
format = "compact"
[template]
cache_size = 500
[search]
embedding_model = "nomic-embed-code"
max_file_size = 1048576
[workflow]
max_parallel_actions = 2
Project-Specific Configuration
File: ./.swissarmyhammer/sah.toml
[workflow]
# This project has resource constraints
max_parallel_actions = 1
[search]
# Index only specific directories
include_patterns = ["src/**/*.rs", "tests/**/*.rs"]
exclude_patterns = ["target/**", "**/*.bak"]
[issues]
# Use feature branch pattern
branch_pattern = "feature/{{name}}"
Environment Integration
Using Environment Variables
---
title: Environment-Aware Deploy
arguments:
- name: service
description: Service to deploy
required: true
---
Deploying {{service}} to {{NODE_ENV | default: "development"}}.
Target URL: {{DEPLOY_URL | default: "http://localhost:3000"}}
Configuration:
- Database: {{DATABASE_URL | default: "sqlite://local.db"}}
- Redis: {{REDIS_URL | default: "redis://localhost:6379"}}
{% if NODE_ENV == "production" %}
⚠️ **PRODUCTION DEPLOYMENT** - Extra care required!
{% endif %}
Shell Integration
#!/bin/bash
# deploy.sh - Integration script
set -e
echo "🔨 Running pre-deployment checks..."
sah flow run pre-deploy-checks
echo "🚀 Deploying application..."
sah prompt test deploy-prompt \
--var service="$SERVICE_NAME" \
--var environment="$NODE_ENV" \
--output deploy-plan.md
echo "📝 Creating deployment issue..."
sah issue create \
--name "deploy-$SERVICE_NAME-$(date +%Y%m%d)" \
--file deploy-plan.md
echo "✅ Deployment process initiated!"
These basic examples provide a foundation for building more complex prompts, workflows, and integrations with SwissArmyHammer.
Advanced Examples
Complex, real-world examples demonstrating advanced SwissArmyHammer features and patterns.
Advanced Prompts
Multi-Language Code Analyzer
A sophisticated prompt that analyzes code across multiple languages:
File: ~/.swissarmyhammer/prompts/multi-analyzer.md
---
title: Multi-Language Code Analyzer
description: Analyze code across different languages with unified reporting
version: "2.0"
arguments:
- name: files
description: Files to analyze (JSON array of {path, language, content})
type: array
required: true
- name: analysis_type
description: Type of analysis to perform
choices: ["security", "performance", "architecture", "comprehensive"]
default: "comprehensive"
- name: output_format
description: Output format for results
choices: ["markdown", "json", "html"]
default: "markdown"
- name: severity_threshold
description: Minimum severity to report
choices: ["info", "warning", "error", "critical"]
default: "warning"
---
# Multi-Language Code Analysis Report
{% assign total_files = files | size %}
{% assign languages = files | map: "language" | uniq %}
## Overview
- **Files Analyzed**: {{total_files}}
- **Languages**: {{languages | join: ", "}}
- **Analysis Type**: {{analysis_type | capitalize}}
- **Threshold**: {{severity_threshold | capitalize}}
{% for language in languages %}
{% assign lang_files = files | where: "language", language %}
## {{language | capitalize}} Analysis ({{lang_files | size}} files)
{% for file in lang_files %}
### {{file.path}}
{% case analysis_type %}
{% when "security" %}
Perform security analysis for {{language}}:
- Check for injection vulnerabilities
- Validate input sanitization
- Review authentication/authorization
- Check for hardcoded secrets
{% when "performance" %}
Perform performance analysis for {{language}}:
- Identify algorithmic inefficiencies
- Check memory usage patterns
- Review I/O operations
- Analyze concurrency issues
{% when "architecture" %}
Perform architectural analysis for {{language}}:
- Review design patterns usage
- Check separation of concerns
- Analyze dependencies
- Review error handling strategy
{% else %}
Perform comprehensive analysis for {{language}}:
- Security vulnerabilities
- Performance bottlenecks
- Architectural issues
- Code quality concerns
{% endcase %}
**Code:**
```{{language}}
{{file.content}}
```
{% endfor %} {% endfor %}
Analysis Instructions
For each file, provide:
- Summary - Overall code quality assessment
- Issues Found - Categorized by severity ({{severity_threshold}}+)
- Recommendations - Specific improvement suggestions
- Code Examples - Corrected code snippets where applicable
{% if output_format == "json" %} Format the response as valid JSON with this structure:
{
"summary": "Overall assessment",
"files": [
{
"path": "file_path",
"language": "language",
"issues": [
{
"severity": "error|warning|info",
"category": "security|performance|architecture|quality",
"line": 42,
"message": "Description of issue",
"recommendation": "How to fix it"
}
]
}
]
}
{% elsif output_format == "html" %} Format as clean HTML with proper styling and navigation. {% else %} Use clear markdown formatting with proper headers and code blocks. {% endif %}
### Intelligent Test Generator
Generate comprehensive test suites based on code analysis:
**File**: `~/.swissarmyhammer/prompts/test-generator.md`
```markdown
---
title: Intelligent Test Generator
description: Generate comprehensive test suites with edge cases
arguments:
- name: code
description: Source code to test
required: true
- name: language
description: Programming language
required: true
- name: test_framework
description: Testing framework to use
required: false
- name: coverage_goal
description: Target code coverage percentage
type: number
default: 90
- name: include_integration
description: Include integration tests
type: boolean
default: true
- name: include_performance
description: Include performance tests
type: boolean
default: false
---
# Test Suite Generation for {{language | capitalize}}
## Source Code Analysis
```{{language}}
{{code}}
```
Test Requirements
- Framework: {% if test_framework %}{{test_framework}}{% else %}Standard {{language}} testing{% endif %}
- Coverage Goal: {{coverage_goal}}%
- Integration Tests: {% if include_integration %}Yes{% else %}No{% endif %}
- Performance Tests: {% if include_performance %}Yes{% else %}No{% endif %}
Generate Tests
Please create a comprehensive test suite including:
1. Unit Tests
- Test all public functions/methods
- Cover happy path scenarios
- Test edge cases and boundary conditions
- Test error conditions and exception handling
- Validate input validation
2. Property-Based Tests (if applicable)
- Generate tests for invariant properties
- Test with random inputs
- Verify mathematical properties
{% if include_integration %}
3. Integration Tests
- Test component interactions
- Test external API integrations
- Test database operations
- Test file system operations {% endif %}
{% if include_performance %}
4. Performance Tests
- Benchmark critical functions
- Test memory usage
- Test with large datasets
- Measure execution time {% endif %}
5. Test Data & Fixtures
- Create realistic test data
- Mock external dependencies
- Set up test databases/files
Coverage Analysis
Aim for {{coverage_goal}}% coverage by testing:
- All code paths
- Conditional branches
- Loop iterations
- Exception handlers
{% case language %}
{% when "rust" %}
Use `cargo test` conventions with:
- `#[test]` attributes
- `assert_eq!`, `assert!` macros
- `#[should_panic]` for error cases
- `proptest` for property-based testing
{% when "python" %}
Use pytest conventions with:
- `test_` function prefixes
- `assert` statements
- `@pytest.fixture` for setup
- `@pytest.mark.parametrize` for data-driven tests
{% when "javascript" %}
Use Jest conventions with:
- `describe()` and `it()` blocks
- `expect().toBe()` assertions
- `beforeEach()`/`afterEach()` hooks
- Mock functions for dependencies
{% when "typescript" %}
Use Jest with TypeScript:
- Type-safe test code
- Interface mocking
- Generic test utilities
- Async/await testing patterns
{% endcase %}
Generate complete, runnable test code with clear documentation.
## Advanced Workflows
### Complete CI/CD Pipeline
A full-featured continuous integration and deployment workflow:
**File**: `~/.swissarmyhammer/workflows/ci-cd-pipeline.md`
```markdown
---
name: ci-cd-pipeline
description: Complete CI/CD pipeline with quality gates
version: "3.0"
initial_state: validate-pr
timeout_ms: 1800000 # 30 minutes
max_parallel: 6
variables:
- name: environment
description: Target deployment environment
choices: ["dev", "staging", "prod"]
default: "dev"
- name: skip_tests
description: Skip test execution (emergency deploys only)
type: boolean
default: false
- name: deployment_strategy
description: Deployment strategy
choices: ["rolling", "blue-green", "canary"]
default: "rolling"
- name: auto_rollback
description: Enable automatic rollback on failure
type: boolean
default: true
resources:
- name: test-db
type: docker-container
image: postgres:13
env:
POSTGRES_DB: testdb
POSTGRES_USER: test
POSTGRES_PASSWORD: test
cleanup: true
---
# CI/CD Pipeline Workflow
## validate-pr
**Description**: Validate pull request and setup
**Actions:**
- shell: `git fetch origin main`
- shell: `git diff --name-only origin/main...HEAD > changed-files.txt`
- conditional: Check if critical files changed
condition: file_contains("changed-files.txt", "Cargo.toml|package.json|requirements.txt")
true_state: dependency-check
- conditional: Security scan needed
condition: file_contains("changed-files.txt", ".rs|.py|.js|.ts")
true_action: shell: `security-scan.sh`
**Transitions:**
- If security issues found → security-review
- If dependencies changed → dependency-check
- Always → code-quality
## dependency-check
**Description**: Analyze dependency changes
**Actions:**
- shell: `cargo audit` (parallel, if Rust)
- shell: `npm audit` (parallel, if Node.js)
- shell: `safety check` (parallel, if Python)
- prompt: multi-analyzer files="$(cat changed-files.txt | head -10)" analysis_type="security"
**Transitions:**
- If vulnerabilities found → security-review
- Always → code-quality
## code-quality
**Description**: Run code quality checks
**Actions:**
- shell: `cargo fmt --check` (parallel, timeout: 30s)
- shell: `cargo clippy -- -D warnings` (parallel, timeout: 120s)
- shell: `eslint --ext .js,.ts .` (parallel, timeout: 60s)
- prompt: code-reviewer language="rust" code="$(git diff origin/main...HEAD)"
**Transitions:**
- If quality issues found → quality-review
- If skip_tests == true → build
- Always → test-suite
## test-suite
**Description**: Execute comprehensive test suite
**Actions:**
- fork: unit-tests
actions:
- shell: `cargo test --lib`
- shell: `npm test -- --coverage`
- shell: `pytest tests/unit/`
- fork: integration-tests
actions:
- shell: `cargo test --test integration`
- shell: `npm run test:integration`
- fork: e2e-tests
actions:
- shell: `npm run test:e2e`
- shell: `pytest tests/e2e/`
condition: environment != "dev"
**Transitions:**
- If any tests fail → test-failure
- When all forks complete → build
## build
**Description**: Build artifacts for deployment
**Actions:**
- shell: `docker build -t app:{{git.commit}} .`
- shell: `docker tag app:{{git.commit}} app:{{environment}}-latest`
- conditional: Multi-arch build
condition: environment == "prod"
true_action: shell: `docker buildx build --platform linux/amd64,linux/arm64 -t app:{{git.commit}} .`
**Transitions:**
- On success → security-scan
- On failure → build-failure
## security-scan
**Description**: Security scanning of built artifacts
**Actions:**
- shell: `trivy image app:{{git.commit}}`
- shell: `docker run --rm app:{{git.commit}} security-check.sh`
- prompt: multi-analyzer analysis_type="security" severity_threshold="error"
**Transitions:**
- If critical vulnerabilities → security-review
- Always → deploy-{{environment}}
## deploy-dev
**Description**: Deploy to development environment
**Actions:**
- shell: `kubectl config use-context dev`
- shell: `helm upgrade --install app ./charts/app --set image.tag={{git.commit}}`
- shell: `kubectl wait --for=condition=available --timeout=300s deployment/app`
**Transitions:**
- On success → smoke-tests
- On failure → deployment-failure
## deploy-staging
**Description**: Deploy to staging environment
**Actions:**
- shell: `kubectl config use-context staging`
- conditional: Blue-green deployment
condition: deployment_strategy == "blue-green"
true_workflow: blue-green-deploy
- conditional: Canary deployment
condition: deployment_strategy == "canary"
true_workflow: canary-deploy
- shell: `helm upgrade --install app ./charts/app --set image.tag={{git.commit}}`
**Transitions:**
- On success → staging-tests
- On failure → rollback-staging
## deploy-prod
**Description**: Deploy to production environment
**Actions:**
- shell: `kubectl config use-context prod`
- shell: `helm upgrade --install app ./charts/app --set image.tag={{git.commit}} --wait --timeout=10m`
- shell: `kubectl annotate deployment app deployment.kubernetes.io/revision={{git.commit}}`
**Transitions:**
- On success → production-verification
- On failure → rollback-production
## smoke-tests
**Description**: Run smoke tests against deployed application
**Actions:**
- wait: 30s # Allow deployment to stabilize
- shell: `curl -f http://app-{{environment}}.local/health`
- shell: `npm run test:smoke -- --env={{environment}}`
**Transitions:**
- On success → complete
- On failure → deployment-failure
## staging-tests
**Description**: Run full test suite against staging
**Actions:**
- shell: `npm run test:api -- --env=staging`
- shell: `npm run test:performance -- --env=staging`
- prompt: test-generator code="$(cat src/main.rs)" include_performance=true
**Transitions:**
- On success → staging-approval
- On failure → rollback-staging
## production-verification
**Description**: Verify production deployment
**Actions:**
- shell: `kubectl get pods -l app=myapp`
- shell: `curl -f https://api.myapp.com/health`
- shell: `npm run test:production-smoke`
- wait: 300s # Monitor for 5 minutes
- shell: `check-error-rates.sh`
**Transitions:**
- On success → complete
- If auto_rollback && errors detected → rollback-production
- On failure → production-incident
## Error States
## test-failure
**Description**: Handle test failures
**Actions:**
- prompt: helper task="analyzing test failures" detail_level="comprehensive"
- shell: `generate-test-report.sh > test-report.html`
- issue: create
name: "test-failure-{{git.commit | slice: 0, 8}}"
content: "Test failures in {{git.branch}}: $(cat test-failures.log)"
**Transitions:**
- Always → failed
## build-failure
**Description**: Handle build failures
**Actions:**
- prompt: multi-analyzer analysis_type="architecture" files="$(find . -name '*.rs' -o -name '*.toml')"
- shell: `docker logs $(docker ps -q) > build-logs.txt`
**Transitions:**
- Always → failed
## security-review
**Description**: Manual security review required
**Actions:**
- prompt: helper task="security review process" detail_level="comprehensive"
- issue: create
name: "security-review-{{execution.start_time | date: '%Y%m%d-%H%M'}}"
content: "Security review required for deployment to {{environment}}"
**Transitions:**
- Manual approval → continue-pipeline
- Always → blocked
## rollback-staging
**Description**: Rollback staging deployment
**Actions:**
- shell: `helm rollback app --namespace staging`
- shell: `kubectl wait --for=condition=available --timeout=300s deployment/app -n staging`
**Transitions:**
- Always → failed
## rollback-production
**Description**: Emergency production rollback
**Actions:**
- shell: `helm rollback app --namespace production`
- shell: `kubectl wait --for=condition=available --timeout=300s deployment/app -n production`
- shell: `send-alert.sh "Production rollback executed for {{git.commit}}"`
**Transitions:**
- Always → production-incident
## production-incident
**Description**: Handle production incidents
**Actions:**
- shell: `create-incident.sh --severity=high --title="Deployment failure {{git.commit}}"`
- prompt: helper task="production incident response" detail_level="comprehensive"
- issue: create
name: "prod-incident-{{execution.start_time | date: '%Y%m%d-%H%M'}}"
content: "Production incident during deployment of {{git.commit}} to {{environment}}"
**Transitions:**
- Always → failed
## staging-approval
**Description**: Wait for staging approval
**Actions:**
- prompt: helper task="staging deployment approval" detail_level="brief"
- wait: until approval_received("staging")
**Transitions:**
- On approval → deploy-prod
- On rejection → failed
## complete
**Description**: Pipeline completed successfully
**Actions:**
- shell: `update-deployment-status.sh --status=success --commit={{git.commit}}`
- memo: create
title: "Successful deployment {{git.commit | slice: 0, 8}}"
content: "Deployed {{git.commit}} to {{environment}} successfully at {{execution.start_time}}"
## failed
**Description**: Pipeline failed
**Actions:**
- shell: `update-deployment-status.sh --status=failed --commit={{git.commit}}`
- shell: `cleanup-resources.sh`
```
Microservices Orchestration
Complex workflow for managing microservices deployments:
File: ~/.swissarmyhammer/workflows/microservices-deploy.md
---
name: microservices-deploy
description: Orchestrate deployment of multiple microservices with dependencies
initial_state: dependency-analysis
variables:
- name: services
description: Services to deploy (JSON array)
type: array
required: true
- name: environment
description: Target environment
choices: ["dev", "staging", "prod"]
default: "dev"
- name: strategy
description: Deployment strategy
choices: ["sequential", "parallel", "dependency-order"]
default: "dependency-order"
---
## dependency-analysis
**Description**: Analyze service dependencies and create deployment order
**Actions:**
- shell: `analyze-dependencies.py {{services | join: " "}} > dependency-graph.json`
- prompt: multi-analyzer files="$(cat dependency-graph.json)" analysis_type="architecture"
**Transitions:**
- If strategy == "sequential" → deploy-sequential
- If strategy == "parallel" → deploy-parallel
- Always → deploy-dependency-order
## deploy-dependency-order
**Description**: Deploy services in dependency order
**Actions:**
- loop: Deploy each service layer
items: "{{dependency_layers}}"
state: deploy-service-layer
parallel: false
**Transitions:**
- When loop complete → integration-tests
- On any failure → rollback-all
## deploy-service-layer
**Description**: Deploy all services in current dependency layer
**Actions:**
- fork: Deploy services in parallel
items: "{{current_layer.services}}"
template: |
shell: `helm upgrade --install {{item.name}} ./charts/{{item.name}} --set image.tag={{item.version}} --namespace {{environment}}`
wait: until service_healthy("{{item.name}}")
**Transitions:**
- When all forks complete → next-layer
- On any failure → layer-failure
## integration-tests
**Description**: Run integration tests across all services
**Actions:**
- shell: `run-integration-tests.sh {{services | join: " "}} --env={{environment}}`
- prompt: test-generator code="$(cat integration-tests/*.py)" include_integration=true
**Transitions:**
- On success → service-mesh-config
- On failure → integration-failure
## service-mesh-config
**Description**: Configure service mesh and networking
**Actions:**
- shell: `kubectl apply -f service-mesh/{{environment}}/ --recursive`
- shell: `istioctl proxy-config cluster {{services | first}}-pod`
- wait: 60s # Allow mesh configuration to propagate
**Transitions:**
- Always → end-to-end-tests
## end-to-end-tests
**Description**: Run end-to-end tests across the entire system
**Actions:**
- shell: `npm run test:e2e -- --env={{environment}} --services="{{services | join: ','}}"`
- wait: 120s # Allow system to stabilize
**Transitions:**
- On success → complete
- On failure → system-failure
Advanced Template Patterns
Conditional Logic and Loops
{% comment %} Complex conditional logic {% endcomment %}
{% assign lang = language | default: "unknown" | downcase %}
{% assign is_compiled = false %}
{% assign is_interpreted = false %}
{% assign has_package_manager = false %}
{% case lang %}
{% when "rust", "go", "c", "cpp" %}
{% assign is_compiled = true %}
{% when "python", "javascript", "ruby" %}
{% assign is_interpreted = true %}
{% endcase %}
{% if lang == "rust" or lang == "javascript" or lang == "python" %}
{% assign has_package_manager = true %}
{% endif %}
## {{lang | capitalize}} Project Analysis
{% if is_compiled %}
### Compilation Requirements
- Build system: {% if lang == "rust" %}Cargo{% elsif lang == "go" %}Go modules{% else %}Make/CMake{% endif %}
- Optimization flags needed for production builds
{% endif %}
{% if has_package_manager %}
### Dependency Management
{% case lang %}
{% when "rust" %}
- Package manager: Cargo
- Manifest file: `Cargo.toml`
- Lock file: `Cargo.lock`
{% when "javascript" %}
- Package manager: npm/yarn/pnpm
- Manifest file: `package.json`
- Lock file: `package-lock.json`/`yarn.lock`
{% when "python" %}
- Package manager: pip/poetry/conda
- Manifest files: `requirements.txt`, `pyproject.toml`
- Lock file: `poetry.lock`
{% endcase %}
{% endif %}
{% comment %} Loop through complex data structures {% endcomment %}
{% assign grouped_issues = issues | group_by: "severity" %}
{% for group in grouped_issues %}
### {{group.name | capitalize}} Issues ({{group.items | size}})
{% for issue in group.items %}
{% assign icon = "ℹ️" %}
{% if issue.severity == "error" %}{% assign icon = "❌" %}{% endif %}
{% if issue.severity == "warning" %}{% assign icon = "⚠️" %}{% endif %}
- {{icon}} **{{issue.title}}** (Line {{issue.line}})
{{issue.description}}
{% if issue.fix %}
**Fix**: {{issue.fix}}
{% endif %}
{% endfor %}
{% endfor %}
Advanced Filter Usage
{% comment %} Custom filters and complex transformations {% endcomment %}
{% assign functions = code_analysis.functions | sort: "complexity" | reverse %}
{% assign high_complexity = functions | where: "complexity", "> 10" %}
## Complexity Analysis
### High Complexity Functions ({{high_complexity | size}})
{% for func in high_complexity %}
- **{{func.name}}** ({{func.complexity}} complexity)
- Lines: {{func.start_line}}-{{func.end_line}}
- Parameters: {{func.parameters | size}}
- Return type: {{func.return_type | default: "void"}}
{% if func.issues %}
- Issues: {{func.issues | map: "type" | uniq | join: ", "}}
{% endif %}
{% endfor %}
### Refactoring Suggestions
{% assign refactor_candidates = high_complexity | slice: 0, 3 %}
{% for func in refactor_candidates %}
{{forloop.index}}. **{{func.name}}**
- Current complexity: {{func.complexity}}
- Suggested approach: {{func.complexity | complexity_strategy}}
- Estimated effort: {{func.lines | lines_to_effort}} hours
{% endfor %}
### Code Metrics Summary
- Total functions: {{functions | size}}
- Average complexity: {{functions | map: "complexity" | sum | divided_by: functions.size | round: 1}}
- High complexity (>10): {{high_complexity | size}} ({{high_complexity | size | times: 100.0 | divided_by: functions.size | round: 1}}%)
- Maintainability index: {{code_analysis.maintainability | round: 1}}/100
These advanced examples demonstrate sophisticated patterns for complex real-world scenarios, showing how SwissArmyHammer can handle enterprise-level automation and analysis tasks.
Integration Examples
Real-world examples of integrating SwissArmyHammer with development tools, CI/CD systems, and workflows.
IDE Integration
VS Code Integration
Task Configuration
File: .vscode/tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "Test Prompt",
"type": "shell",
"command": "sah",
"args": [
"prompt",
"test",
"${input:promptName}",
"--var",
"file=${file}",
"--var",
"language=${input:language}"
],
"group": "test",
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Run Workflow",
"type": "shell",
"command": "sah",
"args": [
"flow",
"run",
"${input:workflowName}",
"--var",
"file=${file}"
],
"group": "build"
},
{
"label": "Create Issue from Selection",
"type": "shell",
"command": "sah",
"args": [
"issue",
"create",
"--name",
"${input:issueName}",
"--content",
"${selectedText}"
],
"group": "build"
}
],
"inputs": [
{
"id": "promptName",
"description": "Prompt name",
"type": "promptString"
},
{
"id": "workflowName",
"description": "Workflow name",
"type": "promptString"
},
{
"id": "language",
"description": "Programming language",
"type": "pickString",
"options": ["rust", "python", "javascript", "typescript", "go"]
},
{
"id": "issueName",
"description": "Issue name",
"type": "promptString"
}
]
}
Keybindings
File: .vscode/keybindings.json
[
{
"key": "ctrl+shift+p t",
"command": "workbench.action.tasks.runTask",
"args": "Test Prompt"
},
{
"key": "ctrl+shift+p w",
"command": "workbench.action.tasks.runTask",
"args": "Run Workflow"
},
{
"key": "ctrl+shift+p i",
"command": "workbench.action.tasks.runTask",
"args": "Create Issue from Selection"
}
]
Neovim Integration
File: ~/.config/nvim/lua/sah.lua
local M = {}
-- Test current prompt
function M.test_prompt()
local prompt_name = vim.fn.input("Prompt name: ")
local current_file = vim.fn.expand("%")
local filetype = vim.bo.filetype
local cmd = string.format(
"sah prompt test %s --var file=%s --var language=%s",
prompt_name, current_file, filetype
)
vim.cmd("split | terminal " .. cmd)
end
-- Create issue from visual selection
function M.create_issue()
local issue_name = vim.fn.input("Issue name: ")
local selected_text = vim.fn.getreg('"')
local cmd = string.format(
"sah issue create --name '%s' --content '%s'",
issue_name, selected_text
)
vim.fn.system(cmd)
print("Issue created: " .. issue_name)
end
-- Search semantic code
function M.semantic_search()
local query = vim.fn.input("Search query: ")
local cmd = string.format("sah search query '%s' --format json", query)
local result = vim.fn.system(cmd)
-- Show results in a new scratch buffer instead of clobbering the current one
vim.cmd("new")
vim.bo.buftype = "nofile"
vim.api.nvim_buf_set_lines(0, 0, -1, false, vim.split(result, "\n"))
end
return M
File: ~/.config/nvim/init.lua
local sah = require('sah')
-- Key mappings
vim.keymap.set('n', '<leader>pt', sah.test_prompt, { desc = 'Test SwissArmyHammer prompt' })
vim.keymap.set('v', '<leader>ic', sah.create_issue, { desc = 'Create issue from selection' })
vim.keymap.set('n', '<leader>ss', sah.semantic_search, { desc = 'Semantic code search' })
Git Integration
Git Hooks
Pre-commit Hook
File: .git/hooks/pre-commit
#!/bin/bash
set -e
echo "🔨 Running SwissArmyHammer pre-commit checks..."
# Validate all prompts and workflows
if ! sah validate --strict --format json > validation-results.json; then
  echo "❌ Validation failed. Fix issues before committing."
  cat validation-results.json | jq '.errors[]'
  exit 1
fi
# Run code review on changed files
git diff --cached --name-only --diff-filter=AM | grep -E '\.(rs|py|js|ts)$' > changed-code.txt
if [ -s changed-code.txt ]; then
echo "📝 Running code review on changed files..."
while IFS= read -r file; do
if [ -f "$file" ]; then
sah prompt test code-reviewer \
--var language="$(file-to-lang.sh "$file")" \
--var file="$file" \
--output "review-$file.md" \
--var focus='["bugs", "security"]'
fi
done < changed-code.txt
# Create review issue if problems found
if grep -q "ERROR\|CRITICAL" review-*.md 2>/dev/null; then
sah issue create \
--name "review-$(git rev-parse --short HEAD)" \
--content "$(cat review-*.md)"
echo "❌ Critical issues found. Review issue created."
rm -f review-*.md changed-code.txt validation-results.json
exit 1
fi
rm -f review-*.md
fi
rm -f changed-code.txt validation-results.json
echo "✅ Pre-commit checks passed!"
Post-commit Hook
File: .git/hooks/post-commit
#!/bin/bash
commit_hash=$(git rev-parse HEAD)
commit_message=$(git log -1 --pretty=%B)
files_changed=$(git diff-tree --no-commit-id --name-only -r HEAD | wc -l)
# Create commit memo
sah memo create \
--title "Commit $(echo $commit_hash | cut -c1-8)" \
--content "# Commit $commit_hash
## Message
$commit_message
## Files Changed
$files_changed files modified
## Changes
$(git show --stat HEAD)
"
# Index new files for search
git diff-tree --no-commit-id --name-only -r HEAD | while read file; do
if [[ "$file" =~ \.(rs|py|js|ts|md)$ ]]; then
sah search index "$file" >/dev/null 2>&1 &
fi
done
GitHub Actions Integration
File: .github/workflows/sah-integration.yml
name: SwissArmyHammer Integration
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: |
curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz
sudo mv sah /usr/local/bin/
sah --version
- name: Validate Configuration
run: |
if ! sah validate --strict --format json > validation.json; then
  echo "::error::Validation failed"
  cat validation.json | jq '.errors[]'
  exit 1
fi
- name: Upload Validation Results
uses: actions/upload-artifact@v3
if: always()
with:
name: validation-results
path: validation.json
code-review:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install SwissArmyHammer
run: |
curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz
sudo mv sah /usr/local/bin/
- name: Get Changed Files
id: changed-files
run: |
git diff --name-only origin/main...HEAD | grep -E '\.(rs|py|js|ts)$' > changed-files.txt || echo "No code files changed"
- name: Run Code Review
if: hashFiles('changed-files.txt') != ''
run: |
mkdir -p reviews
while IFS= read -r file; do
if [ -f "$file" ]; then
lang=$(basename "$file" | sed 's/.*\.//')
sah prompt test code-reviewer \
--var language="$lang" \
--var file="$file" \
--var focus='["bugs", "security", "performance"]' \
--output "reviews/review-$(basename "$file").md"
fi
done < changed-files.txt
- name: Comment PR with Review
uses: actions/github-script@v6
if: hashFiles('reviews/*.md') != ''
with:
script: |
const fs = require('fs');
const path = require('path');
let comment = '## 🤖 SwissArmyHammer Code Review\n\n';
const reviewFiles = fs.readdirSync('reviews').filter(f => f.endsWith('.md'));
for (const file of reviewFiles) {
const content = fs.readFileSync(path.join('reviews', file), 'utf8');
comment += `### ${file.replace('review-', '').replace('.md', '')}\n\n`;
comment += content + '\n\n';
}
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
semantic-index:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: |
curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz
sudo mv sah /usr/local/bin/
- name: Index Codebase
run: |
sah search index "**/*.{rs,py,js,ts}" --force
- name: Cache Search Index
uses: actions/cache@v3
with:
path: ~/.swissarmyhammer/search.db
key: search-index-${{ github.sha }}
restore-keys: |
search-index-
CI/CD Pipeline Integration
Jenkins Integration
File: Jenkinsfile
pipeline {
agent any
environment {
SAH_HOME = "${WORKSPACE}/.swissarmyhammer"
}
stages {
stage('Setup SwissArmyHammer') {
steps {
script {
sh '''
curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz
chmod +x sah
./sah --version
'''
}
}
}
stage('Validate Configuration') {
steps {
sh './sah validate --strict --format json > validation.json'
publishHTML([
allowMissing: false,
alwaysLinkToLastBuild: false,
keepAll: true,
reportDir: '.',
reportFiles: 'validation.json',
reportName: 'Validation Report'
])
}
}
stage('Code Review') {
when {
changeRequest()
}
steps {
script {
def changedFiles = sh(
script: "git diff --name-only origin/main...HEAD | grep -E '\\.(rs|py|js|ts)\$' || echo ''",
returnStdout: true
).trim()
if (changedFiles) {
changedFiles.split('\n').each { file ->
if (file.trim()) {
def lang = file.split('\\.').last()
sh "./sah prompt test code-reviewer --var language=${lang} --var file=${file} --output review-${file}.md"
}
}
// Archive review reports
archiveArtifacts artifacts: 'review-*.md', allowEmptyArchive: true
}
}
}
}
stage('Workflow Execution') {
parallel {
stage('Build Workflow') {
steps {
sh './sah flow run build-workflow --var environment=${BRANCH_NAME}'
}
}
stage('Test Workflow') {
steps {
sh './sah flow run test-workflow --var coverage_threshold=80'
}
}
}
}
stage('Semantic Indexing') {
steps {
sh './sah search index "**/*.{rs,py,js,ts}" --force'
archiveArtifacts artifacts: '.swissarmyhammer/search.db', allowEmptyArchive: true
}
}
stage('Issue Management') {
when {
anyOf {
branch 'main'
branch 'develop'
}
}
steps {
script {
// Create deployment issue
sh """
./sah issue create \\
--name 'deploy-${BUILD_NUMBER}' \\
--content '# Deployment ${BUILD_NUMBER}
## Build Info
- Branch: ${BRANCH_NAME}
- Commit: ${GIT_COMMIT}
- Build: ${BUILD_NUMBER}
- Timestamp: \$(date)
## Changes
\$(git log --oneline \${GIT_PREVIOUS_COMMIT}..\${GIT_COMMIT})
'
"""
}
}
}
}
post {
always {
// Create build memo
sh """
./sah memo create \\
--title 'Build ${BUILD_NUMBER} - ${BRANCH_NAME}' \\
--content '# Build Report
## Status
Status: ${currentBuild.currentResult}
## Duration
Duration: ${currentBuild.durationString}
## Environment
- Node: ${NODE_NAME}
- Workspace: ${WORKSPACE}
- Branch: ${BRANCH_NAME}
- Commit: ${GIT_COMMIT}
## Test Results
\$(cat test-results.txt 2>/dev/null || echo "No test results")
'
"""
}
failure {
sh """
./sah issue create \\
--name 'build-failure-${BUILD_NUMBER}' \\
--content '# Build Failure ${BUILD_NUMBER}
Build failed on ${BRANCH_NAME} at ${BUILD_TIMESTAMP}
## Error Log
\$(tail -50 ${WORKSPACE}/build.log)
## Investigation Steps
- [ ] Check build logs
- [ ] Verify dependencies
- [ ] Test locally
- [ ] Check recent changes
'
"""
}
}
}
GitLab CI Integration
File: .gitlab-ci.yml
variables:
SAH_VERSION: "latest"
SAH_HOME: "$CI_PROJECT_DIR/.swissarmyhammer"
stages:
- setup
- validate
- review
- build
- test
- deploy
- cleanup
install_sah:
stage: setup
script:
- curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz
- chmod +x sah
- ./sah --version
artifacts:
paths:
- sah
expire_in: 1 hour
validate_config:
stage: validate
dependencies:
- install_sah
script:
- ./sah validate --strict --format json | tee validation.json
artifacts:
reports:
junit: validation.json
paths:
- validation.json
expire_in: 1 week
code_review:
stage: review
dependencies:
- install_sah
only:
- merge_requests
script:
- git diff --name-only $CI_MERGE_REQUEST_TARGET_BRANCH_SHA...HEAD | grep -E '\.(rs|py|js|ts)$' > changed-files.txt || echo "No code files changed"
- |
if [ -s changed-files.txt ]; then
mkdir -p reviews
while IFS= read -r file; do
if [ -f "$file" ]; then
lang=$(basename "$file" | sed 's/.*\.//')
./sah prompt test code-reviewer \
--var language="$lang" \
--var file="$file" \
--var focus='["bugs", "security"]' \
--output "reviews/review-$(basename "$file").md"
fi
done < changed-files.txt
# Post review as MR comment
if ls reviews/*.md 1> /dev/null 2>&1; then
echo "## 🤖 SwissArmyHammer Code Review" > mr-comment.md
cat reviews/*.md >> mr-comment.md
curl -X POST \
-H "PRIVATE-TOKEN: $CI_JOB_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"body\": \"$(cat mr-comment.md | sed 's/"/\\"/g' | tr '\n' ' ')\"}" \
"$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes"
fi
fi
artifacts:
paths:
- reviews/
expire_in: 1 week
run_workflows:
stage: build
dependencies:
- install_sah
parallel:
matrix:
- WORKFLOW: ["build-workflow", "test-workflow", "security-workflow"]
script:
- ./sah flow run $WORKFLOW --var environment=$CI_COMMIT_REF_NAME
artifacts:
paths:
- workflow-*.log
expire_in: 1 day
semantic_indexing:
stage: build
dependencies:
- install_sah
script:
- ./sah search index "**/*.{rs,py,js,ts}" --force
artifacts:
paths:
- .swissarmyhammer/search.db
expire_in: 1 week
cache:
key: semantic-index-$CI_COMMIT_SHA
paths:
- .swissarmyhammer/search.db
create_deployment_issue:
stage: deploy
dependencies:
- install_sah
only:
- main
- develop
script:
- |
./sah issue create \
--name "deploy-$CI_PIPELINE_ID" \
--content "# Deployment $CI_PIPELINE_ID
## Pipeline Info
- Branch: $CI_COMMIT_REF_NAME
- Commit: $CI_COMMIT_SHA
- Pipeline: $CI_PIPELINE_ID
- Timestamp: $(date)
## Changes
$(git log --oneline $CI_COMMIT_BEFORE_SHA..$CI_COMMIT_SHA)
## Deployment Checklist
- [ ] Pre-deployment tests pass
- [ ] Database migrations applied
- [ ] Configuration updated
- [ ] Health checks pass
- [ ] Monitoring alerts configured
"
create_build_memo:
stage: cleanup
dependencies:
- install_sah
when: always
script:
- |
./sah memo create \
--title "Pipeline $CI_PIPELINE_ID - $CI_COMMIT_REF_NAME" \
--content "# Pipeline Report
## Status
Status: $CI_JOB_STATUS
## Timing
- Started: $CI_PIPELINE_CREATED_AT
- Duration: $(( $(date +%s) - $(date -d "$CI_PIPELINE_CREATED_AT" +%s) )) seconds
## Environment
- Runner: $CI_RUNNER_DESCRIPTION
- Branch: $CI_COMMIT_REF_NAME
- Commit: $CI_COMMIT_SHA
## Artifacts
$(find . -name '*.log' -o -name '*.json' -o -name '*.md' | head -10)
"
Docker Integration
Multi-stage Dockerfile with SwissArmyHammer
# Build stage
FROM rust:1.70 as builder
# Install SwissArmyHammer
RUN curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz && \
mv sah /usr/local/bin/
# Copy source
WORKDIR /app
COPY . .
# Validate configuration
RUN sah validate --strict
# Run pre-build workflow
RUN sah flow run pre-build-workflow --var environment=container
# Build application
RUN cargo build --release
# Runtime stage
FROM ubuntu:22.04
# Install SwissArmyHammer for runtime
RUN apt-get update && apt-get install -y curl && \
curl -L https://github.com/swissarmyhammer/swissarmyhammer/releases/latest/download/sah-linux-x64.tar.gz | tar xz && \
mv sah /usr/local/bin/ && \
rm -rf /var/lib/apt/lists/*
# Copy application
COPY --from=builder /app/target/release/myapp /usr/local/bin/
# Copy SwissArmyHammer configuration
COPY --from=builder /app/.swissarmyhammer /opt/swissarmyhammer
# Health check using SwissArmyHammer
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD sah prompt test health-check --var service=myapp || exit 1
ENTRYPOINT ["myapp"]
Docker Compose with SwissArmyHammer
version: '3.8'
services:
app:
build: .
environment:
- SAH_HOME=/opt/swissarmyhammer
- SAH_LOG_LEVEL=info
volumes:
- ./workflows:/opt/swissarmyhammer/workflows
- sah-data:/opt/swissarmyhammer/data
healthcheck:
test: ["CMD", "sah", "prompt", "test", "health-check", "--var", "service=app"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
sah-server:
image: swissarmyhammer/swissarmyhammer:latest
command: ["sah", "serve", "--port", "8080"]
ports:
- "8080:8080"
volumes:
- ./prompts:/app/prompts:ro
- ./workflows:/app/workflows:ro
- sah-data:/app/data
environment:
- SAH_LOG_LEVEL=debug
- SAH_MCP_TIMEOUT=60000
workflow-runner:
image: swissarmyhammer/swissarmyhammer:latest
command: ["sah", "flow", "run", "monitoring-workflow", "--var", "interval=60"]
depends_on:
- app
volumes:
- ./workflows:/app/workflows:ro
- sah-data:/app/data
restart: unless-stopped
volumes:
sah-data:
Kubernetes Integration
SwissArmyHammer as a Service
# sah-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: sah-config
data:
sah.toml: |
[general]
auto_reload = true
[logging]
level = "info"
format = "json"
[mcp]
enable_tools = ["issues", "memoranda", "search"]
timeout_ms = 30000
---
# sah-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sah-server
spec:
replicas: 2
selector:
matchLabels:
app: sah-server
template:
metadata:
labels:
app: sah-server
spec:
containers:
- name: sah-server
image: swissarmyhammer/swissarmyhammer:latest
command: ["sah", "serve", "--port", "8080"]
ports:
- containerPort: 8080
env:
- name: SAH_HOME
value: "/app/sah"
- name: SAH_LOG_LEVEL
value: "info"
volumeMounts:
- name: config
mountPath: /app/sah/sah.toml
subPath: sah.toml
- name: prompts
mountPath: /app/sah/prompts
- name: workflows
mountPath: /app/sah/workflows
- name: data
mountPath: /app/sah/data
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: config
configMap:
name: sah-config
- name: prompts
configMap:
name: sah-prompts
- name: workflows
configMap:
name: sah-workflows
- name: data
persistentVolumeClaim:
claimName: sah-data
---
# sah-service.yaml
apiVersion: v1
kind: Service
metadata:
name: sah-server
spec:
selector:
app: sah-server
ports:
- protocol: TCP
port: 80
targetPort: 8080
type: ClusterIP
CronJob for Automated Workflows
apiVersion: batch/v1
kind: CronJob
metadata:
name: sah-maintenance
spec:
schedule: "0 2 * * *" # Daily at 2 AM
jobTemplate:
spec:
template:
spec:
containers:
- name: sah-maintenance
image: swissarmyhammer/swissarmyhammer:latest
command:
- /bin/bash
- -c
- |
# Run maintenance workflow
sah flow run maintenance-workflow --var environment=production
# Cleanup old issues
sah issue list --status complete --format json | \
jq -r '.[] | select(.created < (now - 86400*30)) | .name' | \
xargs -I {} sah issue delete {}
# Update search index
sah search index "**/*.{rs,py,js,ts}" --force
env:
- name: SAH_HOME
value: "/app/sah"
volumeMounts:
- name: sah-data
mountPath: /app/sah/data
restartPolicy: OnFailure
volumes:
- name: sah-data
persistentVolumeClaim:
claimName: sah-data
These integration examples demonstrate how SwissArmyHammer can be seamlessly incorporated into existing development workflows, providing AI-powered automation and analysis capabilities across the entire software development lifecycle.
Troubleshooting
Common issues and solutions for SwissArmyHammer installation, configuration, and usage.
Installation Issues
Binary Not Found
Problem: sah: command not found
Solutions:
# Check if sah is in PATH
which sah
echo $PATH
# Add to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH="$PATH:/usr/local/bin"
# Or install to user directory
cargo install swissarmyhammer-cli --root ~/.local
export PATH="$PATH:$HOME/.local/bin"
# Verify installation
sah --version
Permission Denied
Problem: Permission errors when running sah
Solutions:
# Fix binary permissions
chmod +x $(which sah)
# Fix directory permissions
chmod -R 755 ~/.swissarmyhammer
chmod 600 ~/.swissarmyhammer/sah.toml
Build Failures
Problem: Compilation errors when building from source
Solutions:
# Update Rust toolchain
rustup update
# Clear cargo cache
cargo clean
# Install with specific features
cargo install swissarmyhammer-cli --no-default-features --features basic
# Check system dependencies
# On Ubuntu/Debian:
sudo apt-get update
sudo apt-get install build-essential pkg-config libssl-dev
# On macOS:
xcode-select --install
brew install openssl pkg-config
Configuration Issues
Configuration Not Loading
Problem: `sah doctor` shows configuration errors
Solutions:
# Check configuration file syntax
sah config show --format json
# Validate configuration
sah validate --config
# Reset to defaults
mv ~/.swissarmyhammer/sah.toml ~/.swissarmyhammer/sah.toml.backup
sah doctor --fix
# Check file permissions
ls -la ~/.swissarmyhammer/sah.toml
chmod 644 ~/.swissarmyhammer/sah.toml
Directory Structure Issues
Problem: Prompts or workflows not found
Solutions:
# Check directory structure
sah doctor --check directories
# Create missing directories
mkdir -p ~/.swissarmyhammer/{prompts,workflows,memoranda,issues}
# Check file permissions
find ~/.swissarmyhammer -type d -exec chmod 755 {} \;
find ~/.swissarmyhammer -type f -exec chmod 644 {} \;
# List prompt sources
sah prompt list --format table
Environment Variables
Problem: Environment variables not recognized
Solutions:
# Check environment variables
env | grep SAH_
# Set in shell profile
echo 'export SAH_HOME="$HOME/.swissarmyhammer"' >> ~/.bashrc
echo 'export SAH_LOG_LEVEL="info"' >> ~/.bashrc
source ~/.bashrc
# Test with explicit environment
SAH_LOG_LEVEL=debug sah doctor
MCP Integration Issues
Claude Code Connection Failed
Problem: MCP server not connecting to Claude Code
Solutions:
# Test MCP server directly
sah serve --stdio
# Type: {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}
# Should get initialization response
# Check MCP configuration
claude mcp list
claude mcp status sah
# Reconfigure MCP
claude mcp remove sah
claude mcp add --scope user sah sah serve
# Check logs
tail -f ~/.config/claude-code/logs/mcp.log
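As a quick sanity check, the initialize request can also be piped straight into the server; this is a sketch that assumes `sah serve --stdio` reads JSON-RPC from stdin and exits when stdin closes:
# Send a single initialize request and print the response
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' | sah serve --stdio
# A JSON response listing server capabilities indicates the MCP server itself is healthy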
MCP Tools Not Available
Problem: SwissArmyHammer tools not showing up in Claude Code
Solutions:
# Check tool registration
sah serve --stdio
# Look for tools in initialize response
# Enable specific tools
sah config set mcp.enable_tools '["issues","memoranda","search","abort"]'
# Restart Claude Code
# Kill Claude Code process and restart
# Check tool permissions
sah config show | grep mcp
MCP Timeout Issues
Problem: MCP requests timing out
Solutions:
# Increase timeout
sah config set mcp.timeout_ms 60000
# Check system performance
sah doctor --check performance
# Monitor MCP server
SAH_LOG_LEVEL=debug sah serve
# Reduce concurrent requests
sah config set mcp.max_concurrent_requests 5
Prompt Issues
Prompt Not Found
Problem: `sah prompt test my-prompt` returns “not found”
Solutions:
# List available prompts
sah prompt list
# Check specific source
sah prompt list --source user
sah prompt list --source local
sah prompt list --source builtin
# Check file location and name
ls -la ~/.swissarmyhammer/prompts/
ls -la ./.swissarmyhammer/prompts/
# Validate prompt syntax
sah prompt validate my-prompt
Template Rendering Errors
Problem: Liquid template errors during prompt rendering
Solutions:
# Check template syntax
sah prompt validate my-prompt --strict
# Test with verbose logging
SAH_LOG_LEVEL=debug sah prompt test my-prompt --var key=value
# Check argument requirements
sah prompt show my-prompt
# Test incrementally
sah prompt test my-prompt --var required_arg=value
Variable Substitution Issues
Problem: Variables not being substituted correctly
Solutions:
# Check argument names and types
sah prompt show my-prompt --format json
# Use correct variable format
sah prompt test my-prompt --var "name=value with spaces"
# Check for typos in template
sah prompt show my-prompt --raw | grep -n "{{.*}}"
# Test with variables file
echo '{"name":"value"}' > vars.json
sah prompt test my-prompt --vars-file vars.json
Workflow Issues
Workflow Execution Failures
Problem: Workflows failing to execute
Solutions:
# Validate workflow syntax
sah flow validate my-workflow --strict
# Test with dry run
sah flow run my-workflow --dry-run
# Check state machine
sah flow show my-workflow --diagram
# Run with verbose logging
SAH_LOG_LEVEL=debug sah flow run my-workflow
Shell Action Failures
Problem: Shell actions in workflows failing
Solutions:
# Check allowed commands
sah config show | grep security.allowed_commands
# Add required commands
sah config set security.allowed_commands '["git","npm","cargo","python"]'
# Check working directory
pwd
ls -la
# Test shell commands manually
git status
npm --version
# Increase shell timeout
sah config set workflow.actions.shell.timeout_ms 120000
State Transition Issues
Problem: Workflow stuck in specific state
Solutions:
# Check state definitions
sah flow show my-workflow
# Validate transition logic
sah flow validate my-workflow --check-cycles
# Start from different state
sah flow run my-workflow --start-state next-state
# Check workflow logs
tail -f ~/.swissarmyhammer/logs/workflow.log
Search Issues
Indexing Failures
Problem: `sah search index` fails or crashes
Solutions:
# Check available disk space
df -h ~/.swissarmyhammer
# Check file permissions
ls -la ~/.swissarmyhammer/search.db
# Clear existing index
rm ~/.swissarmyhammer/search.db
sah search index "**/*.rs"
# Index smaller batches
sah search index "src/**/*.rs"
sah search index "tests/**/*.rs"
# Check memory usage
sah config set search.max_file_size 524288 # 512KB
Search Results Empty
Problem: Search queries return no results
Solutions:
# Check if files are indexed
ls -la ~/.swissarmyhammer/search.db
# Verify indexed patterns
sah search index "**/*.rs" --force
# Test different queries
sah search query "function"
sah search query "struct"
# Lower similarity threshold
sah search query "my query" --threshold 0.3
# Check indexed file types
sah doctor --check search
Embedding Model Issues
Problem: Embedding model download or loading failures
Solutions:
# Check internet connection
curl -I https://huggingface.co
# Clear model cache
rm -rf ~/.swissarmyhammer/models
# Use different model
sah config set search.embedding_model "all-MiniLM-L6-v2"
# Increase download timeout
sah config set search.model_download_timeout 600000
# Check available storage
df -h ~/.swissarmyhammer
Issue Management Problems
Git Integration Failures
Problem: Issue commands failing with git errors
Solutions:
# Check git repository
git status
git remote -v
# Configure git if needed
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
# Check branch permissions
git branch -a
git checkout main
# Fix branch issues
git checkout main
git branch -D issue/problem-branch
sah issue work problem-issue
Branch Creation Issues
Problem: Cannot create branches for issues
Solutions:
# Check current git status
git status
git stash # if needed
# Check branch naming
sah config show | grep issues.branch_pattern
# Use custom branch pattern
sah config set issues.branch_pattern 'feature/{{name}}'
# Manual branch creation
git checkout -b issue/my-issue
sah issue work my-issue
Issue File Corruption
Problem: Issue files corrupted or unreadable
Solutions:
# Check file encoding
file ~/.swissarmyhammer/issues/my-issue.md
# Validate issue files
sah validate --strict
# Backup and restore
cp ~/.swissarmyhammer/issues/my-issue.md ~/.swissarmyhammer/issues/my-issue.md.backup
editor ~/.swissarmyhammer/issues/my-issue.md
# Check file permissions
chmod 644 ~/.swissarmyhammer/issues/*.md
Performance Issues
Slow Startup
Problem: SwissArmyHammer takes long to start
Solutions:
# Profile startup time
time sah --version
# Disable file watching
sah config set general.auto_reload false
# Clear caches
rm -rf ~/.swissarmyhammer/cache/
rm -rf ~/.swissarmyhammer/models/
# Reduce search index
rm ~/.swissarmyhammer/search.db
# Re-index only important files
Memory Usage Issues
Problem: High memory usage or out-of-memory errors
Solutions:
# Check memory limits
sah config show | grep security.max_memory_mb
# Reduce memory limits
sah config set security.max_memory_mb 256
# Limit file indexing
sah config set search.max_file_size 524288
# Reduce cache sizes
sah config set template.cache_size 100
sah config set workflow.cache_dir "/tmp/sah-cache"
Disk Usage Issues
Problem: SwissArmyHammer using too much disk space
Solutions:
# Check disk usage
du -sh ~/.swissarmyhammer/*
# Clean up caches
rm -rf ~/.swissarmyhammer/cache/
rm -rf ~/.swissarmyhammer/workflow_cache/
# Reduce search index size
sah config set search.max_file_size 262144 # 256KB
rm ~/.swissarmyhammer/search.db
sah search index "**/*.{rs,py}" --exclude "**/target/**"
# Set disk usage limits
sah config set security.max_disk_usage_mb 512
Network Issues
Firewall/Proxy Issues
Problem: Network requests failing behind firewall/proxy
Solutions:
# Configure proxy
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
# Disable network features if needed
sah config set security.allow_network false
# Use offline models
# Place models manually in ~/.swissarmyhammer/models/
# Test network connectivity
curl -I https://huggingface.co/nomic-ai/nomic-embed-text-v1.5
Debugging
Enable Debug Logging
# Temporary debug mode
SAH_LOG_LEVEL=debug sah command
# Persistent debug mode
sah config set logging.level debug
sah config set logging.file ~/.swissarmyhammer/debug.log
# Trace level for deep debugging
SAH_LOG_LEVEL=trace sah command 2>&1 | tee debug-output.log
Collect Diagnostic Information
# Full system check
sah doctor --verbose --format json > diagnosis.json
# Configuration dump
sah config show --format json > config.json
# Environment information
env | grep SAH_ > environment.txt
sah --version > version.txt
# File system state
find ~/.swissarmyhammer -ls > filesystem.txt
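When filing a bug report, the files gathered above can be bundled into a single archive to attach to the issue:
# Package the diagnostic files collected above
tar czf sah-diagnostics.tar.gz diagnosis.json config.json environment.txt version.txt filesystem.txt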
Common Log Messages
| Message | Meaning | Solution |
|---|---|---|
| Failed to load prompt | Prompt file syntax error | Run `sah prompt validate` |
| Template rendering failed | Liquid template error | Check variable names and syntax |
| MCP connection refused | Claude Code not connecting | Check MCP configuration |
| Git operation failed | Git command error | Check git repository state |
| Search index corrupted | Database corruption | Delete and rebuild the search index |
| Permission denied | File system permissions | Fix file/directory permissions |
| Timeout exceeded | Operation took too long | Increase timeout settings |
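If file logging is enabled (see Debugging above), these messages can be located quickly with grep; the log directory shown here is an assumption based on the default layout:
# Search logs for known error messages (adjust the path to your logging.file setting)
grep -riE "failed to load prompt|template rendering failed|mcp connection refused" ~/.swissarmyhammer/logs/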
Getting Help
If you encounter issues not covered here:
- Check logs: Look at `~/.swissarmyhammer/logs/` for detailed error messages
- Run diagnostics: Use `sah doctor --verbose` for a comprehensive system check
- Search issues: Check GitHub Issues for similar problems
- Create issue: Report new bugs with:
  - SwissArmyHammer version (`sah --version`)
  - Operating system and version
  - Complete error message
  - Steps to reproduce
  - Output of `sah doctor --verbose --format json`
Emergency Recovery
If SwissArmyHammer is completely broken:
# Reset configuration to defaults
mv ~/.swissarmyhammer ~/.swissarmyhammer.backup
sah doctor --fix
# Reinstall from scratch
cargo uninstall swissarmyhammer-cli
cargo install swissarmyhammer-cli
sah doctor
This troubleshooting guide covers the most common issues and their solutions. Keep it handy for quick problem resolution.
Performance Tuning
Optimize SwissArmyHammer for speed, memory usage, and scalability in different environments.
Performance Overview
SwissArmyHammer is designed for performance across several dimensions:
- Startup Time: Fast initialization for CLI commands
- Memory Usage: Efficient memory management for large codebases
- I/O Performance: Optimized file system operations
- Search Speed: Fast semantic search with vector databases
- Template Rendering: Efficient Liquid template processing
- Concurrent Operations: Parallel execution where beneficial
Benchmarking
Built-in Benchmarks
SwissArmyHammer includes comprehensive benchmarks:
# Run all benchmarks
cargo bench
# Run specific benchmark suites
cargo bench search
cargo bench templates
cargo bench workflows
# Compare with baseline
cargo bench -- --save-baseline main
git checkout feature-branch
cargo bench -- --baseline main
Profiling Tools
CPU Profiling
# Install profiling tools
cargo install cargo-flamegraph
# Profile a specific command
cargo flamegraph --bin sah -- search query "error handling"
# Profile with perf (Linux)
perf record --call-graph=dwarf cargo run --bin sah -- search index "**/*.rs"
perf report
Memory Profiling
# Install memory profilers
cargo install cargo-profdata
# Profile memory usage
valgrind --tool=massif cargo run --bin sah -- search index "**/*.rs"
ms_print massif.out.12345
# Use heaptrack (Linux)
heaptrack cargo run --bin sah -- search index "**/*.rs"
heaptrack_gui heaptrack.sah.12345.gz
Configuration Tuning
General Performance Settings
# ~/.swissarmyhammer/sah.toml
[general]
# Disable auto-reload for better performance
auto_reload = false
# Increase timeout for large operations
default_timeout_ms = 60000
[template]
# Increase cache size for frequently used templates
cache_size = 2000
# Disable template recompilation in production
recompile_templates = false
[workflow]
# Increase parallel action limit for powerful machines
max_parallel_actions = 8
# Enable workflow caching
enable_caching = true
cache_dir = "/tmp/sah-workflow-cache"
[search]
# Use faster but larger embedding model
embedding_model = "nomic-embed-code"
# Increase memory limits for large indexes
max_memory_mb = 2048
# Optimize index for read performance
index_compression = false
Memory Optimization
[security]
# Reduce memory limits for resource-constrained environments
max_memory_mb = 256
max_disk_usage_mb = 1024
[search]
# Limit file size for indexing
max_file_size = 524288 # 512KB
# Reduce embedding dimensions for smaller memory footprint
embedding_dimensions = 384 # vs 768 default
[template]
# Smaller template cache
cache_size = 100
# Aggressive cache eviction
cache_ttl_ms = 300000 # 5 minutes
I/O Optimization
[general]
# Use faster file watching (when available)
file_watcher = "polling" # or "native"
# Batch file operations
batch_size = 100
[search]
# Use faster storage backend
storage_backend = "memory" # for small indexes
# storage_backend = "disk" # for large indexes
# Enable compression for large indexes
enable_compression = true
# Use faster hash function
hash_algorithm = "xxhash"
Search Performance
Indexing Optimization
Selective Indexing
# Index only important directories
sah search index "src/**/*.{rs,py,js}" --exclude "**/target/**"
# Avoid large generated files
sah search index "**/*.rs" \
--exclude "**/target/**" \
--exclude "**/node_modules/**" \
--exclude "**/*.generated.*"
# Set file size limits
sah search index "**/*.rs" --max-size 1048576 # 1MB limit
Parallel Indexing
[search]
# Enable parallel file processing
parallel_indexing = true
indexing_threads = 4
# Batch processing for better throughput
batch_size = 50
# Use memory mapping for large files
use_mmap = true
Incremental Indexing
# Only index changed files (much faster)
sah search index "**/*.rs" # Skips unchanged files automatically
# Force full reindex only when needed
sah search index "**/*.rs" --force
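A simple way to see the difference is to time an incremental pass against a forced full rebuild; exact numbers will vary with codebase size:
# Incremental pass: unchanged files are skipped
time sah search index "**/*.rs"
# Forced full rebuild for comparison
time sah search index "**/*.rs" --force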
Query Optimization
Efficient Queries
# Use specific, focused queries
sah search query "async function error handling" --limit 5
# Adjust similarity threshold for faster results
sah search query "database connection" --threshold 0.7
# Use exact matches when possible
sah search query "fn main()" --threshold 0.9
Query Caching
[search]
# Enable query result caching
cache_results = true
result_cache_size = 1000
result_cache_ttl_ms = 300000 # 5 minutes
# Cache embeddings for repeated queries
cache_embeddings = true
embedding_cache_size = 10000
Template Performance
Template Optimization
Efficient Template Design
{% comment %}Good: Filter once, use multiple times{% endcomment %}
{% assign active_users = users | where: "active", true %}
Active users: {{active_users | size}}
Names: {{active_users | map: "name" | join: ", "}}
{% comment %}Avoid: Repeated filtering{% endcomment %}
Active users: {{users | where: "active", true | size}}
Names: {{users | where: "active", true | map: "name" | join: ", "}}
Loop Optimization
{% comment %}Good: Early termination{% endcomment %}
{% for item in items limit:10 %}
{% if item.important %}
{{item.name}}
{% break %}
{% endif %}
{% endfor %}
{% comment %}Good: Batch operations{% endcomment %}
{% assign important_items = items | where: "important", true %}
{% for item in important_items limit:10 %}
{{item.name}}
{% endfor %}
Template Caching
[template]
# Aggressive caching for production
cache_size = 5000
cache_compiled_templates = true
# Pre-compile frequently used templates
precompile_templates = [
"code-review",
"documentation",
"test-generator"
]
Variable Management
{% comment %}Cache expensive computations{% endcomment %}
{% assign file_count = files | size %}
{% if file_count > 0 %}
Processing {{file_count}} files...
{% for file in files %}
File: {{file.name}} ({{forloop.index}}/{{file_count}})
{% endfor %}
{% endif %}
Workflow Performance
Parallel Execution
[workflow]
# Optimize for CPU cores
max_parallel_actions = 8
# Enable fork-join optimization
optimize_forks = true
# Use async execution where possible
prefer_async = true
Action Optimization
Shell Actions
**Actions:**
# Good: Combine related commands
- shell: `cargo build && cargo test --lib` (timeout: 300s)
# Avoid: Separate slow commands
- shell: `cargo build` (timeout: 120s)
- shell: `cargo test --lib` (timeout: 180s)
Prompt Actions
**Actions:**
# Good: Batch similar prompts
- prompt: multi-analyzer files="$(find . -name '*.rs' | head -10)" analysis_type="comprehensive"
# Avoid: Individual file analysis
- prompt: code-reviewer file="src/main.rs"
- prompt: code-reviewer file="src/lib.rs"
State Machine Optimization
# Good: Minimize state transitions
### build-and-test
**Actions:**
- shell: `cargo build --release`
- shell: `cargo test --release`
**Transitions:**
- On success → deploy
- On failure → failed
# Avoid: Too many small states
### build
**Actions:**
- shell: `cargo build --release`
**Transitions:**
- Always → test
### test
**Actions:**
- shell: `cargo test --release`
**Transitions:**
- On success → deploy
System-Level Optimization
File System Performance
SSD Optimization
# Use SSD for search database
mkdir -p /mnt/ssd/sah-cache
sah config set search.index_path "/mnt/ssd/sah-cache/search.db"
# Use tmpfs for temporary operations
mkdir -p /tmp/sah-temp
sah config set workflow.temp_dir "/tmp/sah-temp"
Network File Systems
[general]
# Reduce file watching on network filesystems
auto_reload = false
# Use local cache
local_cache_dir = "/tmp/sah-cache"
[search]
# Cache index locally
local_index_cache = true
cache_dir = "/tmp/sah-search-cache"
Memory Management
Large Scale Operations
# For large codebases, use streaming operations
export SAH_STREAMING_MODE=true
export SAH_MAX_MEMORY=4G
# Process in batches
sah search index "**/*.rs" --batch-size 100
# Use disk-based sorting for large datasets
export SAH_USE_DISK_SORT=true
Memory-Constrained Environments
[search]
# Use smaller embedding model
embedding_model = "all-MiniLM-L6-v2" # 384 dimensions vs 768
# Reduce cache sizes
embedding_cache_size = 1000
result_cache_size = 100
# Enable aggressive garbage collection
gc_threshold = 1000
CPU Optimization
Multi-core Systems
[general]
# Use all available cores
worker_threads = 0 # Auto-detect
[search]
# Parallel indexing
indexing_threads = 8
search_threads = 4
[workflow]
# Parallel action execution
max_parallel_actions = 16
Single-core Systems
[general]
# Minimize threading overhead
worker_threads = 1
[search]
# Sequential processing
indexing_threads = 1
search_threads = 1
[workflow]
# Sequential execution
max_parallel_actions = 1
Monitoring and Profiling
Runtime Metrics
# Enable detailed timing
export SAH_ENABLE_TIMING=true
export SAH_LOG_LEVEL=debug
# Monitor with built-in metrics
sah doctor --check performance
# Profile specific operations
time sah search query "error handling"
time sah prompt test code-reviewer --var file=src/main.rs
Performance Monitoring
[logging]
# Enable performance logging
enable_timing = true
log_slow_operations = true
slow_operation_threshold_ms = 1000
[metrics]
# Export metrics for monitoring
enable_metrics = true
metrics_port = 9090
metrics_endpoint = "/metrics"
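With metrics enabled as above, the endpoint can be spot-checked with curl (this assumes a Prometheus-style text exposition on the configured port):
# Scrape the metrics endpoint configured above
curl -s http://localhost:9090/metrics | head -n 20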
Continuous Performance Testing
#!/bin/bash
# performance-test.sh
# Add performance tests to CI (run this script as a CI step)
# Index performance
time_start=$(date +%s%N)
sah search index "**/*.rs" --force >/dev/null 2>&1
time_end=$(date +%s%N)
index_time=$(( (time_end - time_start) / 1000000 ))
echo "Index time: ${index_time}ms"
# Query performance
time_start=$(date +%s%N)
sah search query "async function" >/dev/null 2>&1
time_end=$(date +%s%N)
query_time=$(( (time_end - time_start) / 1000000 ))
echo "Query time: ${query_time}ms"
# Fail if performance regression
if [ $index_time -gt 30000 ]; then
echo "Index performance regression!"
exit 1
fi
if [ $query_time -gt 1000 ]; then
echo "Query performance regression!"
exit 1
fi
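To track trends rather than single runs, the same script can append its measurements to a small CSV; this is an optional extension, not part of the script above:
# At the end of performance-test.sh, record each run for later comparison
echo "$(date +%F),${index_time},${query_time}" >> perf-history.csv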
Performance Troubleshooting
Common Issues
Slow Startup
# Check file system performance
time ls -la ~/.swissarmyhammer/
# Disable auto-reload
sah config set general.auto_reload false
# Clear caches
rm -rf ~/.swissarmyhammer/cache/
High Memory Usage
# Monitor memory usage
ps aux | grep sah
pmap $(pidof sah)
# Reduce cache sizes
sah config set template.cache_size 100
sah config set search.embedding_cache_size 1000
# Enable streaming mode
export SAH_STREAMING_MODE=true
Slow Search Performance
# Check index size
ls -lh ~/.swissarmyhammer/search.db
# Rebuild index with optimizations
sah search index "**/*.rs" --force --optimize
# Use smaller embedding model
sah config set search.embedding_model "all-MiniLM-L6-v2"
By applying these performance tuning techniques, SwissArmyHammer can be optimized for various environments and use cases, from resource-constrained development machines to high-performance CI/CD servers.
Contributing
Welcome to SwissArmyHammer! We appreciate your interest in contributing to this project. This guide will help you get started.
Code of Conduct
SwissArmyHammer follows the Rust Code of Conduct. Please be respectful and inclusive in all interactions.
Getting Started
Development Environment
- Install Rust: Ensure you have Rust 1.70 or later installed
- Clone the repository:
  git clone https://github.com/swissarmyhammer/swissarmyhammer.git
  cd swissarmyhammer
- Install dependencies:
  # Install development dependencies
  cargo install cargo-watch cargo-tarpaulin cargo-audit
  # Install pre-commit hooks (optional but recommended)
  pip install pre-commit
  pre-commit install
- Run tests to verify setup:
  cargo test
  cargo clippy
  cargo fmt --check
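With cargo-watch installed during setup, a continuous lint-and-test loop is a convenient default while developing (a suggestion, not a project requirement):
# Re-run lints and unit tests whenever source files change
cargo watch -x "clippy --workspace" -x "test --lib"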
Project Structure
swissarmyhammer/
├── swissarmyhammer/ # Core library
├── swissarmyhammer-cli/ # Command-line interface
├── swissarmyhammer-tools/ # MCP tools and server
├── builtin/ # Built-in prompts and workflows
├── doc/ # Documentation (mdBook)
├── tests/ # Integration tests
└── benches/ # Benchmarks
How to Contribute
Reporting Issues
Before creating an issue, please:
- Search existing issues to avoid duplicates
- Use the issue templates when available
- Provide detailed information including:
  - SwissArmyHammer version (`sah --version`)
  - Operating system and version
  - Steps to reproduce
  - Expected vs actual behavior
  - Relevant configuration files
Proposing Features
For new features:
- Open an issue with the “feature request” label
- Describe the problem you’re solving
- Provide examples of how it would work
- Consider implementation complexity
- Wait for maintainer feedback before starting work
Code Contributions
Pull Request Process
- Fork the repository and create a feature branch
- Make your changes following our coding standards
- Add tests for new functionality
- Update documentation if needed
- Run the full test suite:
  # Run all tests
  cargo test --workspace
  # Run integration tests
  cargo test --test '*'
  # Check formatting and lints
  cargo fmt --check
  cargo clippy -- -D warnings
  # Run benchmarks (if performance-related)
  cargo bench
- Create a pull request with:
  - Clear title and description
  - Link to related issues
  - Screenshots/examples if applicable
  - Checklist of completed items
Coding Standards
Rust Code Style:
- Use `cargo fmt` for formatting
- Pass `cargo clippy` with no warnings
- Follow the Rust API Guidelines
- Write comprehensive doc comments with examples
- Use meaningful variable and function names
Error Handling:
- Use the `anyhow` crate for error handling
- Provide contextual error messages
- Use the `Result` type consistently
- Don’t panic in library code
Testing:
- Write unit tests for all public functions
- Add integration tests for complex workflows
- Use property-based testing where appropriate
- Maintain test coverage above 80%
Documentation:
- Write rustdoc comments for all public items
- Include usage examples in documentation
- Update the user guide for new features
- Keep CHANGELOG.md updated
Code Review Guidelines
For Authors:
- Keep PRs focused and reasonably sized
- Respond to feedback promptly
- Be open to suggestions and changes
- Test edge cases and error conditions
For Reviewers:
- Be constructive and specific in feedback
- Test the changes locally when possible
- Check for security implications
- Verify documentation is updated
Documentation Contributions
Documentation improvements are always welcome:
- User Guide: Located in `doc/src/`
- API Documentation: Rust doc comments in source code
- Examples: Located in `doc/src/examples/`
- README: Project overview and quick start
When updating documentation:
- Use clear, concise language
- Provide practical examples
- Test all code examples
- Check for broken links
- Follow the existing style and structure
Development Workflows
Running Tests
# Unit tests only
cargo test --lib
# Integration tests only
cargo test --test '*'
# All tests with verbose output
cargo test --workspace --verbose
# Test with coverage
cargo tarpaulin --out html
# Test specific module
cargo test --package swissarmyhammer search::tests
Development Server
For MCP development:
# Run MCP server in development mode
cargo run --bin swissarmyhammer-cli serve --stdio
# Or with debug logging
SAH_LOG_LEVEL=debug cargo run --bin swissarmyhammer-cli serve --stdio
Benchmarking
# Run all benchmarks
cargo bench
# Run specific benchmark
cargo bench search
# Generate benchmark reports
cargo bench -- --save-baseline main
Debugging
# Run with debug logging
SAH_LOG_LEVEL=debug cargo run --bin swissarmyhammer-cli prompt list
# Use debugger
RUST_LOG=debug cargo run --bin swissarmyhammer-cli -- --help
# Memory debugging with valgrind
valgrind --tool=memcheck cargo run --bin swissarmyhammer-cli
Contribution Areas
High-Impact Areas
- Performance Optimizations
  - Search indexing speed
  - Template rendering performance
  - Memory usage reduction
  - Startup time optimization
- New Language Support
  - Add TreeSitter parsers
  - Language-specific prompt templates
  - Build tool integrations
- MCP Tool Enhancements
  - New tool implementations
  - Better error reporting
  - Request/response validation
- Documentation
  - More examples and tutorials
  - Video guides
  - Translation to other languages
- Testing
  - Edge case coverage
  - Performance regression tests
  - Cross-platform testing
Good First Issues
Look for issues labeled `good-first-issue`:
- Documentation improvements
- Small bug fixes
- Adding new built-in prompts
- Test coverage improvements
- Error message enhancements
Release Process
Versioning
SwissArmyHammer uses Semantic Versioning:
- MAJOR: Incompatible API changes
- MINOR: New functionality (backwards compatible)
- PATCH: Bug fixes (backwards compatible)
Release Checklist
- Update version numbers in `Cargo.toml` files
- Update CHANGELOG.md with release notes
- Run full test suite on multiple platforms
- Update documentation if needed
- Create release PR for review
- Tag release after merge
- Build and publish binaries
- Update package registries (crates.io)
- Announce release
Community Guidelines
Communication
- GitHub Issues: Bug reports and feature requests
- GitHub Discussions: General questions and ideas
- Discord: Real-time chat (if available)
- Matrix: Alternative chat platform (if available)
Getting Help
If you need help:
- Check the documentation first
- Search existing issues
- Ask in GitHub Discussions
- Tag maintainers if urgent
Recognition
Contributors are recognized:
- In CONTRIBUTORS.md file
- In release notes for significant contributions
- Through GitHub’s contribution tracking
- In project documentation when appropriate
Legal
License
By contributing to SwissArmyHammer, you agree that your contributions will be licensed under the same license as the project (MIT or Apache-2.0).
Copyright
- You retain copyright of your contributions
- You grant the project permission to use your contributions
- You confirm you have the right to make the contribution
- You agree your contribution does not violate any third-party rights
Contributor License Agreement
Currently, no formal CLA is required, but this may change as the project grows. Contributors will be notified if a CLA becomes necessary.
Resources
Useful Links
Tools and Services
- CI/CD: GitHub Actions
- Code Coverage: Codecov
- Documentation: GitHub Pages with mdBook
- Package Registry: crates.io
- Binary Releases: GitHub Releases
Thank you for contributing to SwissArmyHammer! Your efforts help make AI-powered development tools more accessible and powerful for everyone.