# Getting Started
## Installation
Install PatchPal from PyPI:
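Assuming the package is published under the name `patchpal` on PyPI (the same name used in the sandbox instructions below):

```bash
pip install patchpal
```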
Supported operating systems: Linux, macOS, and Windows.
### Alternative: Run with Docker/Podman (no installation required)
If you prefer to use containers instead of installing PatchPal locally:
**Option 1: Using patchpal-sandbox (Easiest)**
After installing PatchPal (`pip install patchpal`), use the `patchpal-sandbox` command for automatic container setup:
```bash
# Interactive mode (any LiteLLM-supported model can be used: https://models.litellm.ai/)
patchpal-sandbox -- --model openai/gpt-5-mini

# With environment file
patchpal-sandbox --env-file .env -- --model anthropic/claude-sonnet-4-5

# Autopilot mode (for autonomous iterative development - see Autopilot docs)
patchpal-sandbox --env-file .env -- autopilot --prompt "..." --model anthropic/claude-sonnet-4-5
```
The `patchpal-sandbox` command automatically:

- Auto-detects Docker or Podman
- Mounts the current directory and `~/.patchpal`
- Loads API keys from a `.env` file or the environment
- Uses a pre-built image (fast startup)
- Defaults to permissions enabled in interactive mode and disabled in autopilot mode
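For the `--env-file` option, a minimal `.env` file might look like the following (an illustrative sketch; include only the keys for the providers you actually use):

```bash
# .env - loaded by: patchpal-sandbox --env-file .env
ANTHROPIC_API_KEY=your_api_key_here
OPENAI_API_KEY=your_api_key_here
```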
**Option 2: Manual Docker/Podman Commands**
```bash
# Using Docker with pre-built image (default model)
docker run -it --rm \
  -v $(pwd):/workspace \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  ghcr.io/amaiya/patchpal-sandbox:latest \
  patchpal --model anthropic/claude-sonnet-4-5

# Or with Podman
podman run -it --rm \
  -v $(pwd):/workspace \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  ghcr.io/amaiya/patchpal-sandbox:latest \
  patchpal --model anthropic/claude-sonnet-4-5

# Specify a different model
docker run -it --rm \
  -v $(pwd):/workspace \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/amaiya/patchpal-sandbox:latest \
  patchpal --model openai/gpt-5-mini
```
This runs PatchPal in an isolated container with:

- PatchPal pre-installed (no `pip install` needed)
- The current directory mounted at `/workspace`
- Your API key passed through as an environment variable

For other models, pass additional `-e` flags (e.g., `-e OPENAI_API_KEY=$OPENAI_API_KEY`), and pass PatchPal arguments after the `patchpal` command (e.g., `patchpal --model openai/gpt-4o-mini`, `patchpal --autopilot`).
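Combining those notes, an invocation that passes an extra provider key and custom PatchPal arguments might look like this (a sketch; adjust the keys and flags to your setup):

```bash
docker run -it --rm \
  -v $(pwd):/workspace \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/amaiya/patchpal-sandbox:latest \
  patchpal --model openai/gpt-4o-mini
```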
## Setup
1. **Get an API key or a local LLM engine:**

    - [Cloud] For Anthropic models (default): sign up at https://console.anthropic.com/
    - [Cloud] For OpenAI models: get a key from https://platform.openai.com/
    - [Local] For vLLM: install from https://docs.vllm.ai/ (free - no API charges). Recommended for local use.
    - [Local] For Ollama: install from https://ollama.com/ (⚠️ requires `OLLAMA_CONTEXT_LENGTH=32768` - see the Ollama section below)
    - For other providers: check the LiteLLM documentation
2. **Set your API key as an environment variable:**

    ```bash
    # For Anthropic (default)
    export ANTHROPIC_API_KEY=your_api_key_here

    # For OpenAI
    export OPENAI_API_KEY=your_api_key_here

    # For vLLM - API key required only if configured
    export HOSTED_VLLM_API_BASE=http://localhost:8000  # depends on your vLLM setup
    export HOSTED_VLLM_API_KEY=token-abc123            # optional depending on your vLLM setup

    # No API key required for Ollama.
    # For other providers, check the LiteLLM docs
    ```
3. **Run PatchPal:**

    ```bash
    # Use default model (anthropic/claude-sonnet-4-5)
    patchpal

    # Use a specific model via command-line argument
    patchpal --model openai/gpt-5.2-codex  # or openai/gpt-5-mini, anthropic/claude-opus-4-5, etc.

    # Use vLLM (local)
    # Note: vLLM server must be started with --tool-call-parser and --enable-auto-tool-choice
    export HOSTED_VLLM_API_BASE=http://localhost:8000
    export HOSTED_VLLM_API_KEY=token-abc123
    patchpal --model hosted_vllm/openai/gpt-oss-120b

    # Use Ollama (local - requires OLLAMA_CONTEXT_LENGTH=32768)
    export OLLAMA_CONTEXT_LENGTH=32768
    patchpal --model ollama_chat/glm-4.7-flash:q4_K_M

    # Or set the model via environment variable
    export PATCHPAL_MODEL=anthropic/claude-opus-4-5
    patchpal
    ```
**Tip for local models:** Models served by Ollama or vLLM may work better with these settings:

- `PATCHPAL_MINIMAL_TOOLS=true` and `PATCHPAL_ENABLE_WEB=false` (for models with function calling): provides only the essential tools (`read_file`, `read_lines`, `write_file`, `edit_file`, `run_shell`), reducing tool confusion
- `PATCHPAL_REACT_MODE=true` (for models without function calling): enables text-based tool invocation (see the ReAct mode docs)
- For Ollama, additionally setting `PATCHPAL_STREAM_OUTPUT=false` may help with tool-call reliability
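Putting the tip together, a local-model session might be started like this (a sketch reusing the Ollama model from the example above):

```bash
export OLLAMA_CONTEXT_LENGTH=32768   # required for Ollama
export PATCHPAL_MINIMAL_TOOLS=true   # essential tools only (function-calling models)
export PATCHPAL_ENABLE_WEB=false     # disable web tools
export PATCHPAL_STREAM_OUTPUT=false  # may improve tool-call reliability with Ollama
patchpal --model ollama_chat/glm-4.7-flash:q4_K_M
```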