Using the Witsy CLI
Witsy includes a powerful Command Line Interface (CLI) that allows you to interact with your configured AI models directly from your terminal. This is ideal for automating tasks, piping data into LLMs, or performing filesystem-based operations using your desktop AI configuration.
Setup and Prerequisites
Before using the CLI, ensure the following:
- Witsy is running: The CLI communicates with the desktop application via a local HTTP server.
- Enable HTTP Endpoints: Go to Settings > General in the Witsy app and ensure Enable HTTP Endpoints is turned on.
- CLI Installation: If you installed Witsy via Homebrew (brew install --cask witsy), the witsy command should already be available on your PATH. Otherwise, ensure the Witsy binary directory is in your shell's PATH.
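Before going further, you can confirm that the binary is actually reachable from your shell. A quick sanity check using only standard shell built-ins:

```shell
# Quick check that the witsy binary is reachable from your shell
if command -v witsy >/dev/null 2>&1; then
  witsy_status="found at $(command -v witsy)"
else
  witsy_status="not on PATH -- add the Witsy binary directory to your shell's PATH"
fi
echo "witsy: $witsy_status"
```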
Basic Usage
The simplest way to use the CLI is to pass a prompt directly as an argument.
witsy "Explain the difference between a process and a thread"
Piping Input
You can pipe content from other commands into Witsy. This is useful for logs, code snippets, or document analysis.
cat server.log | witsy "Analyze these logs for any critical errors"
Saving Output
Since the CLI outputs raw text to stdout, you can redirect it to a file:
witsy "Write a Python script to scrape a website" > scraper.py
Selecting Engines and Models
The CLI uses the default engine and model configured in the desktop app. However, you can switch to any provider you have configured there.
Listing Available Options
To see which engines and models are available for CLI use:
# List all configured engines (OpenAI, Anthropic, Ollama, etc.)
witsy engines
# List all models for a specific engine
witsy models ollama
Overriding the Model
Use the --engine and --model flags to target a specific LLM for a single command:
witsy --engine anthropic --model claude-3-5-sonnet-20240620 "Review this code"
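If you use one engine/model pair often, a small shell function saves retyping the flags. This is just a convenience sketch: the function name is made up, and the engine and model names are the ones from the example above, so substitute whatever witsy engines and witsy models report for your setup.

```shell
# Convenience wrapper that pins prompts to a specific provider and model.
# Engine/model names are examples -- list yours with `witsy engines` and `witsy models <engine>`.
ask_claude() {
  witsy --engine anthropic --model claude-3-5-sonnet-20240620 "$@"
}

# Usage: ask_claude "Review this code"
```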
Working with the Filesystem
The CLI features a "WorkDir" mode. When enabled, Witsy can access your local files to perform analysis or edits.
How to analyze local files
Provide the path to your project using --work-dir. This gives the LLM context about the files in that directory.
# Ask questions about a local repository
witsy --work-dir ./my-project "Explain how the authentication logic works in this project"
How to refactor code via CLI
Because Witsy supports MCP (Model Context Protocol) and filesystem plugins, you can ask it to perform edits directly.
witsy --work-dir . "Find all TODOs in the src directory and list them"
Practical Recipes
Summarize a Git Diff
Get a quick summary of your changes before committing:
git diff | witsy "Summarize these changes in 3 bullet points for a commit message"
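The same idea can be folded into a helper that drafts a commit message and then opens your editor so you can review it before committing. A sketch (the ai_commit name is hypothetical, and it assumes the Witsy app is running):

```shell
# Sketch: draft a commit message from staged changes, then review it in your editor.
# Assumes the Witsy desktop app is running; `git commit -e` opens the editor before committing.
ai_commit() {
  local msg
  msg=$(git diff --staged | witsy "Write a concise commit message for these changes") || return 1
  [ -n "$msg" ] || { echo "empty diff or no response" >&2; return 1; }
  git commit -e -m "$msg"
}
```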
Convert File Formats
Quickly transform data from one format to another:
cat data.csv | witsy "Convert this CSV data to a clean JSON array" > data.json
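Because model output is not guaranteed to be valid JSON, it is worth validating it before writing to disk. A sketch that uses python3 -m json.tool purely as a validator (the csv_to_json function name is hypothetical):

```shell
# Convert a CSV file to JSON, failing loudly if the model's output is not valid JSON.
# python3 -m json.tool is used here only to validate and pretty-print the result.
csv_to_json() {
  witsy "Convert this CSV data to a clean JSON array" < "$1" \
    | python3 -m json.tool > "${1%.csv}.json" \
    || { echo "invalid JSON produced for $1" >&2; return 1; }
}

# Usage: csv_to_json data.csv   # writes data.json on success
```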
Automated Code Review
Review a specific file for security vulnerabilities:
witsy --work-dir . "Review src/auth.ts for security vulnerabilities"
Batch Processing
You can use Witsy CLI in shell scripts to process multiple files:
for file in ./docs/*.txt; do
  [ -e "$file" ] || continue   # skip if the glob matched no files
  echo "Processing $file..."
  cat "$file" | witsy "Summarize this document" > "${file%.txt}_summary.txt"
done
Troubleshooting
"Connection Refused"
If the CLI cannot connect, ensure:
- The Witsy Desktop app is actually open.
- The CLI and the app agree on the port: the CLI expects the port configured in the app's settings, so verify it hasn't been changed unexpectedly.
- Your firewall isn't blocking local connections on the Witsy port.
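To see whether anything is listening on the Witsy port at all, bash's built-in /dev/tcp redirection works without extra tools. Note the assumptions: this requires bash (not plain sh), and WITSY_PORT below is a placeholder, not a documented default -- set it to the port shown in the app's settings.

```shell
# Probe the Witsy HTTP port using bash's /dev/tcp (no curl or nc needed).
# WITSY_PORT is a placeholder -- set it to the port shown in the Witsy app's settings.
WITSY_PORT=${WITSY_PORT:-8090}
if (echo > "/dev/tcp/localhost/$WITSY_PORT") 2>/dev/null; then
  port_status="open"
else
  port_status="closed"
fi
echo "localhost:$WITSY_PORT is $port_status"
```

If the port reports closed while the app is open, double-check the port number and your firewall rules.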
Missing Engines/Models
If an engine doesn't show up in witsy engines, open the Witsy app and confirm that you have added an API key for that provider and that the engine is enabled under Settings > Engines.