Introduction
Witsy is a versatile desktop AI assistant designed to give you full control over your LLM experience. Unlike locked-in platforms, Witsy follows a Bring Your Own Keys (BYOK) model, allowing you to connect to high-performance cloud providers or run completely private models locally on your machine.
As a Universal Model Context Protocol (MCP) client, Witsy enables you to connect LLMs to external tools and data sources—even if the model provider doesn't natively support MCP.
Getting Started
To begin using Witsy, download the latest version for your operating system:
- macOS: Download the `.dmg` from the Releases page or use Homebrew: `brew install --cask witsy`
- Windows/Linux: Download the appropriate installer from the Releases page.
How to Connect Your First Model
Witsy supports dozens of providers. Here is how to set up the two most common workflows:
Scenario 1: Using a Local Model (Ollama)
If you want a private, free experience without API costs:
- Install Ollama and run a model (e.g., `ollama run llama3`).
- Open Witsy and navigate to Settings > Engines.
- Select Ollama. Witsy will automatically detect your local models.
- Select your model from the chat interface and start typing.
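Before selecting Ollama in Witsy, it can help to confirm that Ollama's local API is actually reachable. A minimal check, assuming Ollama's default port of 11434 (adjust if you changed it):

```shell
# Check whether Ollama's local API is reachable before pointing Witsy at it.
# /api/tags lists the models Ollama has pulled; 11434 is Ollama's default port.
if curl -s --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  OLLAMA_STATUS="running"
else
  OLLAMA_STATUS="not reachable (start it with: ollama serve)"
fi
echo "Ollama is $OLLAMA_STATUS"
```

If Ollama is running, Witsy's model detection should find the same models that `/api/tags` reports.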
Scenario 2: Using Cloud Providers (OpenAI/Anthropic)
- Navigate to Settings > Engines.
- Input your API Key for your preferred provider (e.g., OpenAI, Anthropic, or Google Gemini).
- Witsy will fetch the available models. You can now use features like vision, image generation, and web search.
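If model fetching fails, the usual culprit is an invalid or expired key. A quick sanity check you can run before pasting a key into Witsy, shown here for OpenAI's `/v1/models` endpoint (other providers use different endpoints):

```shell
# Sanity-check a provider API key before entering it in Witsy.
# A valid OpenAI key returns HTTP 200 from /v1/models; 401 means a bad key.
if [ -z "${OPENAI_API_KEY:-}" ]; then
  KEY_CHECK="OPENAI_API_KEY is not set"
else
  KEY_CHECK=$(curl -s -o /dev/null -w "HTTP %{http_code}" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    https://api.openai.com/v1/models)
fi
echo "$KEY_CHECK"
```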
How to Use MCP Servers
One of Witsy’s most powerful features is its ability to act as a Universal MCP Client. This means you can give an LLM (like GPT-4o) access to your local filesystem, databases, or specialized APIs via MCP servers.
Adding an MCP Server
- Go to Settings > MCP.
- Click Add Server.
- Provide the configuration for the server (you can find pre-built servers on Smithery.ai).
- Once connected, your LLM will be able to invoke the tools provided by that server during chat sessions.
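For a stdio-based MCP server, the configuration boils down to a command and its arguments, which Witsy launches and talks to over stdin/stdout. As an illustration, here is what an entry for the reference filesystem server could look like (the exposed directory is an example; any path works):

```shell
# Illustrative stdio MCP server definition. Witsy launches the command and
# speaks the MCP protocol over the process's stdin/stdout.
# @modelcontextprotocol/server-filesystem is one of the official reference
# servers; it requires Node.js.
MCP_COMMAND="npx"
MCP_ARGS="-y @modelcontextprotocol/server-filesystem $HOME/Documents"
echo "Server command: $MCP_COMMAND $MCP_ARGS"
```

Once this server is connected, the LLM can call its file-reading and file-listing tools on the directory you exposed.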
Automation via CLI and API
Witsy includes a built-in HTTP API and CLI for users who want to automate their workflows. This allows you to interact with Witsy from other applications or terminal scripts.
Enabling the API
To use the API, ensure it is enabled in Settings > General > Enable HTTP Endpoints. By default, Witsy listens on a local port (e.g., 4321).
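You can confirm the endpoint is up from a terminal. This sketch assumes the example port of 4321; check Settings for your actual port:

```shell
# Probe the local Witsy HTTP endpoint. Prints the HTTP status code,
# or 000 if nothing is listening on the port.
API_STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 2 \
  http://localhost:4321/ || true)
echo "Witsy endpoint status: ${API_STATUS:-unreachable}"
```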
Example: Checking Config via CLI
The Witsy CLI communicates with the desktop app. You can verify your current configuration programmatically:
```typescript
// Runs from within the Witsy repository; the desktop app must be running
// with HTTP endpoints enabled for the call to succeed.
import { WitsyAPI } from './cli/api';

const api = new WitsyAPI();
const config = await api.getConfig();
console.log(`Current Engine: ${config.engine.name}`);
console.log(`Current Model: ${config.model.name}`);
```
Example: Running a Chat Completion
You can send prompts to Witsy via a POST request to the local server. This is useful for building custom "shortcuts" or automation scripts.
```shell
curl -X POST http://localhost:4321/api/complete \
  -H "Content-Type: application/json" \
  -d '{
    "engine": "openai",
    "thread": [{"role": "user", "content": "Explain MCP in one sentence."}]
  }'
```
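For scripts that send more than one prompt, it is convenient to wrap the request in a small function. This is a sketch built on the same endpoint, port, and payload shape as the curl example above; the structure of the response body is not documented here, so inspect it (e.g., with `jq`) before parsing:

```shell
# Hypothetical helper around Witsy's /api/complete endpoint.
# Endpoint path, port, and request shape match the curl example above;
# the response format is an assumption -- inspect it before parsing.
witsy_complete() {
  local prompt="$1"
  curl -s -X POST http://localhost:4321/api/complete \
    -H "Content-Type: application/json" \
    -d "{\"engine\": \"openai\", \"thread\": [{\"role\": \"user\", \"content\": \"$prompt\"}]}"
}

# Usage (requires Witsy running with HTTP endpoints enabled):
# witsy_complete "Explain MCP in one sentence."
```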
Common Capabilities at a Glance
Witsy isn't just for text. Here is how you can use it for various media tasks:
- Image Creation: Use providers like DALL-E 3, Fal.ai, or Stable Diffusion.
- Web Search: Toggle search plugins (Perplexity, Brave, or Tavily) to give your LLM real-time internet access.
- Speech-to-Text: Use the Whisper integration to dictate your prompts.
- Scratchpad: Use the interactive Scratchpad to iterate on long-form content side-by-side with your chat.