Quick Start
Installation
Get Witsy up and running on your desktop in seconds.
macOS
The fastest way to install on macOS is via Homebrew:
brew install --cask witsy
Windows & Linux
- Visit the Witsy Releases page.
- Download the installer for your platform (.exe for Windows, .AppImage or .deb for Linux).
- Run the installer and launch the application.
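On Linux, a downloaded AppImage is not executable by default, so "run the installer" means marking it executable first. A minimal sketch of that step follows; the filename Witsy.AppImage is illustrative, so substitute the actual file you downloaded from the Releases page.

```shell
#!/bin/sh
# Sketch: mark a downloaded AppImage executable, then launch it.
# "Witsy.AppImage" is an illustrative name - use the real filename.
launch_appimage() {
  app="$1"
  if [ -f "$app" ]; then
    chmod +x "$app"   # AppImages ship without the execute bit set
    "./$app"
  else
    echo "File not found: $app"
  fi
}

launch_appimage "${1:-Witsy.AppImage}"
```

For .deb packages this step is unnecessary; the system package manager handles installation.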
Connecting Your First LLM
Witsy is a Bring Your Own Key (BYOK) application. You can either connect to a cloud provider (like OpenAI or Anthropic) or run models locally using Ollama.
Scenario A: Using a Cloud Provider (e.g., OpenAI)
- Open Witsy and click the Settings (gear icon).
- Navigate to the Models or Providers tab.
- Locate OpenAI in the list.
- Paste your API key into the field.
- Click Save. You can now select models like gpt-4o from the main chat interface.
Scenario B: Running Locally with Ollama (Free)
If you prefer to run models locally without an API key:
- Download and install Ollama.
- Open your terminal and pull a model: ollama run llama3.
- In Witsy Settings, ensure the Ollama provider is enabled.
- Witsy will automatically detect your local models. Select a local model from the chat dropdown and start chatting.
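Before enabling the Ollama provider, you can confirm that Ollama is actually serving. The check below is a sketch that relies on Ollama's own HTTP API (port 11434 and the /api/tags model-list endpoint are Ollama defaults, not Witsy settings); whether Witsy's auto-detection uses this exact endpoint is an assumption.

```shell
#!/bin/sh
# Sketch: verify a local Ollama install before pointing Witsy at it.
# 11434 is Ollama's default port; /api/tags lists installed models.
check_ollama() {
  if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
    echo "Ollama is up; installed models:"
    curl -s http://localhost:11434/api/tags
  else
    echo "Ollama is not reachable on localhost:11434 - start it first"
  fi
}

check_ollama
```

If the check fails, launch the Ollama application (or run ollama serve) and try again.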
Your First Chat
Once a provider is configured, you can start interacting with Witsy:
- Start a Conversation: Type your prompt in the bottom input bar.
- Attach Files/Images: Click the paperclip icon or drag and drop an image into the chat to use Vision capabilities (e.g., "Describe this image").
- Use the Scratchpad: For long-form content like coding or writing, click the Scratchpad icon to open a side-by-side editor where you can refine AI-generated text interactively.
How-To: Running MCP Servers
Witsy is a universal Model Context Protocol (MCP) client. This allows you to give any LLM (even local ones) access to external tools and data.
- Find a Server: Browse Smithery.ai for available MCP servers (e.g., Google Drive, GitHub, or local filesystem).
- Configure: In Witsy Settings, go to the Plugins/MCP section.
- Add Server: Paste the connection command provided by the MCP repository.
- Usage: In your chat, Witsy will now show available "Tools." You can ask, "What are my latest files in Google Drive?" and the LLM will use the MCP server to fetch the answer.
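As a concrete example of the "connection command" in step 3, the reference MCP filesystem server is typically launched via npx; the folder path below is a placeholder for a directory you want the LLM to be able to read. The exact field Witsy expects this command in may differ by version.

```shell
npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/folder
```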
Using the CLI and HTTP API
For power users, Witsy includes a built-in HTTP server that allows you to interact with the assistant via the command line or scripts.
Enabling the API
- Go to Settings > General.
- Enable HTTP Endpoints.
Basic CLI Usage
Witsy provides a local API at http://localhost:[port] (the examples below use port 4321). You can query your configured engines and models programmatically.
Example: List Configured Engines
curl http://localhost:4321/api/engines
Example: Generate a Completion
You can send a chat thread to the complete endpoint:
curl -X POST http://localhost:4321/api/complete \
-H "Content-Type: application/json" \
-d '{
"engine": "openai",
"thread": [{ "role": "user", "content": "Write a hello world script in Python" }]
}'
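The curl call above can be wrapped in a small helper for scripting. In this sketch, WITSY_PORT and the build_payload/ask function names are our own conventions, not part of Witsy; the request shape (an "engine" plus a "thread" of role/content messages) follows the example above.

```shell
#!/bin/sh
# Sketch: a minimal scripting wrapper around Witsy's /api/complete endpoint.
# WITSY_PORT, build_payload and ask are our own conventions.
WITSY_PORT="${WITSY_PORT:-4321}"

build_payload() {
  # $1 = engine name, $2 = user prompt (assumed free of embedded quotes)
  printf '{"engine":"%s","thread":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

ask() {
  curl -s -X POST "http://localhost:${WITSY_PORT}/api/complete" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$1" "$2")"
}

# Example (requires Witsy running with HTTP Endpoints enabled):
# ask openai "Write a hello world script in Python"
build_payload openai "Say hi"
echo
```

For prompts containing quotes or newlines, build the JSON with a proper tool such as jq instead of printf.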
Common Recipes
How to Create Images
Witsy supports OpenAI (DALL-E), Stable Diffusion, and more.
- Action: Select an Image Provider in settings.
- Prompt: Type /image a futuristic city in neon lights (if using a plugin) or use the dedicated Image Generation UI.
How to Summarize a Webpage
Witsy includes a Webview feature.
- Action: Click the "Web" icon to open the internal browser.
- Recipe: Navigate to a URL, then click the Summarize button to send the page content directly to your current chat.