Connecting AI Providers
Witsy is a "Bring Your Own Key" (BYOK) application: you have full control over which AI models you use and you pay the providers directly, only for what you consume. You can also run models locally for free using tools like Ollama.
Configuring a New Provider
To connect any AI provider, follow these general steps:
- Open Settings (click the gear icon or use Cmd/Ctrl + ,).
- Navigate to the Models or Engines section.
- Select your desired provider from the list.
- Enter your API Key and click Save.
Recipe: Connecting Cloud Providers (OpenAI, Anthropic, Gemini)
Cloud providers are the easiest to set up and offer the highest performance for complex tasks like MCP (Model Context Protocol) integration.
OpenAI
- Scenario: You want to use GPT-4o for chat and DALL-E 3 for image generation.
- Steps:
- Get an API key from the OpenAI Dashboard.
- In Witsy Settings, select OpenAI.
- Paste your key. Witsy will automatically fetch available models (GPT-4o, GPT-4-turbo, etc.).
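If you want to confirm your key works before pasting it into Witsy, you can query the same endpoint Witsy uses to fetch the model list. A minimal sketch (the helper name is illustrative; the endpoint and Bearer-token header are OpenAI's standard API):

```python
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for OpenAI's model list.

    Sending it with urllib.request.urlopen() returns the same list of
    models Witsy fetches after you paste your key.
    """
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("sk-...")  # substitute your real key
print(req.full_url)                   # https://api.openai.com/v1/models
```

A 401 response from this endpoint means the key itself is wrong; an empty model list in Witsy usually points at billing instead (see Troubleshooting below).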
Anthropic
- Scenario: You need the best performance for coding or using Witsy as an MCP Client.
- Steps:
- Create a key at the Anthropic Console.
- In Witsy, select Anthropic.
- Paste your key. You can now select models like Claude 3.5 Sonnet.
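Note that Anthropic authenticates differently from OpenAI-style providers: requests carry an x-api-key header plus a required anthropic-version header rather than a Bearer token. A sketch of the headers involved (the helper name is illustrative):

```python
def build_anthropic_headers(api_key: str) -> dict:
    """Headers for a request to Anthropic's Messages API.

    Anthropic uses x-api-key (not Authorization: Bearer) and requires
    an anthropic-version header pinning the API version.
    """
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }

print(build_anthropic_headers("sk-ant-..."))  # substitute your real key
```

Witsy handles this difference for you; it only matters if you are debugging a rejected key with curl or a proxy.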
Recipe: Running Local AI for Free (Ollama & LM Studio)
If you want to keep your data private or avoid API costs, you can use local providers.
Using Ollama
Witsy has first-class support for Ollama.
- Download & Install: Get Ollama from ollama.com.
- Run a Model: Open your terminal and run ollama run llama3.
- Witsy Setup:
- Go to Settings > Engines > Ollama.
- Witsy defaults to http://localhost:11434. If Ollama is running, Witsy will automatically detect your downloaded models.
- Tip: Ensure Ollama is running in the background for Witsy to connect.
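The automatic detection works because Ollama exposes a local REST API: its /api/tags endpoint returns the downloaded models as JSON. A minimal sketch of reading that list yourself (the helper name is illustrative):

```python
import json

OLLAMA_URL = "http://localhost:11434"  # Witsy's default Ollama endpoint

def list_ollama_models(raw_json: str) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(raw_json)["models"]]

# With Ollama running, fetch the same list Witsy detects:
#   import urllib.request
#   body = urllib.request.urlopen(f"{OLLAMA_URL}/api/tags").read()
#   print(list_ollama_models(body))

# Sample of the response shape /api/tags returns:
sample = '{"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}'
print(list_ollama_models(sample))  # ['llama3:latest', 'mistral:latest']
```

If this list is empty, run ollama pull or ollama run for a model first; Witsy can only show models Ollama has already downloaded.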
Using LM Studio
- Open LM Studio and download a model.
- Go to the Local Server tab (icon looks like a double-ended arrow) and click Start Server.
- In Witsy:
- Select LM Studio in the Engines settings.
- Ensure the Server URL matches LM Studio (usually http://localhost:1234/v1).
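A quick way to check that a local server (LM Studio on port 1234, or Ollama on 11434) is actually up before blaming Witsy is to test whether anything is listening on the port. A minimal sketch using only the standard library (the helper name is illustrative):

```python
import socket

def server_is_listening(host: str = "localhost", port: int = 1234) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

# LM Studio default port; use 11434 for Ollama.
print(server_is_listening("localhost", 1234))
```

If this prints False, start (or restart) the local server before adjusting anything in Witsy.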
Scenario: Using OpenRouter for "Provider Fallback"
Witsy supports OpenRouter, which gives you access to almost any model through a single API, and it also supports OpenRouter's custom providerOrder configuration.
How to configure custom provider routing:
- In OpenRouter settings, find the Provider Order field.
- Enter your preferred providers one per line (e.g., Anthropic, Google, OpenAI).
- Witsy passes this list to OpenRouter so your request is routed to your preferred infrastructure first, falling back to other providers only if necessary.
// Example Provider Order input
Anthropic
OpenAI
Together
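Under the hood, a provider-order list like the one above maps onto the provider routing object in an OpenRouter chat-completion request. A sketch of the resulting request body, assuming the order and allow_fallbacks fields from OpenRouter's provider-routing API (the helper name is illustrative):

```python
def build_openrouter_body(model: str, messages: list, provider_order: list) -> dict:
    """Chat-completion body with OpenRouter provider routing.

    provider.order lists infrastructure providers to try in sequence;
    allow_fallbacks permits routing beyond the list if all of them fail.
    """
    return {
        "model": model,
        "messages": messages,
        "provider": {
            "order": provider_order,
            "allow_fallbacks": True,
        },
    }

body = build_openrouter_body(
    "anthropic/claude-3.5-sonnet",
    [{"role": "user", "content": "Hello"}],
    ["Anthropic", "OpenAI", "Together"],
)
print(body["provider"]["order"])  # ['Anthropic', 'OpenAI', 'Together']
```

Witsy builds this for you from the Provider Order field; the sketch just shows what the setting controls.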
Connecting "OpenAI-Compatible" Services
Many providers (like Groq, DeepSeek, Perplexity, or Together.ai) use the OpenAI API standard.
To connect them:
- Choose the specific provider if listed (e.g., Groq).
- If not listed, choose Custom OpenAI Provider.
- Enter the Base URL provided by the service (e.g., https://api.together.xyz/v1).
- Enter your API Key.
- Click Refresh Models to populate the model list.
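The Base URL works because every OpenAI-compatible service exposes the same endpoint paths under it: the client just appends paths like models or chat/completions. A minimal sketch of that joining (the helper name is illustrative):

```python
def endpoint(base_url: str, path: str) -> str:
    """Join a provider Base URL with a standard OpenAI-style path."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")

print(endpoint("https://api.together.xyz/v1", "chat/completions"))
# https://api.together.xyz/v1/chat/completions
print(endpoint("http://localhost:1234/v1", "models"))
# http://localhost:1234/v1/models
```

This is also why the trailing /v1 matters: if you omit it, requests go to paths like /chat/completions instead of /v1/chat/completions and typically return 404.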
Troubleshooting Connections
| Issue | Solution |
| :--- | :--- |
| Model list is empty | Ensure your API key is correct and you have a positive balance with the provider. Click the "Refresh" icon next to the model selector. |
| Local model not found | Verify the local server (Ollama/LM Studio) is running. Check that the port (e.g., 11434) isn't blocked by a firewall. |
| Connection Timeout | If using a proxy or VPN, ensure localhost is excluded from proxy rules so Witsy can talk to local providers. |
Pro Tip: You can set different providers for different tasks. For example, use Anthropic for Chat, OpenAI for Image Creation, and ElevenLabs for Text-to-Speech—all active at the same time within Witsy.