# Accessing the Local HTTP API
Witsy includes a built-in HTTP server that lets you interact with your configured AI engines programmatically. This is ideal for building custom automation scripts, integrating with productivity tools like Raycast or Alfred, or creating your own frontend for Witsy's backend.
## Enabling the API
By default, the HTTP API is disabled for security. To enable it:
- Open Witsy Settings.
- Navigate to the General tab.
- Toggle on Enable HTTP Endpoints.
The API runs on `http://localhost:[port]`. While the port can vary based on availability, it is typically listed in the settings or discovered automatically by the Witsy CLI.
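Since the port can vary, a script may want to probe a few candidates before sending requests. The sketch below is an assumption, not part of Witsy itself: it simply tries each candidate port and returns the first one whose `/api/cli/config` endpoint responds.

```python
import http.client

def find_witsy_port(candidates, host="127.0.0.1", timeout=0.5):
    """Return the first candidate port whose /api/cli/config answers, else None."""
    for port in candidates:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("GET", "/api/cli/config")
            if conn.getresponse().status == 200:
                return port
        except OSError:
            pass  # connection refused or timed out: nothing listening here
        finally:
            conn.close()
    return None

# Example: probe the port used throughout this guide first.
port = find_witsy_port([4321])
```

The candidate list `[4321]` matches the examples below; extend it with whatever ports your setup might use.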
## Common Recipes
### 1. Checking API Status and Current Configuration
Before sending prompts, you can verify that the API is reachable and see which engine and model Witsy currently uses by default.
Request:

```shell
curl http://localhost:4321/api/cli/config
```

Response Example:

```json
{
  "engine": { "id": "openai", "name": "OpenAI" },
  "model": { "id": "gpt-4o", "name": "GPT-4o" },
  "userDataPath": "/Users/name/Library/Application Support/Witsy",
  "enableHttpEndpoints": true
}
```
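A script consuming this response only needs standard JSON parsing. A minimal sketch, using the field names from the example response above:

```python
import json

def active_engine_model(config_json):
    """Return the (engine id, model id) pair from a /api/cli/config response."""
    config = json.loads(config_json)
    return config["engine"]["id"], config["model"]["id"]

# Sample response body, matching the example above.
sample = """{
  "engine": { "id": "openai", "name": "OpenAI" },
  "model": { "id": "gpt-4o", "name": "GPT-4o" },
  "enableHttpEndpoints": true
}"""

print(active_engine_model(sample))  # ('openai', 'gpt-4o')
```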
### 2. Listing Available Engines and Models
If you want to switch engines programmatically, you first need to know which engines are configured and which models each one supports.
To list engines:

```shell
curl http://localhost:4321/api/engines
```

To list models for a specific engine (e.g., Ollama):

```shell
curl http://localhost:4321/api/models/ollama
```
### 3. Running a Chat Completion (Streaming)
The `/api/complete` endpoint allows you to send a conversation thread. By default, it uses Server-Sent Events (SSE) to stream the response.
Request:

```shell
curl -X POST http://localhost:4321/api/complete \
  -H "Content-Type: application/json" \
  -d '{
    "engine": "openai",
    "model": "gpt-4o",
    "thread": [
      { "role": "user", "content": "Explain quantum computing in one sentence." }
    ]
  }'
```
Output Format:

The API returns a stream of data chunks. Each line starts with `data: `.

```
data: {"type":"content","text":"Quantum"}
data: {"type":" content","text":" computing"}
...
data: [DONE]
```
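Consuming this stream means splitting on lines, decoding the JSON after each `data: ` prefix, and stopping at `[DONE]`. A minimal parser sketch, assuming the chunk shape shown above (`type`/`text` fields):

```python
import json

DONE = object()  # sentinel for the end-of-stream marker

def parse_sse_line(line):
    """Parse one 'data: ...' line; return DONE, None (non-data line), or a dict."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return DONE
    return json.loads(payload)

def collect_text(lines):
    """Concatenate the 'text' fields of content chunks until [DONE]."""
    out = []
    for line in lines:
        chunk = parse_sse_line(line)
        if chunk is DONE:
            break
        if chunk and chunk.get("type") == "content":
            out.append(chunk["text"])
    return "".join(out)

stream = [
    'data: {"type":"content","text":"Quantum"}',
    'data: {"type":"content","text":" computing"}',
    'data: [DONE]',
]
print(collect_text(stream))  # Quantum computing
```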
### 4. Sending a Non-Streaming Prompt
If you prefer a standard JSON response instead of a stream, set `"stream": false` in your request body.
Request:

```shell
curl -X POST http://localhost:4321/api/complete \
  -H "Content-Type: application/json" \
  -d '{
    "stream": false,
    "thread": [{ "role": "user", "content": "Hello!" }]
  }'
```
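The same request translates directly to the standard library, with no third-party dependencies. A sketch assuming port 4321 and that the non-streaming reply is a JSON body; the helper names are ours, not part of any Witsy SDK:

```python
import json
import urllib.request

def build_payload(thread, engine=None, model=None):
    """Build a non-streaming /api/complete body; omit engine/model for defaults."""
    body = {"stream": False, "thread": list(thread)}
    if engine:
        body["engine"] = engine
    if model:
        body["model"] = model
    return body

def complete(thread, base_url="http://localhost:4321", engine=None, model=None):
    """POST the thread to /api/complete and return the parsed JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/api/complete",
        data=json.dumps(build_payload(thread, engine, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Usage (requires a running Witsy instance):
# reply = complete([{"role": "user", "content": "Hello!"}])
```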
### 5. Saving a Conversation to Witsy History
You can programmatically inject a conversation into Witsy's UI history so you can continue the chat later in the desktop application.
Request:

```shell
curl -X POST http://localhost:4321/api/conversations \
  -H "Content-Type: application/json" \
  -d '{
    "chat": {
      "title": "Scripted Conversation",
      "engine": "anthropic",
      "model": "claude-3-5-sonnet",
      "messages": [
        { "role": "user", "content": "Start a log." },
        { "role": "assistant", "content": "Log started." }
      ]
    }
  }'
```
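When injecting conversations from a script, it helps to build the `chat` object in one place so every saved conversation has the same shape. A small builder sketch mirroring the request body above (the function name is ours):

```python
import json

def conversation_payload(title, engine, model, messages):
    """Build the body for POST /api/conversations, matching the curl example."""
    return {
        "chat": {
            "title": title,
            "engine": engine,
            "model": model,
            "messages": list(messages),
        }
    }

body = conversation_payload(
    "Scripted Conversation",
    "anthropic",
    "claude-3-5-sonnet",
    [
        {"role": "user", "content": "Start a log."},
        {"role": "assistant", "content": "Log started."},
    ],
)
print(json.dumps(body, indent=2))
```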
## API Reference Summary
| Method | Endpoint | Description |
| :--- | :--- | :--- |
| GET | `/api/cli/config` | Returns current active engine, model, and API status. |
| GET | `/api/engines` | Returns a list of all configured AI providers. |
| GET | `/api/models/:engine` | Returns available models for a specific provider. |
| POST | `/api/complete` | Executes a prompt. Supports `stream` (bool) and `thread` (array). |
| POST | `/api/conversations` | Saves a chat object to the Witsy history database. |
## Usage Notes
- Content-Type: Always use `application/json` for POST requests.
- Localhost Only: The API is bound to `127.0.0.1` for security and is not accessible from outside your local machine.
- Error Handling: If an engine is not configured in the Witsy UI, the API will return a `404 Not Found` or `500 Internal Server Error` with a message indicating the configuration issue.
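Because error responses carry a descriptive message, a wrapper should read the body of failed requests rather than discard it. A sketch (our own helper, under the assumptions above about error behavior) that surfaces the server's message on `404`/`500` and distinguishes an unreachable API:

```python
import json
import urllib.error
import urllib.request

def call_api(url, payload=None, timeout=10):
    """Call a Witsy endpoint; return (status, body).

    On HTTP errors the body is the server's error text (often a message about
    a missing engine configuration); on connection failure status is None.
    """
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"} if data else {},
        method="POST" if data else "GET",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as err:
        # 404/500 responses still carry a body explaining the problem.
        return err.code, err.read().decode("utf-8", errors="replace")
    except urllib.error.URLError:
        return None, "Witsy API unreachable -- is 'Enable HTTP Endpoints' on?"
```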