# Configure LLM model

Witty needs a valid LLM model. This section describes how to add an LLM configuration.
## LLM structure

An LLM model is composed of these fields:

- `provider`: LLM provider;
- `api_key`: API key of the provider;
- `endpoint`: URL where the LLM model is located;
- `api_version`: API version defined by the provider;
- `model`: LLM model name;
- `deployment`: deployment name. It can differ from `model`.
Here's an example of an LLM configuration:

```json
{
  "provider": "azure_openai",
  "api_key": "xxx",
  "endpoint": "https://xxx.cognitiveservices.azure.com/",
  "api_version": "2025-01-01-preview",
  "model": "gpt-4.1",
  "deployment": "gpt-4.1"
}
```
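Before sending a configuration to the service, it can be convenient to check that every field listed above is present. The helper below is a minimal sketch, not part of the Witty API; the field names come from the structure described in this section.

```python
# Hypothetical helper: check that an LLM configuration dict contains
# every field Witty expects. Illustrative only, not part of the Witty API.
REQUIRED_FIELDS = ("provider", "api_key", "endpoint",
                   "api_version", "model", "deployment")


def validate_llm_config(config: dict) -> dict:
    """Return the config unchanged, or raise ValueError listing missing fields."""
    missing = [f for f in REQUIRED_FIELDS if f not in config]
    if missing:
        raise ValueError(f"missing LLM config fields: {', '.join(missing)}")
    return config


config = validate_llm_config({
    "provider": "azure_openai",
    "api_key": "xxx",
    "endpoint": "https://xxx.cognitiveservices.azure.com/",
    "api_version": "2025-01-01-preview",
    "model": "gpt-4.1",
    "deployment": "gpt-4.1",
})
```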
## Interacting with the LLM

Currently, the following APIs are available to interact with the LLM configuration:

- `GET /witty/v1/llm/config`: retrieve the LLM configuration;
- `POST /witty/v1/llm/config`: create or edit an LLM configuration. The body is a JSON object in the LLM structure described above;
- `POST /witty/v1/llm/chat`: chat with the LLM. The body is a JSON object with this format:
```json
{
  "query": "Some text"
}
```
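The chat endpoint above can be called with the Python standard library alone. The sketch below only builds the request; `BASE_URL` is a placeholder for wherever the Witty microservice is deployed, and sending the request is left to a one-line `urlopen` call shown in the comment.

```python
import json
import urllib.request

# Placeholder: substitute the host where the Witty microservice is running.
BASE_URL = "http://localhost:8080"


def chat_request(query: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for /witty/v1/llm/chat."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/witty/v1/llm/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Sending it would be: urllib.request.urlopen(chat_request("Some text"))
req = chat_request("Some text")
```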
## Supported models

Currently, the Witty microservice has been tested against the following providers and models:
| LLM Provider | Model |
|---|---|
| azure_openai | gpt-4.1 (preferred) |
| azure_openai | gpt-5 |
| azure_openai | gpt-4.5-preview |
| azure_openai | gpt-4o |
| azure_openai | o1 |
Model o1-mini is currently not supported due to an OpenAI limitation.