| Capability | Tenant-controlled setup | Typical SaaS |
|---|---|---|
| LLM provider choice | OpenAI · Azure · Anthropic · Mistral · Ollama · LocalAI · your own OpenAI-compatible API | fixed by the vendor, usually OpenAI behind the curtain |
| API key ownership | tenant configures their own key | vendor account; tenant pays for "AI credits" |
| Self-hosted option | Ollama, vLLM, LocalAI; air gap possible | not supported, cloud enforced |
| Custom endpoint failure | hard fail (data control retained) | irrelevant: no custom endpoint possible |
| Schema enforcement (tool calling) | in 10 editors · schema generated from the TS model | partly free text, partly templates, rarely tool calling |
| Model per feature | each AI feature picks its own model | one model for every feature |
| Token transparency | live in tenant settings · per feature | "AI credit" bundles, no real-time view |
| Per-feature off switch | every AI mode independently switchable | global "AI on/off" or nothing at all |
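To make the tool-calling row concrete: a minimal sketch of what schema enforcement looks like, assuming an OpenAI-style `tools` payload. The `CreateTicketArgs` type and `create_ticket` name are hypothetical; in a real setup the JSON Schema would be generated from the TS model rather than written out by hand as it is here.

```typescript
// Hypothetical TS model for one editor's structured output.
interface CreateTicketArgs {
  title: string;
  priority: "low" | "medium" | "high";
}

// The matching OpenAI-style tool definition. Hand-written here for
// illustration; a generator would derive it from CreateTicketArgs.
const createTicketTool = {
  type: "function",
  function: {
    name: "create_ticket",
    description: "Create a support ticket from the model's structured output",
    parameters: {
      type: "object",
      properties: {
        title: { type: "string" },
        priority: { type: "string", enum: ["low", "medium", "high"] },
      },
      required: ["title", "priority"],
      additionalProperties: false,
    },
  },
} as const;

// A value conforming to the model the schema enforces.
const example: CreateTicketArgs = { title: "Printer down", priority: "high" };
```

The point of the schema is that the model's reply is constrained to this exact shape, so the editor can consume it without free-text parsing.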
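The "hard fail" row can be sketched as a fail-closed dispatcher, assuming a tenant config with an endpoint and key. All names (`TenantAiConfig`, `callTenantLlm`, the injected `doFetch`) are illustrative, not a real API: the one behavior shown is that an unreachable custom endpoint aborts the request instead of silently falling back to a vendor-hosted default.

```typescript
// Hypothetical tenant-level AI configuration.
type TenantAiConfig = { endpoint: string; apiKey: string };

// Minimal fetch-like shape so the sketch stays self-contained.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

async function callTenantLlm(
  cfg: TenantAiConfig,
  body: unknown,
  doFetch: FetchLike
): Promise<unknown> {
  const res = await doFetch(cfg.endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${cfg.apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  // Fail closed: no fallback endpoint, so tenant data never leaves
  // the boundary the tenant configured.
  if (!res.ok) {
    throw new Error(`LLM endpoint returned ${res.status}; request aborted, no fallback`);
  }
  return res.json();
}
```

Retaining data control this way trades availability for confidentiality: when the tenant's endpoint is down, the AI feature is down too, by design.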