Last updated: April 11, 2026

AI Use Policy

Scope

This policy applies to AI features delivered through AgentNxt Chat, Open WebUI, LiteLLM Gateway, Ollama, Langfuse, workflow agents, MCP integrations, and connected model providers.

Governance

AgentNxt centralizes model routing through LiteLLM where possible, allowing administrators to configure model availability, provider keys, budgets, logging, and access rules. Local model routes through Ollama can be used for workloads that should remain within the self-hosted environment.
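As an illustration, a gateway configuration along these lines could expose both a hosted route and a local Ollama route. Every model name, key reference, and limit below is a hypothetical example under this policy, not a required value:

```yaml
# Hypothetical LiteLLM gateway config.yaml -- illustrative sketch only.
model_list:
  # Hosted provider route; the API key is read from an environment variable.
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  # Local route via Ollama for workloads that must stay self-hosted.
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY  # gateway access key

litellm_settings:
  max_budget: 100        # example spend cap (USD)
  budget_duration: 30d   # example reset window
```

Administrators control which routes appear in `model_list`, so removing a hosted entry is sufficient to restrict a workload to the self-hosted path.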

Data Use

Prompts, files, context, tool calls, and outputs may be processed by configured models, gateways, observability systems, and workflow services. Customer administrators are responsible for choosing approved model routes for sensitive, confidential, regulated, or personal data.

Human Review

A qualified person must review AI output before it is used in high-impact decisions, including regulated advice, employment, credit, insurance, healthcare, legal analysis, safety-critical operations, identity decisions, or any decision with a material effect on individuals.

Restricted Uses

Users must not use AgentNxt to generate illegal content or malware, facilitate credential theft, perform biometric identification without authorization, engage in deception, harassment, unlawful surveillance, or discriminatory decisioning, or produce content that violates applicable law, provider policy, or customer policy.

Monitoring

Langfuse, LiteLLM, application logs, and security tooling may record metadata, traces, errors, token usage, and operational events for observability, abuse prevention, cost control, auditability, and incident response.
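For example, request tracing can be wired from the gateway into Langfuse, and whether message bodies are recorded, or only metadata and token usage, is configurable. The settings below are an illustrative sketch, not mandated values; Langfuse credentials are assumed to be supplied via environment variables:

```yaml
# Hypothetical observability settings for the LiteLLM gateway -- illustrative only.
litellm_settings:
  success_callback: ["langfuse"]   # send successful request traces to Langfuse
  failure_callback: ["langfuse"]   # send errors for incident response
  turn_off_message_logging: true   # keep metadata and token counts, redact prompt/output text
```

A redaction setting like the one above lets monitoring serve cost control and abuse prevention without retaining sensitive prompt content in the trace store.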