You pasted logs into ChatGPT and got a plausible root-cause analysis (RCA). It's wrong. What changes when your LLM can query the observability stack directly — and what new failure modes that creates.
Your LLM just deleted a production alert rule. The approval gate blocks irreversible operations — not every call, just the ones where 'undo' means filing a support ticket.
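The approval gate described above can be sketched as a thin wrapper around tool dispatch. This is a minimal illustration, not a real SDK: the names `ToolCall`, `IRREVERSIBLE`, and the `approve` callback are all hypothetical, and which operations count as irreversible is an assumption you'd tune per stack.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical deny-by-default set: operations where "undo" means
# filing a support ticket. Read-only queries are not listed.
IRREVERSIBLE = {"delete_alert_rule", "drop_dashboard", "purge_metrics"}

@dataclass
class ToolCall:
    name: str
    args: dict

def execute(call: ToolCall,
            run: Callable[[ToolCall], str],
            approve: Callable[[ToolCall], bool]) -> str:
    # Reversible or read-only calls pass straight through; destructive
    # calls block until a human explicitly approves this specific call.
    if call.name in IRREVERSIBLE and not approve(call):
        return f"blocked: {call.name} requires approval"
    return run(call)
```

The point of the design is that the gate sits between the model's intent and the side effect, so a confidently wrong LLM can still only *propose* the delete.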
Everyone's plugging unvetted MCP servers into production LLMs. Nobody's asking who's liable when they leak credentials or delete data. The governance gap enterprises are ignoring.