The AI Coach Dock is a persistent, context-aware "coach surface" that:
- pulls data from across the app (projects/albums/tasks/contacts/mood/workspace)
- uses a local coaching engine by default
- optionally uses a GPT engine in "Enhanced" mode
- enforces safety and reliability guardrails so it doesn't hallucinate entities such as your projects or contacts
What are the Local and GPT modes?
What is Local Coach Mode (default)?
- Fast, deterministic, always available
- Uses your current app state to generate "what to do next" guidance
- Doesn't require external calls
- Great for the OS philosophy: calm, dependable, always-on
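Because the local coach is deterministic, the same app state always produces the same guidance. A minimal sketch of that idea (the `AppState` shape and `nextStep` rules here are illustrative assumptions, not Kora's actual API):

```typescript
// Hypothetical, simplified slice of app state the local coach might read.
interface AppState {
  overdueTasks: number;
  activeProject: string | null;
}

// Deterministic rules: same state in, same guidance out, no external calls.
function nextStep(state: AppState): string {
  if (state.overdueTasks > 0) {
    return `Clear ${state.overdueTasks} overdue task(s) first.`;
  }
  if (state.activeProject) {
    return `Keep momentum on "${state.activeProject}".`;
  }
  return "Pick one project to focus on today.";
}
```

Rule-based guidance like this is what makes the mode fast and always available: there is nothing to time out and nothing to rate-limit.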
What is GPT Coach Mode (Enhanced)?
- Desktop-only (requires the desktop runtime)
- Tier-gated (text chat is pro-gated; the local tier cannot enable GPT mode)
- Built with robust fallback: if the GPT call fails, Kora falls back to local guidance automatically
- Includes guardrails that prevent "invented entities" in replies (reduces AI hallucination risk)
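The fallback behavior above can be sketched as a simple wrapper: try the GPT engine, and on any error return the local engine's answer instead. Function names and signatures here are assumptions for illustration, not Kora's real internals:

```typescript
// Illustrative fallback wrapper: any GPT failure (error, timeout, rejection)
// degrades gracefully to the deterministic local engine.
async function coachReply(
  prompt: string,
  gpt: (p: string) => Promise<string>,
  local: (p: string) => string,
): Promise<string> {
  try {
    return await gpt(prompt);
  } catch {
    // The user always gets guidance, even when the enhanced engine is down.
    return local(prompt);
  }
}
```

The design choice is that the local path is treated as the baseline, not an error state, so the coach surface never goes blank.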
What are the anti-hallucination and quality controls?
The Coach system includes:
- an entity guard that validates replies don't mention invented projects or contacts
- a "data dump" detector that blocks replies consisting of raw, JSON-like output
- a structured JSON reply mode (summary/actions/questions) for GPT
- output constraint enforcement with tier-based character limits
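To make the entity-guard idea concrete, here is a minimal sketch under stated assumptions: replies quote entity names, and the guard checks every quoted name against the user's real entities. This is an illustration of the technique, not Kora's actual implementation:

```typescript
// Hypothetical entity guard: reject a reply if it names a project or
// contact that does not exist in the user's actual data.
function passesEntityGuard(reply: string, knownEntities: string[]): boolean {
  // Assume entity mentions appear in double quotes, e.g. work on "Demo EP".
  const mentioned = [...reply.matchAll(/"([^"]+)"/g)].map((m) => m[1]);
  return mentioned.every((name) => knownEntities.includes(name));
}
```

A reply that fails the guard would be discarded or regenerated, which is how invented entities are kept out of what the user sees.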
What are Quick Answers?
Q: Does Kora have a local AI coach mode?
A: Yes — the coach works locally by default and uses your Kora context.
Q: Does Kora support GPT mode?
A: Yes — an enhanced GPT mode is available on desktop with plan gating and reliable fallback to local guidance.
Q: How does Kora prevent AI hallucinations in coaching?
A: It uses a bounded context pack and an entity guard that prevents invented projects/contacts.
Related Articles
Naming Engine: How Kora Understands Stems, Versions, Keys, and BPM
Kora's naming system parses filenames into structured meaning (title, version, stems, mix type, BPM, key) and flags issues.
Why Kora Stays Fast: Off-thread Engines and Desktop Performance
Kora uses off-thread compute to prevent UI slowdown and enable big library/workspace scaling.