# AI Features

AI in Herald is optional. When enabled, it powers classification tags, semantic search, mailbox chat, quick replies, Compose rewriting, subject suggestions, image descriptions, custom prompts, and contact enrichment.

The default AI path is Ollama on a local host. Herald can also be configured for Claude or an OpenAI-compatible provider. Mail reading, composing, cleanup, and sync that do not involve AI keep working when AI is disabled or unavailable.

Where AI appears in the UI:

| Area | What it shows |
| --- | --- |
| AI status chip | Bottom status fragment such as idle, tag, embed, reply, search, chat, defer, down, or off. |
| Classification tags | Timeline and Cleanup tag columns or preview tag lines. |
| Classification progress | Status fragment like current/total tag progress. |
| Embedding progress | Status fragment for embedding batch processing. |
| Semantic search | Timeline `?` query prefix and Contacts `?` mode. |
| Quick reply picker | Canned replies plus optional AI-generated replies. |
| Chat panel | AI conversation over recent mailbox context and tool results. |
| Compose AI panel | Rewrite prompt, quick actions, AI response, and accept control. |
| Prompt editor | Saved reusable AI prompts opened with `P`. |

Key bindings:

| Key | Context | Preconditions | Result |
| --- | --- | --- | --- |
| `a` | Main UI | AI classifier configured and folder has classifiable mail. | Starts classification for the current folder. |
| `A` | Timeline or Cleanup preview | AI configured and a target email is selected. | Re-classifies the current email only. |
| `?` | Timeline search prefix | AI/embeddings available. | Runs semantic email search. |
| `?` | Contacts | AI/embeddings available. | Opens semantic contact search. |
| `ctrl+q` | Timeline | Current email exists. | Opens quick reply picker with canned and optional AI choices. |
| `c` | Main UI | AI configured, not loading, width allows chat. | Opens chat panel. |
| `ctrl+g` | Compose | AI configured. | Opens Compose AI assistant panel. |
| `ctrl+j` | Compose | AI configured and body or reply context exists. | Generates a subject suggestion. |
| `ctrl+enter` | Compose AI panel | AI response available. | Accepts the generated response into the body. |
| `P` | Main UI | Prompt editor closed. | Opens the custom prompt editor. |
| `e` | Contacts | A contact is selected. | Runs contact enrichment. |

Set up Ollama:

  1. Install and start Ollama.
  2. Pull the configured chat/classification model and embedding model.
  3. Set `ollama.host`, `ollama.model`, and `ollama.embedding_model` in config or through settings.
  4. Launch Herald and check the AI chip.
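
Step 3 might look like the following config fragment. The file format and section layout are assumptions; only the dotted key names come from the step itself, and the model names and host value shown are placeholders you would replace with whatever you pulled.

```toml
# Hypothetical sketch of Herald's Ollama settings — key names from step 3,
# values are placeholders.
[ollama]
host = "http://localhost:11434"        # default local Ollama endpoint
model = "llama3.1"                     # chat/classification model you pulled
embedding_model = "nomic-embed-text"   # embedding model you pulled
```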

Classify a folder:

  1. Open the folder in Timeline or Cleanup.
  2. Press `a`.
  3. Watch the status progress.
  4. Review the Tag column after classification completes.

Send a quick reply:

  1. Select or open a Timeline email.
  2. Press `ctrl+q`.
  3. Choose a canned reply immediately or wait for AI replies when available.
  4. Select a reply to open Compose.

Rewrite with Compose AI:

  1. Open Compose and write body text.
  2. Press `ctrl+g`.
  3. Use quick actions 1 through 5 or type a custom instruction.
  4. Press `ctrl+enter` to accept the generated text.

Enrich a contact:

  1. Press `4`.
  2. Select a contact.
  3. Press `e`.
  4. Review the enriched company/topics in the detail panel when complete.

Status chip states and related conditions:

| State | What happens |
| --- | --- |
| AI off | No classifier is configured; the chip may show `AI: off` outside demo mode. |
| AI idle | AI is configured but no task is active. |
| AI tag | Classification is running. |
| AI embed | Embedding generation is running. |
| AI reply | Quick replies are being generated. |
| AI search | Semantic search is running. |
| AI chat | A chat request/tool loop is running. |
| AI defer | The scheduler has deferred a task. |
| AI down | The provider is unavailable or failed recently. |
| External provider | Selected message/draft/search context may leave the machine for the requested feature. |
| Model changed | Herald can invalidate stale embeddings tied to a previous embedding model. |

AI features send only the context needed for the requested action, but that context can include sender, subject, body snippets, full body text, contact metadata, folder summaries, or tool results. Ollama keeps requests local to the configured Ollama host. Claude and OpenAI-compatible providers receive prompts through their APIs. Semantic embeddings are stored in SQLite and tied to the configured embedding model.
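
One way to picture "tied to the configured embedding model": store the model name alongside each vector and drop rows whose model no longer matches. This is an illustrative sketch only; the table and column names are invented, not Herald's actual SQLite schema.

```python
import sqlite3

def invalidate_stale_embeddings(conn: sqlite3.Connection, current_model: str) -> int:
    """Delete embeddings produced by any model other than the configured one.

    Hypothetical schema: embeddings(message_id TEXT, model TEXT, vector BLOB).
    """
    cur = conn.execute("DELETE FROM embeddings WHERE model != ?", (current_model,))
    conn.commit()
    return cur.rowcount  # number of stale rows removed

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE embeddings (message_id TEXT, model TEXT, vector BLOB)")
conn.executemany(
    "INSERT INTO embeddings VALUES (?, ?, ?)",
    [("m1", "old-model", b""), ("m2", "new-model", b""), ("m3", "old-model", b"")],
)
removed = invalidate_stale_embeddings(conn, "new-model")
print(removed)  # 2
```

After a model change, the invalidated messages would then be re-embedded with the new model before semantic search covers them again.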

If AI actions report the provider as unavailable, test Ollama with `curl http://localhost:11434/api/tags` or verify external provider API keys in settings.
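
The curl check above can also be scripted. A minimal sketch, assuming only the stock Ollama `/api/tags` route named in this section; the helper name is our own.

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if the Ollama /api/tags endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as down.
        return False

# A host with nothing listening simply reports False instead of raising.
print(ollama_reachable("http://127.0.0.1:9", timeout=0.5))  # False
```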

If tags are blank, press `a` on the current folder and wait for the classification progress to finish.

If semantic search returns weak results, confirm the embedding model is installed and allow embedding progress to finish.
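
Why unfinished embedding progress weakens search: semantic search ranks mail by vector similarity, so a message that has no embedding yet can never appear in the results. A toy cosine-similarity sketch (the vectors and the `rank` helper are illustrative, not Herald's implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_vec, docs):
    """docs maps message_id -> embedding or None; unembedded mail is skipped."""
    scored = [(mid, cosine(query_vec, v)) for mid, v in docs.items() if v is not None]
    return sorted(scored, key=lambda t: t[1], reverse=True)

docs = {"m1": [1.0, 0.0], "m2": [0.6, 0.8], "m3": None}  # m3 not yet embedded
print(rank([1.0, 0.0], docs))  # m1 ranks first with 1.0; m3 never appears
```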

If Compose AI says “Write something first”, add body text or open Compose from a reply context.