Major macOS performance and stability improvements, Outlook messaging support, smarter assistant context and memory, security hardening, and polished UI components.
Significant macOS performance and stability improvements: fixes for chat scroll freezes, sidebar re-render cascades, main thread blocking during file I/O, and SwiftUI invalidation issues — resulting in a noticeably smoother and more responsive experience
Outlook messaging support: Vellum can now connect to Microsoft Outlook as a messaging provider, joining the existing Slack integration and expanding where the assistant can be reached
Smarter assistant context and memory: the assistant now seeds capability memories for all skills (including bundled ones) and CLI commands at startup, improves semantic search, and better manages context window estimates
Security hardening across the assistant and gateway: removal of dangerouslySkipPermissions, stricter risk classifications for CLI subcommands and hooks directory mutations, validation of symlink targets before spawning, and tightened admin route authorization
Polished UI components and design consistency: redesigned skill detail page, shared file browser, improved dropdown and navigation items, context window indicator, and a consistent page container layout
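The symlink check described under security hardening can be sketched roughly as follows: resolve the real target of a path and refuse to spawn it unless it lands inside an allowed directory. This is a minimal illustration only; the allow-list, the helper names, and the directories shown are assumptions, not Vellum's actual implementation.

```typescript
import { realpathSync } from "node:fs";
import path from "node:path";

// Hypothetical allow-list of directories executables may be spawned from.
const ALLOWED_ROOTS = ["/usr/local/bin", "/opt/vellum/bin"];

// Pure check: is a fully resolved path inside one of the allowed roots?
// Comparing against root + separator prevents prefix tricks like
// "/usr/local/binx" matching "/usr/local/bin".
export function isAllowedTarget(resolved: string): boolean {
  return ALLOWED_ROOTS.some(
    (root) => resolved === root || resolved.startsWith(root + path.sep)
  );
}

// Resolve symlinks first (realpathSync follows the full chain and throws
// if the target is missing), then validate before any spawn call.
export function validateSpawnTarget(binPath: string): string {
  const resolved = realpathSync(binPath);
  if (!isAllowedTarget(resolved)) {
    throw new Error(`refusing to spawn ${binPath}: resolves to ${resolved}`);
  }
  return resolved;
}
```

Resolving before checking matters: a symlink sitting inside an allowed directory can still point anywhere on disk, so the allow-list must be applied to the resolved target, not the link itself.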
v0.5.15
CLI signing key handling improvements and automatic key migration for smoother upgrades.
Improved signing key handling in the CLI to ensure the gateway's on-disk key is correctly prioritized, preventing potential authentication issues
Automatic migration of signing keys from the gateway disk when upgrading from versions prior to v0.5.14, ensuring a smooth upgrade experience without manual intervention
v0.5.14
Thinking blocks in chat, overhauled memory and retrieval, /compact command, expanded model support, and collapsible sidebar sections.
Thinking blocks are now visible in chat: the assistant's reasoning process is rendered inline as collapsible thinking blocks, giving users transparency into how responses are formed — thinking is now enabled by default
Significantly improved memory and retrieval: batched extraction, HyDE query expansion, MMR diversity ranking, a serendipity layer for surfacing unexpected relevant memories, and a new top-N retrieval format
New /compact slash command and context window indicator: manually trigger context compaction at any time, with a color-coded bar in the toolbar showing how full the context window is
Expanded model support and OpenRouter catalog: DeepSeek, Qwen, Mistral, Meta, Moonshot, and Amazon models added; Anthropic's 1M context window beta and fast mode now supported; OpenAI reasoning effort wired through to the API
Collapsible sidebar sections, channel conversations, and macOS polish: the Scheduled and Background sidebar sections are now collapsible with persisted state, and channel-bound conversations are displayed with read-only treatment
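The MMR (Maximal Marginal Relevance) diversity ranking mentioned in the memory improvements follows a standard pattern: greedily pick the candidate that balances relevance to the query against similarity to what has already been selected. The sketch below shows that general algorithm over cosine similarity; the `Doc` type, `mmrRank` name, and lambda default are illustrative assumptions, not Vellum's actual retrieval code.

```typescript
// Illustrative document type: an id plus a dense embedding vector.
type Doc = { id: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Greedy MMR re-ranking: at each step, score every remaining candidate as
//   lambda * sim(query, doc) - (1 - lambda) * max sim(doc, alreadySelected)
// and take the best, so near-duplicates of selected results are penalized.
export function mmrRank(
  query: number[],
  docs: Doc[],
  k: number,
  lambda = 0.5
): Doc[] {
  const selected: Doc[] = [];
  const candidates = [...docs];
  while (selected.length < k && candidates.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    for (let i = 0; i < candidates.length; i++) {
      const rel = cosine(query, candidates[i].embedding);
      const div = selected.length
        ? Math.max(
            ...selected.map((s) => cosine(candidates[i].embedding, s.embedding))
          )
        : 0;
      const score = lambda * rel - (1 - lambda) * div;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(candidates.splice(bestIdx, 1)[0]);
  }
  return selected;
}
```

Lowering lambda trades raw relevance for diversity: with a small lambda, an exact duplicate of an already-selected memory scores worse than a less relevant but novel one, which is the behavior a "serendipity" layer builds on.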
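A color-coded fullness indicator like the context window bar typically maps an estimated token count onto a few thresholds. The sketch below shows one way that mapping could work; the threshold values and the `ContextLevel` names are assumptions for illustration, not Vellum's actual values.

```typescript
// Illustrative severity levels for a context window fullness bar.
export type ContextLevel = "ok" | "warning" | "critical";

// Map estimated used tokens against the model's window size to a level.
// Thresholds here (70% / 90%) are assumed, not Vellum's real cutoffs.
export function contextLevel(
  usedTokens: number,
  windowTokens: number
): ContextLevel {
  const fill = usedTokens / windowTokens;
  if (fill >= 0.9) return "critical"; // nearly full: compaction is overdue
  if (fill >= 0.7) return "warning"; // getting full: consider compacting
  return "ok";
}
```

A manual command like /compact complements this: rather than waiting for the bar to reach the critical band, the user can trigger compaction at any point.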