
MUI X Chat v9 alpha

José Freitas

@joserodolfofreitas

In v9 we're laying the groundwork for AI-native conversational experiences in MUI X. MUI X Chat centers on ChatBox, with adapters and streaming designed for real product workflows.

This post is an orientation, not an API reference: how the layers stack, why adapters and streams sit at the center, and how early work connects to Data Grid, Scheduler, and the v9 overview.

This new major is part of a coordinated effort across the entire product suite; for a complete look at the MUI ecosystem changes, check out the Introducing Material UI and MUI X v9 blog post.


Quick start

Install the package and render your first chat surface in minutes: follow the quickstart guide to get a working ChatBox. Then wire up your adapter and streaming path so users see tokens and tool output progressively as responses arrive.

What alpha means

MUI X Chat is deliberately early: the docs are minimal for now, and you should expect breaking changes along the way as we receive feedback from users. The goal is a foundation you can theme, swap providers on, and extend, not a frozen UI kit on day one.

If you're experimenting with tools, agents, or assistant UX beside your product surface, this is the layer where we want feedback and real integrations.

When to expect the stable release

We're targeting a stable Chat release in early June, based on the feedback and adoption patterns we see during alpha. That target is for expectation-setting; we will only ship a stable release when the components are ready.

Keeping Chat in alpha is intentional: it's the phase where we can still make breaking changes while the API surface is stress-tested, without waiting for the next major.

Chat showcase

MUI X Chat showcase in v9 alpha.

State, adapters, and streaming

MUI X Chat centers on entity types and a normalized store, so conversations, threads, messages, participants, tool calls, and results don't turn into duplicated, fragile state as histories grow.
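The shape of a normalized store can be sketched like this: every entity lives in one table keyed by id, and threads reference messages by id rather than embedding copies. This is an illustrative sketch only; every type and field name below is an assumption, not the actual MUI X Chat store.

```typescript
// Hypothetical normalized chat store; names are illustrative,
// not the real MUI X Chat API.
interface Message {
  id: string;
  threadId: string;
  authorId: string;
  text: string;
}

interface ChatStore {
  // Each message is stored exactly once, keyed by id, so a message
  // that appears in several views is never duplicated.
  messages: Record<string, Message>;
  // Threads hold ordered message ids instead of embedded messages.
  messageIdsByThread: Record<string, string[]>;
}

function addMessage(store: ChatStore, message: Message): ChatStore {
  return {
    messages: { ...store.messages, [message.id]: message },
    messageIdsByThread: {
      ...store.messageIdsByThread,
      [message.threadId]: [
        ...(store.messageIdsByThread[message.threadId] ?? []),
        message.id,
      ],
    },
  };
}
```

Because a message exists in one place, editing or streaming into it updates every view that references its id, with no reconciliation of duplicated copies.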

An adapter sits between that state and your backend: the same UI whether you call OpenAI or an in-house model over HTTP, WebSockets, or SSE, and the same boundary when orchestration lives outside React.
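One way to picture the adapter boundary is an interface that yields chunks as they arrive, so the UI consumes the same stream no matter which backend produced it. The interface and class names below are hypothetical, standing in for whatever the actual adapter contract ends up being.

```typescript
// Hypothetical adapter boundary; names are illustrative assumptions.
interface ChatAdapter {
  // Send a user message and stream assistant output back as chunks.
  sendMessage(text: string): AsyncIterable<string>;
}

// A mock adapter standing in for OpenAI, an in-house model,
// or any transport (HTTP, WebSockets, SSE).
class EchoAdapter implements ChatAdapter {
  async *sendMessage(text: string): AsyncIterable<string> {
    for (const word of text.split(" ")) {
      yield word + " ";
    }
  }
}

// The UI-side consumer is identical regardless of backend.
async function collect(adapter: ChatAdapter, text: string): Promise<string> {
  let out = "";
  for await (const chunk of adapter.sendMessage(text)) {
    out += chunk;
  }
  return out.trimEnd();
}
```

Swapping providers then means swapping the adapter implementation, not rewriting chat components.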

A stream processor coordinates token streams, partial parts, tool starts, and tool results so the UI can stay streaming-first: partial text, progress indicators, and mid-stream tool UI without ad-hoc race handling in every app.
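At its core, that coordination can be modeled as a reducer over a sequence of stream events. The event shapes below are assumptions for illustration, not the real MUI X Chat stream protocol.

```typescript
// Illustrative stream-event reducer; event shapes are hypothetical.
type StreamEvent =
  | { kind: "token"; text: string }
  | { kind: "tool-start"; name: string }
  | { kind: "tool-result"; name: string; result: string };

interface StreamState {
  text: string;            // partial assistant text so far
  pendingTools: string[];  // tools started but not yet resolved
  results: Record<string, string>;
}

function reduceEvent(state: StreamState, event: StreamEvent): StreamState {
  switch (event.kind) {
    case "token":
      return { ...state, text: state.text + event.text };
    case "tool-start":
      return { ...state, pendingTools: [...state.pendingTools, event.name] };
    case "tool-result":
      return {
        ...state,
        pendingTools: state.pendingTools.filter((n) => n !== event.name),
        results: { ...state.results, [event.name]: event.result },
      };
  }
}
```

Because every event flows through one reducer, the UI can render partial text and in-flight tool state at any point without per-app race handling.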

Core chat hooks wire store, processor, and adapter together; tests lock those flows so we can iterate without regressing send/stream/tool behavior across tiers.

Message parts beyond plain text

Modeling messages as parts (such as tool calls and results, sources, and file attachments) keeps assistant UX inspectable: users and developers can see what ran, on what data, and in what order.

Streaming treats responses as sequences of parts and tokens, not one immutable blob, which matters when Chat works alongside Data Grid transforms, Scheduler mutations, or other multi‑step automations where results arrive gradually.
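A parts-based message can be sketched as a discriminated union, where each part records what happened and the array order records when. The part names below are illustrative assumptions, not the actual MUI X Chat types.

```typescript
// Hypothetical message-parts model; names are illustrative assumptions.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "tool-call"; tool: string; args: Record<string, unknown> }
  | { type: "tool-result"; tool: string; output: string }
  | { type: "source"; url: string };

interface PartsMessage {
  id: string;
  parts: MessagePart[]; // ordered, so users can see what ran and when
}

// A renderer can inspect each part rather than parsing one text blob.
function describe(message: PartsMessage): string[] {
  return message.parts.map((part) => {
    switch (part.type) {
      case "text": return `text: ${part.text}`;
      case "tool-call": return `called ${part.tool}`;
      case "tool-result": return `${part.tool} returned ${part.output}`;
      case "source": return `source ${part.url}`;
    }
  });
}
```

Under this model, a hypothetical grid-transform step could append a tool-call part, then a tool-result part, then explanatory text, and each would stream into place independently.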

What's next

Phase 0-1 (v9) is delivered as one milestone: package APIs, core hooks, themed ChatBox, docs and examples, opinionated layouts (conversation surface, input, history, threads), and first-class wiring to other MUI X components.

Phase 2 expands workflow patterns ("chat with your data," "chat with your schedule," mixed chart and grid flows), with production-ready docs that help teams ship without reinventing glue code.

Phase 3 ships templates and tighter ecosystem combinations (advanced components + Material UI + Console where licensing applies), aligned with the v9 direction of clear intents and reversible state.

We will roll these milestones through the v9 cycle in regular releases; follow MUI X releases for packaged updates.


We want your feedback

Your input drives our direction. Join our GitHub communities today to share your insights, report issues, and help shape the future. Visit MUI X on GitHub.