Building a Chat UI Component Library with React
How I built a reusable chat component library with React 19 and Vite that works across multiple projects.
I needed a chat interface for an AI product. Message bubbles, streaming text, tool call indicators, typing animations. The usual. But I also needed the same components in two places — a standalone app and an embeddable widget.
So I built it as a component library with Vite's library mode, published it as a package, and use it in both.
The core hook
The useChat hook does most of the work. It manages message state, streaming, and error handling. When the user sends a message, the hook adds it to the list, creates a placeholder for the assistant's response, opens an SSE connection to the backend, and updates the assistant message as chunks arrive.
It returns the message list, a streaming boolean, and functions to send and clear. That's the whole API surface.
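The heart of a hook like this is a few pure state updates. Here's a minimal sketch of that logic; the names (`ChatMessage`, `startExchange`, `applyChunk`) are illustrative, not the library's actual API.

```typescript
type Role = "user" | "assistant";

interface ChatMessage {
  id: string;
  role: Role;
  content: string;
  streaming?: boolean;
}

// When the user sends a message: append it, plus an empty
// assistant placeholder that streaming chunks will fill in.
function startExchange(
  messages: ChatMessage[],
  userText: string,
  makeId: () => string
): ChatMessage[] {
  return [
    ...messages,
    { id: makeId(), role: "user", content: userText },
    { id: makeId(), role: "assistant", content: "", streaming: true },
  ];
}

// Each SSE chunk appends to the trailing (streaming) assistant message.
function applyChunk(messages: ChatMessage[], chunk: string): ChatMessage[] {
  return messages.map((m, i) =>
    i === messages.length - 1 && m.streaming
      ? { ...m, content: m.content + chunk }
      : m
  );
}

// When the stream closes, clear the streaming flag.
function finishStream(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((m) => (m.streaming ? { ...m, streaming: false } : m));
}
```

Inside the hook, these updaters would be called from `setState` in the EventSource's message and close handlers.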
Messages go to localStorage keyed by chat ID so conversations survive page refreshes. On init, the hook checks localStorage and restores saved messages.
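A sketch of that persistence layer, with the storage injected so it's testable outside the browser. The `chat:<id>` key scheme is an assumption.

```typescript
// Anything with getItem/setItem works here; in the app, pass localStorage.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface StoredMessage {
  role: "user" | "assistant";
  content: string;
}

function saveMessages(store: KVStore, chatId: string, messages: StoredMessage[]): void {
  store.setItem(`chat:${chatId}`, JSON.stringify(messages));
}

function restoreMessages(store: KVStore, chatId: string): StoredMessage[] {
  const raw = store.getItem(`chat:${chatId}`);
  if (!raw) return [];
  try {
    return JSON.parse(raw) as StoredMessage[];
  } catch {
    return []; // corrupt entry: start fresh rather than crash on init
  }
}
```

In the hook, `restoreMessages(localStorage, chatId)` would seed the initial state, and an effect would call `saveMessages` whenever the message list changes.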
Auto-scroll
New messages should scroll the chat to the bottom, but only if the user hasn't scrolled up to read older messages. I track whether the user is "near the bottom" (within 100px) using a scroll listener. If they are, new messages trigger a smooth scroll down. If they've scrolled up, leave them alone.
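The "near the bottom" check is just arithmetic on the scroll container's measurements. A sketch, with the 100px threshold from above:

```typescript
// True when the user is within `threshold` pixels of the bottom
// of the scroll container.
function isNearBottom(
  scrollTop: number,
  scrollHeight: number,
  clientHeight: number,
  threshold = 100
): boolean {
  return scrollHeight - scrollTop - clientHeight <= threshold;
}

// In the component, roughly:
//   on scroll:       stick = isNearBottom(el.scrollTop, el.scrollHeight, el.clientHeight)
//   on new message:  if (stick) el.scrollTo({ top: el.scrollHeight, behavior: "smooth" })
```

Keeping the check pure makes the one fiddly part of this feature trivially testable.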
Sounds trivial. Users notice instantly when it's broken.
Streaming text animation
I add a blinking cursor at the end of the message while streaming: a small pulsing block element after the last character that disappears when streaming finishes.
Without this, people start reading and replying before the response is done. The cursor makes it obvious the answer is still coming in.
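The cursor itself can be pure CSS. A sketch, with illustrative class names:

```css
/* Pulsing block rendered after the last character while streaming. */
.streaming-cursor {
  display: inline-block;
  width: 0.55em;
  height: 1em;
  margin-left: 2px;
  background: currentColor;
  vertical-align: text-bottom;
  animation: cursor-blink 1s steps(2, start) infinite;
}

@keyframes cursor-blink {
  to { visibility: hidden; }
}
```

The message component would render a `<span className="streaming-cursor" />` only while the message's streaming flag is set.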
Tool call indicators
When Claude calls a tool mid-response (like hitting an API), I show a small inline card. Spinner and tool name while running, checkmark when done. So a response might look like: text, tool card (running), tool card (done), more text. Matches how Claude actually processes the request, and users can see the AI is doing something rather than sitting there.
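One way to model that interleaving is to store message content as an ordered list of parts, so text and tool cards render in the order they arrived. These type and function names are my illustration, not the library's API.

```typescript
// A message body is a sequence of text runs and tool invocations.
type ContentPart =
  | { kind: "text"; text: string }
  | { kind: "tool"; toolId: string; name: string; status: "running" | "done" };

// When a tool result event arrives, flip the matching card to "done".
function completeTool(parts: ContentPart[], toolId: string): ContentPart[] {
  return parts.map((p) =>
    p.kind === "tool" && p.toolId === toolId ? { ...p, status: "done" } : p
  );
}
```

The renderer then maps each part to either a text span or a tool card, showing a spinner or checkmark based on `status`.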
Provider switching
We support multiple models. Pill buttons at the top of the chat let users switch providers, clear the conversation, and start fresh. The hook takes a model parameter that gets sent with each request. The backend routes to the right model; the frontend doesn't care about model-specific APIs.
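Sketching how the model parameter could ride along with each request; the request shape here is an assumption, not the actual backend contract.

```typescript
interface OutgoingMessage {
  role: "user" | "assistant";
  content: string;
}

interface ChatRequest {
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// The backend reads `model` from the body and routes to the right
// provider; the frontend stays provider-agnostic.
function buildChatRequest(
  model: string,
  messages: OutgoingMessage[],
  authHeaders: Record<string, string> = {}
): ChatRequest {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json", ...authHeaders },
    body: JSON.stringify({ model, messages }),
  };
}
```

Switching providers is then just changing one string and clearing local state.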
Building with Vite
Vite's library mode handles this well. Configure it to build as a library, externalize React (consuming app provides it), output ESM and CJS. The consuming app imports hooks and components, passes in an API endpoint and auth headers, done.
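A `vite.config.ts` along these lines covers that setup; the entry path and file naming are assumptions, not the project's actual config.

```typescript
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    lib: {
      entry: "src/index.ts",
      formats: ["es", "cjs"],
      fileName: (format) => `index.${format === "es" ? "mjs" : "cjs"}`,
    },
    rollupOptions: {
      // The consuming app provides React; don't bundle it.
      external: ["react", "react-dom", "react/jsx-runtime"],
    },
  },
});
```

Externalizing `react/jsx-runtime` alongside `react` matters with the automatic JSX transform, otherwise a second React copy can sneak into the bundle.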
What I'd change
Virtual scrolling from the start. After 500+ messages, rendering everything gets noticeably slow. Should have seen that coming.
Markdown parsing should run in a Web Worker. Right now it's on the main thread during streaming, and you can feel the stutter on long responses with complex formatting.
And I'd use a state machine (XState or similar) instead of a pile of useStates. The streaming lifecycle has too many states — idle, connecting, streaming, error, retrying — and managing transitions with boolean flags gets tangled fast.
But for a v1, ship it, see what breaks, fix it then.