
streamfield: a small library for AI streams that don't look broken

When the Vercel AI SDK streams a structured response, the fields flicker and snap into place as they arrive. streamfield is a tiny React library that fixes that. One component, four props, no dependencies.

npm: npm install streamfield

Repo: github.com/midimurphdesigns/streamfield

Live playground: streamfield.kevinmurphywebdev.com

What streamfield is

A small React library that takes the partial-object stream from the Vercel AI SDK and renders it without the flicker.

The problem

The Vercel AI SDK's streamObject re-sends the whole JSON object with every chunk, so a naive renderer rewrites the page each time: the title flashes in, the bullets pop into the DOM, the summary keeps overwriting itself.
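
To make the snapshot behavior concrete, here is roughly what a consumer sees over the life of one stream. The values are made up for illustration; the point is that each chunk is the whole object so far, not a delta.

```typescript
// Illustrative only: successive partial snapshots from a structured stream.
type PartialSuggestion = {
  title?: string;
  summary?: string;
  bullets?: string[];
};

const snapshots: PartialSuggestion[] = [
  {},
  { title: 'Top regi' },
  { title: 'Top regions by ARR' },
  { title: 'Top regions by ARR', summary: 'EMEA leads on revenue.' },
  {
    title: 'Top regions by ARR',
    summary: 'EMEA leads on revenue.',
    bullets: ['EMEA: $4.1M'],
  },
];

// Naive rendering treats each snapshot as a brand-new object:
for (const snap of snapshots) {
  console.log(JSON.stringify(snap)); // every field rewritten each chunk
}
```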

CSS transitions can't fix this. The DOM elements exist before the stream starts; only their text content changes. CSS animates property changes, not innerText swaps.

What streamfield does

For every field in your object, streamfield diffs the latest snapshot against the previous one and tells you which of three states the field is in right now:

  • pending means the field hasn't appeared yet. Use this state to reserve space or show a skeleton so layout doesn't jump when the field arrives.
  • streaming means the field is currently being written. Use this state to draw the user's eye to it (a shimmer sweep, an underline that grows with the text, a soft blur that clears as content lands).
  • complete means the field has stopped changing. Use this state to fire a sound, hide the cursor, mark the section as done, or trigger any action that depends on the field being finalized.

That's the whole value proposition. Three states per field, exposed as a data attribute you can style or as a render-prop value you can act on.
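
The per-field diff can be pictured as a small pure function. This is a hedged sketch, not streamfield's actual implementation (the real library also has to handle nested objects and arrays), but it captures how the three states fall out of comparing snapshots:

```typescript
type FieldState = 'pending' | 'streaming' | 'complete';

// Sketch: derive a field's state from its value in the previous and
// current snapshots plus a done flag. Names here are illustrative.
function fieldState(prev: unknown, next: unknown, done: boolean): FieldState {
  if (next === undefined) return 'pending'; // hasn't appeared yet
  if (done) return 'complete'; // stream finished, nothing more coming
  // Streaming only while the serialized value keeps changing.
  return JSON.stringify(prev) !== JSON.stringify(next)
    ? 'streaming'
    : 'complete';
}

console.log(fieldState(undefined, undefined, false)); // 'pending'
console.log(fieldState('Top regi', 'Top regions', false)); // 'streaming'
console.log(fieldState('Top regions', 'Top regions', true)); // 'complete'
```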

End-to-end example with streamObject

// app/api/suggest/route.ts
import { streamObject } from 'ai';
import { gateway } from '@ai-sdk/gateway';
import { z } from 'zod';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamObject({
    model: gateway('openai/gpt-4o-mini'),
    schema: z.object({
      title: z.string(),
      summary: z.string(),
      bullets: z.array(z.string()),
    }),
    prompt,
  });

  return result.toTextStreamResponse();
}
// app/page.tsx
'use client';

import { useState } from 'react';
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';
import { StreamingReveal } from 'streamfield';
import 'streamfield/styles.css'; // optional defaults; skip for custom CSS

const schema = z.object({
  title: z.string(),
  summary: z.string(),
  bullets: z.array(z.string()),
});

type Suggestion = z.infer<typeof schema>;

export default function Page() {
  const { object, submit, isLoading } = useObject({
    api: '/api/suggest',
    schema,
  });

  return (
    <>
      <button onClick={() => submit({ prompt: 'top regions by ARR' })}>
        Ask
      </button>

      <StreamingReveal<Suggestion>
        stream={object ?? {}}
        done={!isLoading}
        variant="cascade"
      >
        {(f) => (
          <article>
            <h2 data-streamfield-state={f.title?.state}>
              {f.title?.value}
            </h2>
            <p data-streamfield-state={f.summary?.state}>
              {f.summary?.value}
            </p>
            <ul data-streamfield-state={f.bullets?.state}>
              {f.bullets?.value?.map((b, i) => <li key={i}>{b}</li>)}
            </ul>
          </article>
        )}
      </StreamingReveal>
    </>
  );
}

What's happening:

  • useObject from the Vercel AI SDK calls your /api/suggest route, streams the response, and exposes the current partial object as object.
  • That partial gets handed to <StreamingReveal> as stream, along with done={!isLoading} so the component knows when the stream finishes.
  • Inside the render-prop, every field in your schema shows up as f.<fieldName> with a state and a value. Stamp the state onto the element via data-streamfield-state and style it however you like.
  • If you imported streamfield/styles.css, the three variants (cascade, shimmer, underline-fill) handle the animation for you.

Why not just use a CSS animation on each field?

Two reasons that hold up under scrutiny:

  1. CSS can't see when a field starts vs. when it finishes. The HTML element exists before the stream, exists during the stream, and exists after. Without a state attribute, your CSS has nothing to react to. You'd have to track field lifecycle in JavaScript anyway, at which point you've reimplemented the diff streamfield does for you.

  2. The Vercel AI SDK gives you a snapshot, not a diff. Every chunk hands you the whole object again with whatever's filled in. React's reconciler doesn't know which fields changed, so it just rewrites every text node. Without a per-field state derived from snapshot comparison, you can't tell "this field is mid-write" from "this field is done" in any reliable way.
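
The snapshot comparison the SDK leaves to you looks roughly like this. An illustrative sketch, not streamfield's source, limited to top-level fields:

```typescript
// Sketch: given two consecutive snapshots, recover which top-level
// fields actually changed between chunks.
function changedFields(
  prev: Record<string, unknown>,
  next: Record<string, unknown>,
): string[] {
  return Object.keys(next).filter(
    (key) => JSON.stringify(prev[key]) !== JSON.stringify(next[key]),
  );
}

console.log(
  changedFields(
    { title: 'Top regions by ARR', summary: 'EMEA le' },
    { title: 'Top regions by ARR', summary: 'EMEA leads' },
  ),
); // only 'summary' is mid-write; 'title' is untouched
```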

streamfield does the snapshot-comparison work once, in 70 lines, and exposes the result. You don't have to do it again in every consumer.

What it isn't

A few honest limits:

  • It's for structured streams. If you're streaming raw text token by token, the AI SDK already handles that well. Use streamText, not this.
  • It's React only.
  • It doesn't include an animation library. If you want spring physics on field reveals, bring Framer Motion or your own CSS. streamfield only tells you which state each field is in.
  • It won't help if you're not using streamObject (or another source that emits partial objects). For something like useChat where you're streaming a single message string, you don't need this.

Install it

npm install streamfield

The live playground at streamfield.kevinmurphywebdev.com shows the same partial rendered with and without the library, side by side. Scrub the slider and the difference makes the case.

Open source on GitHub. Issues and PRs welcome. If you ship something with it, message me on LinkedIn or Bluesky.
