
Open-source and personal projects · 2026

tablesalt — CSV agent with generative UI, reasoning trace, and live eval scoreboard

Public, open-source data-exploration agent. Drop a CSV, ask a natural-language question, and get generative UI: five render kinds (table, bar, line, stat, list) chosen by the model. Text-to-SQL runs via the Vercel AI SDK and Vercel AI Gateway over DuckDB-WASM in the browser; zero backend. The agent emits a four-step reasoning trace before its answer, and the eval scoreboard (12 labeled NYC-311 cases, scored on render kind, SQL execution, and SQL semantic match, with live cost and latency) runs on demand from the front page. Consumes streamfield@^0.1.0 from npm for its streaming reasoning UI.

Author · Product Engineer · Next.js 16 · React 19 · TypeScript · Tailwind v4 · Vercel AI SDK · Vercel AI Gateway · DuckDB-WASM · streamfield

Repo: github.com/midimurphdesigns/tablesalt

Live demo: tablesalt.kevinmurphywebdev.com

Read the full story: Building tablesalt

Drop a CSV. Ask a question. See generative UI. Five render kinds (table, bar, line, stat, list), chosen by the model based on the shape of the answer. Text-to-SQL via the Vercel AI SDK, routed through the Vercel AI Gateway and run over DuckDB-WASM entirely in-browser. No upload, no backend, no signup. The eval scoreboard runs live from the front page.

How it's built

Next.js 16 + React 19 + strict TypeScript + Tailwind v4 + Vercel AI SDK v6 + Vercel AI Gateway. Two edge routes (/api/agent and /api/eval) are the only server surface; everything else runs client-side. The client uses @duckdb/duckdb-wasm to parse and query CSVs in a Web Worker, so visitor data never leaves the browser. The model streams back a Zod-validated JSON object that leads with a four-step reasoning trace (profile_schema, pick_render_kind, draft_sql, validate_sql), then the final SQL, render kind, and caption. The client enforces a read-only guard on the SQL and routes the result to the right render component with intentional reveal physics. The streaming reasoning summary is powered by streamfield; tablesalt is its first public npm consumer.

Live eval scoreboard

12 labeled NYC-311 cases, scored on three axes: the render kind is correct, the SQL executes against an in-process corpus, and the SQL semantically matches the expected query. Press the button and the eval runs against the live model in real time: per-case latency, per-case cost, and the final aggregate accuracy, total cost, and per-case mean cost all stream in. No hardcoded numbers anywhere. Rate-limited to one run per IP per hour via Upstash Redis, so the button is bounded.
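The aggregation behind the scoreboard can be sketched minimally as below. The names are assumptions, as is the definition of accuracy (here, a case counts as passed only when all three axes pass); the repo may score differently.

```typescript
// Hypothetical per-case result, mirroring the three scoring axes above.
interface EvalCase {
  renderKindCorrect: boolean;
  sqlExecutes: boolean;
  sqlSemanticMatch: boolean;
  latencyMs: number;
  costUsd: number;
}

// Aggregate a run: accuracy (all three axes pass), total and mean cost.
export function aggregate(cases: EvalCase[]) {
  const passed = cases.filter(
    (c) => c.renderKindCorrect && c.sqlExecutes && c.sqlSemanticMatch
  ).length;
  const totalCostUsd = cases.reduce((sum, c) => sum + c.costUsd, 0);
  return {
    accuracy: cases.length ? passed / cases.length : 0,
    totalCostUsd,
    meanCostUsd: cases.length ? totalCostUsd / cases.length : 0,
  };
}
```

Because each case carries its own latency and cost, the per-case rows can stream in as they finish, and the aggregate is a pure fold over whatever has arrived so far.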

What it demonstrates

  • Generative UI as the response surface. With five render kinds, the agent picks how to answer, not just what to answer.
  • Agent reasoning trace. A four-step thought process streams live before the final answer lands.
  • Evals as part of the product, not a hidden test suite. The scoreboard runs against the live model on demand, surfaces per-case token cost, and shows the accuracy numbers I'd otherwise be tempted to hide.
  • Frontend craft. Bar draws, polyline reveals, stat-card type weight, schema-profile cascade, streaming reasoning via the npm-published streamfield primitive. Each piece of motion exists to communicate state, not to decorate.
  • Zero-backend product. DuckDB-WASM means no upload, no privacy story to write, no signup wall to bounce visitors off.
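The client-side read-only SQL guard mentioned in the build notes could look something like this. This is a hypothetical sketch, not the repo's actual check: a real guard would likely lean on DuckDB's own parser rather than regexes.

```typescript
// Hypothetical read-only guard: allow a single SELECT/WITH statement and
// reject keywords that could mutate state. A sketch, not the repo's code.
const FORBIDDEN =
  /\b(insert|update|delete|drop|alter|create|attach|copy|pragma|set)\b/i;

export function isReadOnlySql(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (trimmed.includes(";")) return false; // no multi-statement batches
  if (!/^(select|with)\b/i.test(trimmed)) return false;
  return !FORBIDDEN.test(trimmed);
}
```

Guarding on the client is enough here precisely because the database is in-browser: the worst a malicious query could do is scramble the visitor's own in-memory copy of their own CSV.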

What I wanted that chat boxes don't give me

Most AI-for-data demos answer in a chat bubble. The bubble is the wrong container. I wanted the answer to be the chart, sized and labeled and animated into place, with the SQL one click away if you want to verify it. Picking the render kind is the agent's most consequential decision; making the picked surface look intentional is the frontend's job.

Open source

MIT-licensed. The whole repo is one pnpm install away.
