
Building tablesalt: a CSV agent where the answer IS the UI

tablesalt is an in-browser data agent. You drop a CSV, ask a question, and the agent renders the answer as the right kind of UI: a chart, a stat card, a table, or a list. No chat bubbles.

Repo: github.com/midimurphdesigns/tablesalt

Live demo: tablesalt.kevinmurphywebdev.com

What tablesalt is

You drop a CSV into the browser. You ask a question. The agent decides what kind of answer it wants to be (a chart, a single number, a table, a list), writes one SQL query, runs the query against your file in the browser, and renders the result.

That's the whole product.

What other AI-for-data demos do, and why they fall flat

Open almost any AI-for-data demo today and you get the same shape: a chat input on one side, a chat reply on the other. You ask "what are my top five regions by revenue?" and the model writes back "Sure. Your top five regions by revenue are: North America at $1.2M, Europe at..." in a chat bubble.

This works as a tutorial. It doesn't work as a product. The user has to read a paragraph to see the answer. The answer should be the chart.

What tablesalt does differently

A few concrete choices set it apart:

  • The answer is a real UI element, not a chat reply. The agent picks one of five render kinds (chart, stat card, line chart, table, list) and the result lands as that thing. No prose wrapper.
  • You watch the agent think. Before the SQL runs, four short reasoning steps stream onto the screen one at a time: what the agent noticed about your data, what kind of answer it picked, the query it wrote, and what it checked before running. It feels like a person working, not a model dumping JSON.
  • The eval scoreboard is on the front page, and you press the button. Twelve hand-labelled questions run against the live model in front of you. The accuracy numbers are real. The per-case cost in dollars is on the screen. No hidden benchmark, no "we tested it ourselves once, trust us."
  • Nothing leaves your browser. DuckDB-WASM parses and queries the CSV locally. No upload step, no privacy story to write, no signup wall.
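
To make the first bullet concrete, here is how a five-way render kind could be modeled as a discriminated union in TypeScript. This is my own sketch; the type and field names are assumptions, not tablesalt's actual schema.

```typescript
// Hypothetical render-result union. Field names are illustrative only —
// tablesalt's real schema may look different.
type RenderResult =
  | { kind: "chart"; x: string[]; y: number[] }
  | { kind: "lineChart"; x: string[]; y: number[] }
  | { kind: "statCard"; label: string; value: number }
  | { kind: "table"; columns: string[]; rows: (string | number)[][] }
  | { kind: "list"; items: string[] };

// Exhaustive dispatch: the compiler forces a branch for every kind,
// so adding a sixth kind is a compile error until every renderer handles it.
function describe(result: RenderResult): string {
  switch (result.kind) {
    case "chart":
    case "lineChart":
      return `${result.kind} with ${result.x.length} points`;
    case "statCard":
      return `stat card: ${result.label} = ${result.value}`;
    case "table":
      return `table with ${result.rows.length} rows`;
    case "list":
      return `list of ${result.items.length} items`;
  }
}

console.log(describe({ kind: "statCard", label: "Revenue", value: 1200000 }));
```

The payoff of the union is that "the answer IS the UI" becomes a type-level guarantee: the model must commit to exactly one render kind, and the frontend can't render it halfway.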

How it's built, briefly

Next.js 16 App Router with two edge API routes. The first sends the user's question to the model. The second runs the eval. Both use streamObject from the Vercel AI SDK with a Zod schema, which means the four reasoning steps and the final SQL come back as one progressively-completing JSON object. The streaming reveal of those fields is handled by streamfield, a small library I extracted from tablesalt and published to npm.
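
The "progressively-completing JSON object" can be pictured as a sequence of ever-deeper partial snapshots, with the UI revealing each field only once it has actually arrived, in schema order. A pure sketch of that reveal rule, assuming a two-field schema (reasoning steps, then SQL); this is my illustration, not streamfield's actual API:

```typescript
// Partial snapshots of a structured-output stream, as they might arrive.
// Early snapshots have only the first fields; later ones fill in the rest.
type Snapshot = { steps?: string[]; sql?: string };

// Reveal fields in a fixed order, stopping at the first field that hasn't
// streamed in yet. Illustrative only — not streamfield's real interface.
function revealedFields(snapshots: Snapshot[]): string[] {
  const latest = snapshots[snapshots.length - 1] ?? {};
  const order: (keyof Snapshot)[] = ["steps", "sql"];
  const revealed: string[] = [];
  for (const field of order) {
    if (latest[field] === undefined) break; // not arrived yet — stop here
    revealed.push(field);
  }
  return revealed;
}
```

The fixed reveal order is what makes the stream read like "a person working": the reasoning steps always appear before the SQL, even though both live in the same JSON object.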

Models are routed through the Vercel AI Gateway. One environment variable replaces every per-provider API key. Switching between openai/gpt-4o-mini, gpt-4o, Claude Haiku, and Claude Sonnet during development was a one-line config change. I picked gpt-4o-mini because the eval scoreboard said it was the cheapest model that got the answers right. That decision is reproducible on the page.
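
A sketch of what that one-line change looks like in practice: the gateway takes provider-prefixed model ids as plain strings, so the choice can hang off a single variable. The env var name here is my invention, not tablesalt's actual config:

```typescript
// Hypothetical helper: one env var selects the gateway model id.
// "openai/gpt-4o-mini" is the default the eval scoreboard justified;
// the AI_MODEL variable name is illustrative.
function gatewayModelId(env: Record<string, string | undefined>): string {
  return env.AI_MODEL ?? "openai/gpt-4o-mini";
}
```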

What's deliberately not in v0.1

This post would be dishonest if it didn't name the limits.

  • No auth. No saved sessions. No multi-file joins.
  • No write-back to your CSV. The SQL guard rejects anything that isn't a SELECT.
  • One model call per question. The agent's reasoning trace makes it look like a multi-step agent, but it's really one round-trip with structured intermediate fields. A real tool-use loop is a v0.2 decision if the simpler version stops being enough.
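
The SELECT-only guard in the second bullet can be as small as a prefix check plus a keyword blocklist. A naive sketch, assuming single-statement queries; tablesalt's actual guard may well be stricter:

```typescript
// Naive read-only guard: accept one SELECT (or WITH) statement, reject
// write/DDL keywords anywhere in it. Illustrative only — not tablesalt's
// real implementation.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|create|copy|attach|pragma)\b/i;

function isReadOnlySelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, ""); // drop trailing semicolons
  if (trimmed.includes(";")) return false;          // no multi-statement queries
  if (!/^(select|with)\b/i.test(trimmed)) return false;
  return !FORBIDDEN.test(trimmed);
}
```

A blocklist like this is a coarse net (it would also reject a harmless column literally named after a keyword), but combined with DuckDB-WASM running locally, the worst a bad query can do is fail.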

tablesalt is open source on GitHub and live at tablesalt.kevinmurphywebdev.com. The fastest way to evaluate it is the live demo. Drop one of the sample CSVs, ask a question, and see what lands.
