The AI Marketing Agent is a Void Crystal demo that lets a human and an AI co-edit a living strategy document through a chat interface, with the document always visible, editable by both sides, and versioned. It runs on the OpenAI Responses API with File Search and is built on Void Crystal’s shared platform of Next.js, FastAPI, and Supabase.
Problem
We run IB Solutions on an EOS-inspired operating system. The core planning artifact is a growth strategy document — roughly five pages covering vision, services, positioning, content plans, and backlogs. This document changes constantly as strategy crystallises through daily marketing and sales work.
The original workflow used ChatGPT’s project feature with the document attached as context. Two problems surfaced quickly. First, planning sessions often improved the document itself, which meant exporting edits, manually stitching them into the Google Drive copy, re-exporting, and re-uploading to the project — sometimes multiple times per day. Second, when the AI was asked to return a full updated document, it consistently wiped unrelated sections. The larger the document grew, the worse this became.
What it does
The agent provides a chat interface with a sliding document panel. The human can read, edit, and sync the document at any time. The agent has the same document loaded in its context and can edit it directly.
Three capabilities were built to solve the core friction:
Section-scoped editing. When the agent receives an edit request, it first identifies which sections of the document need to change and states its reasoning. It then locks itself to those sections only. This eliminated the problem of unrelated sections being wiped during edits.
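The locking behaviour described above can be sketched as a small guard: the agent declares its target sections first, and any attempted edit outside that declared set is rejected. This is a hypothetical illustration, not the actual implementation; the class and section names are invented.

```python
# Hypothetical sketch: the agent declares target sections up front,
# and any edit outside the declared set is rejected.
class ScopeLock:
    def __init__(self, declared_sections):
        self.declared = set(declared_sections)

    def check(self, section):
        # Raise instead of silently allowing out-of-scope writes.
        if section not in self.declared:
            raise PermissionError(
                f"Edit to undeclared section {section!r} rejected"
            )
        return True

lock = ScopeLock(["Positioning", "Content plan"])
lock.check("Positioning")      # allowed
try:
    lock.check("Vision")       # outside declared scope: rejected
except PermissionError:
    pass
```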
Version control. Every edit creates a version. The human can compare any two versions side by side, see exactly what the agent changed, and revert to any previous state in one action.
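The versioning behaviour can be sketched as append-only snapshots with diff and revert, here using Python's standard `difflib`; the class is a simplified stand-in for whatever the real storage layer does.

```python
import difflib

# Simplified sketch: every edit appends a full snapshot; any two
# versions can be compared and any earlier state restored in one step.
class VersionedDoc:
    def __init__(self, text):
        self.versions = [text]

    @property
    def current(self):
        return self.versions[-1]

    def save(self, new_text):
        self.versions.append(new_text)
        return len(self.versions) - 1  # new version index

    def compare(self, a, b):
        # Line-level unified diff between two stored versions.
        return "\n".join(difflib.unified_diff(
            self.versions[a].splitlines(),
            self.versions[b].splitlines(),
            lineterm="",
        ))

    def revert(self, index):
        # Reverting appends a copy rather than rewriting history,
        # so the revert itself is also a recorded version.
        self.versions.append(self.versions[index])

doc = VersionedDoc("v1 strategy")
doc.save("v2 strategy")
doc.revert(0)
print(doc.current)  # back to "v1 strategy"
```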
Model switching. The agent can switch between heavier and lighter language models depending on the task. Strategic planning and document editing use a more capable model. General discussion uses a lighter, cheaper model. This reduced per-task costs from $0.30–0.60 to $0.03–0.06 for routine work.
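The routing logic amounts to a small lookup by task type. In this sketch the model names are placeholders, not the actual models used; the cost figures echo the observed range above.

```python
# Hypothetical routing table. Model names are placeholders; the point
# is that expensive models only handle the tasks that need them.
HEAVY_TASKS = {"strategic_planning", "document_edit"}

def pick_model(task_type):
    # Roughly $0.30-0.60 per task on the heavy model vs
    # $0.03-0.06 on the light one for routine discussion.
    return "heavy-model" if task_type in HEAVY_TASKS else "light-model"
```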
A manual mode toggle (Discuss / Edit document) was also added after we noticed the agent misread intent roughly 5% of the time: starting to edit when the user wanted to discuss, or discussing when the user wanted an edit.

How it works
The agent is built inside Void Crystal, a custom internal platform consisting of a Next.js frontend, a FastAPI (Python) backend, Supabase for database and authentication, and the OpenAI API for AI capabilities.
Single source of truth. The growth strategy document lives as a Markdown file that is uploaded to an OpenAI vector store. The agent uses the OpenAI Responses API with File Search, so every turn can semantically search the strategy before responding. This means the agent always works from the current version of the document, not a stale snapshot.
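A minimal sketch of the request shape is below. It only builds the parameters as a plain dict rather than calling the API; the model name and vector store ID are placeholders.

```python
# Sketch of a Responses API request with the file_search tool attached,
# so the model can semantically search the uploaded strategy document.
# "gpt-4o" and the vector store ID are placeholder values.
def build_request(user_message, vector_store_id):
    return {
        "model": "gpt-4o",
        "input": user_message,
        "tools": [{
            "type": "file_search",
            "vector_store_ids": [vector_store_id],
        }],
    }
```

Because the document is re-synced to the vector store after every edit, each request built this way searches the current version, not a stale snapshot.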
Conversation and streaming. Users chat in sessions stored in PostgreSQL (Supabase). The backend sends conversation history plus the latest message to the OpenAI Responses API. Replies stream back to the frontend so the user sees the answer as it generates.
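The relay loop can be sketched as follows; `fake_stream` stands in for the streamed API response, and `send` stands in for whatever pushes chunks to the frontend (e.g. server-sent events).

```python
# Hypothetical relay: forward each streamed text delta to the client
# as it arrives, then return the complete reply for persistence.
def fake_stream():
    # Stand-in for streamed deltas from the model.
    yield from ["The ", "strategy ", "looks ", "solid."]

def relay(stream, send):
    full = []
    for delta in stream:
        send(delta)         # push chunk to the client immediately
        full.append(delta)
    return "".join(full)    # store the full reply in the session
```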
Tools. The agent can call tools during a conversation. For strategy editing, it can list sections, read a specific section, and replace a single section. After each edit the document is backed up and re-synced to the vector store so the next turn uses the updated strategy. This is the mechanism behind section-scoped editing — the agent cannot overwrite the full document, only declared sections.
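The three strategy tools can be sketched over a Markdown string, assuming `## ` headings delimit sections; these are illustrative implementations, not the production code.

```python
import re

# Hypothetical versions of the three tools, assuming "## " headings.
def split_sections(markdown):
    parts = re.split(r"(?m)^## ", markdown)
    head, rest = parts[0], parts[1:]
    sections = {}
    for chunk in rest:
        title, _, body = chunk.partition("\n")
        sections[title.strip()] = body
    return head, sections

def list_sections(markdown):
    return list(split_sections(markdown)[1])

def read_section(markdown, title):
    return split_sections(markdown)[1][title]

def replace_section(markdown, title, new_body):
    # Only the named section changes; everything else is reassembled
    # verbatim, which is what prevents whole-document overwrites.
    head, sections = split_sections(markdown)
    sections[title] = new_body.rstrip("\n") + "\n"
    return head + "".join(f"## {t}\n{b}" for t, b in sections.items())
```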
Modes and cost control. When the user selects “Edit document” mode, the backend injects a lightweight snapshot of the strategy (section headings and short previews) so the agent can plan which sections to read or edit using fewer tokens. In “Discuss” mode, strategy tools are disabled entirely. Model switching routes heavier editing tasks to a more capable model and lighter discussions to a cheaper one. A second lever is conversation history control: the backend can send the full session history for maximum context, or limit it to the last ten messages when full context is not needed, reducing token usage on longer sessions. Together these controls keep per-session costs predictable.
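The history-control lever described above reduces to a one-line selection; this sketch assumes messages are held in a simple list.

```python
# Sketch of conversation-history control: send the full session for
# maximum context, or only the last ten messages when full context
# is not needed, cutting token usage on long sessions.
def select_history(messages, full_context=False, limit=10):
    return messages if full_context else messages[-limit:]
```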
How it maps to real work
Any team that maintains a living operational document — a strategic plan, a playbook, an OKR tracker, a campaign brief — deals with the same friction: the document drifts from reality because updating it is slow, and AI assistants that try to help tend to break what they don’t touch.
The section-scoped editing pattern is the most transferable piece. It applies to any scenario where an AI agent needs to modify part of a long document without corrupting the rest.
What we learned
The biggest lesson was that whole-document replacement is a fundamentally broken pattern for AI-assisted editing on anything longer than a page or two. The agent needs to reason about scope before it touches anything. Forcing it to declare which sections it will edit and why — before it writes a single character — eliminated nearly all destructive edits.
Model switching was a cost discovery, not a planned feature. Early usage showed that most interactions are lightweight discussions that don’t need the most expensive model. Routing by task type cut costs by roughly 10x on routine work without any noticeable quality drop.
The 5% mode-detection failure rate is a limitation we chose to solve with a manual toggle rather than more prompt engineering. It is a pragmatic call: the toggle adds one click and removes all ambiguity. As models improve at intent classification, this will likely become unnecessary.
Read more
A deeper look at the workflow problem, who this pattern is useful for, and what it means for marketing and operations leaders.