
Dog-fooding agent-doc Part 2: rmemo, tag-path, and Releasing v0.16.1

Brian Takita
Posted on: March 13, 2026

A live session walking through agent-doc's compaction workflow, showcasing rmemo — a 358-byte reactive JavaScript library, introducing tag-path semantic code search with TreeSitter, and shipping agent-doc v0.16.1. Reflections on vibe coding, race conditions, and why naming conventions matter.


This is a companion post to my second dog-fooding video, where I walk through a live session with agent-doc, show off some of my JavaScript libraries, and ship v0.16.1 on camera. If you missed Part 1, watch it here.

Agent-doc compaction: squeezing 1,750 lines into 144

The session starts with compacting an agent-doc file from a previous session. Compaction is one of agent-doc's core features — it takes a long, sprawling session document and distills it down to a dense dashboard. Think of it as lossy compression for working memory. The AI preserves the essential state — lessons learned, pending items, architectural decisions — and drops the conversational noise.

Normally compaction gets a file down to around 30 lines. This particular file was stubborn. It started at over 1,750 lines and initially only dropped to about 1,000. A second pass brought it to 350. Eventually it settled at 144 lines. Large files with a lot of accumulated context just take more passes.

The interesting part is that compaction reveals something about how information density works. Most of what happens in a working session is transient — debugging output, exploratory prompts, incremental progress. The compacted state is the resolution that matters. It is the system viewed from a higher level of abstraction, retaining only what is relevant to future context.

Catching Claude being sneaky: the Python replace incident

Here is a real vibe-coding moment. During compaction, I noticed something odd — the system mentioned a "Python replace" step. But agent-doc is implemented in Rust. What happened?

Claude Code had used a one-off Python script to do a string replacement that agent-doc's append-mode streaming could not handle natively. The exchange content needed to be replaced, not appended to, and the skill did not support that operation. So Claude improvised with Python.

This is exactly the kind of thing that gives vibe coding a bad reputation, and honestly, it is a legitimate concern. The AI found a gap in the tool's capabilities and silently worked around it with an ad-hoc script. If you are not paying attention, you never catch it. The fix: make agent-doc compact handle stream-template-mode documents natively, with a single atomic write.

The lesson: vibe coding requires vigilance. The AI is solving your problem, but it might solve it in ways that create new, harder-to-see problems. You have to stay engaged with how it solves things, not just that it solves them.

The IntelliJ race condition: when your editor polls too fast

The compaction also surfaced a race condition with IntelliJ IDEA's virtual file system. IntelliJ polls the filesystem every 300 milliseconds. Agent-doc's compaction was doing two separate writes — a patch followed by a clear — and the IDE caught the intermediate state between the two writes.

The fix is straightforward: batch all writes into a single atomic rename. Build the entire document in memory, write to a temp file, rename over the target. No intermediate state for the IDE to observe. But the deeper insight is about real-time systems and the agent-coding paradigm:

When you are not writing the code by hand, you lose the tactile feedback of sequencing. You do not feel the timing the way you do when you are the one writing write() then rename(). The timing bugs become invisible until their consequences surface. Modeling the lifecycle — through diagrams, state machines, explicit documentation — becomes essential.

rmemo: a 358-byte reactive library

I took a detour to show off rmemo, a reactive memoization library I built for JavaScript/TypeScript. The core memo function compiles down to 358 bytes. The full isomorphic library (browser + server) with reactivity comes in under a kilobyte.

rmemo is a fork of vanjs, pushed to its absolute minimum. It provides memoized reactive functions — essentially signals — with the smallest possible footprint. I spent a lot of time squeezing every byte out of it. It uses a factory-function convention with trailing underscores (div_, span_) so you can still use the bare name for variables.

Nobody uses it except me. But building it was an exercise in understanding what the minimum viable reactive system actually looks like. How small can you make a signal graph and still have it be useful? The answer is: surprisingly small. 358 bytes small.
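To make "surprisingly small" concrete, here is a toy signal-and-memo pair in plain JavaScript. This is not rmemo's actual API, just a sketch of the minimum viable shape the section describes:

```javascript
// A toy reactive signal: read by calling it, write via set(),
// subscribe via sub(). NOT rmemo's API, only an illustration of how
// little code a useful signal graph actually requires.
function signal(value) {
  const subs = new Set();
  const s = () => s.value;
  s.value = value;
  s.set = v => { s.value = v; subs.forEach(fn => fn()); };
  s.sub = fn => subs.add(fn);
  return s;
}

// A memo: a derived signal that recomputes when any dependency changes.
function memo(compute, deps) {
  const out = signal(compute());
  deps.forEach(d => d.sub(() => out.set(compute())));
  return out;
}

// Derived values stay in sync with their sources:
const count = signal(2);
const doubled = memo(() => count() * 2, [count]);
count.set(5); // doubled() now reads 10
```

A real implementation (rmemo included) also has to handle automatic dependency tracking and unsubscription, which is where most of the remaining bytes go.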

Tag-path: semantic search across naming conventions

The project I am most excited about right now is tag-path. The core idea: code identifiers are composed of semantic tags. person_name in snake_case and personName in camelCase carry the same meaning — they are both the tags [person, name] arranged in a path. Tag-path decomposes identifiers into their constituent tags and lets you search semantically across naming conventions.

This matters more than it might seem. Different languages use different conventions — camelCase for JavaScript functions, snake_case for Rust, PascalCase for Go exports, Ada_Case in some systems languages. Zig uses camelCase for functions but snake_case for variables. Tag-path captures all of these in TOML preset files per language.
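As a rough sketch of the decomposition idea: the snippet below splits identifiers into tags with a boundary regex. This is deliberately naive (the real tag-path resolves identifier kind via TreeSitter and the per-language TOML presets, and handles cases like acronyms that a single regex does not):

```javascript
// Decompose an identifier into its semantic tags, across naming
// conventions. Hypothetical helper for illustration, not tag-path's API.
function tags(identifier) {
  return identifier
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2') // camelCase/PascalCase boundaries
    .toLowerCase()
    .split(/[_\-]+/)                        // snake_case, kebab-case, Ada_Case
    .filter(Boolean);
}

// person_name, personName, and PersonName all decompose to the same
// tag path: ['person', 'name']
```

Once every identifier is reduced to a tag path like this, "search for [person, name]" matches across every convention in the codebase.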

The TreeSitter integration is the key piece. By parsing actual syntax trees, tag-path understands what kind of identifier something is — function name, variable, type, constant — and applies the appropriate convention rules. This creates a deterministic semantic bridge between an identifier's name and its documentation.

The bigger vision: if we can deterministically decompose code identifiers into semantic tags, we can create better bridges between code and LLMs. The variable names and function names in a codebase carry meaning, and tag-path makes that meaning machine-readable.

Spatial affinity in document editing

One subtle advantage of agent-doc's in-place editing model: spatial affinity. When I wrote a prompt about "preventing the race condition," I did not have to specify which race condition. The prompt was spatially close to the section discussing the IntelliJ VFS polling issue. The AI inferred the scope from proximity.

This is a small thing, but it compounds. In a traditional chat interface, every prompt needs full context. In a document-based workflow, the document is the context. Position carries meaning. You write near the thing you are talking about, and the system understands. It is the difference between a conversation and a workspace.

Releasing v0.16.1

The session ends with shipping agent-doc v0.16.1. The release includes native compact handling for stream-template-mode documents and the atomic write fix for the IntelliJ race condition.

The broader pattern here is what I am calling dog-fooding as development methodology. I use agent-doc to develop agent-doc. The bugs I find are the bugs a real user finds, because I am the real user. The compaction workflow, the race conditions, the sneaky Python workarounds — these all surfaced because I was using the tool in production, not because I wrote a test for them.

What is next

  • Existence language: executable philosophy for projects — prose that can be executed using LLMs
  • Tag-path + SIFT integration: bridging code semantics and LLM understanding through deterministic tag decomposition
  • Better videos: full screen from the start, guaranteed

agent-doc is available on crates.io. rmemo lives at github.com/ctx-core/rmemo. If you are interested in tag-path or Existence Lang, stay tuned — they are coming.