In practice

These are moments from work sessions — tasks where Enzyme informed what the agent brought to the table.


Prepping for a call

You

I’m meeting with a potential partner in an hour. Pull up anything relevant — I want to remember what threads were live last time we talked.

Agent

runs petri, sees #product-decisions active; runs enzyme catalyze "integration direction and what changed in the partnership conversation"

From your notes after the first call two weeks ago:

“He kept coming back to the integration timeline. Not the technical complexity — the trust question. Whether committing to the API meant committing to the relationship.”

You also wrote this. It wasn’t in the transcript; it was your read on the conversation:

“The real hesitation isn’t about scope. It’s about whether we’ve seen enough of how they work to bet a quarter on the integration.”

Last time you were pulling on whether the pilot results justified the commitment. You said you’d send a scoping doc. Did that happen?

The agent found a post-call reflection note — not the transcript, but what the user had been thinking about afterward. The catalyst for #product-decisions bridged the query to that note because the reflection used language about trust and commitment that the meeting summary didn’t capture. The user walked into the call knowing their own prior thinking, not just the agenda.
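
For reference, the agent’s command sequence in that exchange looks roughly like this. A sketch only; exact output depends on the workspace.

    # list active threads; #product-decisions shows as active
    petri

    # run the query through the workspace's catalysts
    enzyme catalyze "integration direction and what changed in the partnership conversation"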


Writing a draft

You

I’m drafting something about why tags don’t work for retrieval. What have I written about this before?

Agent

runs enzyme catalyze "tagging systems and their limitations for finding old notes"

Two things surfaced that are in tension with each other.

From a note eight months ago:

“Choosing a tag is thinking. When I tag a note #architecture, I’m making a claim — this connects to the thread about system boundaries and where we draw them.”

And from a community discussion last month:

“Nobody described a moment where a tag helped them find something they’d forgotten about. Every positive story was lookup, workflow, or browsing.”

You’re arguing that tags don’t work for retrieval while also believing that tagging is a meaningful act of thinking. The draft might need to hold both — tags as anchors for your own practice, not as a retrieval mechanism for future search.

The agent surfaced a contradiction across two notes written months apart. The catalyst that bridged them is one that asks about the difference between organizing and finding. The contradiction became the spine of the draft: the published version holds both positions rather than arguing one side.


Enriching a meeting transcript

You

Here’s the transcript from this morning’s call. Distill it, but pull in anything from the workspace that connects.

Agent

reads transcript; runs enzyme catalyze "organizational language patterns and how groups develop shared vocabulary"

The call covered how different teams in the org use different words for the same thing — “accountability groups” vs “challenge groups” vs “small groups.” The question on the table: whether shared understanding can be surfaced from the bottom up rather than imposed top-down.

Your workspace has a thread on this. From a note three months ago:

“The gap analysis — which terms confuse people, which groups use different words for the same thing, how to surface shared understanding from the bottom up — is exactly the kind of work the catalyst layer was designed for.”

The pattern: organizations drift apart linguistically, and the drift is invisible until someone maps it. The distilled note connects this morning’s conversation to work you’ve already been thinking about.

The final meeting note contained details that weren’t in the transcript — the connection to prior thinking about vocabulary drift. Enzyme surfaced a note that reframed what was discussed as a pattern the user had already been working through in a different context. The meeting note became a synthesis, not just a summary.


Searching applied external content

You

I applied my workspace to those research papers last week. Anything in there about how constraints shape creative output?

Agent

runs enzyme catalyze "constraints as generative rather than limiting" --vault ./research-papers

Three papers came through your catalyst, the one asking whether the impulse to systematize kills the thing being systematized.

One of them, a 2024 paper on design pedagogy, argues that students produce more original work when given tighter briefs. The language is academic, but the argument maps to what you’ve been writing about in #product-decisions: the best products emerged when the team worked within a tight constraint rather than an open brief.

The paper uses “productive constraint” — your workspace uses “formation through friction.” Same idea, different vocabulary. Your catalysts found the overlap because they asked the right question of the papers.

The user’s catalysts reached into an academic corpus and found papers using entirely different language for the same concern. A keyword search for “constraints” would have returned hundreds of results. The catalyst-mediated search found three that resonated with how the user already thinks about the topic.
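
The only moving part in that last exchange is where the search points. A minimal sketch, assuming the papers live under ./research-papers as in the example above:

    # run the same catalyst-mediated search against an external corpus instead of the workspace
    enzyme catalyze "constraints as generative rather than limiting" --vault ./research-papers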