This is our March 2026 development summary. We publish one each quarter.
The parallel setup
We work on 2-3 client projects in parallel using Cursor agents. While one agent is building, we review another's output in real time in Cursor's UI. Running multiple projects simultaneously means we are never idle waiting for an agent to finish.
The loop per task is the same across every project:
- Task definition
- Build mode planning
- Approving the plan
- Code review and testing
- Feedback
This loop runs across multiple projects at the same time. One agent is in the build phase while we are in the review phase with another.
Global rules and project-scope rules
Over time we noticed best practices getting skipped and our own coding style not showing up consistently in agent output. So we started accumulating Cursor rules.
Global rules apply across all agents on all projects: the baseline standards and patterns every agent has to follow. Project-scope rules are specific to that client's codebase: its structure, conventions, what to avoid, and context about what the system does and what the current sprint is about.
Both rule sets build up incrementally. Each time we catch something being done wrong, or a pattern we want enforced, it goes into the appropriate rules file. The next agent session picks it up automatically.
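As a sketch of what one of these incremental entries can look like, here is a hypothetical project-scope rule. The path and frontmatter follow Cursor's `.cursor/rules/*.mdc` convention; the contents, globs, and names are purely illustrative, not taken from any real client project:

```markdown
---
description: Conventions for this client's API layer
globs: ["src/api/**"]
alwaysApply: false
---

- One route handler file per resource under src/api/routes.
- Never call the database directly from a handler; go through src/services.
- Error responses use the shared ApiError type, never ad-hoc objects.
- Current sprint focus is the billing module; keep diffs elsewhere minimal.
```

Note the last bullet: sprint context lives in the same file as the hard conventions, so every fresh agent session starts with both.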
Self-written agent skills for repetitive tasks
When we catch a repetitive task, one that comes up again and again and requires the same set of files to be modified in a specific way, we handle it once through the full review loop. Once it is done correctly, we ask the agent to write a skill for itself in the project scope.
The next time the same task comes up, the agent runs from its own skill: faster, higher quality, and without us redefining what needs to happen.
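An agent-authored skill for a recurring task might look roughly like the sketch below. This assumes a SKILL.md-style file with name/description frontmatter; the exact location and format depend on your setup, and every file and identifier here is hypothetical:

```markdown
---
name: add-feature-flag
description: Add a new feature flag and wire it through config, service, and tests
---

1. Add the flag to src/config/flags.ts with a default of false.
2. Expose it through FlagService.isEnabled().
3. Guard the new code path behind the flag.
4. Add tests covering both the enabled and disabled states.
```

The value is that the steps were validated once under human review; afterwards the agent replays them instead of rediscovering them.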
Test-driven development to reduce review cycles
We use TDD heavily. Defining tests before the agent builds gives a clear, unambiguous definition of done. The agent knows what a successfully completed task looks like. That means fewer review cycles per task for us — less back and forth on whether something is actually finished.
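A minimal sketch of that handoff in Python, with hypothetical names: the tests are written first and given to the agent as the definition of done, and `apply_discount` is what the agent then implements until they pass.

```python
# Implementation the agent produces against the tests below.
def apply_discount(total: float, percent: float) -> float:
    """Return the total after a percentage discount, floored at zero."""
    return max(total * (1 - percent / 100), 0.0)


# Tests written by us before the agent builds anything --
# a passing run is the unambiguous definition of done.
def test_discount_is_applied():
    assert apply_discount(total=100.0, percent=10) == 90.0

def test_discount_never_goes_negative():
    assert apply_discount(total=50.0, percent=200) == 0.0


test_discount_is_applied()
test_discount_never_goes_negative()
```

The edge-case test (a discount over 100%) is the kind of detail that would otherwise surface as a review comment; encoding it up front removes one review cycle.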
Domain docs for specific directories
For specific domain directories in a project — a services layer, for example — we ask the agent to maintain a doc describing how best to create and structure code in that domain. The doc lives in the project and grows as the codebase grows.
Any agent working in that directory reads the doc first. The output quality improves because the agent is working from a defined pattern rather than inferring structure from the existing code.
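A hypothetical excerpt of such a doc for a services layer; the file location, class names, and conventions here are illustrative, not from a real project:

```markdown
# src/services/README.md

- One service class per domain entity (UserService, InvoiceService).
- Services receive dependencies via the constructor; no globals.
- Public methods return a Result type rather than throwing.
- For a new service, copy the structure of UserService, the
  reference implementation for this directory.
```

Pointing at a reference implementation keeps the doc short: the agent reads one file for the pattern instead of inferring it from the whole directory.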
These practices compound. Rules files reduce setup time. Self-written skills reduce repetition. TDD reduces review cycles. Domain docs raise the floor on output quality. Each quarter we review what is working and update accordingly.
Need similar solutions?
If you're facing similar challenges or want to explore how we can help with your project, let's talk.