Agentic Code Review: Pattern Matching for AI
Agentic coding has changed how teams ship software. AI agents write entire features, scaffold components, and wire up APIs in seconds. But there is a problem that nobody talks about: someone still has to review all that code. And the volume is increasing faster than teams can keep up.
In our team, we noticed this shift over the past months. The number of pull requests increased. Code output per developer increased significantly. And the reviews? They started to lag behind. Not because the code was bad, but because there was simply too much of it. The bottleneck moved from writing code to reviewing code.
We needed a way to scale the review process alongside the code output. The solution we landed on was surprisingly simple: document our project’s patterns and anti-patterns in structured files, bundle them into a single reference document, and let AI agents review code against it, both on GitHub and locally before a PR is even created.
The Review Bottleneck
When a single developer can produce the output of N developers with the help of an AI agent, the team’s review capacity does not magically multiply by N. Senior developers who used to review two or three PRs a day are now staring at five or six, each larger than before. The context switching alone is exhausting.
But the real issue is consistency. AI agents are excellent at generating working code. They are less excellent at following the specific conventions your team has agreed on but never written down. They do not know that your project always structures its components a certain way, or that server functions must follow a specific blueprint, or that date formatting must go through shared utilities. These are the kinds of things that live in the heads of senior developers and get enforced through code review.
The question becomes: can we offload most of that pattern enforcement to the AI itself?
The Prerequisite: Consistent Architecture
Before you can document patterns, you need patterns to document. This sounds obvious, but it is the part most teams skip. If your project does not follow a consistent architecture, if every developer structures components differently, if there is no agreed-upon folder structure, if server functions look different in every feature, then there is nothing to document.
In our case, we had invested time upfront in establishing a feature-based architecture with clear conventions. Every feature follows the same folder structure. Every server function follows the same blueprint. Every data table has the same component composition. This consistency is what makes pattern documentation possible and valuable.
If your project is a patchwork of different approaches, the first step is not to write pattern docs. The first step is to pick one approach and migrate toward it. The pattern documentation comes after.
Writing Pattern Files
As the consulting team lead on this project for the last 12 months, I generated the first few pattern files with the help of an AI agent and placed them in a docs/patterns/ directory in our monorepo. Each file documents one category of patterns. Over time, other team members started contributing their own:
docs/patterns/
├── components-button.md
├── components-form.md
├── components-data-table.md
├── server-functions.md
├── dates.md
├── ...
└── CONTRIBUTE.md

Each pattern file follows a template. It starts with a title, then numbered sections covering each variant or concern. The key elements are:
- Structural trees showing component composition using pseudo-code, not actual TypeScript
- Anti-patterns with `// Wrong` vs `// Correct` comparisons
- References pointing to real files in the repo as concrete examples
Here is what a simplified pattern looks like for dates:
# Dates and Times Patterns
## Constants
All date and date-time format constants must go through
the shared constants in packages/shared/src/utils/dates/.
## Date and Time Formatting
All date and date-time display logic must go through
the shared formatting functions: displayDate, displayDateTime.
## Anti-Patterns
### 1. Direct format() for Display
// Wrong
import { format } from "date-fns";
format(date, "yyyy-MM-dd");
// Correct
import { displayDate } from "@/utils/dates";
displayDate(date);
### 2. Manual Timestamp Conversion
// Wrong
new Date(Number(seconds) * 1000)
// Correct
displayDateTime(timestamp)
### 3. Locale-Dependent Display Methods
// Wrong
date.toLocaleDateString()
// Correct
displayDate(date)
Ref: packages/shared/src/utils/dates/date-formatting.ts
Ref: packages/shared/src/utils/dates/constants.ts

The structural trees are important. They show the shape of how components compose together without drowning in implementation details. An AI agent can read a tree like this and understand the expected structure of a button component, a form component, or a server function without needing to parse hundreds of lines of TypeScript.
One important principle: the pattern files should not repeat implementation logic. They describe the shape and constraints, then reference real files in the codebase. If an AI agent or a developer needs to see the full implementation, they follow the reference. This keeps the documentation lightweight and prevents it from drifting out of sync with the actual code.
The anti-patterns are equally important. They give the AI concrete examples of what to flag during review. Instead of vague rules like “follow our conventions,” the anti-pattern shows exactly what the wrong approach looks like and explains why it breaks.
The references at the end of each section point to real files in the repo. This serves two purposes: developers can look up working examples, and AI agents can follow the reference to see the full implementation if they need more context.
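For illustration, here is a minimal sketch of what a shared utility like the referenced displayDate might look like. This is an assumption for the sake of the example; the project’s real implementation in packages/shared/src/utils/dates/ is not shown in this article and will differ:

```typescript
// Hypothetical sketch of a shared date utility; the project's actual
// implementation may differ. Centralizing the options and locale is
// what makes the anti-patterns above mechanically detectable.
const DATE_FORMAT_OPTIONS: Intl.DateTimeFormatOptions = {
  year: "numeric",
  month: "2-digit",
  day: "2-digit",
};

export function displayDate(date: Date): string {
  // Pin the locale so output is identical in every environment,
  // unlike date.toLocaleDateString(), which depends on the user's OS
  return new Intl.DateTimeFormat("en-CA", DATE_FORMAT_OPTIONS).format(date);
}
```

Because every call site goes through one function, changing the display format later is a one-line change instead of a repo-wide search.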
Growing the (Anti-)Pattern Library
As the pattern library grows, you do not want to be the only person writing pattern files. We added a CONTRIBUTE.md file to docs/patterns/ that documents how to write a new pattern, the expected structure, how to format structural trees, how to write anti-patterns, and where to register the new pattern in the instructions file.
This serves a dual purpose. Human developers can read it before contributing a new pattern. But more importantly, you can point your AI agent to it. When you notice a recurring pattern in the codebase that is not documented yet, you can tell your agent:
Read docs/patterns/CONTRIBUTE.md, then document the pattern
for how we implement picker components. Look at the existing
picker implementations in the codebase for reference.

The agent reads the contribution guide, understands the expected format, explores the codebase for real examples, and produces a pattern file that matches the style of the existing ones. You review it, refine it, and merge it. The pattern library grows without you having to write every file from scratch.
You could take this one step further and create a Skill for your AI agent. Skills are reusable prompts that agents can execute natively, so instead of pasting the same prompt every time, you ask the agent to use the document-pattern skill that already knows to read the contribution guide and follow the right format.
Bundling into a Single Instructions File
Individual pattern files are great for developers who want to read about a specific topic. But for AI-powered code review, we needed a single entry point that tells the AI when to check which pattern.
We created a .github/copilot-instructions.md file that acts as a routing layer. It does not duplicate the pattern content. Instead, it defines trigger conditions and points to the relevant pattern file:
# Copilot Instructions
## Anti-Patterns
When reviewing a pull request, identify any changed files
that may relate to anti-patterns documented below.
### Components
#### Button Components
If a file is denoted with `{prefix}-button.tsx`, check the
referenced pattern to follow our common implementation structure.
Reference: docs/patterns/components-button.md
#### Form Components
If a file is denoted with `{prefix}-form.tsx`, check the
referenced pattern to follow our common implementation structure.
Reference: docs/patterns/components-form.md
### Actions
#### Server Functions
If a file is denoted with `{prefix}-action.ts`, check the
referenced pattern to follow our common implementation structure.
Reference: docs/patterns/server-functions.md

The format is intentionally simple. Each entry has a trigger condition (usually a file naming convention) and a reference to the full pattern documentation. The AI sees a changed file named create-order-action.ts, matches it against the trigger “files denoted with {prefix}-action.ts,” and knows to check the server function patterns.
This routing approach keeps the instructions file concise. The AI does not need to load every pattern for every review; it only follows the references that match the changed files.
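The matching step itself is trivial to sketch. Assuming the trigger table above, something like the following is all the routing the AI has to reproduce (the helper name and regexes here are illustrative, not project code):

```typescript
// Illustrative trigger table mirroring the instructions file above.
const triggers: Array<{ trigger: RegExp; doc: string }> = [
  { trigger: /-button\.tsx$/, doc: "docs/patterns/components-button.md" },
  { trigger: /-form\.tsx$/, doc: "docs/patterns/components-form.md" },
  { trigger: /-action\.ts$/, doc: "docs/patterns/server-functions.md" },
];

// Resolve a changed file to the pattern doc it should be checked against
export function patternDocFor(filePath: string): string | undefined {
  return triggers.find((t) => t.trigger.test(filePath))?.doc;
}

console.log(patternDocFor("src/features/order/create-order-action.ts"));
// → docs/patterns/server-functions.md
```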
An alternative approach is to extract all pattern content into a single docs/anti-patterns.md file and reference it from the copilot instructions. This would keep the Copilot instructions leaner. We considered this but decided against it, because we wanted to avoid multi-hop references for GitHub Copilot as much as possible. Having the trigger conditions directly in the instructions file means Copilot can match files and follow a single reference rather than navigating through an intermediate index.
AI-Powered Review on GitHub
GitHub Copilot picks up the .github/copilot-instructions.md file automatically during pull request reviews. When a team member opens a PR, Copilot reads the instructions, identifies which patterns apply to the changed files, and flags violations in its review comments.
This catches a surprising number of issues that would otherwise eat up human reviewer time:
- A new button component that manages dialog state in the form instead of the button
- A server function that skips the auth check
- Date formatting using toLocaleDateString() instead of the shared utility
- A component that does not follow the established folder structure convention
Each review comment includes the source pattern reference, so the developer can look up the full documentation and understand why the pattern exists. This turns every review comment into a learning opportunity rather than just a correction.
Local Pattern Checking with AI Agents
The GitHub review is valuable, but it happens after the PR is already created. Even better is catching violations before the code leaves the developer’s machine.
Any AI agent that can read files and run git commands can do this. The workflow is straightforward: the developer asks their local AI agent to diff their branch against main and check whether any patterns are being violated. The agent reads the instructions file, identifies which patterns apply to the changed files, follows the references to the full pattern docs, and reports violations.
The prompt is simple:
Check my branch against main and verify whether any
anti-patterns from docs/patterns/ are being violated
in the changed files.

The AI agent diffs the branch, sees that create-order-action.ts was changed, matches it against the server function pattern, reads the full pattern file, and checks the implementation against the documented invariants and anti-patterns. If the auth check is missing or the error handling does not follow the expected blueprint, it flags it immediately.
This local check takes seconds. It is faster than waiting for a CI pipeline and cheaper than a senior developer’s review time. Developers start treating it like a linter: run it before pushing, fix the violations, then open the PR.
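Teams that want a deterministic fallback alongside the AI agent can wire the same routing into a plain script. This is a sketch assuming a git checkout with a main branch; the trigger table is abbreviated and illustrative:

```typescript
import { execSync } from "node:child_process";

// Abbreviated, illustrative trigger table; a real script would mirror
// the full list in .github/copilot-instructions.md
const triggers: Array<{ trigger: RegExp; doc: string }> = [
  { trigger: /-action\.ts$/, doc: "docs/patterns/server-functions.md" },
];

// Map the changed files to the unique set of pattern docs to check
export function docsToCheck(changedFiles: string[]): string[] {
  const docs = changedFiles.flatMap((file) =>
    triggers.filter((t) => t.trigger.test(file)).map((t) => t.doc)
  );
  return [...new Set(docs)];
}

let changed: string[];
try {
  // List files changed relative to main (requires a git checkout)
  changed = execSync("git diff --name-only main...HEAD")
    .toString()
    .trim()
    .split("\n");
} catch {
  // Fallback demo input when run outside a repo
  changed = ["src/features/order/create-order-action.ts"];
}

console.log("Pattern docs to check:", docsToCheck(changed));
```

The script only tells you which pattern docs apply; the actual checking still goes through the AI agent, which reads those docs and the diff.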
AI-Powered Correction
Flagging violations is useful. Fixing them automatically is even better. Once your patterns and anti-patterns are documented, the AI agent does not just know what is wrong. It also knows what the correct version looks like. Instead of reporting “this date formatting uses toLocaleDateString() instead of the shared utility,” the agent can rewrite the code to use displayDate() directly.
The prompt changes only slightly:
Check my branch against main, verify whether any
anti-patterns from docs/patterns/ are being violated,
and fix the violations.

The agent diffs the branch, identifies the violations, reads the pattern file to understand the correct approach, and applies the fix. The developer reviews the diff and commits. This turns anti-pattern enforcement from a back-and-forth review cycle into a one-step correction.
Not every violation can be auto-fixed. Structural issues, like a component that splits responsibilities incorrectly, require more context. But the mechanical fixes, like using the wrong import, calling a raw API instead of the shared utility, or formatting dates manually, are exactly the kind of changes an AI agent handles well.
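To make “mechanical fix” concrete, here is a deliberately naive sketch of the toLocaleDateString() rewrite. A real agent works from the pattern doc and the surrounding code rather than a regex, and would also add the missing import; the function name is made up for this example:

```typescript
// Naive illustration of an auto-fixable anti-pattern: rewrite the
// locale-dependent call into the shared utility call. Real fixes
// operate on the code's structure, not on strings.
export function fixLocaleDateCalls(source: string): string {
  return source.replace(/(\w+)\.toLocaleDateString\(\)/g, "displayDate($1)");
}

console.log(fixLocaleDateCalls("const label = date.toLocaleDateString();"));
// → const label = displayDate(date);
```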
AI-Powered Implementation
So far we have talked about reviewing, checking, and correcting existing code. But the pattern documentation works just as well in the other direction: implementing new features from scratch.
When an AI agent scaffolds a new button component, a new server function, or a new data table, it typically guesses at the structure based on its training data and whatever it can infer from your project’s existing files. The result works, but it does not match your project’s conventions. With documented patterns, you can tell the agent to follow them from the start:
Create a new delete action for the order feature.
Follow the patterns in docs/patterns/server-functions.md.

The agent reads the pattern file, understands the blueprint (auth check first, translations, try/catch with fromErrorToActionState, file naming convention), and produces code that already matches your conventions. No review comments needed, no correction pass, no back-and-forth.
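For illustration, here is a hedged sketch of what that blueprint could look like. The blueprint itself names fromErrorToActionState; every other type, helper, and signature below is an assumption for the example, not the project’s actual code (the translations step is omitted for brevity):

```typescript
// Sketch of the server-function blueprint: auth check first, then the
// mutation inside try/catch, errors normalized via fromErrorToActionState.
// All types and helper implementations here are illustrative assumptions.
type ActionState = { status: "SUCCESS" | "ERROR"; message: string };

async function getAuthOrThrow(): Promise<{ userId: string }> {
  // Placeholder: the real helper would validate the session
  return { userId: "user-1" };
}

function fromErrorToActionState(error: unknown): ActionState {
  // Assumed shape of the shared error normalizer the blueprint names
  return {
    status: "ERROR",
    message: error instanceof Error ? error.message : "Something went wrong",
  };
}

export async function deleteOrderAction(orderId: string): Promise<ActionState> {
  // 1. Auth check first
  const { userId } = await getAuthOrThrow();

  try {
    // 2. The actual mutation would happen here
    console.log(`user ${userId} deleted order ${orderId}`);
    return { status: "SUCCESS", message: "Order deleted" };
  } catch (error) {
    // 3. Normalize every failure into the shared ActionState shape
    return fromErrorToActionState(error);
  }
}
```

Because every action follows the same three steps, a missing auth check or a bare throw sticks out immediately, to a human reviewer and to an AI agent alike.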
This is where the investment in pattern documentation pays off the most. Every new feature that an AI agent implements correctly on the first try is a review cycle that never has to happen. The patterns stop being a reactive safety net and become a proactive guide that shapes how code gets written in the first place.
How to Start
If you want to adopt this approach in your team, here is a practical path:
Establish consistent patterns first. You cannot document what does not exist. Pick one area of your codebase (e.g., forms, actions, data tables) and standardize it across features. Refactor existing code to follow the pattern.
Start with one pattern file. Do not try to document everything at once. Pick the pattern that causes the most review comments and document it. Include structural trees, anti-patterns with wrong/correct comparisons, and references to real files. Bonus tip: give an agent read-only access to your GitHub repository and let it scrape PR reviews from the last three months. It can identify which anti-patterns came up most frequently during manual review and tell you exactly where to start.
Create the instructions file. Add a single entry that maps a file naming convention to your pattern doc. Test it with your AI review tool.
Iterate based on review feedback. Every time a human reviewer catches a pattern violation that the AI missed, ask yourself: is this documented? If not, add it. If it is documented but the trigger condition did not match, refine the trigger.
Encourage local checking. Show developers how to run the pattern check locally with their AI agent. Once they see it catch real issues before review, adoption happens naturally.
Add a CONTRIBUTE.md. Document how to write new pattern files so that any team member, or their AI agent, can contribute. Standardize the format so that new patterns are consistent with existing ones. Consider creating a Skill for your AI agent so that documenting a new pattern becomes a single command rather than a copy-pasted prompt.
Scaling with Agentic Coding
The irony of agentic coding is that the same AI that creates the review bottleneck can also solve it. By documenting your patterns in a format that AI agents can consume, you turn your team’s conventions from tribal knowledge into machine-readable rules.
This does not replace human code review. Complex architectural decisions, business logic validation, and security concerns still need human eyes. But the mechanical pattern enforcement, the “you used the wrong import path” and “this component should manage state differently” comments, can be offloaded to AI.
The key insight is that pattern documentation is not just for humans anymore. When your AI agent can read your patterns, it can enforce them. When it can enforce them, your senior developers can focus their review time on the things that actually require human judgment. And when review velocity matches code output velocity, agentic coding stops being a bottleneck and starts being what it promised to be: a multiplier.