Vercel React Best Practices: Agent Skills as Knowledge Distribution
Research Date: 2026-01-20
Source URL: https://x.com/dani_avila7/status/2011622169793749209
Reference URLs
- Vercel Blog: Introducing React Best Practices - Official announcement
- vercel-labs/agent-skills GitHub - Source repository (13.7K stars)
- react-best-practices Skill - Specific skill content
- Claude Code Templates (aitmpl.com) - Community skill distribution platform
- davila7/claude-code-templates GitHub - CLI tool repository (17.6K stars)
Summary
On January 14, 2026, Vercel published react-best-practices, a structured repository encoding 10+ years of React and Next.js performance optimization knowledge into a format optimized for AI coding agents. The release represents a significant development in the emerging “Agent Skills” paradigm—the distribution of domain expertise as installable knowledge packages that AI agents can query during code generation and review.
The skill contains 40+ rules across 8 categories, ordered by measurable impact from CRITICAL to LOW. The two highest-priority areas address async waterfalls (sequential blocking of parallel-capable operations) and bundle size optimization—problems that typically contribute most to real-world performance regressions. The format was designed for machine consumption: individual rule files compile into AGENTS.md, a single queryable document that agents reference when reviewing code or suggesting optimizations.
Daniel San (@dani_avila7) announced the skill's availability through the Claude Code Templates ecosystem on January 15, 2026; the post drew 232K views, 2.6K likes, and 4.5K bookmarks, indicating strong developer interest in agent-consumable performance knowledge.
The Agent Skills Architecture
Agent Skills follow a consistent structure that enables both human contribution and machine consumption:
| Component | Purpose |
|---|---|
| `rules/` | Individual rule files with frontmatter (title, impact, tags) |
| `_sections.md` | Section metadata defining categories and ordering |
| `_template.md` | Template for contributing new rules |
| `AGENTS.md` | Compiled output: a single document for agent consumption |
| `SKILL.md` | Activation instructions describing when to apply the skill |
| `metadata.json` | Version, organization, abstract |
| `test-cases.json` | Generated LLM evaluation test cases |
The build process (pnpm build) compiles individual rules into AGENTS.md, automatically generating rule IDs (e.g., 1.1, 1.2) and sorting rules alphabetically within each section. This architecture enables collaborative maintenance while producing a consistent machine-readable output.
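For illustration, a minimal sketch of what such a compile step could look like; this is not the repository's actual build script, and the frontmatter layout and file locations are assumed:

```ts
// Hypothetical compile step: read rule files, pull the title out of simple
// YAML-style frontmatter, sort alphabetically, number the rules, and
// concatenate them into a single AGENTS.md.
import { readdir, readFile, writeFile } from "node:fs/promises"
import { join } from "node:path"

async function compileSection(rulesDir: string, sectionNumber: number): Promise<string> {
  const files = (await readdir(rulesDir)).filter((f) => f.endsWith(".md")).sort()
  const rules: string[] = []
  for (const [index, file] of files.entries()) {
    const raw = await readFile(join(rulesDir, file), "utf8")
    const title = /^title:\s*(.+)$/m.exec(raw)?.[1] ?? file // frontmatter title
    const body = raw.replace(/^---[\s\S]*?---\s*/, "")       // strip frontmatter
    rules.push(`## ${sectionNumber}.${index + 1} ${title}\n\n${body.trim()}`)
  }
  return rules.join("\n\n")
}

// Compile one section's rules into AGENTS.md
compileSection("rules", 1).then((section) => writeFile("AGENTS.md", section))
```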
Performance Rule Categories and Priorities
The framework prioritizes fixes by measurable impact, acknowledging that most performance work fails because it starts too low in the stack:
Category 1: Eliminating Async Waterfalls (CRITICAL)
Async waterfalls occur when operations that could execute in parallel are inadvertently serialized. The rules address:
- Parallel independent operations: Use `Promise.all()` for concurrent execution (a sketch follows the example below)
- Delayed await placement: Move `await` to where results are actually needed
- Conditional blocking: Avoid awaiting data that conditional branches may not use
Example pattern from the skill:
```ts
// Incorrect: blocks unused branch
async function handleRequest(userId: string, skipProcessing: boolean) {
  const userData = await fetchUserData(userId) // Always waits
  if (skipProcessing) {
    return { skipped: true } // userData not used
  }
  return processUserData(userData)
}
```

```ts
// Correct: only blocks when needed
async function handleRequest(userId: string, skipProcessing: boolean) {
  if (skipProcessing) {
    return { skipped: true }
  }
  const userData = await fetchUserData(userId)
  return processUserData(userData)
}
```
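The parallelization rule follows the same principle. A minimal sketch, with hypothetical fetchers standing in for real data sources:

```ts
// Hypothetical fetchers; the names are illustrative, not from the skill.
declare function fetchUser(id: string): Promise<{ name: string }>
declare function fetchOrders(id: string): Promise<Array<{ total: number }>>

// Incorrect: two independent requests run one after the other
async function loadDashboardSequential(userId: string) {
  const user = await fetchUser(userId)
  const orders = await fetchOrders(userId) // waits for fetchUser to finish first
  return { user, orders }
}

// Correct: start both requests immediately and await them together
async function loadDashboardParallel(userId: string) {
  const [user, orders] = await Promise.all([fetchUser(userId), fetchOrders(userId)])
  return { user, orders }
}
```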
Category 2: Bundle Size Optimization (CRITICAL)
Bundle size directly impacts load time, parse time, and mobile network performance:
- Avoid barrel file re-exports: Direct imports prevent pulling entire modules
- Dynamic imports for heavy dependencies: Defer non-critical JavaScript (see the sketch after this list)
- Post-hydration loading: Move non-essential scripts after interactive state
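A rough sketch of the first two rules, assuming a Next.js project; the package and component names are hypothetical:

```tsx
// Avoid barrel re-exports: import the specific module rather than the barrel.
// import { Chart } from "@acme/ui"        // may pull the entire barrel file
// import { Chart } from "@acme/ui/chart"  // direct import keeps the bundle small

// Defer a heavy, non-critical component with a dynamic import (Next.js).
import dynamic from "next/dynamic"

const HeavyChart = dynamic(() => import("./HeavyChart"), {
  ssr: false,                          // client-only widget, skip server rendering
  loading: () => <p>Loading chart…</p>,
})

export default function ReportPage() {
  return <HeavyChart />
}
```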
Categories 3-8: Incremental Optimization
| Category | Focus Areas |
|---|---|
| Server-Side Performance | React.cache() for deduplication, LRU caching, minimizing serialization boundaries |
| Client-Side Data Fetching | ISR/SWR patterns, deferred data loading, fetch timing optimization |
| Re-render Optimization | React.memo with custom comparators, profiling-driven memoization |
| Rendering Performance | React Server Components, lazy hydration |
| JavaScript Performance | Loop combination, object pooling, avoiding inline allocations |
| Advanced Patterns | Cross-request caching, streaming, concurrent rendering |
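As an example of the server-side deduplication pattern the table refers to, a sketch using React.cache(); it assumes a React Server Components environment (e.g., the Next.js App Router), and `getUser` and `db` are hypothetical:

```ts
// Per-request deduplication with React.cache (React Server Components).
// `db` is a hypothetical data client used only for illustration.
import { cache } from "react"

declare const db: { user: { findUnique(id: string): Promise<{ name: string }> } }

// Every component that calls getUser(id) during the same request shares one
// underlying query instead of repeating it.
export const getUser = cache(async (id: string) => db.user.findUnique(id))
```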
Impact Level Framework
Each rule includes an impact rating enabling prioritized triage:
| Level | Meaning | Typical Effect |
|---|---|---|
| CRITICAL | Highest priority | Seconds of user-visible latency |
| HIGH | Significant improvement | Hundreds of milliseconds |
| MEDIUM-HIGH | Moderate-high gains | Measurable but not dominant |
| MEDIUM | Moderate improvement | Noticeable in aggregate |
| LOW-MEDIUM | Incremental | Adds up across many sessions |
| LOW | Polish | Minor improvements |
Real-World Origins
The rules derive from production performance work at Vercel. Documented examples include:
- Combining loop iterations: A chat page scanning message lists eight separate times was consolidated into a single pass, a significant saving when processing thousands of messages.
- Parallelizing awaits: An API route awaiting independent database calls sequentially was refactored to run them concurrently, cutting total wait time in half.
- Lazy state initialization: A component parsing JSON from `localStorage` on every render was fixed with `useState(() => JSON.parse(...))`, eliminating the repeated work (see the sketch below).
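A minimal sketch of that lazy-initialization fix; the storage key is hypothetical:

```ts
import { useState } from "react"

function usePreferences() {
  // Incorrect: JSON.parse runs on every render
  // const [prefs, setPrefs] = useState(JSON.parse(localStorage.getItem("prefs") ?? "{}"))

  // Correct: the initializer function runs only on the first render
  const [prefs, setPrefs] = useState(() =>
    JSON.parse(localStorage.getItem("prefs") ?? "{}")
  )
  return [prefs, setPrefs] as const
}
```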
Installation and Usage
Via add-skill CLI:

```sh
npx add-skill vercel-labs/agent-skills
```

Via Claude Code Templates:

```sh
npx claude-code-templates@latest --skill=web-development/react-best-practices --yes
```

Manual installation: Place AGENTS.md in the project root. The compiled file is available at:
https://github.com/vercel-labs/agent-skills/blob/main/skills/react-best-practices/AGENTS.md
Once installed, agents reference these patterns when:
- Reviewing code for performance issues
- Suggesting optimizations during refactors
- Generating new components with performance-aware patterns
Broader Implications: Skills as Knowledge Distribution
The react-best-practices release exemplifies an emerging pattern in agentic tooling: encoding institutional knowledge into machine-consumable formats. Rather than relying on LLM training data (which may be outdated or incomplete), skills provide:
- Authoritative knowledge: Direct from practitioners with production experience
- Versioned updates: Skills can be updated independently of model training
- Contextual activation: `SKILL.md` defines when the knowledge applies
- Testable correctness: `test-cases.json` enables evaluation against known patterns
This parallels how npm distributes code dependencies—skills distribute knowledge dependencies that inform agent behavior.
Ecosystem Context
The Claude Code Templates platform (aitmpl.com) demonstrates the scale of this emerging ecosystem:
| Component Type | Count |
|---|---|
| Agents | 310 |
| Commands | 228 |
| Settings | 62 |
| Hooks | 42 |
| MCPs | 64 |
| Skills | 356 |
The platform achieved Vercel OSS Program membership and 17.6K GitHub stars, indicating significant adoption of the agent-skill distribution model.
Key Findings
- Vercel’s `react-best-practices` encapsulates 10+ years of performance optimization knowledge into 40+ rules across 8 impact-ordered categories
- The skill architecture (individual rules → compiled AGENTS.md) enables collaborative maintenance while producing machine-readable output
- Priority ordering (CRITICAL: waterfalls/bundles → LOW: advanced patterns) reflects real-world impact measurement
- The release represents a broader shift toward “skills as knowledge distribution”—version-controlled, authoritative domain expertise installable by AI agents
- Significant community engagement (13.7K stars on agent-skills, 232K views on announcement) indicates strong demand for agent-consumable performance knowledge