As coding shifts toward AI agents, test quality becomes 100x more critical. In 60 seconds, shield your tests from 50+ common mistakes that AI coding tools make
Download one rule file today and watch your project naturally shift toward more reliable, maintainable testing
- 📚 **Battle-tested wisdom from 20+ years of testing literature** - Curated best practices from the most influential testing books of the last two decades
- 🤖 **AI-optimized format** - Specifically formatted for Claude, Cursor, Copilot, and other AI coding assistants to understand and apply
- ⚡ **Context-window friendly** - Smartly compacted rules that maximize value while minimizing token usage
A. Use this option for standard testing like integration, component, or unit tests (e.g., Vitest, Playwright, Storybook, Testing Library, Jest, Node.js test runner):

```shell
# Download the testing best practices
curl -O https://raw.githubusercontent.com/goldbergyoni/golden-testing-rules-for-ai/main/testing-best-practices.md
```

Or view and copy the rules directly from GitHub.
B. Use this option for the sole purpose of system-wide end-to-end tests (e.g., Playwright/Cypress without a mocked backend):

```shell
# Download the end-to-end testing best practices
curl -O https://raw.githubusercontent.com/goldbergyoni/golden-testing-rules-for-ai/main/e2e-testing-best-practices.md
```

Or view the E2E rules directly from GitHub.
Why do system-wide E2E tests have different rules? These tests span multiple processes and components, making them more complex and slower than isolated tests. Due to their distributed nature, they require relaxed readability standards and different test data approaches.
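One concrete difference is test data: system-wide E2E suites usually run against a shared, persistent backend, so tests cannot assume a clean database. A common approach is to stamp every record with a unique marker; here is a minimal sketch (the helper and field names are illustrative, not part of the rules file):

```javascript
// E2E suites often share a persistent backend, so each test builds
// records with a unique marker to avoid colliding with data from
// other tests or from previous runs.
let sequence = 0

function uniqueSuffix() {
  sequence += 1
  return `${Date.now()}-${sequence}`
}

function buildE2EUser(overrides = {}) {
  const suffix = uniqueSuffix()
  return {
    email: `e2e-user-${suffix}@example.com`,
    name: `E2E User ${suffix}`,
    ...overrides,
  }
}
```

Because every run generates fresh identifiers, tests can execute in parallel against the same environment without tripping over each other's data.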
For Claude Code:
- Rename the file to `CLAUDE.md` and place it under your testing folder, or wherever you keep testing best practices

For Cursor:
- Paste the contents into a `.cursorrules` file in your project root
Note: Currently available for TypeScript/JavaScript. Support for more languages and tool-specific formats coming soon!
That's it! Your AI assistant now follows golden testing standards. 🛡️
```javascript
// BAD: Vague title, no clear scenario
test('should work correctly', async () => {
  // Missing Arrange phase - no data setup visible

  // Implementation detail testing
  const service = new OrderService()
  expect(service.internalCache).toBeDefined()

  // Arbitrary timeout - flaky test waiting
  await page.waitForTimeout(2000)

  const result = await service.processOrder()

  // Missing smoking gun - where's the test data?
  expect(result.total).toBe(150)
})
```

```javascript
// GOOD: Clear scenario, visible data, behavior over implementation
test('When processing order items, then total reflects sum plus tax', async () => {
  // Arrange - all data visible (🔫 Smoking gun principle)
  const order1 = buildOrderItem({ price: 50, quantity: 1 })
  const order2 = buildOrderItem({ price: 25, quantity: 1 })

  // Act
  const result = await httpClient.post('/order/process', [order1, order2])

  // Assert - clear cause and effect
  expect(result.total).toBe((order1.price + order2.price) * (1 + config.taxRate))
})
```

Our rules are distilled from decades of testing wisdom:
The definitive catalog of test patterns and anti-patterns. Meszaros identifies 400+ patterns that form the foundation of modern testing practices.
The GOOS book pioneered the London School of TDD and introduced the concept of testing from the outside-in, focusing on behavior over implementation.
Beck's seminal work introduced TDD to the world, establishing the Red-Green-Refactor cycle and the principle of writing tests first.
Osherove's practical guide teaches the difference between good and bad tests, emphasizing maintainability, readability, and trustworthiness.
A comprehensive guide with 50+ best practices specifically for JavaScript testing, battle-tested by thousands of developers. 25,000+ ⭐ on GitHub
AI coding assistants are powerful but often generate tests with subtle issues:
- Hidden dependencies that make tests fragile
- Implementation testing instead of behavior testing
- Missing test data that makes failures hard to debug
- Arbitrary waits that create flaky tests
- Poor structure that hurts maintainability
Our rules teach AI to avoid these pitfalls automatically.
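Two of these pitfalls, arbitrary waits and missing test data, have straightforward mechanical fixes. The sketch below illustrates both; the helper names and fields are ours, not taken from the rules file (most runners ship an equivalent of the first helper, e.g. Playwright's auto-waiting or Testing Library's `waitFor`):

```javascript
// Fix for arbitrary waits: poll for an observable condition instead of
// sleeping a fixed time. Resolves as soon as the predicate holds and
// fails with a clear error once the timeout is exceeded.
async function waitFor(predicate, { timeoutMs = 5000, intervalMs = 50 } = {}) {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if (await predicate()) return
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`)
}

// Fix for missing test data: a data factory gives every field a default,
// so each test overrides only the values it asserts on - and those
// values stay visible in the test body.
function buildOrderItem(overrides = {}) {
  return { productId: 'product-1', price: 100, quantity: 1, ...overrides }
}
```

A test that reads `buildOrderItem({ price: 50 })` makes the value driving the expected total impossible to miss, and a condition-based `waitFor` replaces flaky calls like `waitForTimeout(2000)`.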
Found a testing anti-pattern that AI tools keep generating? Open an issue or PR! We're building the definitive resource for AI-powered testing excellence.
MIT - Use these rules freely in your projects
Yoni Goldberg - Developer and consultant with extensive experience in testing excellence. Has worked with over 40 organizations to improve their testing practices. Author of Node.js and JavaScript testing best practices repositories with a combined 130,000+ GitHub stars ⭐, helping developers worldwide write better, more maintainable tests.
Making AI coding assistants write tests like seasoned testing experts, one rule at a time.