Every engineering team has unspoken rules, the kind of standards you’ve repeated in PRs a hundred times but never bothered to write down. They don’t show up in your CI, but you feel them every time a teammate misses them.
With AI code reviewers, those patterns don’t need to stay scattered. At Baz, we’ve been using our own tool to build Custom Reviewers that catch the stuff we care about. And it’s made our review process faster, clearer, and more consistent without sacrificing context.
In this post, we’re sharing four examples of Baz-style reviewers pulled from our own repos. Think of it as a starting prompt library for your team. Looking for prompts from the top open source repositories? Check out Awesome Reviewers here.
1. Explicit configurable parameters
Every team has a few hidden defaults that no one questions, until they break something. Timeouts, thresholds, and filters often get hardcoded or quietly inherited. We created a reviewer that flags hardcoded values and missing parameters and encourages more maintainable, forward-compatible code.
See it in action:


Copy this prompt:
Make all significant configuration values explicit and configurable rather than relying on hidden defaults or hardcoded values. This improves code flexibility and maintainability.
For time-sensitive operations, explicitly set timeout values based on expected duration:
```typescript
// Instead of relying on the default 30-second timeout
const VISIBILITY_TIMEOUT = 3 * 60 * 60 // Three hours, in seconds
super(QUEUE_NAME, { visibilityTimeout: VISIBILITY_TIMEOUT })
```
For filtering and data selection operations, provide configurable parameters with sensible defaults:
```typescript
function getComments(
  repoId: string,
  maxAgeInMonths: number = 3 // Configurable with reasonable default
) {
  return db.query({
    where: [
      eq(repositoriesInPlatform.id, repoId),
      eq(pullRequestsInPlatform.state, "merged"),
      // Add time-based filtering with configurable parameter
      gte(commentsInPlatform.createdAt, getDateBeforeMonths(maxAgeInMonths))
    ]
  })
}
```
When defining data models, expose all relevant configuration options rather than only a subset, making your interfaces complete and forward-compatible.
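To make that concrete, here is a minimal sketch; the `QueueConsumerOptions` interface, its field names, and the default values are hypothetical rather than pulled from our codebase:
```typescript
// Hypothetical data model for a queue consumer: every relevant option is part
// of the interface, so nothing is hidden behind a hardcoded value inside it.
interface QueueConsumerOptions {
  queueName: string
  visibilityTimeoutSeconds: number // e.g. 3 * 60 * 60 for long-running jobs
  maxRetries: number
  batchSize: number
  deadLetterQueueName?: string // optional, but still visible on the interface
}

// Defaults live in one named place instead of being scattered through the code.
const DEFAULT_CONSUMER_OPTIONS: Omit<QueueConsumerOptions, "queueName"> = {
  visibilityTimeoutSeconds: 3 * 60 * 60,
  maxRetries: 5,
  batchSize: 10,
}

// Callers override only what they need, and every override is explicit.
const commentSyncOptions: QueueConsumerOptions = {
  queueName: "comment-sync",
  ...DEFAULT_CONSUMER_OPTIONS,
  visibilityTimeoutSeconds: 30 * 60, // half an hour is enough for this queue
}
```
Keeping the defaults in a single named constant means a reviewer, human or AI, can see every tunable at a glance, and any deviation is explicit at the call site.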
2. Manage interactive states
Modern frontends are async by default, but reviews often miss subtle bugs: double-click submissions, forgotten loading states, or misused React hooks. We trained a reviewer to flag missing state safeguards.
See it in action:


Copy this prompt:
Always implement appropriate state management for interactive React components. This includes:
1. Disabling buttons during async operations to prevent duplicate submissions
2. Using the right hooks for state derivation (useMemo for computed values, useState for UI state; see the derived-value sketch after the examples below)
3. Providing users with control over component visibility
Example for handling loading state:
```tsx
function SubmitButton({ onSubmit }) {
  const [isLoading, setIsLoading] = useState(false);

  const handleSubmit = async () => {
    setIsLoading(true);
    try {
      await onSubmit();
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <Button
      onClick={handleSubmit}
      disabled={isLoading}
    >
      {isLoading ? "Processing..." : "Let's Go"}
    </Button>
  );
}
```
For toggled components:
```tsx
function NotificationSystem() {
  const [isOpen, setIsOpen] = useState(false);

  return (
    <>
      <IconButton onClick={() => setIsOpen(!isOpen)}>
        <BellIcon />
      </IconButton>
      {isOpen && <Notifications onClose={() => setIsOpen(false)} />}
    </>
  );
}
```
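For the second point, derived data belongs in useMemo rather than in a second piece of state that has to be kept in sync by hand. Here is a minimal sketch; the component and prop names are illustrative, not code from our repos:
```tsx
import { useMemo, useState } from "react";

function CommentList({ comments, filter }) {
  // Derived value: computed from props, so useMemo rather than a duplicate
  // useState that would need manual synchronization on every change.
  const visibleComments = useMemo(
    () => comments.filter((comment) => comment.state === filter),
    [comments, filter]
  );

  // UI state: owned by this component, so useState is the right tool.
  const [expanded, setExpanded] = useState(false);

  return (
    <section>
      <button onClick={() => setExpanded((prev) => !prev)}>
        {expanded ? "Hide comments" : `Show ${visibleComments.length} comments`}
      </button>
      {expanded &&
        visibleComments.map((comment) => <p key={comment.id}>{comment.body}</p>)}
    </section>
  );
}
```
Here visibleComments is recomputed only when comments or filter change, while expanded stays plain UI state.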
3. Explicit over implicit errors
When errors get swallowed, bugs slip by. Worse, reviewers get used to ignoring error handling altogether. Our internal Rust reviewer now flags potential panics, uses of expect(), and error returns that fail to log anything useful.
See it in action:


Copy this prompt:
Always make errors explicit by using structured error types, proper propagation, and clear logging rather than resorting to panics or silent failures.
This means:
1. Use structured error types (like enums) instead of string literals or generic errors
2. Avoid panic-inducing calls like `expect()` and the `unreachable!()` macro
3. Properly capture and log errors with appropriate logging methods
4. Return errors explicitly instead of empty collections or default values when failures occur
**Bad:**
```rust
match finding_type {
    FindingType::NamingAndTypos => Ok(Self::NamingAndTypos),
    // Other variants...
    _ => unreachable!(), // Could panic at runtime
}

// Or
let parent_comment = comments.get_parent()
    .expect("Got a parent comment with None discussion_id"); // Could panic

// Or
if result.is_err() {
    // Error details are lost
    // No logging
}

// Or
fn get_users() -> (Vec<User>, Vec<Details>, HashMap<String, Info>) {
    if api_call_failed {
        // Error is masked, returning empty data instead
        return (Vec::new(), Vec::new(), HashMap::new());
    }
    // ...
}
```
**Good:**
```rust
match finding_type {
    FindingType::NamingAndTypos => Ok(Self::NamingAndTypos),
    // Other variants...
    _ => Err(UnsupportedFindingTypeError::new(finding_type)),
}

// Or
let parent_comment = comments.get_parent()
    .ok_or_else(|| MissingParentError::new("discussion_id missing"))?;

// Or
if let Err(e) = result {
    log_error(
        &ErrorKind::ApiError,
        &format!("Failed to process request: {:?}", e),
    );
    // Error handling logic...
}

// Or
fn get_users() -> Result<(Vec<User>, Vec<Details>, HashMap<String, Info>), ApiError> {
    match api_call() {
        Ok(data) => Ok((data.users, data.details, data.info)),
        Err(e) => Err(ApiError::UserFetchFailed(e)),
    }
}
```
By making errors explicit, you improve system reliability, make debugging easier, and ensure errors are properly handled rather than causing unexpected runtime behavior.
4. Parameterize deployment configurations
CI/CD reviewers can do more than lint YAML. We built one that catches hardcoded infra values and flags scaling configs that might jeopardize availability.
See it in action:


Copy this prompt:
Deployment configurations in CI/CD workflows should use variables instead of hardcoded values and include appropriate production-level settings to ensure maintainability and reliability.
For infrastructure identifiers and environment-specific values, use workflow variables:
```yaml
# Good
copier_image: ${{ vars.AWS_ACCOUNT_PROD }}.dkr.ecr.${{ vars.AWS_REGION_PROD }}.amazonaws.com/ecr-copier:v1.1.0
# Avoid
copier_image: 497250501322.dkr.ecr.us-east-2.amazonaws.com/ecr-copier:v1.1.0
```
For production deployments, configure appropriate scaling parameters that ensure high availability:
```yaml
# Good - ensures high availability
min: 2
# Avoid - single instance risks downtime
min: 1
```
Using variables makes maintenance easier when account information changes, and minimum-instance settings keep production environments resilient.
Try these prompts with Baz Custom Reviewers
These examples were built using Baz Custom Reviewers, a flexible way to turn your team’s review principles into reusable, testable AI reviewers. With Baz, you can:
- Write reviewers as system prompts or chain-of-thought reasoning
- Evaluate performance across different teams or services
- Track reviewer effectiveness over time using in-product evals
- Share and reuse reviewers across teams with full versioning and memory
You can try all of these examples and more in the Reviewer Playground, or start from your own internal patterns. It's like writing your team’s reviewer once, and never repeating the same comment again.