Code reviews are broken: How GitHub’s poor UX is hurting developers

Why GitHub's outdated code review experience is frustrating developers and stifling progress. Explore the key flaws in its workflow and what needs to change for modern development.

Blog
11.27.2024
Guy Eisenkot, CEO & Co-Founder
3 min


The code review process is more essential than ever, yet as a practice it has stalled. Code reviews are meant to ensure that the codebase remains stable and maintainable, but because the experience is fraught with roadblocks, we’re stuck in a monotonous, often frustrating loop of “just get it reviewed,” ending in the good old “LGTM” (Looks Good To Me).

But let’s be real: Does it actually “look good”? Or are we simply rushing through because the tools and practices we rely on haven’t evolved to meet the complexity of modern software development? 

In today’s fast-paced development environments, where we’re juggling multi-repos, microservices, and multi-language tech stacks, code review tools like GitHub are actually holding us back. Even code generation, meant to speed up development, underwhelms in complex multi-repo scenarios. We can and should expect more from the code review experience.

It’s time to be critical, call out the flaws, and explore how we can move beyond the “LGTM!”.

The pain points of code reviews (where GitHub is failing us)

The modern development cycle is no longer the simple, linear process it once was. We’re dealing with complex codebases and diverse technologies, and when it’s time to review a major ship, the moment of review is, well, really complicated.

At this moment of review, three demands significantly impact a developer’s process: navigating multiple files across multiple directories, comprehending the impact of changes spread across multiple sections of a single file, and evaluating files that are affected by modified code but haven’t themselves changed.

While GitHub absolutely changed the way we work as developers, ushering in the era of Git, its experience isn’t supporting our new workflow needs. Let’s break down where the real pain points lie:

  1. Fragmented Conversations and Comments
    GitHub splits conversations and comments into different tabs, which seems minor until you’re in the thick of a code review. Constantly toggling between these tabs disrupts the flow and makes it harder to follow the logic of the feedback. This forces developers to context-switch unnecessarily, which is not just annoying, but counterproductive.
  2. Duplicated Files and Commits
    In GitHub, the Files and Commits tabs are largely duplicated views of the same changes. The result? More noise. We’re constantly sifting through the same information in different formats, trying to piece together what’s important and what’s not. It’s an inefficiency that adds up and drains momentum from the review process.
  3. Disjointed GitHub Actions Logs
    GitHub Actions could be a powerful tool in the code review process, providing critical build and test results. But here’s the catch: the logs are detached from the rest of the review, thrown into a separate console. This creates yet another context-switch for developers, further complicating what should be a streamlined review process.

In fairness, though, let’s zoom out a bit. The challenges above persist because we as an industry haven’t stopped to ask ourselves whether how we write and review code serves how we run and compile code.

Why do we review code this way? An outdated architecture for a modern problem

The root of the problem goes deeper than poor UX; it’s the architecture of the code review process itself. Most programming languages were designed with certain file structures and conventions in mind, meant to let developers easily access and reuse code: conventions that define how code files should be named and cataloged, for example.

While those design principles serve how code is run and compiled, they don’t necessarily lend themselves to effective code reviews. Here’s why:

  • PR-Level and Repository Confines: Code reviews are typically scoped to pull requests (PRs) within a single repository. This narrow focus is problematic in multi-repo projects, where a change in one repo can affect others. But the code review process doesn’t accommodate this complexity, leaving gaps in understanding and oversight.
  • File-Level Diff Viewing: Diffs are generally reviewed at the file level, showing changes in isolation. While this approach helps focus on the specific modifications, it ignores the broader impact of these changes on the overall system. The ripple effects of altering code in one place aren’t always immediately apparent.
  • Arbitrary Ordering: Files in a PR are ordered alphabetically by default, with no regard for their relationships to one another. Developers have to jump back and forth to make sense of the entire change, instead of reviewing it in a logical, structured flow.
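
To make the file-level blind spot concrete, here’s a contrived Python sketch (the file names and functions are invented for illustration). The PR touches only one “file,” so a file-scoped diff shows just the rename; the caller that actually breaks never appears in the review.

```python
# Contrived example of the file-level diff blind spot.
# The PR touches only "service.py", so a file-scoped diff shows just
# the rename below. The caller in "caller.py" never appears in review.

# -- service.py, after the PR: parameter `user_id` renamed to `uid` --
def fetch_user(uid):
    return {"id": uid}

# -- caller.py, unchanged and therefore absent from the diff --
def greet():
    user = fetch_user(user_id=42)  # still passes the old keyword
    return f"hello {user['id']}"

# The unchanged file is where the failure actually surfaces, at runtime.
try:
    greet()
    caller_broke = False
except TypeError:
    caller_broke = True

print("caller broke:", caller_broke)  # → caller broke: True
```

A dependency-aware review would flag `greet` as impacted even though its file is untouched, which is exactly what a file-scoped diff cannot do.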

Beyond these specific architectural challenges, it’s no surprise that most developers’ limited cross-language and cross-repo understanding leads to siloed reviews, not for lack of effort but because of the sheer breadth required. Interdependencies and ripple effects are missed, and issues are introduced later in the development cycle or, worse, in production. Developers simply don’t have the tools to fully understand the ripple effects of their changes.

So why, especially with deep contextual telemetry available and AI analysis tooling emerging, have we settled for reviewing code this way? We don’t have to accept that our code review process can’t help solve these challenges.

Designing a code review process for modern development

There is a different way to think about our code review process, from both a tooling and a process perspective. The architecture and experience of code reviews deserve an overhaul that meets the workflow of the developer, team, and organization where they are in their complex environments.

In practice, that looks like:

  1. Contextual Review: Instead of siloing reviews by repo or file, we need a system that provides context across the entire codebase. Whether it’s multi-repo, multi-language, or microservice-based, your review tool should help you understand the big picture and how changes in one place affect others. This is where Baz organizes and highlights reviews by topic, with impacted-API highlights and failed CI checks. [link to specific changelog or tweet]
  2. Cross-Repo and Cross-Language Awareness: Modern development isn’t confined to a single repo or language. Code review tools need to keep up by offering insights that go beyond the narrow scope of a PR, like Baz’s Breaking Change Detection for APIs. [link to specific changelog or tweet]
  3. AI and Observability as a Multiplier: AI-powered tools can help fill in the gaps, providing deep analysis and insight into how changes affect the broader system. Imagine a review tool that doesn’t just show diffs but highlights dependencies, identifies potential conflicts, and even suggests fixes, all without leaving your workflow. This is where Baz leverages AST diffing. [link to specific changelog or tweet]
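
As a rough illustration of what AST-level diffing buys over a text diff (a toy sketch using Python’s built-in `ast` module, not Baz’s actual implementation): the “after” version below reformats the function body, which a line diff would flag as a change, while the real semantic event is that a parameter was dropped from the public signature.

```python
import ast

# Toy AST comparison: extract function signatures from two versions of a
# source file and report functions whose argument lists changed.

before = '''
def pay(amount, currency="USD"):
    return amount
'''

# Body merely reformatted (noise to a line diff), but `currency` is gone
# from the signature -- the change that actually breaks callers.
after = '''
def pay(amount):
    return  amount
'''

def signatures(src):
    """Map each function name to its argument names via the AST."""
    tree = ast.parse(src)
    return {
        node.name: [a.arg for a in node.args.args]
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

old, new = signatures(before), signatures(after)
breaking = {
    name: (old[name], new.get(name))
    for name in old
    if old[name] != new.get(name)
}
print(breaking)  # {'pay': (['amount', 'currency'], ['amount'])}
```

A real tool would compare far more node types (return expressions, decorators, type annotations), but even this sketch ignores formatting noise and isolates the breaking signature change.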

The goal should be to empower developers to focus on what really matters—high-quality, reliable code. 

Unbreakable bond, unbreakable code

At Baz, we believe strong, reliable code is built with context. Our AI-powered platform is designed to streamline code reviews by focusing on what really matters: identifying critical changes, analyzing dependencies, and offering clear explanations. No noise, fewer blockers, and no more fragmented tools. With Baz, your developers can focus on writing beautiful, stable code that ships faster and works better.

Want to learn more about how Baz can elevate your code reviews? Check out the latest in the changelog here and join the waitlist here → https://baz.co/