
The Next Evolution in Code Review: Tracing-Driven Insights

Learn how integrating telemetry, OpenTelemetry, and AI-driven insights can enhance code reviews and optimize development workflows. Explore how Baz is leading this evolution in software development practices.

Blog
11.27.2024
Guy Eisenkot, CEO & Co-Founder
3 min

Most SaaS platforms today are characterized by multiple siloed teams working with a dizzying plethora of microservices. In these environments, comprehensive observability is required for teams to effectively predict, prevent, and respond to incidents. Telemetry, and specifically tracing, delivers this level of observability, so it’s no surprise it has become standard practice for SaaS platform developers.

Historically, different teams have used different tech stacks, which in turn yielded different logging formats. As a result, application monitoring was domain-specific, which led to several issues:

  • Varied formats make it challenging to correlate metrics and logs
  • Siloed monitoring data makes it hard to get an overview of application performance
  • Different teams tackling similar monitoring problems independently leads to duplication of effort and wasted resources

With frameworks that enable tracing, like OpenTelemetry, teams can gain consistent insights into application behavior and greatly reduce the challenges of siloed engineering practices. Organizations gain a seamless flow of data across teams and services, along with enhanced observability, solving many of the historical challenges associated with fragmentation.
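To make that concrete, here is a minimal sketch of what consistent instrumentation looks like with the OpenTelemetry Python SDK. The service name, span name, and console exporter are illustrative assumptions; a real deployment would export spans to a collector shared across teams.

```python
# Minimal OpenTelemetry tracing sketch (Python SDK).
# Service and span names are illustrative, not from the article.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Every team configures the same provider/exporter pipeline,
# so spans share one format regardless of each service's tech stack.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(order_id: str) -> None:
    # A span records timing and attributes for this unit of work;
    # spans from downstream calls are correlated into the same trace.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        ...  # call inventory, payment, shipping services here
```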

At least, that’s the idea. In practice, while tracing has revolutionized application monitoring, it remains significantly underutilized in coding and code review practices, aggravated by a lack of tooling to exploit it effectively.

We believe there are two key areas where we’ll see observability become a multiplier for coding: first with telemetry-informed pre-production code composition and second in the code review process itself.

Tracing to drive refactoring and code optimization in pre-production

We’ve established that telemetry data offers real-time visibility into how code behaves in production environments. Tracing the flow of requests and events through an application gives developers actionable information on how components interact at runtime, and this kind of live debugging is gaining traction with developers.

But its value extends beyond production monitoring: incorporating this data earlier in the development lifecycle can help teams optimize code and refactor inefficiencies before deployment.

For instance, traces can illuminate how specific functions or services behave under real-world conditions, providing engineers with critical insights for optimization. This could include identifying performance bottlenecks, resource-heavy operations, or areas of the codebase that frequently contribute to system downtime.
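As a rough illustration, the sketch below mines exported span durations for operations that exceed a latency budget; the span records and the 500 ms budget are hypothetical stand-ins for what a real trace backend query would return.

```python
# Sketch: flag slow operations from exported span data.
# The span records and 500 ms budget are illustrative assumptions.
from collections import defaultdict
from statistics import quantiles

# Each record: (span name, duration in milliseconds), e.g. pulled
# from a trace backend for a staging or canary environment.
spans = [
    ("db.query.orders", 612.0),
    ("db.query.orders", 580.4),
    ("cache.get.user", 3.2),
    ("render.invoice", 122.9),
]

durations = defaultdict(list)
for name, ms in spans:
    durations[name].append(ms)

# Surface operations whose p95 latency exceeds the budget,
# giving the team a concrete refactoring target before deployment.
LATENCY_BUDGET_MS = 500.0
for name, values in durations.items():
    p95 = quantiles(values, n=20)[-1] if len(values) > 1 else values[0]
    if p95 > LATENCY_BUDGET_MS:
        print(f"{name}: p95 {p95:.1f} ms exceeds {LATENCY_BUDGET_MS} ms budget")
```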

By integrating tracing data into pre-production processes, development teams can:

  • Refine technical design decisions: Telemetry provides insight into usage patterns, allowing developers to baseline usage across services and align coding conventions accordingly.
  • Optimize for maintainability: By understanding how code interacts with other parts of the system, teams can ensure long-term maintainability.
  • Drive better refactoring decisions: Instead of refactoring code based on subjective factors, telemetry provides data-driven guidance, helping teams make more informed choices.

Ultimately, this tracing-informed approach empowers teams to deliver higher-quality code before it ever reaches production.

Enhancing code reviews with tracing for better software quality

Now that we have a perspective on how observability can improve pre-prod code development, let’s talk about what that looks like in practice. 

 Code reviews are a cornerstone of any high-functioning development team, focusing on code quality, adherence to style guidelines, and correctness. However, traditional code review practices often overlook critical data about how code changes affect system performance in a live environment. 

This is where tracing can provide an additional layer of insight, showing developers not only whether code "looks" correct but also how it behaves in real-world scenarios. Reviewers can assess how new code interacts with other components, how it affects request flow, and whether it introduces inefficiencies.

Tracing-informed code reviews offer several advantages, especially in distributed systems where performance and reliability are critical. From correlating code changes with tracing data for visibility into runtime behavior to making objective, data-driven decisions, developers and reviewers are empowered to ship better-quality code.

While the potential benefits are clear, the challenge lies in effectively integrating this into the review process without adding friction to existing developer workflows. Each service and component logs its own metrics, events, and traces, producing vast amounts of data. Without proper filtering, it becomes difficult to distinguish meaningful insights (signals) from routine, non-critical information (noise).
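To make the signal-versus-noise problem concrete, here is a naive sketch of the filtering a reviewer would otherwise do by hand: compare span latencies for a candidate change against a production baseline and surface only meaningful regressions. The span names, numbers, and 20% threshold are illustrative assumptions.

```python
# Sketch: separate signal from noise by comparing span latency for a
# change against a production baseline. Names and numbers are illustrative.

def regressions(baseline: dict[str, float], candidate: dict[str, float],
                threshold: float = 0.20) -> dict[str, float]:
    """Return spans whose mean latency regressed by more than `threshold`."""
    flagged = {}
    for span, base_ms in baseline.items():
        cand_ms = candidate.get(span)
        if cand_ms is None:
            continue  # span not exercised by the change: skip as noise
        change = (cand_ms - base_ms) / base_ms
        if change > threshold:
            flagged[span] = change
    return flagged

baseline = {"auth.login": 42.0, "orders.create": 180.0, "search.query": 95.0}
candidate = {"auth.login": 44.0, "orders.create": 260.0, "search.query": 96.0}

for span, change in regressions(baseline, candidate).items():
    print(f"review flag: {span} latency up {change:.0%} vs baseline")
# -> review flag: orders.create latency up 44% vs baseline
```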

This is where Baz steps in.

Identifying critical signals in the noise: AI-driven code reviews

Baz leverages AI, observability, and an optimized development experience to address the lack of actionable tracing data during code reviews. 

We use tracing data to illuminate specific waypoints in the code, such as annotated functions and unique components. Abstract Syntax Tree (AST) analysis allows us to pinpoint these elements, giving a clearer understanding of how changes will impact the system.

By connecting tracing data with AST analysis, we can gain a comprehensive view of the application across commits, releases, artifacts, repositories, and even across different programming languages. This holistic approach enables us to see the impact of each individual piece of code.
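As a rough sketch of the general idea (not Baz's actual implementation), the example below uses Python's ast module to locate traced functions in a changed file and join them to span statistics; the @traced decorator and the stats dictionary are hypothetical.

```python
# Sketch: use the ast module to find functions marked for tracing in a
# changed file, then join them to span statistics. The @traced decorator
# and the span_stats dictionary are illustrative assumptions.
import ast

changed_source = '''
@traced
def create_order(cart):
    ...

def format_receipt(order):
    ...
'''

# Hypothetical runtime statistics keyed by function name.
span_stats = {"create_order": {"p95_ms": 240.0, "error_rate": 0.02}}

tree = ast.parse(changed_source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        decorators = {d.id for d in node.decorator_list if isinstance(d, ast.Name)}
        if "traced" in decorators and node.name in span_stats:
            stats = span_stats[node.name]
            print(f"{node.name}: p95 {stats['p95_ms']} ms, "
                  f"error rate {stats['error_rate']:.0%}")
# -> create_order: p95 240.0 ms, error rate 2%
```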

With Baz, reviewers can focus on what really matters: how the application code runs in production. By identifying user interactions and how they are affected by code changes, tracing empowers reviewers to comprehend the context and potential impact, or "blast radius," of the code under review. This leads to more informed decision-making and, ultimately, higher-quality code delivery.

The future of code reviews is data-driven

As development practices evolve, the integration of telemetry and AI into workflows will become increasingly essential. Tracing has already transformed application monitoring, and its role in development—particularly in code reviews—is poised to grow.

By combining tracing data with AI-driven insights, Baz enables development teams to ship higher-quality code more efficiently. As teams adopt tracing-informed code reviews, they’ll benefit from smarter, more effective reviews that lead to better software outcomes.

Want to learn more about how Baz can elevate your code reviews? Check out the latest in the changelog and join the waitlist at https://baz.co/