How to Effectively Measure Software Productivity to Drive Results

Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it’s not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers. Yet that status quo is no longer sustainable. Now that most companies are becoming (to one degree or another) software companies, regardless of industry, leaders need to know they are deploying their most valuable talent as successfully as possible.

This article is a collaborative effort by Chandra Gnanasambandam, Martin Harrysson, Alharith Hussin, Jason Keovichit, and Shivam Srivastava, representing views from McKinsey’s Digital and Technology, Media & Telecommunications Practices.

There is no denying that measuring developer productivity is difficult. Other functions can be measured reasonably well, some even with just a single metric, whereas in software development the link between inputs and outputs is considerably less clear. Software development is also highly collaborative, complex, and creative work and requires different metrics for different levels (such as systems, teams, and individuals). What’s more, even if there is genuine commitment to track productivity properly, traditional metrics require systems and software that are set up to allow more nuanced and comprehensive measurement. For some standard metrics, entire tech stacks and development pipelines need to be reconfigured to enable tracking, and putting in place the necessary instruments and tools to yield meaningful insights can require significant, long-term investment. Furthermore, the landscape of software development is changing quickly as generative AI tools such as Copilot X and ChatGPT have the potential to enable developers to complete tasks up to two times faster.

To help overcome these challenges and make this critical task more feasible, we developed an approach to measuring software developer productivity that is easier to deploy with surveys or existing data (such as in backlog management tools). In so doing, we built on the foundation of existing productivity metrics that industry leaders have developed over the years, with an eye toward revealing opportunities for performance improvements.

This new approach has been implemented at nearly 20 tech, finance, and pharmaceutical companies, and the initial results are promising.

With access to richer productivity data and insights, leaders can begin to answer pressing questions about the software engineering talent they fought so hard to attract and retain.

To use a sufficiently nuanced system of measuring developer productivity, it’s essential to understand the three types of metrics that need to be tracked: those at the system level, the team level, and the individual level. Unlike a function such as sales, where a system-level metric of dollars earned or deals closed could be used to measure the work of both teams and individuals, software development is collaborative in a distinctive way that requires different lenses. For instance, while deployment frequency is a perfectly good metric to assess systems or teams, it depends on all team members doing their respective tasks and is, therefore, not a useful way to track individual performance.

Another critical dimension to recognize is what the various metrics do and do not tell you. For example, measuring deployment frequency or lead time for changes can give you a clear view of certain outcomes, but not of whether an engineering organization is optimized. And while metrics such as story points completed or interruptions can help determine optimization, they require more investigation to identify improvements that might be beneficial.

In building our set of metrics, we looked to expand on two sets of metrics already developed by the software industry. The first is DORA metrics, named for Google’s DevOps Research and Assessment team. These are the closest thing the tech sector has to a standard, and they are good at measuring outcomes. When a DORA metric returns a subpar outcome, it is a signal to investigate what has gone wrong, which can often involve protracted sleuthing. For example, if a metric such as deployment frequency increases or decreases, there can be multiple causes, and determining what they are and how to resolve them is often not straightforward.
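As an illustration, two of the four DORA metrics can be computed directly from deployment records. The record format and values in the sketch below are hypothetical; a real pipeline would pull these timestamps from its CI/CD system.

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for one service.
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 9)),
    (datetime(2024, 1, 3, 9), datetime(2024, 1, 3, 18)),
    (datetime(2024, 1, 8, 9), datetime(2024, 1, 10, 9)),
]

def deployment_frequency(deploys, window_days=7):
    """Average deployments per window over the observed span."""
    times = sorted(d for _, d in deploys)
    span_days = max((times[-1] - times[0]).days, 1)
    return len(deploys) / (span_days / window_days)

def lead_time_for_changes(deploys):
    """Median hours from commit to running in production."""
    return median((d - c).total_seconds() / 3600 for c, d in deploys)
```

With the sample data above, the team deploys roughly 2.6 times per week with a median lead time of 24 hours; what those numbers do not tell you is why, which is where the investigation begins.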

Measuring productivity is critical for any business, but it can be particularly challenging when it comes to software development. Unlike factory production, where you can easily count units produced, there are no obvious or easy productivity metrics for software developers. However, that doesn’t mean measuring productivity is impossible or not worthwhile. By tracking the right metrics, software teams and organizations can gain valuable insights into how efficiently and effectively they are working. This allows them to identify areas for improvement, prove the value delivered by engineering, and optimize how developer time and talent are utilized.

So how exactly should you go about measuring productivity for software teams? Here are some best practices and key metrics to consider:

Adopt a Multidimensional Approach

Given the complexity of software development, using any single metric in isolation is usually insufficient and potentially misleading. The most effective way to gauge productivity is to look at a combination of metrics across different dimensions like quality, velocity, efficiency, outcome, and capability. This provides a more holistic assessment.

Some examples of metrics to track across these dimensions include:

  • Quality: defects, technical debt, customer satisfaction
  • Velocity: story points completed, lead time, release frequency
  • Efficiency: inner loop time, outer loop time, context switches
  • Outcome: business value delivered, productivity index
  • Capability: skills matrix, training completion, certification levels

By regularly reviewing a range of metrics together, you gain a clearer picture of what is working well versus potential problem areas that need attention.
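One lightweight way to review these dimensions together is a per-sprint snapshot that groups a few metrics from each. The sketch below uses illustrative field names and values, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class SprintMetrics:
    # Quality
    defects_found: int
    customer_satisfaction: float  # e.g., CSAT on a 1-5 scale
    # Velocity
    story_points_completed: int
    lead_time_days: float
    # Efficiency
    inner_loop_hours: float
    outer_loop_hours: float
    # Outcome
    business_value_delivered: float  # in whatever unit the org uses

    def inner_loop_share(self) -> float:
        """Fraction of tracked hours spent on direct product-building work."""
        total = self.inner_loop_hours + self.outer_loop_hours
        return self.inner_loop_hours / total if total else 0.0

sprint = SprintMetrics(
    defects_found=4, customer_satisfaction=4.2,
    story_points_completed=34, lead_time_days=2.5,
    inner_loop_hours=120.0, outer_loop_hours=60.0,
    business_value_delivered=1.0,
)
```

Reviewing one such snapshot per sprint, side by side, makes it harder for a single flattering metric to hide a problem elsewhere.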

Leverage Automated Tracking Where Possible

Manually gathering productivity data across multiple metrics can be time-consuming and difficult to scale, especially for large engineering organizations. That’s why it’s recommended to leverage automated tracking through existing software development and collaboration tools as much as possible.

Many common dev tools like JIRA, GitHub, CircleCI, and Jenkins have built-in analytics, dashboards, and reporting capabilities that can be configured to automatically track key productivity metrics. For example, you can pull velocity data from a backlog management system or build success rates from a CI/CD pipeline.

This allows engineering leaders to monitor productivity metrics continuously with minimal overhead. It also provides objective data that removes bias or inaccuracy that can come with manual tracking.
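As a sketch of what automated tracking yields once the data is exported, the snippet below computes a build success rate from CI records. The field names are illustrative, not any specific tool’s API schema.

```python
# Hypothetical CI build records, as an exporter might return them.
builds = [
    {"id": 101, "status": "success"},
    {"id": 102, "status": "failed"},
    {"id": 103, "status": "success"},
    {"id": 104, "status": "success"},
]

def build_success_rate(records):
    """Fraction of builds that succeeded; 0.0 when there is no data."""
    if not records:
        return 0.0
    passed = sum(1 for b in records if b["status"] == "success")
    return passed / len(records)
```

Because the computation runs off data the tools already collect, it can be refreshed continuously without anyone filling in a spreadsheet.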

Normalize for Team Differences

It’s important to recognize that productivity benchmarks will vary significantly based on factors like:

  • Team size
  • Type of project
  • Programming languages and tech stack
  • Developer experience levels
  • Use of modern dev practices like CI/CD

You can’t realistically expect a newly formed team working on a complex, legacy system to achieve the same velocity as a seasoned team using modern tech on greenfield development.

That’s why productivity metrics should be normalized and evaluated in the context of each team’s unique situation. Comparing teams against their own historical baselines rather than arbitrary external benchmarks leads to more meaningful insights.
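A simple way to implement that baseline comparison is a z-score of the current sprint against the team’s own history, so each team is judged on its own terms. The velocities below are illustrative.

```python
from statistics import mean, stdev

def velocity_vs_baseline(history, current):
    """Z-score of the current sprint's velocity against this team's history."""
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma else 0.0

# One team's past sprint velocities in story points (illustrative).
history = [30, 32, 28, 31, 29]
z = velocity_vs_baseline(history, 34)  # well above this team's own norm
```

A z-score near zero means business as usual for that team, regardless of how a different team with a different codebase would score.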

Focus on Trends, Not One-Off Data Points

Software development is complex, so productivity metrics will naturally have some level of variability between sprints or iterations. You shouldn’t overreact to a single data point that seems out of the norm.

Instead, the most meaningful insights come from analyzing trends over a longer time horizon. If velocity, defect rates, or customer satisfaction metrics are consistently trending up or down over multiple sprints, that is when you need to dig deeper into what is driving the trend.

By taking a longer view, you avoid introducing unnecessary churn or changes due to normal short-term variations.

Link Metrics to Business Outcomes

Productivity metrics on their own may not mean much to non-technical stakeholders. To get organizational buy-in on improving engineering productivity, it helps to link metrics back to tangible business outcomes.

For example, show how improved developer velocity and quality lead to:

  • Faster time-to-market for new features
  • Higher customer retention and satisfaction
  • Increased revenue and market share gains

This demonstrates the concrete business value generated by more productive software teams. It helps justify investments to further optimize the development process.

Foster Transparency Between Teams

Data-driven insights are most powerful when shared openly across the organization. However, engineering teams don’t always have visibility into metrics outside their immediate scope.

Fostering transparency and information sharing between teams helps spread productivity best practices. For instance, if one team has significantly reduced defects or technical debt, understanding what changes they made can help other teams replicate their success.

Productivity dashboards that aggregate key metrics across all engineering teams are extremely valuable for driving organization-wide improvements.

Provide Context for Productivity Expectations

It’s demotivating for developers when leaders have unrealistic expectations around productivity metrics. Setting arbitrary velocity or story point targets often does more harm than good.

Provide proper context by explaining how productivity metrics are being used to identify areas for improvement – not to punish teams or impose unreasonable demands. Explain why specific metrics are important and how they ultimately help engineering be more impactful.

With proper communication and context-setting, productivity metrics become a collaborative tool rather than a looming threat.

Incorporate Qualitative Feedback

While quantitative productivity metrics are invaluable, also consider qualitative feedback through developer surveys, interviews, and retrospectives.

This gives insight into impediments or distractions that may not be captured in the data. For example, a team could be hitting all their velocity targets but qualitative feedback reveals low morale due to excessive overtime.

Combining quantitative metrics and qualitative feedback paints a more complete picture so you know where to focus improvement efforts.
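A minimal sketch of combining the two signals, with made-up thresholds, might flag exactly the case described above: a team hitting its targets while reporting low morale.

```python
def team_health(velocity_attainment, morale_score):
    """velocity_attainment: fraction of velocity target hit (1.0 = on target);
    morale_score: average survey score on a 1-5 scale. Thresholds are illustrative."""
    flags = []
    if velocity_attainment < 0.8:
        flags.append("velocity below target")
    if morale_score < 3.0:
        flags.append("low morale despite delivery")
    return flags or ["healthy"]
```

A purely quantitative dashboard would show this team green; the survey signal is what surfaces the overtime problem before it turns into attrition.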

Reevaluate and Iterate Metrics Regularly

Don’t fall into the trap of tracking the same productivity metrics indefinitely without reassessing their value. As priorities shift and software delivery evolves, what made sense to measure a year or two ago may no longer provide meaningful or actionable insights.

Schedule regular reviews of your productivity metrics where you can candidly discuss as a team:

  • Are we tracking the right metrics?
  • Do any current metrics need to be removed or modified?
  • Are there better metrics we should start tracking?

Continuously reevaluating ensures your productivity tracking stays relevant and impactful.

There is no perfect set of software productivity metrics that applies universally. The keys are adopting a diverse set of metrics, automating data collection where possible, and continuously revisiting your approach. With an iterative, collaborative process, engineering teams gain data-driven insights to help improve their effectiveness over time. This becomes a positive, value-adding exercise rather than an empty reporting requirement.

While measuring software productivity takes thought and effort, the long-term payoff for engineering organizations and businesses makes it time well spent. Utilizing metrics effectively unlocks gains in quality, velocity, efficiency, and capability that ultimately drive better business outcomes.



The second set of industry-developed measurements is SPACE metrics (satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow), which GitHub and Microsoft Research developed to augment DORA metrics. By adopting an individual lens, particularly around developer well-being, SPACE metrics are great at clarifying whether an engineering organization is optimized. For example, an increase in interruptions that developers experience indicates a need for optimization.
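A simple SPACE-style signal along these lines is a per-developer count of interruption events. The event categories below are assumptions for illustration, not part of the SPACE framework itself; in practice the data might come from surveys or calendar exports.

```python
from collections import Counter

# Illustrative interruption log: (developer, event_type) pairs.
events = [
    ("ana", "meeting"), ("ana", "pager"), ("ana", "meeting"),
    ("ben", "meeting"),
]

def interruptions_per_dev(log):
    """Count interruption events per developer over the logged period."""
    return Counter(dev for dev, _ in log)

counts = interruptions_per_dev(events)
```

A rising count for one developer or one team is the kind of well-being signal DORA metrics alone would miss.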

On top of these already powerful metrics, our approach seeks to identify what can be done to improve how products are delivered and what those improvements are worth, without the need for heavy instrumentation. Complementing DORA and SPACE metrics with opportunity-focused metrics can create an end-to-end view of software developer productivity (Exhibit 1).

These opportunity-focused productivity metrics use a few different lenses to generate a nuanced view of the complex range of activities involved with software product development.

Inner/outer loop time spent. To identify specific areas for improvement, it’s helpful to think of the activities involved in software development as being arranged in two loops (Exhibit 2). An inner loop comprises activities directly related to creating the product: coding, building, and unit testing. An outer loop comprises other tasks developers must do to push their code to production: integration, integration testing, releasing, and deployment. From both a productivity and personal-experience standpoint, maximizing the amount of time developers spend in the inner loop is desirable: building products directly generates value and is what most developers are excited to do. Outer-loop activities are seen by most developers as necessary but generally unsatisfying chores. Putting time into better tooling and automation for the outer loop allows developers to spend more time on inner-loop activities.

Top tech companies aim for developers to spend up to 70 percent of their time doing inner-loop activities. For example, one company that had previously completed a successful agile transformation learned that its developers, instead of coding, were spending too much time on low-value-added tasks such as provisioning infrastructure, running manual unit tests, and managing test data. Armed with that insight, it launched a series of new tools and automation projects to help with those tasks across the software development life cycle.
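Given categorized time-tracking data, the inner-loop share is straightforward to compute and compare against that roughly 70 percent aim. The activity names and hours below are illustrative, following the inner/outer split described above.

```python
# Activity categories per the inner/outer loop split described above.
INNER = {"coding", "building", "unit_testing"}
OUTER = {"integration", "integration_testing", "releasing", "deployment"}

# Illustrative hours logged by one team in a sprint.
entries = {
    "coding": 18.0, "building": 3.0, "unit_testing": 4.0,
    "integration": 5.0, "integration_testing": 4.0,
    "releasing": 2.0, "deployment": 4.0,
}

def inner_loop_share(hours):
    """Fraction of tracked time spent on inner-loop activities."""
    inner = sum(v for k, v in hours.items() if k in INNER)
    total = sum(hours.values())
    return inner / total if total else 0.0

share = inner_loop_share(entries)  # 0.625 here, short of the ~70 percent aim
```

A persistently low share points at exactly the kind of outer-loop toil (provisioning, manual testing, test-data wrangling) that tooling and automation investments can remove.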


