The Revenue Bridge: Avoiding the Common Pitfalls

Now that we’ve covered the mechanics of the revenue bridge, I want to talk about where things usually go sideways. As with almost every analysis I’ve ever done, the biggest hurdle is data quality.

In my current role, it’s not unusual to have four or five million transactions running through a bridge. When you’re dealing with that kind of scale, quantity is easy—but quality is everything.

1. Master Your Data Discovery

You need to identify the "noise" in your data. If your dataset is cluttered with samples, accounting irregularities, rebates, zero-price or zero-quantity transactions, or odd engineering fees and shipping costs, you need a plan.

If you, as a pricing team, don’t have direct control over those elements, my advice is to remove them. Just be transparent about it. State clearly that you’ve excluded them so the organization knows it is looking at "clean" transactions—the revenue we actually control and the results of our pricing strategy.

This stage is time-consuming. You have to learn which columns matter and which ones are "dirty." It’s tempting to skip this step, but building a bridge on uncleaned data only gives you a blurry picture.
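As a concrete sketch, the filtering step might look like this in Python. The transaction fields and noise rules here are illustrative assumptions, not a fixed recipe:

```python
# Hypothetical transactions; field names and values are assumptions for illustration.
transactions = [
    {"type": "sale",   "quantity": 100, "unit_price": 4.50},
    {"type": "sample", "quantity": 1,   "unit_price": 0.00},
    {"type": "rebate", "quantity": 0,   "unit_price": -1.20},
    {"type": "sale",   "quantity": 250, "unit_price": 4.75},
    {"type": "sale",   "quantity": 80,  "unit_price": 0.00},
]

def is_noise(tx):
    """Samples, rebates, and zero-price or zero-quantity rows are treated as 'noise'."""
    return (
        tx["type"] in {"sample", "rebate"}
        or tx["quantity"] == 0
        or tx["unit_price"] == 0
    )

clean = [tx for tx in transactions if not is_noise(tx)]
excluded = [tx for tx in transactions if is_noise(tx)]  # keep these for transparency

print(len(clean), len(excluded))  # → 2 3
```

Keeping the excluded rows in their own list (rather than silently dropping them) is what makes the transparency point above practical: you can always show stakeholders exactly what was removed and why.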

2. Define Your Columns (And Be Specific)

Confusion is the enemy of a good analysis. You must understand exactly what each column represents so you can explain it to others.

Take the Price Effect, for example. When you aggregate price increases and decreases, what does that actually mean? Usually, it refers only to recurring business—products or services that have prices in both the base and current periods. It shows how our price changes contributed to overall revenue. If you aren't crystal clear on that definition, you’re going to face a lot of confusion from your stakeholders.
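To make that definition concrete, here is a minimal Python sketch. The customer-part keys and the (new price minus old price) times current quantity convention are assumptions, since conventions vary between teams:

```python
# Minimal Price Effect sketch; data and the calculation convention are assumptions.
base = {   # customer-part → (price, quantity) in the base period
    "A-100": (10.0, 500),
    "A-200": (8.0, 300),
    "B-300": (5.0, 200),   # not present in the current period (lost business)
}
current = {
    "A-100": (10.5, 480),
    "A-200": (8.0, 350),
    "C-400": (6.0, 100),   # new business, excluded from the Price Effect
}

# Recurring = priced in BOTH periods; only those items carry a price effect.
recurring = base.keys() & current.keys()
price_effect = sum(
    (current[k][0] - base[k][0]) * current[k][1] for k in recurring
)
print(round(price_effect, 2))  # → 240.0
```

Note that only A-100 contributes here: A-200’s price is unchanged, B-300 is lost, and C-400 is new. That is exactly the "recurring business only" definition made explicit in code.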

The same applies to Recurring Volume.

  • Does it include negative quantities?
  • Are there year-end credits from December hitting the data in January?
  • How are you handling "non-repeating" or new business volume?

Every bucket has its own unique quirks. Be diligent with your formulas to ensure they capture exactly what you intended.
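One way to keep those definitions honest is to route every customer-part into exactly one bucket. The bucket names and routing rules below are assumptions for illustration:

```python
# Sketch of bucket routing; names and rules are assumptions, not a standard.
def bucket(key, base, current):
    """Assign each customer-part to exactly one bridge bucket."""
    in_base, in_current = key in base, key in current
    if in_base and in_current:
        return "recurring"
    if in_current:
        return "new"
    return "lost"

base = {"A-100": 500, "B-300": 200}       # quantities in the base period
current = {"A-100": -20, "C-400": 100}    # note the negative quantity (a credit)

all_keys = base.keys() | current.keys()
buckets = {k: bucket(k, base, current) for k in sorted(all_keys)}
print(buckets)
# → {'A-100': 'recurring', 'B-300': 'lost', 'C-400': 'new'}
```

Notice that A-100 still lands in "recurring" even though its current quantity is negative; whether credits like that belong in recurring volume or a separate exception bucket is precisely the kind of decision you have to make explicitly.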

3. Handling the Exceptions

Samples are a classic example of a "hidden" error. Zero-price samples distort your average selling price (ASP). You need to decide: do you filter them out? Do you create a separate view? Or are you willing to let them fall into an "exception effect" bucket? There’s no single right answer, but you need to understand where and why they happen.

4. The Power of the Smallest Aggregate Level

I can’t stress this enough: measure everything at the smallest level possible—usually the customer-part or customer-service level.

When you build from the bottom up, the bridge becomes incredibly dynamic. You can see a price change we made in June and track its ripple effects:

  • Did it increase revenue?
  • Did we lose customers?
  • Is the "natural" churn exceeding what we expected?
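The bottom-up idea can be sketched as a simple roll-up: measure the effects once at the customer-part level, then aggregate to any view you need. The data and field names here are illustrative assumptions:

```python
# Effects measured at the customer-part level; values are illustrative assumptions.
line_effects = [
    {"customer": "Acme", "part": "A-100", "price_effect": 240.0, "volume_effect": -210.0},
    {"customer": "Acme", "part": "A-200", "price_effect": 0.0,   "volume_effect": 400.0},
    {"customer": "Beta", "part": "B-300", "price_effect": 55.0,  "volume_effect": -30.0},
]

def roll_up(rows, level):
    """Aggregate bottom-up effects to any higher level (customer, rep, region, ...)."""
    totals = {}
    for r in rows:
        group = totals.setdefault(r[level], {"price_effect": 0.0, "volume_effect": 0.0})
        group["price_effect"] += r["price_effect"]
        group["volume_effect"] += r["volume_effect"]
    return totals

by_customer = roll_up(line_effects, "customer")
print(by_customer["Acme"])
# → {'price_effect': 240.0, 'volume_effect': 190.0}
```

Because the granular rows are the single source of truth, the same `roll_up` call answers the June-price-change questions at any level: pass `"customer"` to see who churned, or a rep or region field to see the same effects from a different angle.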

Interestingly, a decrease in recurring revenue isn't always bad. If we’re stripping out low-margin "extra" work and making the same amount of profit with less effort, that’s a win in my book. It’s essentially a break-even analysis in action.

5. Driving Actionable Insight

Because the bridge is built on granular data, you can slice it however you need. You can look at a specific product portfolio, a sales rep, or a VP’s entire territory.

When you look at it by sales rep, for example, you can see their true impact: Are they capturing price? Are they successfully cross-selling? This level of detail moves the revenue bridge from a "report" to a "roadmap" that gives clear direction to sales, product managers, and the entire pricing team.
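As a sketch of that per-rep view, here is a hypothetical "price capture" ratio computed from granular bridge rows. The rep names, the fields, and the metric itself are assumptions for illustration:

```python
# Granular bridge rows tagged by sales rep; data and metric are assumptions.
rows = [
    {"rep": "Dana", "price_effect": 120.0, "target_price_effect": 150.0},
    {"rep": "Dana", "price_effect": 80.0,  "target_price_effect": 100.0},
    {"rep": "Eli",  "price_effect": 10.0,  "target_price_effect": 90.0},
]

def price_capture(rows, rep):
    """Share of the targeted price increase a rep actually realized."""
    mine = [r for r in rows if r["rep"] == rep]
    return sum(r["price_effect"] for r in mine) / sum(r["target_price_effect"] for r in mine)

print(round(price_capture(rows, "Dana"), 2))  # → 0.8
print(round(price_capture(rows, "Eli"), 2))   # → 0.11
```

A slice like this is what turns the bridge from a report into a roadmap: it points directly at which conversations the sales leadership needs to have.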

The Bottom Line

The revenue bridge is a powerful tool, but its power depends on your diligence. By understanding the common pitfalls—and exactly what your data represents—you can communicate with the organization with much more confidence.

