Making fairer comparisons in reward determination

Why better partner comparisons are the key to fairer outcomes

March 23, 2026

Ray D'Cruz, CEO, Performance Leader

Fairness in partner reward is a foundation for trust in the partnership. When partners believe the system is fair, cohesion builds. When they perceive favouritism or inconsistency, cohesion is at risk.

The challenge is that fair reward determination requires effective comparison between partners. Dividing profit is a zero-sum game. To determine whether Partner A's reward is appropriate, RemCom members need to understand how it relates to Partners B, C, and D in similar circumstances.

Yet most RemCom processes make clear comparisons remarkably difficult.

The comparison problem

In a typical RemCom setting, partners are discussed in order. In larger firms, not every partner receives the same attention and time. One approach we've seen in global firms is to allocate partners into groups before the discussion begins. If a partner receives the same rating from two reviewers, say, a regional head and a practice head, they will likely receive less discussion time; where the reviewers disagree, more time is allocated. This acts as a first cut: partners in the quicker group can still be moved up for deeper analysis when something emerges, but the grouping is a starting point designed to make the process more efficient.
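The triage heuristic described above can be sketched in a few lines. This is a hypothetical illustration, not Performance Leader's implementation; the partner names, the two-reviewer structure, and the numeric ratings are all assumptions made for the example.

```python
def triage(partners):
    """Split partners into a quick-review group (reviewers agree)
    and a deep-dive group (reviewers disagree)."""
    quick, deep = [], []
    for name, regional_rating, practice_rating in partners:
        if regional_rating == practice_rating:
            quick.append(name)   # reviewers agree: shorter discussion slot
        else:
            deep.append(name)    # reviewers disagree: allocate more time
    return quick, deep

# Illustrative inputs: (partner, regional head rating, practice head rating)
partners = [
    ("Partner A", 4, 4),
    ("Partner B", 3, 5),
    ("Partner C", 2, 2),
]
quick, deep = triage(partners)
print(quick)  # ['Partner A', 'Partner C']
print(deep)   # ['Partner B']
```

As the article notes, the grouping is only a starting point: a partner in the quick group can still be promoted to the deep-dive group during discussion.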

Even with these kinds of efficiencies, the more partners who are subject to the process, the harder it becomes to maintain consistency. By the time the RemCom reaches partner number 50, its recollection of partner number three is hazy at best. Some data points are referenced while others are ignored. The quantitative data that fits neatly in a spreadsheet becomes the default anchor, but numbers have context too.

Qualitative inputs like client feedback, peer feedback, self-assessments, or reviewer assessments are even more problematic. They vary hugely in structure, length, and quality. A two-paragraph self-assessment sits alongside a two-page one. A glowing client reference is weighed against a measured, nuanced reviewer comment. Comparing like with like feels impossible.

The result: anchoring and a range of other cognitive biases take hold. Research into RemCom decision-making identifies at least eight common biases, including availability bias (favouring information that comes to mind quickly, such as easily understood financial metrics over harder-to-interpret qualitative feedback), recency bias (giving disproportionate weight to recent interactions), and confirmation bias (seeking out information that reinforces existing views about a partner). These biases are not signs of bad intent; they are natural consequences of asking people to process large volumes of inconsistent information under time pressure.

When time becomes a proxy for fairness

When I hear about the sheer length of time it takes RemComs to make decisions, I sometimes wonder whether time spent becomes a proxy for fairness: the RemCom must have done a good job because they were locked away in a room for three days. That feels like losing two battles: fairness hasn't necessarily improved, and an enormous amount of leadership time has been consumed.

To be clear, we still see a fundamental need for people to make these decisions. The goal is not to remove human judgement, but to support it with better information, presented more consistently.

How technology changes the equation

Technology now offers a meaningful shift. AI-powered tools can summarise lengthy feedback into consistent structures, for example, aligned to a contribution framework or decision-making criteria. Qualitative data can be standardised: highlighting two key strengths and two areas for improvement, or condensing a lengthy self-assessment into a concise, structured summary.

Smart filters and comparison views enable RemCom members to select cohorts or groups and examine two, three, or four partners side-by-side, with both qualitative and quantitative data presented in a consistent format. Quantitative data can be colour-coded and shown with percentage spreads to make relative performance immediately visible. AI-generated summaries ensure that the qualitative story is told at a comparable level of detail for every partner.
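One way to make relative performance "immediately visible", as described above, is to express each partner's metric as a percentage spread against the cohort average. This sketch is an assumption about how such a view might be computed; the figures and the function name are illustrative, not drawn from any actual product.

```python
def spread_vs_cohort(values):
    """Express each value as a percentage deviation from the cohort mean,
    so relative position within the comparison group is obvious at a glance."""
    mean = sum(values) / len(values)
    return [round((v - mean) / mean * 100, 1) for v in values]

# Illustrative billings for four partners in a cohort (thousands)
fees = [1200, 950, 1100, 750]
print(spread_vs_cohort(fees))  # [20.0, -5.0, 10.0, -25.0]
```

In a real comparison view these spreads would drive colour-coding (for example, shading values above and below the cohort mean differently) alongside the standardised qualitative summaries.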

The result is that comparisons become easier to make. Rationales are easier to agree, feedback is easier to provide, and challenges for improvement are easier to set out, all in relative terms to the cohort.

Fairer processes build trust

Partners may disagree with specific outcomes. That is an inevitable feature of dividing a finite profit pool. But when they understand and believe in the process, when they can see that comparisons were made thoughtfully and consistently, cohesion is enhanced.

And this brings us back to the zero-sum game. At one level, dividing profits fits that description: there is a profit pool and a number of partners, and the pool must be divided. But the best firms use this process to spur entrepreneurialism, innovation, and growth. They look ahead. They support high-quality goal-setting, encourage collaboration on strategic goals, and invest in future value creation. That forward-looking orientation contributes significantly to cohesion and moves the partnership beyond a zero-sum mindset.

If you're interested in discussing how your firm can improve the quality of its partner comparisons, please contact us at Performance Leader.
