Episode 105 Transcript

Ep. 105 - From Blind Spots to Breakthroughs: The Power of Benchmarking w/ Charles Gustine & Connor Budden

Banoo Behboodi: Hi, everyone. Welcome to the Professional Services Pursuit, a podcast featuring expert advice and insights on the professional services industry. Today's episode comes from one of my favorite sessions at Converge, where we made a pretty big announcement and shared powerful insights. As most of us know, the most effective leaders don't just measure success by their own metrics. They look outward, benchmarking against their peers to see what's really possible. In this session, you'll hear surprising lessons from SPI's decades of benchmarking and stories of wake-up calls that turned blind spots into breakthroughs. You'll also learn how top firms are using these insights to avoid costly missteps and lay the groundwork for bold transformation. Enjoy.

Charles Gustine: Hi, everyone. Welcome to "From Wake-Up Calls to What's Next: Benchmarking That Transforms." I'm Charles Gustine, Director of Customer and Market Insights at Kantata. I'm really excited to be back on the Converge stage today, this time alongside Connor Budden, Global Director at Service Performance Insight, to talk about the power of benchmarking.

I'm going to introduce Connor and SPI a little more in just a moment as I ask Connor to help break down what the Professional Services Maturity Model is. Next, we're going to get into various topics. What is the impact of leadership? SPI has these five pillars, but maybe leadership is the most impactful of them all. How do benchmarking wake-up calls drive that better leadership that we're all looking for?

We're actually going to be unveiling a major announcement for the first time. This has been a week of announcements and previews and unveilings, but this is a really big one, as we'll talk about what SPI Inside is and what it means for the Expertise Engine and what's next.

Really quickly before I hand off to Connor, we are going to be talking about some product features here. Safe harbor statement: what's represented here may not ultimately be exactly what happens. With that, Connor, I'm going to hand over to you to talk a bit about where SPI has been, where it's going, and what it's doing.

Connor Budden: Perfect. Thanks, Charles. I think from my side, let's start with the question: how do you really know if your services organization is world class, or if it's just running on heroic effort? Are you the FC Barcelona of services, or have you just happened to hire Cristiano Ronaldo? We've all seen it, where you have consultants duct-taping processes together, Slack is blowing up, late nights. All of that works, but it's fragile. Jumping onto the next slide for me, Charles. At SPI Research, we've been benchmarking professional services firms for 20 years. We've had more than 8,000 firms take part in that benchmark, and this year we've launched our 19th benchmark. What I'm going to share today comes from that data. This is about what separates the top 5% of performers we see from the rest. Let's dig in.

We have five pillars. As Charles mentioned, we've got leadership, which is about setting vision and direction. We've got client relationships: building the backlog, creating demand, and sales. Talent is the central pillar because without the right people, nothing else works. If you've ever had to start a project with the one consultant who just learned how to use the tool they're about to implement, you know what I'm talking about.

Service execution is delivering what it is that you've sold. For us in the professional services industry, this is where our profit is made. Finance and operations are about tracking KPIs, margins, receivables, and keeping the business on target. We like to think of these five pillars as gears in a machine. If one slips, the whole system starts to suffer a little bit. You can have strong sales, but if you don't have strong delivery, it's a disaster. You can have great delivery, but if you don't have the talent, it's impossible anyway.

On the right, you'll see we've got a bit of a gauge. This is a reminder that at SPI we view maturity as always relative. You're not just moving through the framework that we've set; it's you versus the market and where the market is today. If I give a quick analogy to explain that: Imagine you're playing golf. You shoot ten under par and you win the tournament. That's great. You come back a year later, shoot ten under par, and you're 40th. It's not that you've gotten worse; the rest of the field has gotten better. That's one of the things that we're super conscious about.

If you look here, we've got level one. This is where most firms tend to start. We call this heroic. People are pushing hard to get things done. It's brute force. It's not necessarily scalable. On the other side of the scale, we've got the optimized side, which is our green bit. This is level five: real time, governance, processes humming along. That real business engine is just working. Only 5% of firms ever get there. That top tier is where we see the extraordinary EBITDA and margin. We don't see a lot of firms that stay there the entire time. We see firms jump backward and forward between level four and level five.

Let's talk a little bit about why maturity matters. I've presented this slide a few times, and executives always do the same thing: they look straight at the bottom part of it, the EBITDA. To be honest, they're right to. If you look at the EBITDA difference between a level one firm and a level two firm, it's almost double. Look at the difference between a level four firm and a level five firm, and the jump is massive. This happens to be the data from our latest 2025 benchmark, but this is consistent throughout all 20 years that we've done benchmarking. Even though last year was a poor year for revenue growth in PS, one of the lowest we'd ever seen on record, we still saw these jumps play out between levels one, two, and three in our maturity model.

Let's dive into the cogs and how things are a little bit more interconnected. Here we start talking about cross-functional maturity. It's not just good to be well versed in one pillar and score high in one pillar; you have to look at things across the board.

If we look here, we've got leadership as represented by the purple cogs. We've got client relationships or sales represented by the orange cogs, talent in blue, service execution in red, and ops and finance in green.

If you've got great leadership, you can have the best vision in the world, but if you don't have the capability to deliver, it's a little bit like announcing you're going to run a marathon and then realizing that you've only trained by walking to the fridge. It just doesn't work.

We use this to work out where your gears are aligned and where they grind, where they need to change a bit. Let's dig a bit deeper with our scorecards. We use scorecards and take a look at our 165 metrics. We use them to position your performance in relation to everyone else.

What we're really doing is looking for where you are red, where you are level one compared to the rest of the market. We call this getting the red out. The easiest way to make those jumps between level one, two, or three and to increase your chances of that EBITDA jump is to get the red out and start improving in those areas.

We treat this scorecard like dashboard warning lights. If you ignore them for too long or if you don't know that they're there, chances are you're going to start to have some problems with your engine.

Charles Gustine: Question for you, Connor. How are these scorecard engagements that you do different from what people might get from reading the annual benchmark report, which obviously is very comprehensive, but in what ways does this go a level deeper than that?

Connor Budden: The data that we provide here is specific to your company and your industry, whereas the benchmark report just shows you all of the average data that we have across everything. This also specifically rates your company against everyone else as well. You could sit there and try to do that against the benchmark report, but it won't be specific to your industry or the kind of companies that are out there. If you're an IT organization and you're being compared against architecture, that's not a lot of help. You want to compare like for like.

Charles Gustine: Yeah, you'll never get granular enough with the report, because it can show the average for services and the average for your size, but it can never show the combined average for my industry, my size, and my geography. When you're doing these scorecard engagements, you're really refining down across—I don't know what the number is, but it's a large amount of data—so that the precise slice of the data set aligns to the client you're working with.

Connor Budden: Yeah. No. It's over a million data points now.

Charles Gustine: That's a lot. I'm going to take the reins for just a minute, and then we're going to get into some of the actual war stories, the real-world stories that come from these direct, hands-on benchmarking engagements that SPI does, the ones that move beyond a subtle awareness of the data to the actual conversations that come out of getting the red out. Why does this matter? Going back to Connor's point about EBITDA, at Kantata we talk about this idea of balancing through AI-optimized insight across these different things. It's not just about the bottom line. For example, improving our utilization by 1% or our margin by 1% is a great thing to do, and you don't even really need the SPI benchmark for this. If you have 250 billable resources and a $200 hourly rate, and you improve by one utilization point, that's about $1 million in revenue annually based on that increased chargeability.
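A quick sketch of the arithmetic behind that claim. The 2,000-hour annual capacity per consultant is an assumption (it isn't stated in the session), but it makes the $1 million figure work out:

```python
# Back-of-the-envelope utilization math from the session (illustrative figures).
billable_resources = 250
hourly_rate = 200          # USD, quoted in the session
capacity_hours = 2_000     # assumed annual capacity per consultant

# One utilization point = 1% of total capacity hours, billed at the hourly rate.
extra_hours = billable_resources * capacity_hours * 0.01
extra_revenue = extra_hours * hourly_rate
print(f"{extra_hours:,.0f} extra billable hours -> ${extra_revenue:,.0f}/year")
# -> 5,000 extra billable hours -> $1,000,000/year
```

With a different capacity assumption (say, 1,800 hours), the figure shifts proportionally, so "about $1 million" is the right level of precision.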

But that's not the end of the story, because that's not the whole point of health. This is actually something that Hester, the CFO, mentioned on day one. If that 1% utilization increase comes at the expense of client referenceability or at the expense of talent retention, then that is not fundamentally a healthy organization.

How do we maintain that balance and that tension across every project and our entire portfolio of projects in a way that is intentional rather than blind? Every project presents these trade-offs. Many times I find businesses saying, "Our utilization is right where we want it to be, our projects are coming in on time and on budget, and our clients are happy." If you're doing that out of spreadsheets, out of sub-optimized processes, I can tell you what it's coming at the expense of. It's probably coming at the expense of the other part of the circle, the team.

These things correlate. This seems obvious, but the SPI data tells us statistically that project overruns have a clear impact on client referenceability. When fewer than 5% of projects overrun, client referenceability is greater than 73%. When more than 30% of projects overrun, that referenceability decreases by 18%.

On the employee side, on average, it takes 140 days to find, recruit, hire, and onboard a new consultant after an employee departs. The actual cost of replacing a valuable consultant, according to SPI, usually exceeds $150,000. That's the cost of the hiring process, but also the cost of the downtime that comes with that outage.

Considering all these costs in tandem is really important when we consider why we benchmark and create that situational awareness of not just what our performance looks like, but what it looks like relative to what other businesses like us are doing. We know where the biggest opportunities for improvement are.

Before we get into the wake-up calls, I want to do a live poll and get a sense of how people think of benchmarking now. How would you describe your organization's current approach to benchmarking performance? We'll give people a minute to put in their answers. Is it continuous optimization, with benchmarking embedded in how you operate? Is it a regular practice, where you're benchmarking at set intervals, annually or quarterly? Is it occasional check-ins, mostly ad hoc, when we need to do it? Or is it not a priority: we rarely benchmark and don't necessarily see it as essential? Still seeing some responses roll in. Why don't we go ahead and push the results live now and talk about what we're seeing, Connor?

All right, I think everyone's seeing the responses now. It looks like 35% say it's a regular practice: they benchmark at set intervals, maybe annually in tandem with when the annual SPI benchmark comes out. Then about 26% on either side say it's either occasional or continuous optimization. Anything about it that surprised you, Connor?

Connor Budden: No, I think that's pretty good. That's a good testament to Kantata’s customers here using the benchmarking process and going forward. When I'm out in the market, I’d expect it to be a little lower than that in general. That's pretty promising, I'd say.

Charles Gustine: All right, let's get into the setup. Connor, I know when we were talking and prepping for this session about which of the five pillars matters most, one of your premises was that it's leadership that drives the most impact.

Connor Budden: I do get asked this question as well: "Connor, I don't have any benchmarks. I'm not buying anything. I don't have any money. What should I focus on, given your knowledge of what's happening in the industry right now?" For me, there's a clear winner of all the pillars, and it's leadership.

There are five pillars. The one that correlates most highly with high performance is leadership, without a doubt. You can see from the table here—this is just one of the 165 metrics that we capture—this is talking about the well-understood vision, mission, and strategy piece. Everything under here correlates with a higher percentage of respondents more likely to recommend the company to friends and family, higher on-time project delivery, and all of those things go up the stronger the leadership is.

One of the things I started to think about is, if we pause for a second and think about the best leaders we've ever worked for, we then start to think about what made them stand out. If you move on to the next slide for me here, Charles, was it their clarity, their vision, their inspiration, communication, or their ability to prioritize that made them stand out as leaders?

If I think about my career and where I've been, great leaders are decisive. They act with conviction. The problem is, when I look at some of the leaders that haven't been so great, conviction is one of the things they've lacked, and it's one of the first things I would point to.

The reason for that is they didn't have the evidence to act on. Time and again, from our perspective, when we look at the data, if you get the right leadership, performance in your company will follow.

Charles Gustine: Now let's pivot into these real-world stories from your work at SPI benchmarking services organizations, anonymized but real. What are some of these wake-up calls that you're inspiring that are taking people from blind spots to breakthroughs? What do those conversations tend to look like?

The first one I wanted to focus on: I've used this analogy frequently ever since, a few years ago, I heard a services leader put it in a way that I've quoted every chance I've gotten since. "My job is to maintain the tightest possible team to provide the best services to customers," which is actually a really hard job. How do you maintain the tightest possible team? I think of services as everyone trying to pull this rubber band as tightly as possible without giving it any slack, but also not so tightly that it snaps.

What we tend to find is when you put in a PSA like Kantata, it's almost like turning on the lights in a room. Most businesses realize there's actually a lot more slack in the rubber band than they thought. There's some capacity over here. What's that doing there? Some businesses realize we are a minute away from this rubber band snapping, like we said yesterday in the IDC session, a self-reinforcing death spiral. Connor, share your perspective around an instance where you've given guidance around this.

Connor Budden: Well, to your point on the rubber bands, there isn't just one. We collect 165 metrics, so there are at least 165 rubber bands. And let's be honest, there are going to be more that we don't touch at the moment.

If I give an example: there was a firm that we worked with that had 50 billable consultants and 100 operational staff. Normally, I would expect a business like that to have operational staff at between 10% and 15%. These guys weren't just a little bit overweight; they were bottom-percentile bad in terms of what I had seen across the more than a million data points we have.
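As a rough sketch of that sense check: the headcounts below come from the story, but the transcript doesn't say whether the 10-15% target measures operational staff against total headcount or against billable headcount, so this shows both views. Either way, the firm is far outside the range:

```python
# Overhead ratio sense check, using the figures quoted in the story.
billable = 50
operational = 100
target_range = (0.10, 0.15)  # expected operational share, per the transcript

share_of_total = operational / (billable + operational)  # ops as share of all staff
ratio_to_billable = operational / billable               # ops per billable consultant

print(f"ops as share of total headcount: {share_of_total:.0%}")
print(f"ops per billable consultant:     {ratio_to_billable:.1f}x")
print(f"outside 10-15% target:           {share_of_total > target_range[1]}")
```

Under either interpretation the firm sits at 67% of total headcount (or 2x billable headcount) in operational roles, which is why the conversation moved from "fix it next quarter" to "start this week."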

Before we start to run around and panic about the state that the organization is in, there is a first sense check, which is: is this business special? Are they doing something fundamentally different from the rest of the market that would warrant this particular metric to be in this direction? Spoiler: in this case, they weren’t.

When they came to us initially, it was very much, "We think we've got a bit of a problem. We think we might be a little bit overweight." When we started to dive in a little bit deeper, their original discussion was, "We might fix this next quarter." We dive in a little bit deeper, and it changes from, "We're going to deal with it the next quarter," to, "We should probably come up with a plan of action this week, next week, and get something kicked off before the end of the month."

In that case, it wasn't just letting go of a bunch of operational staff. Some of it was moving operational staff to the billable side because they had more backlog than they could deal with. There were lots of no-brainers to move them from A to B.

Charles Gustine: The next thing I want to focus on is incremental gains. When I talk about the SPI benchmark, I always try not to talk about level one to level five, but just level two to level three, or level three to level four, because of the gains that can be seen there. A level two organization, on average, has each person spending 229 hours per year on admin time. At level three, it's 188 hours, so it's a delta of about 40 hours. When you extrapolate that out across 250 people, it's $2 million in revenue. This is somewhat obvious, but when you really break it down, it can be beneficial. I want you to tell the story about a bucket of time that people spent time on. Usually, it's considered good non-billable time, but that depends.
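The admin-time math can be sketched as follows, reusing the $200 hourly rate from the earlier utilization example (an assumption here, since this passage doesn't restate the rate):

```python
# Level-two vs. level-three admin-time delta, extrapolated across the org.
level2_admin_hours = 229   # avg admin hours per person per year at level two
level3_admin_hours = 188   # same metric at level three
headcount = 250
hourly_rate = 200          # USD, assumed from the earlier example

freed_hours = (level2_admin_hours - level3_admin_hours) * headcount
revenue_opportunity = freed_hours * hourly_rate
print(f"{freed_hours:,} hours freed -> ${revenue_opportunity:,} potential revenue")
# -> 10,250 hours freed -> $2,050,000 potential revenue
```

The exact delta is 41 hours per person, which lands at roughly the $2 million figure quoted in the session.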

Connor Budden: Yeah, it always depends. Doing our standard scorecard across 165 metrics, most of the usual suspects pop up. Once you're in the groove, you see the same things over and over. But there was one metric that stood out. It was a little unusual, shall we say: mandatory training days. They were running at twice the market average. At first glance, not a crisis, but you start to dig in, and it turns out to be a hangover from a past issue. Their consultants were by then trained well enough; the firm had just forgotten to update the policy once it started getting the benefits of the new policy it had put in place.

We adjusted that. We freed up experienced consultants from unnecessary training. More time for client delivery; simple fix, immediately boosted productivity. I think this is one of the powers of benchmarking. We can look at what high-performing organizations are doing and what you are doing at the moment. We can use those high-performing organizations as a North Star if they're relevant enough for what it is that we should be aiming for. This is about using the data to help spot hidden drains on performance that you would never normally otherwise see.

Charles Gustine: Once again, your question to them, as your sense check, is: is training your consultants twice as much making them twice as good? When the answer is no, the obvious fix is that you're putting too much effort into something you can pull back on. Which ties into the last story we want to share. In the last session, in the poll we ran about which objectives our customers are trying to focus on right now, the number one answer was increasing projects delivered on time. That's a nice hook for this. The thing to keep in mind is that the idea of what on time looks like in the market is shifting, and you just want to make sure that you're not becoming so good at something that it comes at the expense of everything else, to the point about balance that we talked about earlier. Connor, why don't you talk a little bit about the perspective on on-time delivery?

Connor Budden: Yeah, you're exactly right. Most execs have a gut instinct: our margins are too low, we're discounting too much, projects are overrunning. They're often right, but sometimes the problem's bigger than they expect, or it's not necessarily where they should be focusing. The problem being bigger than they expect references my story earlier about the overweight client. The client we were looking at was saying, "Our on-time delivery is slipping. We need to work on it." We used the benchmark and managed to validate that it wasn't just them that was slipping; the entire market's project on-time delivery scores have been getting worse, year on year, for a few years now. It's not that they were getting worse; everyone in the tournament was getting worse at golf, not just them. The question then becomes: is it worth putting in the extra effort to fight what's happening? We've got this GIF in the corner of Usain Bolt doing his 100-meter sprint. If he's going to shave another second off that 100-meter time, that's an unimaginable amount of effort. If you're level five in something, like project on-time delivery, are you really getting the rewards for the amount of effort you're going to put in to improve that one metric, or are you just over-indexing on a particular area?

Charles Gustine: Which ties back into prioritization. There's a thousand things your organization can be working on. But in trying to be 100% better than the field on this thing, as the whole field is getting worse, are you basically guaranteeing you're going to tear an Achilles? Maybe slow down. You still get the benefit of being top of the pack.

Connor Budden: You're still first without tearing your Achilles.

Charles Gustine: Let's talk about how AI fits into this.

Connor Budden: We've just completed our AI study for 2025. You must be living under a rock if you haven't heard this already. The game is changing. Things are shifting. We know.

For 20 years we've had benchmarking more as a rearview mirror thing where people come and reference us and use it. AI is starting to change that.

This table here shows which areas respondents think AI is going to improve most, comparing 2024 and 2025. One of the biggest changes is in corporate reporting. That is super important because, as I mentioned earlier, leadership is one of the biggest drivers of high performance.

We're seeing a big change in corporate reporting. That reporting is giving people information. It's allowing them to act quicker and be more decisive. We're seeing a big change there because they're moving from decisions in weeks to decisions in seconds. Leaders are starting to ask questions and get contextual answers instantly, whereas before it would have taken them ages.

The "Good to Great" reference in the corner means it's going to be easier to be a good leader as per the definition of today, but the bar for great will rise.

Charles Gustine: Which ties into the big thing that we want to talk about today and unveil. We want people to imagine a scenario where you're not looking at a report in order to get your benchmarking data, but it actually finds you. Benchmarking insights surfaced in context. Granular, peer-aligned data, the kind of data we were talking about with Connor that you get from the scorecard engagement, but without having to go out of the way to do the scorecard engagement to get that initial wake-up call.

Up-to-date visibility into what average performance looks like for your cohort and what high performance looks like for your cohort. Most of all, AI agents move from time savers to force multipliers because they're trained on rich, independent benchmarking data and can surface recommendations shaped by trends and best practices in the industry.

We have been talking throughout this about the expertise engine being this thing that understands your business, but doesn't understand your business in a vacuum. It understands it in context of the industry around it and what high performance looks like. If that were able to be true, every PSA organization would have the chance to be this transformative wake-up call.

So, Connor, I'm going to pass it to you first because I want you to talk a little bit about what the general vision for SPI Inside is. Why this? Why now?

Connor Budden: Perfect. This is today's big reveal. For the first time, we are embedding years of aggregated, anonymized delivery metrics directly into the PSA platform that you already use.

We're calling it SPI Inside. With SPI Inside, your system isn't just going to report utilization, margin, or delivery. It's going to tell you what those numbers mean in the context of the market that you're operating in.

If we dive in a little deeper, you'll see on the next slide an example of a conversation with an agent. I don't expect you to read all that text, but if I give you a brief overview: you'll be able to ask, "Our utilization is 77%. Is that strong enough?" or "Our project margin is slipping. Where should I focus?" You're going to get an answer that's not "it depends" or "let's take that offline." We've heard those too many times from consultants. For the first time, benchmarking is moving from the rearview mirror to the steering wheel, helping you guide your business forward, because it's there in the context of the decisions you're making.

Charles Gustine: This is going to be part of the secret sauce of what makes the expertise engine extra powerful: that domain-specific context that moves beyond what a generic LLM can do. With the agreement we have just entered into, this exclusive partnership, SPI Inside is going to be something very powerful, something SPI is going to take to more categories in the market. Kantata is going to be the only PSA provider able to surface this independent data set within our PSA and train our AI on it.

This is going to be really powerful for our customers because it will supercharge what the expertise engine can do, what the accelerators could do, and therefore what you can do with Kantata. It doesn't end there. Wake-up calls shouldn't end in embedded signals. When you see a wake-up call in the system, you should be able to follow up on it. When benchmarking surfaces a gap, we want our customers to be able to go deeper with SPI, get that hands-on conversation, and receive an independent, unbiased, comprehensive PS maturity assessment across these pillars and levels.

As part of this partnership, we will offer an exclusive discounted rate on the SPI scorecard that's available to Kantata customers. This will equip businesses to drive prioritization, decisiveness, and conviction. I'm going to do one more poll: would you like to take advantage of an independent SPI maturity assessment and exclusive Kantata customer discount? While that poll is coming up and people are answering it, Connor, we have a minute left. Tell people what they can expect from that kind of engagement.

Connor Budden: Yeah, I think most people, when they come to us to do a scorecard, their expectation is they want to confirm a gut feeling or take that business in a specific direction. What they usually leave with is something that is driving revenue growth and a plan to tackle ABC, XYZ when they start to look at the results of that scorecard. It really gives them a bit of direction to be like, okay, what I already had planned was right, and I'm going in that direction anyway, so it gives them that conviction. Or, it's actually, these are the things I should be focusing on. Let's go get some revenue.

Charles Gustine: Which all ties back to revenue growth. That's right. Really quickly, the last thing I want to say is that the SPI 2026 benchmark survey is now open. Use the QR code on the screen to take that survey, especially if you're considering doing one of these maturity assessments. It's a great first step, because you're gathering and processing the data that SPI will use to do the maturity assessment. We encourage Kantata customers to get out there and do this. If you do, even if you don't go on to a maturity assessment, you'll get the full report, and you'll receive an executive insights report that looks at the pillars at a high level. We encourage everyone to participate in SPI's 2026 Benchmark survey.

Brent Trimble: If you enjoyed this podcast, let us know by giving the show a five-star review on your favorite podcast platform and leaving a comment. If you haven't already subscribed to the show, you can do so on any podcast app. To learn more about the power of Kantata's purpose-built technology, go to kantata.com. Thanks again for listening.