One of the innocent pleasures of life in the analyst relations business is the opportunity it sometimes offers to watch respected specialists crossing swords in public. The allegations, the counterblasts, the challenges, the clicking of heels, and the throwing down and picking up of gauntlets – there’s nothing quite like a good duel to settle an argument.

So the public tussle last October between Michael Rasmussen and Gartner’s French Caldwell has been one of the bright spots of recent times. Rasmussen is a specialist consultant and ex-Forrester analyst in the GRC (governance, risk management, and compliance) area who even labels himself a “GRC pundit”. His opponent is an equally recognized expert in the field: Caldwell is a Gartner Fellow with a reputation for plain speaking and a career background as a senior officer on nuclear submarines.

These two doughty adversaries came to blows over a “rant” (Rasmussen’s own word) that he wrote in October following the publication of the 2012 Gartner Enterprise GRC Magic Quadrant (“MQ”), on which Caldwell and a colleague had lavished six months’ work.

Rasmussen’s reaction to the new MQ was fairly stinging (“It is best for the compost pile, to be used as fertilizer for the garden next spring”), while Caldwell’s response (“Where’s the integrity, the transparency?”) clearly accused his rival of hitting below the belt.

Yet, unlike many instances where analysts square up to each other, there really were some serious issues at stake here.

Just a bit too magic

Rasmussen’s original complaint was that organizations might be led to make misinformed technology decisions as a result of reading the new Magic Quadrant. In his blog post, he accused the Magic Quadrant of being just that – “magic” – and of being “a mile wide and an inch deep,” with no transparency or clarity about how vendors were scored.

“The truth is,” he went on, “the MQ does not really help you identify and select GRC vendors that are the right fit for your business.”

One key argument was that the best vendor for a particular organization’s needs might well fall outside the MQ’s Leaders quadrant, or even be excluded from the Magic Quadrant altogether, simply through not meeting one of the inclusion criteria. A vendor might fail on geography (having a presence in only one part of the world, say) or on revenue (a specialist vendor’s overall sales falling short of the threshold the analyst has chosen to apply). After all, for a European GRC buyer, good coverage in Europe may be all the vendor needs to offer, geographically. And for a customer needing help with compliance management, the lack of offline audit capabilities may not matter a jot.

Given how regularly the clients I meet use the relevant Magic Quadrants as a shortcut to an RFP shortlist, I’d say this was a valid question to raise, reflecting a genuine area of concern.

But the central point underlying all the unease that Rasmussen was voicing was that, unlike Forrester, Gartner does not disclose its criteria and its grading process in a way that would allow users to dig into the detail and examine a vendor’s score on each item. Since organizations’ needs in the GRC area vary so much, he claimed, what Gartner had produced was “absolutely useless” in helping them select a vendor for an RFP.

Rasmussen also felt it was unfair, from the vendor’s point of view, that vendors had to go through a scripted demo process that did not allow them to show their strengths.

“A vendor may have an absolutely amazing differentiator, but if it is off script you have to kick and scream to get even passing attention,” he claimed, accusing Gartner of “myopic vision.”

It’s not all about the demo

All this was just too much for Caldwell to take lying down. The two-hour scripted demo, he pointed out, politely but firmly, was all about the need to “compare apples to apples” and set a baseline of capabilities for evaluation.

“We listen to them for the rest of the year – but for two hours of one day, they follow our script,” the Gartner man wrote in his blog. “The demo script is fairly open. Any vendor who can’t find a place in it to demonstrate their best differentiators is slipping up.”

And when Rasmussen complained about the rigidity of the scripted demos, Caldwell was quick to point out that these were just one part of a much broader process.

“Everything we learn about the vendors over the course of the year is considered in the MQ evaluation – not just the two-hour demo, or the vendor questionnaire,” he wrote.

This is a key point: it confirms that a firm that does not invest, directly or indirectly, in building a relationship with the analysts throughout the year will be at a disadvantage. In other words, while the formal assessment matters, interactions with analysts need to make a steady, cumulative contribution during the months between MQs.

On shifting sands

When Rasmussen raised the issue of lack of transparency, Caldwell countered the accusation by pointing to the 12 criteria against which vendors are assessed, under headings such as “product or service”, “sales execution” and “innovation.”

From my own standpoint, this does not look like a strong argument. We see several worrying issues emerging from the dozens of assessments The Skills Connection is involved in, across many different sectors, in the course of a year – and inconsistency is certainly one of them.

Quite frankly, the larger Magic Quadrant process Caldwell refers to is riddled with inconsistencies. And one of the big surprises to come out of this debate was the later contribution made by Ian Bertram, global manager of the Gartner Analytics and Business Intelligence Research team and Head of Research for Asia/Pacific.

According to The Register, Bertram told its reporter that the research methods employed for MQs could be flexible, “so long as each team is consistent in the way it assesses the vendors it considers.”

If that is not a misquote, it is extremely significant.

It means the goalposts can be moved from one MQ category to the next, or from one year to the next, without the vendors involved knowing what’s happening. It means that having shown up well in one year’s Magic Quadrant is absolutely no guarantee of a good showing in the next. It means, in effect, that no single vendor is ever likely to accumulate enough experience of enough MQs to have an overall view of what is most likely to be required to earn a good assessment.

Commercially, that uncertainty is good news in the short term for The Skills Connection, as it is bound to nudge vendors towards choosing to work with a specialist firm whose people have direct experience of hundreds of varying MQ assessments. But how can it be good for the industry? Consistency has to be part of the framework within which the analysts, vendors, and customers create and use MQs.

Apples and pomegranates

In most MQs, there will be a survey, though this may range from 12 very general questions to more than a hundred highly detailed and specific queries. Often, but not always, there will be a demo or a briefing. Sometimes the core of the assessment is the briefing. But the fact is that the format and rules of engagement can shift in unannounced and unpredictable ways, as Ian Bertram implies, from year to year.

Analyst responsibilities change, MQs split and proliferate, and even though the 12 criteria French Caldwell mentions stay the same, the crucial sub-criteria that underlie them are often altered without anyone on the outside being made aware of it.

We’ve seen this recently with a Skills Connection client whose dot moved inexplicably downwards, despite the vendor’s bumper year in both absolute and relative terms. We challenged the assessment and even escalated it to a senior level at Gartner, only to be told that the categories and criteria were the same as before, but the sub-criteria – and the weightings applied to them – had been modified. That, it appeared, was considered an adequate answer. We didn’t feel it was.

In some ways, we should all be grateful to Michael R and French C. The public clash between them has fulfilled a useful function in dragging important issues into the public arena and highlighting areas where Gartner needs to tighten up its approach.

At the moment, though, for all French Caldwell’s protestations about transparency, vendors are all too likely to find themselves dealing with an analyst who is – wittingly or unwittingly – comparing apples with pomegranates. And that really does not help anybody.

Are we on target? Are you dissatisfied with the Gartner Magic Quadrants as an aid to identifying suitable vendors? Or are you a vendor that struggles to show your worth within today’s research formats? Have your say, and send us any practical tips you’ve discovered that we can share with our readers.