Preparing for your analyst assessment is like preparing to win a new customer. When you’re approaching a sales prospect, you start by thinking about the company’s buying cycle, as it relates to your products and services.

Analyst assessments are no different. They have their own cycle, only it’s an annual, repeating cycle. Like your buyers, the analysts have different needs at different times within that cycle. But one thing is always true: they need the information you give them in a form they can make use of at that specific point.

To illustrate the analyst’s assessment cycle in action, let’s look at the monster of all assessment processes, the Gartner Magic Quadrant (or, more typically now, the combination of Magic Quadrant and Critical Capabilities).

Putting in the hard yards

If you’ve been involved in this ritual, you will know that the starting gun goes off when an email arrives, inviting you to the party.

While the whole cycle is quite long (averaging five to six months), the period when you are actively invited to participate is usually short and sharp – one to two months, at most. This is a time of frenzied activity for you, and a relatively quiet one for the analyst. Collating information for surveys, finding references, building briefing decks and preparing increasingly sophisticated demonstrations takes a major effort, with some larger companies investing time worth hundreds of thousands of dollars.

The analyst has more than just your MQ to think about

Once the submission is in, most companies breathe a sigh of relief and abandon analyst engagement to focus back on their core business. But that’s not what the market leaders do. They change gear, but they don’t take their foot off the gas. They settle back into a steady routine of analyst inquiry, email updates and other communication, keeping up the flow of information while taking care not to appear too desperate or pushy.

During this next period, among all the other activities the analyst is committed to (and I mean “committed”, by a pretty aggressive scorecard), he or she will be pulling the threads together and starting to draft copy. It’s a period of contemplation and analysis, but not one of formal supplier engagement, and the analyst will not have the luxury of retreating to a hermit’s cave to think about nothing but the Magic Quadrant. There’s still a busy schedule, with other deadlines to meet and other deliverables to produce.

If you’ve got an objection, stick to the facts

The next visible stage occurs when the first draft of the assessment is circulated for review. As a supplier, you will have five days to make a formal response in writing, plus, typically, one 30-minute slot to discuss factual concerns with the analyst and make sure the published result accurately reflects the facts. All too frequently, though, facts go out the window and the dialogue descends into bitter wrangling over opinions. Time and again we see the CEO load his or her gun and try to shoot the analyst down in flames. But when it’s a matter of opinion, there can only be one winner. For the analyst, there’s only one opinion that matters.

The interval from draft to publication may be anything from one week to a month. When it’s longer, that’s generally because of the level of hysteria the draft has provoked. Many suppliers will call for escalation, rattle whatever sabres they can find and demand a recount.

But there are actually only two things that can derail the process – clear factual errors, or evidence that the analysts have failed to run a fair assessment. From what I saw first-hand during many years at Gartner, I can tell you that threats and bluster, even from major vendors brandishing their big contracts, don’t lead to assessments being changed. In my time as an analyst, I saw Gartner wave goodbye to at least three very lucrative contracts because of its principled refusal to bow to this kind of pressure.

Publication may not be the end of the story

Finally, then, the assessment is published. Like it or loathe it, this is usually the close of the cycle and the end of all the arguments about it.

But it ain’t necessarily so. Two other factors may still change the published copy that buyers get to read.

The first is a new type of extension to the MQ itself, introduced in the last two years – an MQ Context piece. Because many assessments are very broad, or even universal (global, cross-industry, and spanning buyers of all shapes and sizes), Gartner recognises that there are sometimes special circumstances that need further discussion.

Increasingly, you will see MQs with an extra section, added after initial publication, that explores the issues and relevant suppliers for a specific geography or industry. If your focus is on a particular niche, this is an opportunity you should grasp. Make sure you discuss the distinct importance of your niche with the analyst, with a view to exploring the market value of a relevant Context piece.

The final reason for a change to the published assessment is the discovery of a significant factual error. Ultimately, Gartner is always bound by the facts, so even when the MQ has been published, it can still be changed. If you can demonstrate a material error or a critical change in the relevant facts, Gartner will republish its note with a notice drawing attention to the changes that have been made. If something is truly wrong, you should never just abandon your case.

The assessment process is a repeating cycle of regular phases. Smart suppliers start early, stay involved, maintain contact with the analysts and engage appropriately in each different phase. By providing the right information at the right time, they help the analysts – and help themselves as well.