Why GenAI alone won’t cut it for MQ and Wave submissions

LinkedIn is flooded these days with stuff about GenAI tools to solve your every problem. The promise is enticing. Whatever the task, don’t sweat it. Just press the button and sit back. Job done.

Writing responses for analyst assessments like Gartner Magic Quadrants and Forrester Waves is one of those jobs where AI certainly should be able to help.

But it’s interesting to note that Gartner is now including a caution against relying on GenAI in every MQ welcome pack.

The warning’s not there for nothing. Gartner is seeing a lot of really poor AI-generated submissions. And from what we’ve observed when testing various GenAI approaches with our clients, I’m not a bit surprised.

Hallucination

First off, we all know that AI can hallucinate. And for any analyst, seeing fiction is the reddest of red flags. Once analysts encounter one piece of made-up nonsense, doubt creeps in. From that point on, they will hesitate to believe everything else you provide.

But hallucination is only the start. The second issue is the type of information the AI will be relying on.

Focus on strategic differentiation

MQ scores are assessed on the basis of differentiation, objective evidence, and clarity of vision and strategy. But almost everything the AI has access to will be either marketing material or client documentation. The required level of up-to-date information, differentiated insight, competitive knowledge and strategic planning data is very unlikely to be found in any source an LLM can access, even a privately deployed one.

Automation bias

What’s worse is that there is also a third snag. And this time it’s not just about technology; it’s about us pesky humans.

We have a cognitive bias issue with AI-generated responses.

We are too ready to believe them.

This gullibility in the face of AI’s smoothly honed output is called ‘automation bias’, and it’s becoming a big problem.

We believe what the robots tell us. We fail to double-check it as we should. It looks right, so we take it at face value. A string of academic studies has shown that we are up to seven times more inclined to trust what AI says than what another human tells us.

Human-first hybrid approach

If you let AI write that first-pass answer and it’s not 100% correct, you could be in trouble. Chances are, what you submit will either be inaccurate or, worse still, pure fantasy.

So use GenAI, by all means. A hybrid approach can save time and resources. But your hybrid approach must be human-first. And you don’t want to be having to make it up as you go along—especially when there’s the opportunity to learn from other people’s mistakes and experiences, as shared in our webinar “Where are you on the Analyst Engagement AI Maturity Curve”.

Check out our Knowledge Bank to access all our resources around successful analyst engagement.