Do independent vendor rankings tell the whole story? Should they influence your decision making?
Just being included on the list is a feat - keep in mind these are the top 10 WCM vendors on the market, according to some of the industry's leading analysts.
Surely all vendors included are proud to be in such fine company, and will work hard to remain so.
But as independent as these reports may seem - do they tell the whole story about vendors?
Are they viable as tools for buyers and decision makers, or should they be viewed as high-level snapshots?
Here's the problem: When you apply a uniform criteria set to a group that is anything but uniform, your conclusions are based on flawed assumptions.
Assumption 1: Each vendor has been correctly evaluated
What is "correct", anyway? It depends on who's asking - the vendor may see things differently from the researcher.
It's worth mentioning that both Gartner and Forrester have inclusion criteria that dictate whether a vendor even qualifies for evaluation.
These effectively exclude products that lack a certain revenue level, market share, geographic presence or number of customer references - even though many of these would be a good fit for lots of customers in the mid-to-enterprise market.
But even for the big boys, rankings may be affected by the scoring methodology used:
- Quantitative scoring (like Forrester's comprehensive criteria spreadsheets) creates a black-and-white approach: each criterion is either met or not.
Add up the points, and you get a total score. Compare the total scores for all vendors, and you have your rankings.
This makes it easy to determine who comes out on top - even though the winning margin may amount to fractions of a point.
- Qualitative scoring (which Gartner claim to use) looks at the quality and uniqueness of each vendor's offering (in addition to quantitative criteria).
This way, two vendors may meet a particular criterion differently, but still both score highly.
Though Gartner reveal their high-level criteria, weightings, and expectations for each quadrant, they actually don't reveal the scorecards of each vendor, just the most obvious strengths and weaknesses.
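The quantitative model described above amounts to a weighted checklist. As a minimal sketch (the criteria, weights, and vendor data below are invented for illustration - this is not Forrester's actual methodology or scores):

```python
# Hypothetical criteria and weights - purely illustrative.
CRITERIA_WEIGHTS = {
    "in_context_editing": 2.0,
    "built_in_dam": 1.5,
    "cloud_deployment": 1.0,
}

# Fictional vendors; each criterion is simply met (True) or not (False).
vendors = {
    "VendorA": {"in_context_editing": True, "built_in_dam": True, "cloud_deployment": False},
    "VendorB": {"in_context_editing": True, "built_in_dam": False, "cloud_deployment": True},
}

def total_score(checklist):
    # Binary scoring: a criterion contributes its full weight if met, zero otherwise.
    return sum(w for criterion, w in CRITERIA_WEIGHTS.items() if checklist.get(criterion))

ranking = sorted(vendors, key=lambda v: total_score(vendors[v]), reverse=True)
print(ranking)  # VendorA (3.5) edges out VendorB (3.0)
```

Note how VendorA "wins" by half a point - and how a single disputed checkbox (say, whether add-on Y counts toward a core criterion) would flip the ranking.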
Researchers gather information from various sources - their prior knowledge, public information, product demos, surveys with the vendor, and customer references. With the ever-changing technology landscape and the complex ecosystems of each vendor, there's a lot of ground to cover.
Where a researcher might say, "Oh, based on my mental model, here's a gap in their offering", someone more familiar with the platform might reply, "No, that's covered, you're just looking in the wrong place".
For example, a researcher might deduct points because feature X seemingly isn't in the core product, when in fact it might be common knowledge that feature X is easily achieved by using add-on Y.
Both Gartner and Forrester allow vendors to read drafts to argue their case or point out factual errors. In fairness, it's unreasonable to expect an evaluation to catch all such nuances, but they obviously can affect the outcome.
Assumption 2: All customers have the same business needs
These rankings are based on the researcher's conception of what a WCM ecosystem should offer - not the business needs of an actual, individual customer.
Yes, they make valid points about strengths and weaknesses of each vendor, but these should be individually assessed by potential buyers against their own business needs, not taken at face value.
If you're looking for a new WCM solution, of course you'll be interested in which vendors made the list. And granted, among these top vendors, you are likely to find one that covers your business needs.
But for many customers (even mid-to-enterprise scale), several of the vendors on the list will:
- be more complex systems than they needed
- require more organizational changes than they imagined
- end up more costly than they anticipated
Researchers tend to favor enterprisey or suite-like systems, which are usually marketed as (*cough*) all-in-one solutions.
In reality though, the majority of customers don't require anything that complex. Most organizations aren't at a digital maturity level where they can exploit the capabilities of such suites.
Using EPiServer as an example:
- Even though personalisation has been hot for years (and EPiServer has a quite capable segmentation engine built right into the core product), most customers we integrate for lack the knowledge or resources to tie the available technology into their marketing and content strategies.
- While Digital Asset Management (DAM) is one of the key features touted in the WCM field, most customers we integrate for require little more than the basic, built-in asset management in the core product. Some use an external add-on like ImageVault, or integrate with an existing SharePoint installation, but that's about as frisky as they get in terms of DAM.
I could go on.
(There are exceptions, of course, where customers expect more than a particular platform can deliver, as summed up nicely by Tony Byrne recently.)
Assumption 3: All vendors aim to be clones of each other
As mentioned, criteria are determined based on preconceptions of what a WCM ecosystem should offer. Does that mean all vendors should adjust their strategies to emulate those who best match the researcher's expectations?
Of course not. But marketing is a fickle beast.
To stand out - or even just stay relevant - in a crowded space, vendors cannot afford to distance themselves entirely from buzzword-ridden marketing hype.
Vague terms like "experience management", "cloud" and "customer journey" will be thrown around by everyone, regardless of how big a part those concepts play in the actual product.
The WCM / CXM field mainly consists of two types of players:
- Best-of-breed solutions which offer one or more strong, distinguishable core product(s), capable of interoperability and data sharing.
In addition, such solutions tend to either acquire or support integration of market-leading (best-of-breed) components that complement the core capabilities.
The main upside to this approach is flexibility, and the main downside is more reliance on customizations and integrations.
EPiServer and Ektron (pre-merger) are typical representatives in this category.
- All-in-one-suite solutions which offer a full range of vendor-branded components for most WCM/CXM needs.
Whether components are homegrown or acquired, they are all shaped into a homogeneous ecosystem with seamless interfaces and interoperability.
On the upside, customers get a consistent user experience and only one vendor to deal with. On the downside, they run the risk of vendor lock-in.
Adobe and Sitecore are typical representatives in this category.
For a company like EPiServer, their strengths have always been a robust and extensible API, and an ecosystem of complementary add-on components allowing customers to tailor their solution according to their needs. In the new era following the merger with Ektron, EPiServer will continue to build on this, while also reinforcing their cloud capabilities.
It would make no sense for EPiServer to want to transition into an all-branded suite platform.
Marketing-wise, the current trend is that vendors want to appear agile like a best-of-breed system, but feature-rich like an all-in-one suite. As a result, vendors may appear homogeneous at first glance - when in reality they have wildly different strategies, and reasons for having them.
By all means, use vendor rankings as input when compiling a shortlist of platforms to research further. Use them for marketing purposes to show you are on the right path (as a vendor, or as a customer). But don't base your decision making on them.
For buyers, there is no substitute for a thorough vendor selection process.
For vendors, there is no substitute for building on your strengths to make you stand out in the crowd.