If you follow anything or anyone in the Observability space, you’ll know that the report was published this month. Every vendor and their dog is shouting about their position, so I won’t go through those. Most vendors are offering complimentary copies, so go read it for yourself (here’s a link to Honeycomb’s offer, as they sponsor our newsletter). What I want to talk about is how Gartner are choosing and defining the category, as I’m not convinced it’s the right way (who’d have thought I’d have opinions, right?).
The category is already becoming bloated, in my opinion. Right now, beyond understanding applications, Observability tools are expected to understand and represent Cloud Infrastructure (e.g. a Service Map), integrate with Service Management tools (hooking into the likes of Jira et al.), and act as security analysis tools looking for vulnerabilities. They’re also throwing in Business KPIs, which sit more in the client-side analytics space than on the Application side. This is no longer an Observability tool; it’s a suite of tools.
One thing I would say here is that there’s a difference between the tool and the data. To me, those are all separate tools, but they should all be able to act on the same data. I do like that they acknowledge the ability to perform exploratory analysis against telemetry data as a core part of Observability; that gives me hope that tools will embrace it over static dashboards.
It wouldn’t surprise me if next year’s report says you must have AI and machine learning to be considered a leader. The report does do a good job of analysing the different vendors out there; however, I’d caution that a lot of the responses (positive and negative) don’t provide much depth when it comes to real-world usage of those systems.