What’s New: Service Explorer
We’re excited to announce that Observe’s Service Explorer is generally available today!
Many customers rely on troubleshooting workflows that are oriented around the microservices that run their critical business applications: dashboards that provide at-a-glance health of services, service maps that visualize the detailed communication flows between microservices, and interactive views that provide rich context into the internal state of a running service.
Observe’s Service Explorer provides these capabilities and much more out of the box, enabling powerful service-centric troubleshooting workflows that align developers, SREs, and executives around unified views of service health and equip troubleshooters to answer the detailed questions about microservice and database performance that arise during incident response.
OpenTelemetry-native service and database discovery out of the box
Unlike traditional Application Performance Management (APM) vendors, who heavily invested in their own proprietary agents and then later bolted on OpenTelemetry support as a second-class user experience, Service Explorer is OpenTelemetry-native.
Service Explorer uses common metadata on OpenTelemetry spans to identify the microservices and databases in your system, meaning no additional configuration effort is required to use Service Explorer when your services are already instrumented with OpenTelemetry.
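For illustration, here is a minimal Python sketch of a service instrumented with the standard OpenTelemetry SDK. The service.name resource attribute and the db.system span attribute are the sort of common metadata in question; the names and values below are just examples, not anything Service Explorer requires beyond the standard semantic conventions.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# service.name is the standard resource attribute that identifies this
# microservice; the value "checkout" is only an example.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))  # ship spans via OTLP
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# A CLIENT span carrying the db.system semantic-convention attribute is the
# kind of metadata that lets a database appear as its own node.
with tracer.start_as_current_span("SELECT orders", kind=trace.SpanKind.CLIENT) as span:
    span.set_attribute("db.system", "postgresql")
    span.set_attribute("db.statement", "SELECT * FROM orders WHERE id = %s")
    # ... run the query ...
```

Any OpenTelemetry SDK or auto-instrumentation agent that emits these conventions produces the same metadata; nothing in the snippet is specific to Observe.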
RED metrics for each service and database are automatically created from the underlying OpenTelemetry data, which the Service Explorer uses to power its out-of-the-box service health views. These metrics are also available in Metric Explorer so you can add them to custom dashboards.
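If RED metrics are new to you, they are simply Rate, Errors, and Duration. Conceptually, they fall out of a straightforward aggregation over a service’s spans; the toy sketch below only illustrates that definition and is not how Observe computes them.

```python
from dataclasses import dataclass

@dataclass
class Span:
    service: str
    duration_ms: float
    is_error: bool

def red_metrics(spans: list[Span], window_seconds: float) -> dict[str, dict[str, float]]:
    """Rate, Errors, Duration per service -- a toy illustration of the aggregation."""
    out: dict[str, dict[str, float]] = {}
    for service in {s.service for s in spans}:
        mine = [s for s in spans if s.service == service]
        durations = sorted(s.duration_ms for s in mine)
        out[service] = {
            "rate_per_s": len(mine) / window_seconds,                  # R: request rate
            "error_ratio": sum(s.is_error for s in mine) / len(mine),  # E: error ratio
            "p95_ms": durations[int(0.95 * (len(durations) - 1))],     # D: latency (p95)
        }
    return out
```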
Service Explorer’s other features, such as deployment markers, Kubernetes infrastructure correlations, and error tracking, are all powered by OpenTelemetry data, making Observe the first OpenTelemetry-native APM provider.
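The standard semantic-convention resource attributes carry the version and pod identity that features like deployment markers and Kubernetes correlation draw on. As a sketch (the values below are illustrative, and in a real cluster the k8s.* attributes are typically injected by the OpenTelemetry Collector’s k8sattributes processor rather than hard-coded in application code):

```python
from opentelemetry.sdk.resources import Resource

# Standard semantic-convention attributes; the values are examples only.
# service.version is the conventional place to record which build is running,
# and the k8s.* attributes tie spans back to a specific pod and namespace.
resource = Resource.create({
    "service.name": "checkout",
    "service.version": "1.42.0",
    "k8s.namespace.name": "prod",
    "k8s.pod.name": "checkout-7c9f6d8b4-x2lkq",
})
```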
Deep inspection of service performance and upstream/downstream communication
Service Explorer includes several workflows to help drill down into service performance, correlate deployments and dependencies, and view underlying telemetry signals such as logs and traces to get to the root cause of problems during an incident or to identify potential optimizations to improve baseline performance.
Correlate service health with new deploys: when inspecting a service or endpoint, view deployment markers on RED metric charts to correlate spikes in latency or errors to a new deploy. Use the Deployments tab to see all active versions of your service or endpoint, view RED metrics grouped by deploy to spot anomalous deploys, and pivot to traces related to a specific deploy.
Correlate service health with Kubernetes infrastructure: view service or endpoint performance over time broken down by Kubernetes pod. Pivot to traces running on a particular pod to isolate the pod as a potential performance bottleneck.
Correlate service health with downstream dependencies, and view the blast radius: use the dependency map to visualize the communication flow between your service, its upstream callers, and its downstream dependencies. View the throughput of upstream callers and the errors and latency of downstream dependencies to quickly spot potential hotspots elsewhere in your system that may be affecting, or affected by, your service.
Pivot to logs, metrics, and traces in context when troubleshooting services or edges in a service map:
Quickly view slow or erroring traces: often the most interesting traces are the ones that are slow or contain errors. When inspecting a service or endpoint, use the Traces tab to view the slowest traces, erroring traces, or both, among the traces that flow through the currently inspected service or endpoint:
Save interesting findings to a Worksheet: most charts (including service dependency maps) in the Service Explorer can be added to Worksheets, which can be used as notebooks during an incident or simply to extend the out-of-the-box functionality by building custom views on the data.
Full flexibility, great economics, and industry-leading trace retention
I’ve worked with customers who wanted to use the out-of-the-box service map visualizations in their observability product to build a map of business services and their interactions, but were hamstrung by those products’ inflexibility: the rich user experiences were tightly coupled to the data that came from the proprietary agent.

The service map visualization in Service Explorer is available in custom dashboards and Worksheets, and it can be used with any data, not just OpenTelemetry data, using the power of OPAL, our data-processing language. In addition to building business service maps, our customers are building service maps in our platform using log and metric data alone:
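To make the shape of that transformation concrete, here is a conceptual Python sketch of the same idea: aggregate structured log records into caller-to-callee edges, and the map follows. The field names are invented for illustration, and in Observe you would express this over your actual log dataset in OPAL rather than in Python.

```python
from collections import Counter

# Hypothetical structured log records; the "caller"/"callee" field names are
# invented -- any fields that identify the two sides of a request
# (client IP, gateway route, queue name, ...) would work just as well.
logs = [
    {"caller": "web", "callee": "checkout", "status": 200},
    {"caller": "web", "callee": "checkout", "status": 500},
    {"caller": "checkout", "callee": "payments", "status": 200},
]

# A service map is just an edge list with per-edge aggregates.
edges = Counter((r["caller"], r["callee"]) for r in logs)
errors = Counter((r["caller"], r["callee"]) for r in logs if r["status"] >= 500)

for (src, dst), count in edges.items():
    print(f"{src} -> {dst}: {count} requests, {errors[(src, dst)]} errors")
```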
We also have customers who use the included visualizations, such as the directed graph, for completely new use cases like customer journeys, canonical representations of how customers interact with the system that are tied directly to the underlying telemetry data for customer-centric troubleshooting:
These customers are able to align executives, SREs and on-call folks, and development teams around the customer. As one customer put it, “it doesn’t matter if all our infra monitors are green if the customer is unhappy!”
All of these industry-first capabilities, combined with Observe’s industry-leading 13-month trace retention and unparalleled economics, are made possible by an architecture built on a data lake, with schema-on-demand for effortless transformation of any data at petabyte scale. Come take it for a spin yourself with a free trial!