The Observability Cloud: It’s All About the Data

By Jeremy Burton | November 9, 2022

Not only is today “Launch Day”, but we also just passed the 5th anniversary of Observe’s founding. This makes it a great time to pause and reflect on just how far we’ve come and, more importantly, where Observe is heading.

When we started out, we believed that the only way to eliminate the complexity involved in troubleshooting distributed applications was to solve the inherent data problem that existed in almost every organization. The biggest problem was that data was siloed, and each silo needed to be analyzed by a specialized tool. And, usually, a big-brained human was required to correlate what each tool was saying to determine the root cause when troubleshooting.

What we did not foresee back then, but now see in 90% of our customer interactions, is that the data required to troubleshoot applications has grown 40% year over year since then. And because incumbent tool vendors typically charge by the volume of data ingested, the cost is crippling.

We figured customers didn’t need another siloed tool, but rather an entirely new approach. Along with it, they needed an entirely new architecture, one that would change the economics of deployment by an order of magnitude or more. Five years of work on that approach culminated in what we announced today: The Observability Cloud.

Observe’s entirely new approach focuses on eliminating data silos and on providing quick access to relevant context while troubleshooting. Since day one we’ve been able to ingest any kind of event data into a single database, or Data Lake as we call it today. We then built the language constructs needed to curate event data (logs initially, followed by metrics) into the Data Graph, which is critical to delivering relevant context. Next, we built visualizations, implemented alerting, and delivered pre-built Data Apps for popular services like AWS and Kubernetes so teams could get up and running quickly. And today, we’re introducing full support for distributed tracing based on OpenTelemetry.

It’s been a long, but necessary, journey. We wanted to meet users where they are today and provide a roadmap to observability. If they have millions of lines of code instrumented with logs, no problem: we’ll make those logs easier to work with. If they have thousands of custom metrics, we’ll take those and link them to their logs. And if all of this can ultimately be done via fresh new OpenTelemetry instrumentation, awesome. But if I had to guess, logs and metrics will be kicking around for many years to come.
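
For readers who haven’t yet instrumented with OpenTelemetry, here’s a minimal sketch of what tracing instrumentation looks like in Python. The service name, span names, and collector endpoint below are illustrative placeholders, not Observe-specific settings; they assume the standard opentelemetry-sdk and OTLP exporter packages.

```python
# A minimal sketch of OpenTelemetry tracing in Python, assuming the
# opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed.
# The service name and endpoint are illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Describe the service emitting the telemetry.
resource = Resource.create({"service.name": "checkout-service"})  # placeholder name

# Batch spans and ship them to any OTLP-compatible collector endpoint.
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="collector.example.com:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Wrap a unit of work in a span; attributes become searchable context.
with tracer.start_as_current_span("process_order") as span:
    span.set_attribute("order.id", "ord-12345")  # placeholder attribute
    # ... application logic goes here ...
```

Because spans flow through a standard OTLP endpoint, any backend that speaks the protocol can receive them, which is what makes OpenTelemetry a practical end state for the roadmap above.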

And our entirely new architecture? Well, we suspected that using AWS S3 to store data, along with elastic compute and usage-based pricing, would deliver game-changing economics. To prove it, we recently performed a series of benchmarks comparing Observe to market-leading offerings for common use cases such as Host Monitoring and Kubernetes Log Analytics. The results are quite impressive: a 2-3x cost advantage even at modest host counts and ingest volumes, stretching to 10x or more at scale.

[Figure: Observe cloud scalability benchmark results]

Clearly, with usage-based pricing, your mileage may vary. Over the past two years, our customers have asked us not just for transparency into what drives their usage, but also for guardrails and controls to help manage it. With that in mind, we’re proud to announce today a slew of new features geared toward putting cost control back in the hands of the customer. And we won’t be asking anyone to filter, sample, or eliminate any of the precious data they need to troubleshoot problems.


Thank you for following us on our journey over the last few years. I hope you were able to join our live event earlier; if not, you can watch the replay here. And if we can help get you on the road to observability, and maybe save a little money along the way, please reach out to us.