Unless you’re living under a rock, you’ve probably heard of Datadog, one of the most loved and used observability platforms currently available.
Recently, we at Finout set out on a journey to shine a bright light on Datadog’s pricing model, to help organizations better understand it, prepare for the end-of-month invoice, and optimize their bottom line.
The main reason for us to go down this rabbit hole is that the Datadog platform lacks observability into its own cost (yep, I see the irony here), making it quite challenging to reason about that cost, optimize it, and be alerted on ominous increases.
In the upcoming series of posts, I’ll cover:
- Why should you take an interest in your Datadog costs?
- How the Datadog pricing model works, and the lessons learned as a Datadog user.
- The approaches we tried to crack the Datadog cost/usage model.
- What can we do to get Datadog costs under control?
This article, the first in the series, tries to answer the most basic question of them all: why should you care about your Datadog costs?
First, a disclaimer - Datadog is an AMAZING product. Although throughout this series I’m going to cover how costly it is, it’s also very valuable, and the observability it provides is exceptionally good. You pay a lot, but you also get quite a lot in return.
I feel it’s important to explicitly mention it before we dive in.
Table of contents
- Datadog Pricing Model 101
- Why Should We Even Care About Datadog Costs?
- It’s Not Your Core Competence
- Scalable - Throughput and $ Wise
- Implicitly Increases With Infrastructure Scaling
- Difficult To Forecast
- So Why Not Just Use Datadog Usage Dashboards?
- To Care Or Not To Care
- Next Up
Datadog Pricing Model 101
Datadog has numerous products, for various use cases, and most of them have add-ons with additional functionality, capabilities, and cost. But the general rule of thumb you need to understand is that the more you use, the more you pay:
You are billed for the hosts you monitor or profile, for the volume (GB) of logs you send and index, and so on.
If you know, or at least have a hunch, how much you’re going to use, you can:
- Commit to a usage level for the relevant Datadog products and get a decent discount.
- Prepay a “base fee” - an upfront payment that gets you discounted prices for the products you use, with any usage then billed against that prepaid amount. It resembles a debit card, in a way.
But the key here is that you need to know how much to commit to, and then basically wait and pray you won’t overuse it.
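To make that trade-off concrete, here is a minimal sketch of the commit-versus-overage math. The rates, the commitment size, and the overage premium are made-up placeholders for illustration - not actual Datadog prices, which depend on the product and your contract.

```python
# Hypothetical numbers for illustration only -- these are NOT actual Datadog prices.
COMMITTED_HOSTS = 100      # hosts committed to up front
COMMITTED_RATE = 15.0      # assumed $ per host per month at the discounted, committed rate
ON_DEMAND_RATE = 18.0      # assumed $ per host per month for usage above the commitment

def monthly_host_bill(actual_hosts: int) -> float:
    """Committed hosts are billed at the discounted rate; anything above spills to on-demand."""
    committed_part = min(actual_hosts, COMMITTED_HOSTS) * COMMITTED_RATE
    overage_part = max(actual_hosts - COMMITTED_HOSTS, 0) * ON_DEMAND_RATE
    return committed_part + overage_part

print(monthly_host_bill(100))  # exactly on commitment: 1500.0
print(monthly_host_bill(140))  # 40 hosts of overage:   2220.0
```

The whole game is picking the commitment well: commit too low and the overage gets billed at the more expensive on-demand rate; commit too high and you pay for capacity you never use.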
Why Should We Even Care About Datadog Costs?
If you’re asking that, you are either:
- Very optimized - and if so, real kudos to you! I know how complex achieving this is.
- Still at a small scale, so your Datadog costs are limited.
- You’ve never used Datadog, and therefore don’t understand its pricing model or, in turn, how easy it is to end up with a very large invoice.
Let’s break it down, and understand what makes Datadog different from your main cloud provider.
It’s Not Your Core Competence
First, let’s agree that a typical Datadog invoice can easily get to 4-10% of your cloud provider invoice - and since we know how expensive your cloud provider is, you can now understand how expensive Datadog is too.
Datadog is NOT your core competence, it’s a supporting tool - a very good one, and a very important one, but a supporting tool nonetheless. It’s not the EC2 instances that keep your business available or the underlying storage that stores the data that gives you a competitive edge, it’s an observability platform that helps your operations organization keep the lights on and the engine going.
You are paying, because it’s not your core competence - you’re paying because you want to focus on building your core value.
That being said, we should be vigilant about how much we pay for services outside our core competence too. And since there’s always room for “more monitoring”, and more monitoring means more cost - where do we draw the line?
Scalable - Throughput and $ Wise
It’s crazily scalable, so anything you throw at it will be ingested, stored, available for you to query, and alerted on. Sounds amazing, right?
Well, it is indeed amazing - but anything you throw at it also gets billed.
And that’s what makes it so easy to get out of hand - launching a new service that sends too many logs can easily be the difference between $100 a month and $1,500 a month. This can be due to developer error, or simply because the service you’ve just launched serves so much traffic that even the minimal necessary logging causes excessive costs (this happened to me, and I’ve written about our log cost optimization effort).
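As a back-of-the-envelope illustration of how quickly a chatty service adds up, here’s a rough calculation. The traffic profile and the per-GB ingestion rate are assumptions made up for the example, not real Datadog prices, and log indexing is billed separately on top of this.

```python
# Back-of-the-envelope log ingestion estimate -- every number here is an illustrative assumption.
requests_per_second = 200
log_lines_per_request = 5
avg_line_size_bytes = 400
assumed_price_per_ingested_gb = 0.10   # made-up $/GB rate, not an actual Datadog price

seconds_per_month = 60 * 60 * 24 * 30
bytes_per_month = (requests_per_second * log_lines_per_request
                   * avg_line_size_bytes * seconds_per_month)
gb_per_month = bytes_per_month / 1024**3

print(f"~{gb_per_month:,.0f} GB/month -> ~${gb_per_month * assumed_price_per_ingested_gb:,.0f}/month for ingestion alone")
```

Under these assumptions the service ingests roughly 1 TB of logs a month for about $100. Leave a debug logger on by mistake, or multiply the line count by ten, and the same service lands an invoice an order of magnitude higher - before indexing is even counted.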
Implicitly Increases With Infrastructure Scaling
As the business grows you add more infrastructure (EC2 instances, serverless functions, etc.), and you implicitly add more resources to be monitored, which in turn increases the amount Datadog bills you.
This is the expected behavior - you just need to keep your unit economics in check, i.e. make sure the increase in monitoring cost really is linearly correlated with your business and infrastructure costs.
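One simple way to keep that correlation honest is to track monitoring spend as a share of infrastructure spend and flag months where the ratio drifts. A minimal sketch, assuming you can pull both monthly totals from your billing data; the figures and the 8% threshold are arbitrary examples, not a benchmark:

```python
# Track Datadog spend as a share of infra spend; flag months where the ratio drifts.
# All figures and the threshold below are illustrative assumptions.
monthly_spend = [
    # (month, infra_cost_usd, datadog_cost_usd)
    ("2023-01", 100_000, 5_000),
    ("2023-02", 110_000, 5_600),
    ("2023-03", 115_000, 9_800),   # monitoring grew much faster than the infrastructure did
]

RATIO_THRESHOLD = 0.08  # flag if Datadog exceeds 8% of infra spend (pick your own number)

for month, infra, datadog in monthly_spend:
    ratio = datadog / infra
    flag = "  <-- investigate" if ratio > RATIO_THRESHOLD else ""
    print(f"{month}: Datadog is {ratio:.1%} of infra spend{flag}")
```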
Difficult To Forecast
Lastly, it’s quite hard to predict usage, and therefore to forecast the expected cost.
Ask your average developer what their service’s throughput is and what volume of logs each request generates - I doubt they could give you the numbers. And even if they could, there’s still the process of translating usage into estimated cost.
Alternatively - what’s the expected number of hosts we’re going to use to support the Black Friday sale or the Super Bowl halftime? And should we commit to this number or our average regular usage?
What additional services are we about to develop - and how many compute resources, serverless functions, logs, and metrics are they going to use?
And since those numbers are hard to come by, the right commitment to buy from Datadog is also hard to come by. Most of the time, the de facto approach is to start using the product with some commitment (or none) and adjust it according to actual usage (which is forever changing).
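In practice, “adjust the commitment according to actual usage” often boils down to something like the sketch below: look at recent usage and commit to a conservative percentile of it, leaving the spikes to on-demand rates. The usage numbers and the choice of percentile are assumptions for illustration, not a recommendation from Datadog.

```python
# Size a commitment from recent usage: commit near the median and let spikes
# spill over to on-demand rates. All numbers and the percentile are assumptions.
daily_host_counts = [92, 95, 97, 94, 96, 130, 128, 98, 97, 96, 95, 99, 140, 97]

def suggested_commitment(usage: list[int], percentile: float = 0.5) -> int:
    """Return the usage value at the chosen percentile of the observed history."""
    ordered = sorted(usage)
    index = round(percentile * (len(ordered) - 1))
    return ordered[index]

print("suggested commitment:", suggested_commitment(daily_host_counts))  # around the median
print("peak usage:", max(daily_host_counts))                             # spikes left to on-demand
```

Whether committing at the median, a higher percentile, or the peak makes sense depends on how the committed and on-demand rates compare - which is exactly the kind of analysis that’s hard to do without visibility into cost.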
So Why Not Just Use Datadog Usage Dashboards?
It’s true that Datadog offers usage metering, which gives you a general understanding of where you stand and what you use.
But does the average developer or product owner understand what 4 TB of indexed logs at 7-day retention means in terms of the cost and operability of the service?
Could we get by with 3.5 TB at 3-day retention?
And how much would we save if we did?
Do they understand how this can be optimized - and whether it even should be?
Do they understand how much it costs, which portion of it is committed up-front, and which part gets billed on a pay-per-use pricing model?
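To give those questions a concrete shape, here’s a rough comparison of the two scenarios mentioned above. The average event size and the per-million-events indexing rates are made-up placeholders, not real Datadog prices - plug in your own contracted rates.

```python
# Rough comparison of two log-indexing scenarios -- every rate below is an assumed
# placeholder, not an actual Datadog price; substitute your contracted rates.
avg_event_size_kb = 1.0                                 # assumed average size of an indexed log event
assumed_price_per_million_events = {7: 2.50, 3: 1.60}   # assumed $/million events by retention (days)

def monthly_indexing_cost(indexed_tb: float, retention_days: int) -> float:
    events_in_millions = indexed_tb * 1024**3 / avg_event_size_kb / 1_000_000
    return events_in_millions * assumed_price_per_million_events[retention_days]

current = monthly_indexing_cost(4.0, retention_days=7)
proposed = monthly_indexing_cost(3.5, retention_days=3)
print(f"current  (4 TB, 7-day retention):   ${current:,.0f}/month")
print(f"proposed (3.5 TB, 3-day retention): ${proposed:,.0f}/month")
print(f"estimated savings:                  ${current - proposed:,.0f}/month")
```

Under these invented rates, the answer would be a few thousand dollars a month - but the point is that someone has to do this translation, and the usage dashboards don’t do it for you.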
To Care Or Not To Care
We pay for Datadog so someone else will handle the headache of managing such a complex platform, but when the Datadog expense gets to be a headache of its own - this is the time to regroup and rethink your strategy.
Through the right usage analysis - turned into usage optimizations and better commitment allocation - it’s possible to save a large chunk of the end-of-month bill. A chunk that can be in the range of tens or even hundreds of thousands of dollars a year.
And since, once your bill gets painful enough, you’re going to do the maths and recalculate your commitments from time to time anyway - why not do it better, and save more?
Next Up
This post serves as an introduction to the challenges and the importance of the topic. You can find the link to the original blog post here.
In the next post, we’ll talk about the Datadog pricing model and its pitfalls.
Click here to continue to the next blog post.
As always, thoughts and comments are welcome on Twitter at @cherkaskyb
Read More About Datadog Costs
How Much Does Datadog Cost?
Understanding Datadog's pricing model is crucial when evaluating it as a solution. Explore the various factors that influence Datadog's pricing and gain insights into its cost structure. Additionally, discover effective considerations for managing usage-based pricing tools like Datadog within the context of FinOps.
Read more: How Much Does Datadog Cost?
Part II: The Magic That Is In Datadog Pricing
In the second part of the blog series, written by our talented Software Engineer, Boris Cherkasky, we cover how Datadog products are billed in general and uncover the factors that sometimes lead to unexpected end-of-month invoices.
Read more: Part II: The Magic That Is In Datadog Pricing
Part III: Data Puppy - Shrinking Data Dog Costs
In the third part of the blog series written by our talented Software Engineer, Boris Cherkasky, you will discover the key factors to consider for effectively managing your Datadog costs. Boris will guide you through the process of uncovering the hidden potential for Datadog optimization, enabling you to make the most out of this powerful platform.
Read more: Part III: Data Puppy - Shrinking Data Dog Costs
Datadog Pricing Explained
Discover the intricacies of Datadog pricing, explore key features such as debug, custom metrics, and synthetic monitoring, and learn strategies to optimize costs without compromising on functionality.
Read more: Datadog Pricing Explained
Datadog Debug Pricing
Datadog Debug offers developers the remarkable ability to streamline bug resolution and optimize application performance. To fully harness the potential of this invaluable tool, it is important to grasp its pricing structure, evaluate the value of its advanced features for your specific debugging requirements, and identify key elements that influence Debug pricing.
In this blog post, we dive deep into these essential aspects, providing you with the knowledge needed to make informed decisions and leverage Datadog Debug effectively for enhanced development workflows.
Read more: Understanding Datadog Debug Pricing
Datadog Custom Metrics Pricing
Datadog custom metrics empower businesses to capture and analyze application-specific data points, tailored to their unique use cases. The true potential of Datadog custom metrics lies in the precise insights they offer into application performance. Therefore, comprehending the product's pricing structure and evaluating the value of advanced features becomes crucial in making informed decisions to optimize costs effectively.
Read more: Understanding Datadog Custom Metrics Pricing
Datadog Synthetic Pricing
Integrating Datadog Synthetic Monitoring into your monitoring and observability strategy is a vital step for organizations seeking to proactively monitor and optimize their applications, while ensuring exceptional user experiences and mitigating risks.
In this blog, we will dive into the Datadog Synthetic pricing structure and explore the key factors that influence these costs. By understanding these aspects, you will be equipped to make informed decisions and leverage the full potential of Datadog Synthetic Monitoring.
Read more: Understanding Datadog Synthetic Pricing
Optimizing Datadog Costs
Discover effective cost optimization strategies for utilizing Datadog to its full potential without incurring unnecessary expenses. By implementing these best practices, organizations can achieve maximum efficiency with Datadog while ensuring a high level of observability. Learn how to reduce monitoring costs without compromising on the quality of insights and monitoring capabilities.
Read more: Optimizing Datadog Costs