Optimize your Microsoft Sentinel pricing

Koos Goossens
6 min read · Feb 2, 2022

… and get the most out of capacity reservation discounts!

Update November 18th 2022

I've since updated the PowerShell script mentioned in this blog post and created a fully automated auto-scale solution based on a GitHub Actions workflow. Once you've finished reading this article, please take a look at the sequel as well. Because the sequel is always better, right? 😉

Introduction

That Microsoft Sentinel is built on Azure Log Analytics no longer surprises most people. But I am a little surprised that many still don't fully understand how Sentinel is priced, or realize that you also need to pay for Log Analytics on top of it.

In this article I hope to clear up some of the confusion and help you leverage the different pricing tiers to get the most out of the discounts available.

Wait, what?!

Yes, you read that correctly. When deploying Microsoft Sentinel, you're billed for every gigabyte you ingest into Sentinel on top of the costs you already generate for ingesting that same gigabyte into the underlying Log Analytics workspace.

Microsoft Sentinel isn’t realy a stand-alone Azure resource in itself. It’s actually a solution (SecurityInsights) you enable within an Azure Log Analytics workspace. You still pay for data ingestion into the workspace, and a separate fee for the additional Sentinel functionality.

Schematic to illustrate the additional Sentinel ingest fee on top of the already existing log analytics fee

How much does Sentinel cost?

This is hands-down the most difficult question to answer when customers ask me, although it's obviously very understandable that they do as part of a project. Three factors determine the answer:

  1. The biggest impact on total costs will come down to how much data you’re planning to ingest into Sentinel.
  2. You're billed for data ingestion (per GB) and there are several pricing tiers with their respective discounts available.
  3. Data retention is the last part of the equation. You get three months of data retention for free once the Sentinel solution is enabled. For every additional month you want to retain your data, you're billed accordingly (per GB, per month of extra retention), up to a limit of 730 days.

For the sake of this article I'm going to focus on the second item on this list: pricing tiers. Besides pay-per-GB there are several capacity reservation options available, each with its respective discount. Leveraging the optimal pricing tiers will bring down costs.

The key takeaway here is that Sentinel and Log Analytics each have their own pricing tiers (with different discount rates), and the thresholds for getting the most out of each aren't as straightforward as the tier names would suggest.

Both Sentinel and Log Analytics each have their own pricing tier options

Calculate pricing tier threshold values

“Please tell me, how much is enough?” — Skyler White

Microsoft Sentinel and Log Analytics (Azure Monitor) each have their respective pricing overview pages. There you'll find the standard rates for your Azure region, displayed in your local currency.

You'll notice that the capacity reservation options are displayed per day instead of per GB. This is because once you opt for one of the capacity reservation pricing tiers, you're actually paying Microsoft in advance (reserving capacity) and getting a discount in return.

If we break the daily price of each capacity reservation tier down to an effective per-GB price, we can determine the optimal threshold for moving up to the next pricing tier.
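As a hypothetical example (the numbers below are purely illustrative, not actual Microsoft rates): say pay-per-GB costs €2.50 per GB and the 100 GB/day reservation tier costs €220 per day. The effective rate at that tier is €220 ÷ 100 GB = €2.20 per GB, and the break-even point lies at €220 ÷ €2.50 ≈ 88 GB per day. In other words: from roughly 88 GB of daily ingest onwards, the 100 GB reservation is already cheaper than paying per GB, even though you're reserving more capacity than you're actually using.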

Note that most organizations might receive discounts on the prices listed by Microsoft. This can be either because they have an engagement with a Microsoft Partner acting as a Cloud Service Provider (CSP), or because they have an Enterprise Agreement (EA) with Microsoft directly.

I’ve created a simple Excel sheet which you can use to calculate your own pricing tier thresholds based on the actual prices you pay for each tier.

Provide your own rates and the sheet will calculate when to upgrade your pricing tier to the next level.

Screenshot of the Excel sheet which can calculate tier thresholds for you based on your own prices

You can download this Excel calculation sheet from my GitHub page.

Because you'll receive higher discounts on the Sentinel part of the costs, you'll notice that the Sentinel thresholds for upgrading to the next level are probably much lower as well. So it's actually a good thing that you can control the pricing tiers of Log Analytics and Sentinel separately.

We’ll see an example of this in the demo below.

Get-AzSentinelPriceRecommendation.ps1

The next step is to determine the average daily ingest rate of your (Sentinel) workspaces to see where you can save some money.

This PowerShell script will:

  • First loop through all your subscriptions and find all workspaces deployed.
  • Next, it will perform a KQL query against each workspace to determine the average daily data ingest over the last month (a minimal sketch of this step follows below the list).
  • These results are then compared with a fixed table of thresholds (set at the beginning of the script) to determine what the optimal pricing tier is.
  • Lastly, it will check if the Sentinel solution is enabled on the workspace and will repeat the comparison but now with a different table with different threshold values.
  • All results will be gathered in an overview and will automatically be exported as a CSV at the end.
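To give you an idea of what that query step looks like, here's a minimal sketch. It's not the full script: it assumes the Az.Accounts and Az.OperationalInsights modules, an authenticated session, and a $workspace object retrieved earlier with Get-AzOperationalInsightsWorkspace.

```powershell
# Average daily billable ingest over the past month.
# Note: the Usage table reports Quantity in MB.
$query = @"
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize totalGB = sum(Quantity) / 1024.
| extend avgDailyGB = round(totalGB / 31, 2)
"@

# Run the query against the workspace (CustomerId is the workspace GUID)
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query

$avgDailyGB = ($result.Results | Select-Object -First 1).avgDailyGB
Write-Host "Average daily ingest: $avgDailyGB GB"
```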

The thresholds currently in the script are based on "list" prices for the West Europe region as of February 2nd, 2022. Please use the Excel sheet mentioned earlier to determine the thresholds that suit your environment best.

This particular example highlights a workspace generating a daily average of 165 GB of data ingest. Based on the rates for this environment, it's recommended to upgrade Log Analytics and Sentinel to capacity reservation levels of 100 GB and 200 GB respectively.

You can download this PowerShell script from my GitHub page.

ARM deployment

Most larger organizations leverage infrastructure-as-code principles to deploy their Azure resources, based on ARM or Bicep templates for example.

Microsoft's documentation describes how to deploy the correct pricing “sku” for your Log Analytics workspace, but information about Sentinel's “sku” is nowhere to be found!

Well, let me help you with that. Luckily, the sku can indeed be provided as part of the properties parameter inside the /solutions section of your template:

ARM template for Log Analytics with Microsoft Sentinel solution. Both with pay-per-GB pricing tier
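A trimmed-down sketch of that template (API versions and parameter names are examples; the complete version is on my GitHub page):

```json
{
  "resources": [
    {
      "type": "Microsoft.OperationalInsights/workspaces",
      "apiVersion": "2021-06-01",
      "name": "[parameters('workspaceName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "sku": {
          "name": "pergb2018"
        }
      }
    },
    {
      "type": "Microsoft.OperationsManagement/solutions",
      "apiVersion": "2015-11-01-preview",
      "name": "[concat('SecurityInsights(', parameters('workspaceName'), ')')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('workspaceName'))]"
      ],
      "plan": {
        "name": "[concat('SecurityInsights(', parameters('workspaceName'), ')')]",
        "publisher": "Microsoft",
        "product": "OMSGallery/SecurityInsights",
        "promotionCode": ""
      },
      "properties": {
        "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('workspaceName'))]",
        "sku": {
          "name": "PerGB"
        }
      }
    }
  ]
}
```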

If you'd rather use capacity reservation, you can resort to the ARM template below. Note that an extra parameter, capacityReservationLevel, is now required:

ARM template for Log Analytics with Microsoft Sentinel with different capacity reservation pricing tiers
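The relevant changes sit in both sku blocks. For the Log Analytics workspace (the reservation levels shown are just examples):

```json
"sku": {
    "name": "capacityreservation",
    "capacityReservationLevel": 100
}
```

And for the Sentinel (SecurityInsights) solution:

```json
"sku": {
    "name": "CapacityReservation",
    "capacityReservationLevel": 200
}
```

Note that the two levels can differ, which ties back to the earlier point about controlling both pricing tiers separately.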

Dynamic SKU?

I hear you think: "OK, that's great and all. But what if I want to use one template for multiple workspaces, each with a different SKU?"

Yes, you're right. Because we need the extra capacityReservationLevel parameter once we step up from pay-per-GB, we need to make the whole SKU property dynamic.

This can be solved by using additional variables in your template, whose contents are based on the outcome of an if() statement.

Let me explain:

dynamic variables where the contents are dependent on the outcome of an if statement
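A sketch of how those variables can be constructed (the parameter names are illustrative; the full template is on my GitHub page):

```json
"variables": {
    "sku": "[if(equals(toLower(parameters('pricingTier')), 'capacityreservation'), json(concat('{\"name\": \"capacityreservation\", \"capacityReservationLevel\": ', string(parameters('capacityReservationLevel')), '}')), json('{\"name\": \"pergb2018\"}'))]",
    "sentinelSku": "[if(equals(toLower(parameters('pricingTier')), 'capacityreservation'), json(concat('{\"name\": \"CapacityReservation\", \"capacityReservationLevel\": ', string(parameters('sentinelCapacityReservationLevel')), '}')), json('{\"name\": \"PerGB\"}'))]"
}
```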

In the example above, the variable sku (for Log Analytics) will contain either the name property alone or both name and capacityReservationLevel, depending on the value of pricingTier.

If pricingTier equals capacityreservation, both properties will be used. There's also an else (when pricingTier equals pergb2018) where only the first is used.

The reason for the toLower() function is that you never know whether someone will provide the pricingTier parameter in lowercase, uppercase, or a mix of both. You still want the dynamic variable to work even if someone provides cApAcItYrEsErVaTiOn as a parameter value. 😎

The json() and concat() functions are then used to assemble everything into valid JSON.

Defining the Log Analytics workspace and the Sentinel solution within your resources section will change a bit as well. You'll no longer refer to static template parameters, but use the dynamic variables instead:
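In sketch form (reusing the variable names from above), the workspace's properties become:

```json
"properties": {
    "sku": "[variables('sku')]"
}
```

…and the Sentinel solution's properties:

```json
"properties": {
    "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('workspaceName'))]",
    "sku": "[variables('sentinelSku')]"
}
```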

The complete ARM template with different parameter files can be found on my GitHub page as well.

Conclusion

I hope that by sharing my insights and experiences, others will benefit by optimizing their cost strategy, and that a lot of money will be saved by doing so! I've seen some very nice examples in the field already. 💰💰💰

This might create new room within your budgets, and new opportunities to perhaps onboard additional log sources and make your environment even safer than it currently is.

Stay safe!

— Koos
