CHARBEL NEMNOM - MVP | MCT | CCSP | CISM - Cloud & CyberSecurity

Optimize Microsoft Sentinel Log Retention With Azure Data Explorer


Once you ingest data into Microsoft Sentinel, you can retain that data in Sentinel for 90 days at no additional cost. With Microsoft Sentinel, you pay upfront at ingestion time for "most" of the services you use in Azure, including the first 90 days of retention. You can also purchase interactive-tier retention beyond 90 days, up to two full years.

If you want to run queries and build dashboards on your data inside Sentinel, you can only look back two years into your logs. But what if you want to look back into your logs "interactively" beyond two years, at a much lower cost?

In this comprehensive article, we will look at how to optimize Microsoft Sentinel log retention with Azure Data Explorer (ADX).

Introduction

Microsoft Sentinel lets you store data for long-term retention using the new archival tier. The archival tier is essentially a cold backup: it stores data cheaply and lets you restore it when you need it. You can use it to meet organizational or regulatory requirements to retain data for a longer period (up to 7 or even 12 years) at a much lower price point.

The benefit of the archival tier is that it's easy to configure. If you're running a small or medium organization, you don't have highly trained IT staff, and the staff you do have are getting pulled in a hundred different directions, that simplification matters.

The archive option is very easy to set up inside your Log Analytics workspace, and you can set a different retention policy for each table in your workspace. A typical Sentinel environment has anywhere from 80 to over 100 tables; you can go into each of those tables and adjust the retention as needed, up to seven or twelve years combined, as shown in the figure below.

Note: You can also keep data in the archived tier for up to 12 years. This was officially announced at Ignite 2023 and documented on the data retention and archive in Azure Monitor Logs page. If you don't see that option in your environment yet, you might need to file a support request with Microsoft to extend the archive tier retention to 12 years.

Change data retention settings on a single table
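The same per-table setting can also be changed programmatically. As a sketch, the snippet below uses the Az.OperationalInsights module; the resource group, workspace, and retention values are placeholders you would adjust, and parameter names may vary slightly between module versions.

```powershell
# Sketch: extend total retention (interactive + archive) for one table.
# -RetentionInDays is the interactive (hot) retention; -TotalRetentionInDays
# includes the archive tier. All names below are placeholders.
Update-AzOperationalInsightsTable `
    -ResourceGroupName "rg-sentinel" `
    -WorkspaceName "law-sentinel" `
    -TableName "SecurityEvent" `
    -RetentionInDays 90 `
    -TotalRetentionInDays 2556   # roughly 7 years
```

The difference between the two values is what lands in the archive tier.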

The cost of the archival tier is quite affordable for most customers. The downside of this native solution is that even though you can have two years of interactive (hot) retention inside Sentinel, most customers can't afford more than one year, because the data set gets bigger and bigger month over month. After about the one-year mark, it starts to become prohibitively expensive. So even though Sentinel does support two years of hot retention, for most customers we work with it's one year or less, somewhere between 90 days and one year.

Related: In the wake of Storm-0558, organizations were advised to scan logs between April 2021 and June 2023 (where available) for any evidence of the associated security concern. Increasing your log retention to a year or two is therefore ideal; it gives you the ability to go back and possibly find the time of initial access and indicators of compromise.

To access the archive tier, you have to run a search job or a restore job, and those jobs cost money; they are NOT free. If they're done poorly, they can be expensive, because you pay per gigabyte scanned by each (KQL-based) search. Customers may hesitate to take advantage of their archive data set unless they need it in special circumstances, feeling that the data is too cold for their needs. And for very large customers collecting many terabytes of data per day, the archive itself can still become an expensive line item on the bill.

Before you start to restore an archived log table from the archive tier, you should also be aware of the following limitations:

  • A minimum of 2 days of log data per restore.
  • An upper limit of 60 TB per single restore.
  • Up to 4 restores per table per week.
  • Up to 2 restore processes running concurrently per workspace.
  • One active restore at a time per table.

To learn more, see Restore logs in Azure Monitor.
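For orientation, a restore is created through the Log Analytics Tables API by appending an _RST suffix to the destination table name. The sketch below drives that API with Invoke-AzRestMethod; the subscription, resource group, workspace, time range, and API version are placeholders you would adjust against the current documentation.

```powershell
# Sketch: restore a slice of the archived SecurityEvent table into
# a SecurityEvent_RST table. All identifiers below are placeholders.
$path = "/subscriptions/<subId>/resourceGroups/rg-sentinel" +
        "/providers/Microsoft.OperationalInsights/workspaces/law-sentinel" +
        "/tables/SecurityEvent_RST?api-version=2022-10-01"
$body = @{
    properties = @{
        restoredLogs = @{
            sourceTable      = "SecurityEvent"
            startRestoreTime = "2023-01-01T00:00:00Z"
            endRestoreTime   = "2023-01-03T00:00:00Z"   # respects the 2-day minimum
        }
    }
} | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```

Deleting the _RST table when you're done stops the restore billing.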

So why might you want to look at Azure Data Explorer for long-term log retention?

Several reasons make Azure Data Explorer a great alternative for long-term retention. For example: you want hot-tier retention for more than two years; you want to run interactive queries beyond the free 90 days on a massive data set; or you need more than seven or twelve years of archival. Additionally, Azure Data Explorer (ADX) adds capabilities that aren't available with the Azure Monitor Log Analytics solution, such as changing the compute tier for your ingestion and improving query performance.

Related: Optimize log ingestion and access in Microsoft Sentinel.

Ingestion and Retention Pricing

Let's take a sample pricing example for the Switzerland North region. Of course, these numbers will change depending on the region you use and on when you check the pricing documentation or pricing calculator.

Switzerland North                                       Pricing
Log ingestion per GB ingested                           CHF 5.51
Basic Logs ingestion per GB ingested                    CHF 1.09
Basic Logs search queries per GB                        CHF 0.007
Interactive data retention beyond 90 days per GB        CHF 0.132
Data Archive per GB                                     CHF 0.025
Search Jobs per GB                                      CHF 0.00621
Data Restores per GB                                    CHF 0.125
Log Analytics Data Export per GB                        CHF 0.132

Based on the example above, we are looking at about CHF 5.5 per GB to ingest data into Sentinel and Log Analytics, so the biggest cost for Sentinel is, of course, data ingestion.

Then we have a less expensive ingestion tier called Basic Logs at roughly CHF 1.00 per GB. It's significantly cheaper, but it's also very restricted in how the data can be used: you can only store it for eight days, you can't create alert rules on it, and only a limited set of KQL query operators is available. Additionally, if you want to search Basic Logs data within those eight days, you pay CHF 0.007 per GB for interactive search.

So the business case for using Basic Logs for security data is pretty weak. We don't see it being used a lot, and that's a gap Azure Data Explorer (ADX) can fill as well.

Related: See how to optimize costs in Microsoft Sentinel.

Once you get past the included free 90 days, data retention costs CHF 0.132 per GB on the commercial side for extended retention (up to two years). Keep in mind that you are billed each month for the full volume of retained data. If you're collecting 1 TB a day, you're looking at 365 TB a year, and that data set gets bigger every month. At CHF 0.132 per GB per month, the price can add up, especially if you store data for more than a year in the interactive (Analytics plan) hot tier.
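To put rough numbers on that, the back-of-envelope sketch below (plain PowerShell, using the sample Switzerland North prices above) estimates the monthly retention bill once a year of data has accumulated at 1 TB per day; the function name is our own.

```powershell
# Back-of-envelope: monthly retention cost for a steady-state data set.
function Get-MonthlyRetentionCost {
    param(
        [double]$GBPerDay,      # daily ingestion volume in GB
        [int]$RetainedDays,     # how much history is being retained
        [double]$PricePerGB     # monthly price per retained GB
    )
    [math]::Round($GBPerDay * $RetainedDays * $PricePerGB, 2)
}

# One year of data at 1 TB/day (1024 GB), interactive tier at CHF 0.132/GB:
Get-MonthlyRetentionCost -GBPerDay 1024 -RetainedDays 365 -PricePerGB 0.132
# ~ CHF 49,336 per month

# The same data set in the archive tier at CHF 0.025/GB:
Get-MonthlyRetentionCost -GBPerDay 1024 -RetainedDays 365 -PricePerGB 0.025
# ~ CHF 9,344 per month
```

The gap between those two figures is exactly why the archive tier (or ADX) matters at scale.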

The archival tier is significantly cheaper, in the CHF 0.025 per GB range, but for big customers that can still add up quite a bit. Searching (CHF 0.00621 per GB) or restoring (CHF 0.125 per GB) data from the archive tier incurs a cost as well; it is possible to create a restore (or search) job that costs thousands of Swiss francs if you restore a massive chunk of the data set. When customers have petabytes of archive data, even restoring a small percentage of it can mean a large volume of data.

Lastly, there is a cost to export data out of the Log Analytics workspace. So you pay to ingest, you pay to archive internally, and you also pay to export if you send data out to destinations such as Azure Blob Storage or Event Hubs for a long-term archival solution. You are billed CHF 0.132 per GB of data exported from Azure Monitor Log Analytics. Billing for Data Export started on July 1, 2023.

Azure Data Explorer Integration

To integrate Azure Data Explorer with Microsoft Sentinel, there are generally four options that you might see documented.

Azure Data Explorer (ADX) and Azure Blob storage have long been recommended as long-term archival solutions for Sentinel. Microsoft recommended using ADX and Blob storage for long-term retention, but in practice that option was rarely used and was not well documented by Microsoft before the introduction of the archival tier in the Log Analytics workspace.

We can send data from Sentinel to ADX via an Event Hub, and we'll look at that option in more detail below. You can also send your data into Azure Data Lake Storage Gen2 (Azure Blob storage) with a very similar export capability; ADX can read from and ingest data in Blob storage, so you could put ADX on top of that.

The third option, which we've seen documented, is to send data to Sentinel and ADX simultaneously through two different data connectors. This is maybe a limited, more theoretical use case. The fourth option is to send certain data sets directly into ADX, skipping Sentinel ingestion entirely.

With the high price of ingestion directly into Sentinel, customers have to make a choice: do we collect "log A" or not? If collecting "log A" is too expensive, maybe we choose not to collect it. So there's a clear desire out there, and competing solutions exist, for the option to collect everything and sort it out later.

Sentinel with Event Hub and ADX

We can export data from the Sentinel (Log Analytics) workspace into an Event Hub, and an ADX data connection can then pull the Event Hub data in, as shown in the diagram below. This is one of the two competing ways to use ADX for long-term archival.

We want to highlight that long-term archival doesn't have to be the only business case for using ADX, but it seems to be the primary one.

Export from Log Analytics workspace to Event Hub

For large data sets retained for a long period, ADX can be cheaper. For smaller data sets that aren't retained for an extended period (less than two years, for example), and if this is purely a price-based decision, the archive tier inside the Log Analytics workspace is likely going to be cheaper. Price is not necessarily the only driving factor here, though.

For ADX, you can pay monthly or commit upfront with reserved instances (1 or 3 years) to reduce costs. You pay around CHF 0.25 per GB to export the data into ADX, but then you pay a lot less to retain the data there, so there's a tipping point where ADX becomes a more cost-effective solution than the archive tier in some cases.

You can also actively query the data in ADX for the entire life of the data set: you could put 100 years of data in ADX and query across the entire data set. ADX internally has its own hot and cold tiers, but that's more of a performance designation than an access one.
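Once the ADX cluster exists, you can even query it from the Sentinel Logs blade using the adx() cross-service pattern; in this sketch the cluster URI, database, and table names are placeholders.

```kusto
// Sketch: query two years of SecurityEvent data held in ADX directly
// from the Log Analytics / Sentinel query window.
// Cluster, region, and database names below are placeholders.
adx('https://<cluster>.<region>.kusto.windows.net/Sentinel-ADX-DB').SecurityEvent
| where TimeGenerated > ago(730d)
| summarize Events = count() by bin(TimeGenerated, 30d)
```

This keeps the archived data reachable from the same console analysts already use.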

Some customers want hot access to their data. One point to consider is that we currently don't have an AI solution that integrates with ADX, but that doesn't seem far-fetched in the coming years: a large language model, such as those behind OpenAI's ChatGPT, could be thrown at however many years of data you've already collected. Keeping the data in this accessible form means it could later be attached to such a service via an API connection.

Pricing the Azure Data Explorer integration with Sentinel can be a challenge: there is the data export cost from the Log Analytics workspace, an Event Hub cost, and the cost of your ADX infrastructure. Those costs can be difficult to predict or map out in advance if you're trying to do a cost comparison against the archive tier (check the following section for cost calculations).

The configuration or integration can be complex, at least initially when you set it up (we'll cover the configuration in more detail below). Another point worth mentioning is that alert rules and user and entity behavior analytics (UEBA) can't run against Azure Data Explorer. But alert rules (Analytics) in Sentinel can only look back 14 days inside the workspace anyway, so in practice this is less of a limitation than it sounds; and if you need to create an alert rule on ADX data, you can use a Logic App as a workaround.

The last point, which many customers misunderstand: it's not that you store data in Sentinel for a year and then store the next couple of years of data in ADX. Instead, you stream the data simultaneously at ingestion time to Sentinel and ADX; it's continuous export, so you have a full copy in both. You might have 90 days in Sentinel and two years in ADX, and those two years in ADX will overlap with the 90 days in Sentinel.

Sentinel with Blob Storage and ADX

The second competing option for long-term archival is to use Log Analytics workspace data export to send the logs into Azure Blob storage first, and then use ADX to either query or ingest the data from Blob storage, as shown in the diagram below.

Export from Log Analytics workspace to Azure Blob Storage

In this architecture, Azure Data Explorer stores its back-end data in Blob storage and ADX has a native ability to ingest and interact with Blob storage data.

This option might be somewhat less expensive because you're not paying for an Event Hub, though Blob storage is not free either. (By contrast, if you bring all your Sentinel logs over to ADX via Event Hubs, you're probably going to need a Premium Event Hub rather than Standard to handle the necessary volume.) On the other hand, this integration option might be a bit more complex than the Event Hub-based solution, since you're dealing with Blob storage directly.

The Blob storage solution has the benefit that you can query directly: you don't even have to bring the data into ADX. You can query the Blob storage from ADX as an external table (with a schema) and interact with the data that way, or you can ingest it into the actual ADX database if you'd like, so there's some flexibility. With the Event Hub solution, the data flows from the Event Hub through an ADX data connection into the table. We are not suggesting that one way is better than the other; choose the option that fits your use case.
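As a sketch of what querying Blob storage in place looks like, ADX's externaldata operator can read a blob directly without ingesting it; the storage account, container, blob path, SAS token, and column schema below are all placeholders.

```kusto
// Sketch: read exported JSON records straight from Blob storage.
// Storage account, container, blob path, and SAS token are placeholders;
// the declared schema must match the exported table's columns.
externaldata(TimeGenerated: datetime, Computer: string, EventID: int)
[
    h@'https://<storageaccount>.blob.core.windows.net/<container>/<path>.json;<SAS-token>'
]
with(format='multijson')
| where TimeGenerated > ago(30d)
| take 100
```

For recurring use, defining an external table over the container is cleaner than repeating the URL in every query.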

Sentinel and ADX in Parallel or Direct

In addition to Event Hubs and Blob storage, the more commonly used solutions for long-term archival, you could run ADX and Sentinel in parallel: you send data from your data sources directly into Sentinel and ADX at the same time, as shown in the diagram below.

Sentinel and ADX in Parallel

ADX can directly ingest much of the same data that Sentinel can, though there are some limitations: you might find a data set that Sentinel ingests natively, through an agent, or via API that is hard to send directly into ADX today.

For at least some of your data sets, this is a potential option: you send the data into Sentinel with its own connector for, say, 90 days, and simultaneously, with a separate connector, you send the data into ADX. That may skip some of the ingestion hurdles, but you're probably still going to go through an Event Hub (more on this below), so you might encounter some of the same complexity challenges in bringing the data into ADX.

Another interesting option to consider is sending certain data sets directly to ADX, completely bypassing Sentinel, as shown in the diagram below. For example, you might have large data sets such as Microsoft Entra ID (formerly Azure Active Directory) sign-in logs or the back-end logs from the Microsoft 365 Defender portal that have already been analyzed and alerted on by that security tool; the back-end data is more of an archival and hunting data set.

Sending data sets directly into ADX

In this case, it would make a lot of sense, except perhaps for losing some of the Machine Learning (ML) benefits, like user and entity behavior analytics (UEBA). For data such as non-interactive Microsoft Entra ID sign-in logs or some of the larger tables on the Microsoft 365 Defender side, sending that data directly to ADX, if you're already using it as a long-term archival solution, can give you hot, huntable archival data at a much lower cost.

Then you might have other data sets you had considered using Basic Logs for: large, expensive data sets you wanted purely for archival and reporting purposes. The tooling that Sentinel brings doesn't add much value to pure archival data sets, so it would make sense to send them directly into ADX if you have an ADX cluster to leverage.

For the remainder of this article, we will look at how to integrate Sentinel with ADX using Event Hubs for a long-term archival solution. To be clear upfront, this is not trivial to set up, but it's not overly complex either. We'll go through all the required steps in detail to integrate this solution successfully.

Prerequisites

To follow this article, you need to have the following:

1) Azure subscription — If you don’t have an Azure subscription, you can create a free one here.

2) Log Analytics workspace — To create a new workspace, follow the instructions to create a Log Analytics workspace. A Microsoft Sentinel workspace is required to ingest data into Log Analytics. You must also have read and write permissions on this workspace; at minimum, contributor rights.

3) Microsoft Sentinel — To enable Microsoft Sentinel at no additional cost on an Azure Monitor Log Analytics workspace for the first 31 days, follow the instructions here. Once Microsoft Sentinel is enabled on your workspace, every GB of data ingested into the workspace can be retained at no charge (free) for the first 90 days.

4) Event Hub Namespace — This could be Standard tier (limited to 10 event hubs per namespace) or Premium tier (up to 100 event hubs per processing unit). If you need to export more than 10 tables from the Log Analytics workspace, use Premium, or create more than one Standard Event Hub Namespace with 10 event hub topics each. Remember that you need one event hub per table (more on this below).

5) Create data export rules on the Log Analytics workspace to Event Hub Namespace (more on this below).

6) Azure Data Explorer (ADX) cluster and database (more on this below).

7) Create Target tables, Raw tables, and table mapping on Azure Data Explorer (more on this below).

8) Create Data connections between Event Hub and Azure Data Explorer (more on this below).

9) Optionally, modify the retention policy for the target tables in ADX. The default retention policy is 100 years, which is excessive in most cases. In this example, we will set the retention at the database level to 365 days rather than on individual tables (more on this below).
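Step 9 maps to two ADX management commands, run in the ADX query pane against the database; the database name below matches the example used later in this article.

```kusto
// Set the database retention (soft-delete) to 365 days instead of the
// default 100 years. Run each management command separately.
.alter-merge database ['Sentinel-ADX-DB'] policy retention softdelete = 365d

// Keep the last 7 days in the hot (SSD) cache for fast queries.
.alter database ['Sentinel-ADX-DB'] policy caching hot = 7d
```

Individual tables can override these with the equivalent `.alter table` commands if one data set needs different retention.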

The high-level architecture once this solution is fully deployed should look like the following:

Azure Data Explorer and Event Hub Architecture

Assuming you have all the prerequisites in place, take the following steps:

Microsoft Sentinel Log Retention With ADX

The following sections will describe all the required steps in detail to integrate Microsoft Sentinel with ADX:

Create Event Hub Namespace

First, you need to create an Event Hub Namespace in your subscription, for this example, we will create a Standard Event Hub Namespace which can have a maximum of 10 Event Hub topics.

To automate this process, you could use the following Azure PowerShell code to create an Event Hub namespace (Standard).

Launch PowerShell with administrator privileges, copy and paste the entire code below, and then run the following command: New-EventHubNamespace -resourceGroup "arg-weu-ehn-sentinel" -Region "West Europe".

Note: If you have more than 10 tables, you need to create an Event Hub namespace for every 10 tables.

<#
.SYNOPSIS
Create Event Hub Namespace.

.DESCRIPTION
Creates Event Hub namespaces (Standard) with Auto-Inflate and Zone Redundancy.

.NOTES
File Name : New-EventHubNamespace.ps1
Author    : Microsoft MVP/MCT - Charbel Nemnom
Version   : 1.0
Date      : 30-October-2023
Updated   : 31-October-2023
Requires  : PowerShell 7.3.x (Core)
Module    : Az Module

.LINK
To provide feedback or for further assistance please visit:
 https://charbelnemnom.com 

.EXAMPLE
New-EventHubNamespace -ResourceGroup <arg-weu-ehn-sentinel> -Region <West Europe>
This example will connect to your Azure account using the Subscription Id you selected, and then it will create a new Event Hub Namespace.
#>

function Write-Log {
    <#
    .DESCRIPTION 
    Write-Log is used to write information to a log file and to the console.
    
    .PARAMETER Severity
    parameter specifies the severity of the log message. Values can be: Information, Warning, or Error. 
    #>

    [CmdletBinding()]
    param(
        [parameter()]
        [ValidateNotNullOrEmpty()]
        [string]$Message,
        [string]$LogFileName,
 
        [parameter()]
        [ValidateNotNullOrEmpty()]
        [ValidateSet('Information', 'Warning', 'Error')]
        [string]$Severity = 'Information'
    )
    # Write the message out to the correct channel											  
    switch ($Severity) {
        "Information" { Write-Host $Message -ForegroundColor Green }
        "Warning" { Write-Host $Message -ForegroundColor Yellow }
        "Error" { Write-Host $Message -ForegroundColor Red }
    } 											  
    try {
        [PSCustomObject]@{
            Time     = (Get-Date -f g)
            Message  = $Message
            Severity = $Severity
        } | Export-Csv -Path "$PSScriptRoot\$LogFileName" -Append -NoTypeInformation -Force
    }
    catch {
        Write-Error "An error occurred in Write-Log() method" -ErrorAction SilentlyContinue		
    }    
}

function New-EventHubNamespace {
    <#
    .DESCRIPTION 
    New-EventHubNamespace is used to create an Event Hub namespace.
    
    .PARAMETER Region
    Parameter specifies the Azure region name.

    .PARAMETER ResourceGroup
    Parameter specifies the Resource Group name.
    #> 
    [CmdletBinding()]
    param (        
        [parameter(Mandatory = $true, HelpMessage = "Enter the desired Azure region for the Event Hub Namespace.")]
        $region,

        [parameter(Mandatory = $true, HelpMessage = "Enter a new resource group name for the Event Hub Namespace.")]
        [string]$resourceGroup
    )

    #! Install Az Module If Needed
    function Install-Module-If-Needed {
        param([string]$ModuleName) 
        if (Get-Module -ListAvailable -Name $ModuleName -Verbose:$false) {
            Write-Host "Module '$($ModuleName)' already exists, continue..." -ForegroundColor Green
        } 
        else {
            Write-Host "Module '$($ModuleName)' does not exist, installing..." -ForegroundColor Yellow
            Install-Module $ModuleName -Force  -AllowClobber -ErrorAction Stop
            Write-Host "Module '$($ModuleName)' installed." -ForegroundColor Green
        }
    }

    #! Install Az Accounts Module If Needed
    Install-Module-If-Needed Az.Accounts
    
    #! Install Az Event Hub Module If Needed
    Install-Module-If-Needed Az.EventHub

    $Context = Get-AzContext    

    if (!$Context) {
        Connect-AzAccount
        $Context = Get-AzContext
    }
    
    $subscriptionId = (Get-AzSubscription | Out-Gridview -PassThru -Title 'Select Azure Subscription').id
    If (!$subscriptionId) {
        Write-Log -Message "Please select your desired Azure Subscription!" -LogFileName EventHubNamespaceLog -Severity Warning          
        break
    }
    Set-AzContext -Subscription $subscriptionId | Out-Null

    Register-AzResourceProvider -ProviderNamespace Microsoft.EventHub | Out-Null
    
    $ehubLocations = Get-AzResourceProvider -ProviderNamespace Microsoft.EventHub
    $ehubLocation = $ehubLocations.ResourceTypes.Where{ ($_.ResourceTypeName -eq 'namespaces') }.Locations
    If ($region -notin $ehubLocation) {
        Write-Log -Message "The Azure Region specified: [$region] does not exist, please choose one of the following available regions: $ehubLocation" `
            -LogFileName EventHubNamespaceLog -Severity Warning          
        break
    }

    Write-Verbose "Executing: New-AzResourceGroup -Name $resourceGroup -Location $region"
    New-AzResourceGroup -Name $resourceGroup -Location $region

    $randomNumber = Get-Random
    $EventHubNamespaceName = "Sentinel-SecurityTables-$($randomNumber)"        
                    
    Write-Verbose "Executing: New-AzEventHubNamespace -ResourceGroupName $resourceGroup -NamespaceName $EventHubNamespaceName `
                -Location $region -SkuName Standard -SkuCapacity 12 -EnableAutoInflate -MaximumThroughputUnits 20"

    try {                    
        #Create Event Hub NameSpace
        Write-Log -Message "Create a new Event Hub Namespace: $EventHubNamespaceName in resource group: $resourceGroup" -LogFileName EventHubNamespaceLog -Severity Information
        Set-Item Env:\SuppressAzurePowerShellBreakingChangeWarnings "true"
        $ResultEventHubNS = New-AzEventHubNamespace -ResourceGroupName $resourceGroup `
            -NamespaceName $EventHubNamespaceName `
            -Location $region `
            -SkuName "Standard" `
            -SkuCapacity 12 `
            -EnableAutoInflate `
            -MaximumThroughputUnits 20 `
            -ZoneRedundant
        
        if ($ResultEventHubNS.ProvisioningState.Trim().ToLower() -eq "succeeded") {                        
            Write-Log -Message "$EventHubNamespaceName created successfully" -LogFileName EventHubNamespaceLog -Severity Information
        }                
    }
    catch {                    
        Write-Log -Message "$($_.ErrorDetails.Message)" -LogFileName EventHubNamespaceLog -Severity Error                  
        Write-Log -Message "$($_.InvocationInfo.Line)" -LogFileName EventHubNamespaceLog -Severity Error
    }
}

# Example usage:
New-EventHubNamespace -resourceGroup "arg-weu-ehn-sentinel" -Region "West Europe"

Once you run the PowerShell code above, you will see a new Event Hub Namespace created with Auto-Inflate and Availability Zones (Zone Redundancy) enabled as shown in the figure below. The Auto-Inflate feature in Event Hubs automatically scales up the number of throughput units, up to the maximum of 20 configured in this script, to meet your usage needs.

Create a New Event Hub Namespace

Related: Check the steps for Azure Event Hubs Capacity Planning.

Create Data Export Rules

The next step is to create data export rules from the Log Analytics workspace used by Sentinel to the Event Hub Namespace we created in the previous step.

Data export in a Log Analytics workspace lets you continuously export data for selected tables in your workspace. You can export to an Azure Storage account or to Azure Event Hubs as the data arrives in the Azure Monitor pipeline.

For this example, we will export to an Event Hub. Please note that the Log Analytics workspace supports 10 data export rules targeting 10 different Event Hub namespaces (E.g. you can export 100 Sentinel tables using 10 different data export rules).

Open the Azure Portal and then browse to the Log Analytics workspace that is used by Sentinel.

Next, go to the Data export blade under the Settings section and then select + New export rule as shown in the figure below.

Create a new export rule

On the Basics tab, you need to enter a rule name. As a best practice, give the export rule the same name as the Event Hub Namespace, so you can easily identify which rule is mapped to which Event Hub Namespace. Click Next to continue.

Export rule name

Next, on the Source tab, you need to filter by Solutions (in this case, "Microsoft Sentinel", "Security and Audit", and "Log Management"), which we'll export to Azure Data Explorer for long-term retention.

Remember that you can select a maximum of 10 tables per export rule if you are using a Standard Event Hub Namespace. As a best practice, you could group similar log-type tables in the same export rule (e.g. CommonSecurityLog, SecurityEvent, IdentityLogonEvents, Syslog, CloudAppEvents, AzureActivity, OfficeActivity, SigninLogs, AADNonInteractiveUserSignInLogs, AADRiskyUsers, etc.).

Please note that not all tables in Log Analytics are supported in data export currently, check the list of supported tables. Click Next to continue.

Select the source table to export

On the Destination tab, you need to select the Subscription where you created the Event Hub Namespace in the previous step, please note that the Log Analytics workspace and the Event Hub Namespace must be in the same Region (In this example, West Europe).

Next, you need to select the Event Hub Namespace. You can also select an Event Hub Name to send data from all tables in the rule to one specific event hub topic. However, it's recommended to leave that option unselected and let the export rule automatically create one event hub per table in the selected Event Hub Namespace. That's an important point to remember. Click Next to continue.

Select destination type

On the Review + Create tab, review all the options and then select Create.

Create export rule

Note: If you want to export more than 10 tables to Event Hub, then you need to repeat the same steps described above to create another export rule (You can define up to 10 enabled export rules in your workspace, and each can include a maximum of 10 tables. You can create more rules in the workspace if you have export rules in disabled state). You can also use PowerShell, Azure CLI, or the REST API to create export rules (Check how to create or update data export rules programmatically).
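As a programmatic sketch of the same rule, the snippet below uses the Az.OperationalInsights module; the resource names and subscription ID are placeholders, and the destination is the Event Hub Namespace resource ID (leaving out a specific event hub so one is created per table).

```powershell
# Sketch: create a data export rule for up to 10 tables.
# All resource names below are placeholders; run Connect-AzAccount first.
New-AzOperationalInsightsDataExport `
    -ResourceGroupName "rg-sentinel" `
    -WorkspaceName "law-sentinel" `
    -DataExportName "Sentinel-SecurityTables-Export" `
    -TableName @("SecurityEvent", "CommonSecurityLog", "SigninLogs") `
    -ResourceId "/subscriptions/<subId>/resourceGroups/arg-weu-ehn-sentinel/providers/Microsoft.EventHub/namespaces/<namespaceName>"
```

Repeating this with a different rule name and namespace covers each additional batch of 10 tables.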

Once you create and enable the export rule in Log Analytics, if you switch back to the Event Hub Namespace, you will see the 10 event hubs (named am- followed by the table name) automatically created within about 10 minutes; they show up under the Event Hubs/Overview blade as shown in the figure below.

Event Hubs mapping to Table names

This is what you will connect to on the Azure Data Explorer (ADX) side; more on this below.

Create ADX Cluster and Database

In the next step, we need to create an Azure Data Explorer (ADX) cluster and database. You can think of the ADX database as a Log Analytics workspace in ADX terminology.

To automate this process, you could use the following Azure PowerShell code to create a Standard Azure Data Explorer (ADX) Cluster and Database. The ADX cluster will be created with the following options. You can adjust the cluster size as needed:

  • Compute-optimized: Extra Small 2 vCPUs and 16 GB Memory
  • Compute specifications: Standard_E2ads_v5
  • Manual scale: 2 instance counts
  • Streaming ingestion: enabled
  • Double encryption: enabled
  • Availability zones: enabled

For more guidance around ADX sizing, you can check the official documentation page (Select an SKU for your Azure Data Explorer cluster).

The ADX database will be created with the following options. You can adjust the hot cache and extend the data retention as needed:

  • ADX database name: Sentinel-ADX-DB
  • Soft Delete Period: 365 days — In this case, the first 90 days will be stored in both the Log Analytics workspace and ADX, and the remaining 275 days will be stored in ADX only. If you don’t specify SoftDeletePeriod, the default is 100 years.
  • Hot Cache Period: 7 days — The time the data should be kept in the cache for fast queries. Hot cache data is kept in local SSD storage for faster query performance, while cold data is stored in reliable storage, which is cheaper but slower to access.
  • Kind: Read and Write

In this example, the last 7 days of data will be on the cluster SSD and the additional 358 days of data will be stored in Azure blob storage. You can run queries on the full 365 days of data.

Launch PowerShell with administrator privileges, copy and paste the entire code below, and then run the following command: New-ADXClusterDB -resourceGroup "arg-weu-adx-sentinel" -Region "West Europe".

Please note that creating a new Azure Data Explorer (ADX) cluster will take around 25 minutes to complete. So, you can grab a cup of coffee and then come back later!

<#
.SYNOPSIS
Create ADX Cluster and Database.

.DESCRIPTION
Creates Azure Data Explorer Cluster (Standard) and Database.

.NOTES
File Name : New-ADXClusterDB.ps1
Author    : Microsoft MVP/MCT - Charbel Nemnom
Version   : 1.0
Date      : 30-October-2023
Updated   : 31-October-2023
Requires  : PowerShell 7.3.x (Core)
Module    : Az Module

.LINK
To provide feedback or for further assistance please visit:
 https://charbelnemnom.com 

.EXAMPLE
New-ADXClusterDB -ResourceGroup <arg-weu-adx-sentinel> -Region <West Europe>
This example will connect to your Azure account using the Subscription Id you selected, and then it will create a new Azure Data Explorer Cluster and Database.
#>

function Write-Log {
    <#
    .DESCRIPTION 
    Write-Log is used to write information to a log file and to the console.
    
    .PARAMETER Severity
    parameter specifies the severity of the log message. Values can be: Information, Warning, or Error. 
    #>

    [CmdletBinding()]
    param(
        [parameter()]
        [ValidateNotNullOrEmpty()]
        [string]$Message,
        [string]$LogFileName,
 
        [parameter()]
        [ValidateNotNullOrEmpty()]
        [ValidateSet('Information', 'Warning', 'Error')]
        [string]$Severity = 'Information'
    )
    # Write the message out to the correct channel											  
    switch ($Severity) {
        "Information" { Write-Host $Message -ForegroundColor Green }
        "Warning" { Write-Host $Message -ForegroundColor Yellow }
        "Error" { Write-Host $Message -ForegroundColor Red }
    } 											  
    try {
        [PSCustomObject]@{
            Time     = (Get-Date -f g)
            Message  = $Message
            Severity = $Severity
        } | Export-Csv -Path "$PSScriptRoot\$LogFileName" -Append -NoTypeInformation -Force
    }
    catch {
        Write-Error "An error occurred in Write-Log() method" -ErrorAction SilentlyContinue		
    }    
}

function New-ADXClusterDB {
    <#
    .DESCRIPTION 
    New-ADXClusterDB is used to create an ADX Cluster.
    
    .PARAMETER Region
    Parameter specifies the Azure region name.

    .PARAMETER ResourceGroup
    Parameter specifies the Resource Group name.
    #> 
    [CmdletBinding()]
    param (        
        [parameter(Mandatory = $true, HelpMessage = "Enter the desired Azure region for the ADX Cluster.")]
        [string]$region,

        [parameter(Mandatory = $true, HelpMessage = "Enter a new resource group name for the ADX Cluster.")]
        [string]$resourceGroup
    )

    #! Install Az Module If Needed
    function Install-Module-If-Needed {
        param([string]$ModuleName) 
        if (Get-Module -ListAvailable -Name $ModuleName -Verbose:$false) {
            Write-Host "Module '$($ModuleName)' already exists, continue..." -ForegroundColor Green
        } 
        else {
            Write-Host "Module '$($ModuleName)' does not exist, installing..." -ForegroundColor Yellow
            Install-Module $ModuleName -Force  -AllowClobber -ErrorAction Stop
            Write-Host "Module '$($ModuleName)' installed." -ForegroundColor Green
        }
    }

    #! Install Az Accounts Module If Needed
    Install-Module-If-Needed Az.Accounts

    #! Install Az Kusto Module If Needed
    Install-Module-If-Needed Az.Kusto

    $Context = Get-AzContext    

    if (!$Context) {
        Connect-AzAccount
        $Context = Get-AzContext
    }

    $subscriptionId = (Get-AzSubscription | Out-Gridview -PassThru -Title 'Select Azure Subscription').id
    If (!$subscriptionId) {
        Write-Log -Message "Please select your desired Azure Subscription!" -LogFileName ADXClusterLog -Severity Warning          
        break
    }
    Set-AzContext -Subscription $subscriptionId | Out-Null

    Register-AzResourceProvider -ProviderNamespace Microsoft.Kusto | Out-Null

    $adxLocations = Get-AzResourceProvider -ProviderNamespace Microsoft.Kusto
    $adxLocation = $adxLocations.ResourceTypes.Where{ ($_.ResourceTypeName -eq 'clusters') }.Locations
    If ($region -notin $adxLocation) {
        Write-Log -Message "The Azure Region specified: [$region] does not exist, please choose one of the following available regions: $adxLocation" `
            -LogFileName ADXClusterLog -Severity Warning          
        break
    }

    Write-Verbose "Executing: New-AzResourceGroup –Name $resourceGroup –Location $region"
    New-AzResourceGroup –Name $resourceGroup –Location $region    

    $randomNumber = Get-Random -Maximum 100
    $ADXClusterName = "Sentinel-ADX-$($randomNumber)"

    Write-Verbose "Executing: New-AzKustoCluster -ResourceGroupName $resourceGroup -Name $ADXClusterName `
                -Location $region -SkuTier Standard -SkuCapacity 2 -SkuName Standard_E2ads_v5"

    try {                    
        #Create Azure Data Explorer Cluster
        Write-Log -Message "Create a new ADX Cluster: $ADXClusterName in resource group: $resourceGroup" -LogFileName ADXClusterLog -Severity Information
        Set-Item Env:\SuppressAzurePowerShellBreakingChangeWarnings "true"
        $ResultADXCluster = New-AzKustoCluster -ResourceGroupName $resourceGroup `
            -Name $ADXClusterName `
            -Location $region `
            -SkuTier "Standard" `
            -SkuCapacity 2 `
            -SkuName "Standard_E2ads_v5" `
            -EnableStreamingIngest `
            -EnableDoubleEncryption                                                              
                    
        if ($ResultADXCluster.ProvisioningState.ToString() -eq "succeeded") {                        
            Write-Log -Message "$ADXClusterName created successfully" -LogFileName ADXClusterLog -Severity Information
        }                
    }
    catch {                    
        Write-Log -Message "$($_.ErrorDetails.Message)" -LogFileName ADXClusterLog -Severity Error                  
        Write-Log -Message "$($_.InvocationInfo.Line)" -LogFileName ADXClusterLog -Severity Error
    }

    try {                    
        #Create Azure Data Explorer Database
        $ADXClusterDB = "Sentinel-ADX-DB"
        Write-Log -Message "Create a new ADX Database: $ADXClusterDB in ADX Cluster: $ADXClusterName with 1 year data retention" -LogFileName ADXClusterLog -Severity Information
        Set-Item Env:\SuppressAzurePowerShellBreakingChangeWarnings "true"
        $ResultADXDB = New-AzKustoDatabase -ResourceGroupName $resourceGroup `
            -Location $region `
            -ClusterName $ADXClusterName `
            -Name $ADXClusterDB `
            -Kind ReadWrite `
            -SoftDeletePeriod 365:00:00:00 `
            -HotCachePeriod 7:00:00:00                                                             
                    
        if ($ResultADXDB.ProvisioningState.ToString() -eq "succeeded") {                        
            Write-Log -Message "$ADXClusterDB created successfully" -LogFileName ADXClusterLog -Severity Information
        }                
    }
    catch {                    
        Write-Log -Message "$($_.ErrorDetails.Message)" -LogFileName ADXClusterLog -Severity Error                  
        Write-Log -Message "$($_.InvocationInfo.Line)" -LogFileName ADXClusterLog -Severity Error
    }
}

.EXAMPLE

New-ADXClusterDB -resourceGroup "arg-weu-adx-sentinel" -Region "West Europe"

Once you run the PowerShell code above, you will see a new Azure Data Explorer cluster and database created as shown in the figure below.

New ADX Cluster and Database

Create Target Tables, Raw, and Mapping on ADX

The next step is to create a target table in Azure Data Explorer (ADX) with the same schema and name as the original table in Log Analytics/Sentinel (step 1). Then we create a raw table with the same name appended with "Raw", because data coming from Event Hub is first ingested into an intermediate table where the raw data is stored, manipulated, and expanded (step 2).
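As an illustration of steps 1 and 2 (the column list here is a simplified assumption, not the full CommonSecurityLog schema), the two tables can be created in the ADX query window like this:

```kusto
// Step 1: final table with the same schema as in Log Analytics (abbreviated here)
.create table CommonSecurityLog (TimeGenerated: datetime, DeviceVendor: string, DeviceProduct: string, Activity: string)

// Step 2: intermediate raw table; Event Hub records land here as dynamic JSON
.create table CommonSecurityLogRaw (Records: dynamic)
```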

The details of creating target tables are documented officially on this page if you would like to do it manually through the Azure Data Explorer Web portal. In this example, we’ll automate this step and subsequent steps with PowerShell (more on this below).

Next, we create table mapping because the data format in the raw table is in JSON format, so data mapping is required. This defines how records will land in the raw events table as they come from Event Hub (step 3). The details of creating table mappings are documented officially on this page.
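A minimal mapping sketch for the raw table above (the mapping name is illustrative): the Log Analytics export format wraps rows in a "records" JSON array, which is what the path below captures.

```kusto
// Step 3: JSON ingestion mapping; maps the exported "records" array into the Records column
.create table CommonSecurityLogRaw ingestion json mapping 'CommonSecurityLogRawMapping' '[{"column":"Records","Properties":{"path":"$.records"}}]'
```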

Next, we create an update policy and attach it to the raw records table. In this step, we create a function (the update policy) and attach it to the destination table so the data is transformed at ingestion time. This step is only needed if you want the tables to have the same schema and format as in the Log Analytics workspace (step 4). In our use case with Sentinel, we do need the same schema and format as in the Log Analytics workspace. The details of creating an update policy for metric and log data are documented officially on this page.

Think of the update policy as a function that will be applied to all new data, the expanded data will then be ingested into the final table that will have the same schema as the original one in Log Analytics/Sentinel.
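A hedged sketch of step 4, continuing the simplified example above (the function name and column list are assumptions): the function expands each raw record, and the update policy runs it whenever new data lands in the raw table.

```kusto
// Step 4: expansion function applied at ingestion time (abbreviated column list)
.create function CommonSecurityLogExpand() {
    CommonSecurityLogRaw
    | mv-expand events = Records
    | project
        TimeGenerated = todatetime(events.TimeGenerated),
        DeviceVendor = tostring(events.DeviceVendor),
        DeviceProduct = tostring(events.DeviceProduct),
        Activity = tostring(events.Activity)
}

// Attach the function as an update policy on the final table
.alter table CommonSecurityLog policy update @'[{"Source": "CommonSecurityLogRaw", "Query": "CommonSecurityLogExpand()", "IsEnabled": true, "IsTransactional": true}]'
```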

Last, we’ll set the retention on the raw table to 0 days because we want the data to be stored only in the properly formatted final table and deleted in the raw data table as soon as it’s transformed (step 5). The raw tables will not consume any storage and will NOT incur any additional cost in ADX.

As you can see, these five steps can be challenging and time-consuming to be done through the Azure portal. The good news is that Microsoft has published a great script that will help to put all of these steps together. You can get a copy of this script on GitHub.

The provided script will query the Log Analytics workspace tables, validate table names against data export-supported tables in Log Analytics, and then it will create a target table, raw table, and mapping in Azure Data Explorer.

The following least privileged permissions are required for the script to read the Log Analytics workspace tables and to create tables in Azure Data Explorer:

> Azure Log Analytics workspace 'Read' permissions.
> Azure Data Explorer Database 'User' permissions.

IMPORTANT: The script assumes that your Log Analytics workspace and Azure Data Explorer cluster are created in the same Azure subscription. If your Log Analytics workspace and Azure Data Explorer are deployed in two different Azure subscriptions, as in our case, then you need to update the script by removing (deleting) the following line: “$SubscriptionId = $Context.Subscription.Id” and replacing it with the following commands:

$subscriptionId = (Get-AzSubscription | Out-Gridview -PassThru -Title 'Select Azure Subscription for Log Analytics Workspace').id
If (!$subscriptionId) {
    Write-Log -Message "Select your desired Azure Subscription for Log Analytics Workspace!" -LogFileName $LogFileName -Severity Warning          
    break
}
Set-AzContext -Subscription $subscriptionId | Out-Null

Once you update and save the script locally on your machine, you can run it as follows:

.EXAMPLE

.\Create-LA-Tables-ADX.ps1 -LogAnalyticsResourceGroup "la-resgrp1" -LogAnalyticsWorkspaceName "law-1" `
-AdxResourceGroup "adx-rg1" -AdxClusterURL "https://adxcluster1.westeurope.kusto.windows.net" `
-AdxDBName "Sentinel-ADX-DB"
Select Azure Subscription for Log Analytics Workspace

Next, the script will loop through all the tables in your Log Analytics Workspace and create the Raw and Mapping tables in the ADX cluster database. The output of the script looks like the following:

Successfully created Raw and Mapping tables in ADX

Please note that the script will log all messages (Information, Warning, or Errors) under the following file name: CreateADXTables_date_time.csv which can be found in the directory from which the script is being executed.

Once you finish running the script, you can verify that the target tables and raw tables are showing in the ADX cluster database. You will see each table name twice: once for the final table and once for the raw table (e.g. AADNonInteractiveUserSignInLogs and AADNonInteractiveUserSignInLogsRaw).

Azure Data Explorer Database

Create Data Connections Between Event Hub and ADX

In the last step of this configuration, we need to create data connections between Event Hub and the raw data tables in Azure Data Explorer. We need to tell ADX where and how to get the data. In our case, that means specifying Event Hub as the source, the target raw table, the data format (JSON), and the mapping to apply, which was created in the previous step (Create Target Tables, Raw, and Mapping on ADX).

Creating data connections can be easily done in Azure Data Explorer > Database > Data Connections blade.

In your Azure Data Explorer cluster portal, select Databases in the left menu.

In the Databases window, select your database (E.g. Sentinel-ADX-DB).

In the left menu, select Data Connections under Settings.

In the Data Connection window, select + Add Data Connection and then select Event Hub as shown in the figure below.

Create a new data connection in ADX

In the Create data connection window, you need to enter the following information:

  1. Data connection name: Enter a descriptive name; it’s recommended to name the connection after the Event Hub topic (E.g. am-commonsecuritylog, am-securityevent, am-auditlogs, am-officeactivity, etc.)
  2. Event Hub namespace: Select the Event Hub namespace that was created in the previous step.
  3. Event Hub: Select the Event Hub topic that was automatically created by the data export rule in Log Analytics.
  4. Consumer group: The $Default consumer group.
Data connection name

In the Target table section at the bottom, you need to enter the following information:

  1. Table name: Select the raw table that was automatically created in the previous step (E.g. CommonSecurityLogRaw).
  2. Data format: Select JSON.
  3. Mapping name: Select the mapping that was also created in the previous step (E.g. CommonSecurityLogRawMapping).
  4. Click Create.
Specify Raw Target table

Behind the scenes, ADX will configure the Managed Identity and assign the “Azure Event Hubs Data Receiver” role to the ADX cluster identity for each Event Hub topic.

Assign Azure Event Hubs Data Receiver role

In this example, we will create four data connections for the (CommonSecurityLog, SecurityEvent, OfficeActivity, AuditLogs) tables, so repeat the same steps described above to cover all your tables in the Event Hub.

Create all data connections in ADX

Modify Table Retention Policy in ADX

This step is optional: you can modify the retention for specific target tables in ADX. The default retention policy is 100 years, which is likely too long in most cases. In our example, we already set the retention at the database level to 365 days (1 year), which applies to all tables under the ADX database.

You can run the following command in ADX > Database > Query window, which attaches a soft-delete retention policy of 2 years (730 days) to a specific table:

.alter-merge table <TableNameinADX> policy retention softdelete = 730d recoverability = disabled

The retention policy in ADX controls the mechanism that automatically removes data from tables. It’s used to remove data whose relevance is age-based. The retention policy can be configured for a specific table, or an entire database.
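For completeness, the equivalent database-level command looks like the following sketch (using the database name from this article; adjust the retention value to your needs):

```kusto
// Set retention for the whole database; tables without their own policy inherit this
.alter database ['Sentinel-ADX-DB'] policy retention softdelete = 365d recoverability = disabled
```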

You can view the retention policy individually on each table by running the following command:

.show table CommonSecurityLog policy retention
View Table retention policy in ADX

Verify SOC Experience in Sentinel and ADX

Once the data connections between Event Hub and the raw data tables in ADX are created, we need to validate and verify that the logs are arriving from Sentinel to Event Hub and showing in ADX.

To validate the configuration, we need to generate some events. In our example, we are collecting network firewall logs in Common Event Format (CEF) from Fortinet into the “CommonSecurityLog” table.

First, let’s verify that the data exported from Sentinel (Log Analytics workspace) is received in Event Hub. We can see Incoming Requests, Outgoing Messages, and Throughput.

Event Hub data requests, messages, and throughput

Next, we verify the data connection metrics for the “am-commonsecuritylog” data connection in ADX. We can also see Events received and Events processed. So far, so good!

Azure Data Explorer connection metrics

Last, if we run the query “CommonSecurityLog | take 10” directly in Sentinel, we see the following results.

Query the "CommonSecurityLog" table in Sentinel

Similarly, if we run the same KQL query from the ADX portal > Query, we see the same results.

Query the "CommonSecurityLog" table in Azure Data Explorer

For a full Sentinel experience, the SOC team does not need to switch between two portals: they can use the “adx()” function syntax, followed by the ADX cluster name, region, and database name, to query data in Azure Data Explorer directly from Sentinel. Azure Monitor lets you query data in Azure Data Explorer from your Log Analytics workspace.

adx("adx-cluster-name.westeurope/ADX-Database").CommonSecurityLog
| take 10
Query data in Azure Data Explorer from Sentinel

Here is another cross-resource KQL query between ADX and Sentinel using the “DeviceEvents” and the “SecurityAlert” tables.

adx("adx-cluster-name.westeurope/ADX-Database").DeviceEvents
| where AccountName == "dczero$"
| join kind=leftouter
(
SecurityAlert
| where ProviderName == "MDATP"
| extend HostName_ = tostring(parse_json(Entities)[0].HostName)
)
on $left.AccountName == $right.HostName_

See Also: Microsoft Sentinel Hunting and ADX cross-resource queries support.

There you have it… Happy Hunting with Sentinel and ADX!

Cost Calculations

After all this lengthy setup, the million-dollar question is: how do we calculate the cost difference for Microsoft Sentinel with Event Hub and ADX?

Let’s take a sample pricing example for the Switzerland North region with 50 GB of data ingested per day. Of course, these numbers will change depending on the ingested data volume, the region you are using, and when you check the pricing documentation or pricing calculators.

Cost item (Switzerland North) | Price per Month
Microsoft Sentinel log ingestion: 1 GB = CHF 5.51, 50 GB = CHF 275.5/day | CHF 8,265
Log Analytics Data Export: 1 GB = CHF 0.132, 50 GB = CHF 6.6/day | CHF 198
Data retention in Log Analytics (90 days free) | CHF 0.00
Event Hub Standard: 12 Throughput Units | CHF 237.24
Event Hub Standard (Ingress): 12 million events per month | CHF 0.30
ADX estimated data ingestion per day: 50 GB | n/a
ADX total data retention (365 days) | CHF 85.62
ADX hot data in SSD (7 days) | CHF 6.51
ADX data compression ratio (7x) | n/a
ADX SKU (2 x Standard_E2ads_v5 Engine VMs), 3-year Reserved Instances with Availability Zones | CHF 93.83
ADX Data Management SKU (2 x D1v2 VMs), 3-year Reserved Instances | CHF 36.92
Azure Data Explorer Markup, 3-year Reserved Instances | CHF 203.03
Total Cost per Month | CHF 9,126.45

First, we have the ingested GB per day for Microsoft Sentinel and the Log Analytics workspace; the combined price for both is CHF 5.51 per GB ingested. Then we have the data export cost from Log Analytics to Event Hubs or Azure Storage at CHF 0.132 per GB exported (CHF 6.6 per day for 50 GB). Data retention in Log Analytics is free for the first 90 days.

Next, we have the Event Hub Namespace cost (Basic/Standard/Premium/Dedicated). The price is based on the maximum throughput units (min. 1 and max. 40), plus Ingress events at CHF 0.026 per million events for the Standard SKU. In this example, we used the Event Hub Standard tier with 12 throughput units and 12 million events at roughly CHF 237.54 per month (1 Throughput Unit ~ 1 MB/s). If you have very high traffic volumes (>30K events/sec), then you might need to consider a Dedicated Event Hub SKU.

Then we have the Azure Data Explorer cost breakdown as follows:

  • Estimated data ingested (collected) per day: 50 GB
  • Hot data (days of data kept in SSD drive for fast access): 7 days
  • Total data retention in days: 365 days
  • Data compression ratio: The average is 7x, but can vary between 3x and 20x
  • Compute Engine VMs with 3 years of Reserved Instances and Availability Zones enabled

You can also use the following Azure Data Explorer (Kusto) Cost Estimator to do the cost calculations for ADX based on your use case.

Azure Data Explorer (Kusto) Cost Estimator

There you have it… Happy exporting Microsoft Sentinel Data to Azure Data Explorer!

Wrapping Up

In this article, we discussed all the integrated solutions that you can use with Microsoft Sentinel and Azure Data Explorer compared to the built-in archiving solution in the Log Analytics workspace, and then we walked through all the necessary steps to set up the integration with ADX using Event Hub for long-term data retention.

Once you go through these steps a couple of times, and use some of the automated solutions shared in this article, it certainly becomes a lot easier and less error-prone to set up these connections. Hopefully, Microsoft will release a built-in solution in the future that makes this easier for us. The advantage and the win at the end of that effort is that the data stays hot, so we can run KQL queries against ten or even 100 years’ worth of data, no matter how long you have it stored in ADX.

You can run active queries for hunting investigations or historical research, or put an OpenAI model on top of the data, and look back at the entire data set without having to restore or rehydrate any of it.

You can design an ADX cluster to meet your specific performance requirements. You can invest more in the compute side if you want higher query performance; if you don’t need that much query performance, you can lower the performance tier and the cost, or scale up to a higher tier when you need it and scale back down when you don’t. On the back end, there are also ways to make the data more resilient: you can look at Geo-redundant data options and use customer-managed encryption keys.

As we said, there can be quite a variety of cost savings when using Azure Data Explorer. In some cases, you could bypass Sentinel and its ingestion costs entirely for data sets that don’t need to go through that pipeline, sending them directly into ADX for archival. You can also save money on long-term retention if you have a large data set, and you can go well beyond the seven/twelve-year limit imposed by the archive solution in Sentinel.

ADX can also ingest data from Blob storage. If you have other data sets that are in Blob storage that you want to access, ADX can help you get into those CSV or JSON files that might be sitting in Blob storage.
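As a sketch of that scenario, a one-time ingestion from Blob storage can be done with a Kusto ingest command. The table name, storage URL, and SAS token below are placeholders:

```kusto
// One-time ingestion of a CSV file from Blob storage into an existing ADX table
.ingest into table ArchivedFirewallLogs ('https://mystorageaccount.blob.core.windows.net/logs/fw-2022.csv;<SAS-token>') with (format = 'csv', ignoreFirstRecord = true)
```

For continuous or large-scale ingestion from storage, Event Grid-based ingestion or the LightIngest tool are the more typical choices.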

In summary, ADX is not easy to set up and its cost is not easy to calculate, but it’s well worth the effort. For this reason, we documented this process to help you in your journey to integrate Microsoft Sentinel and Azure Data Explorer.

Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-

About the Author
Charbel Nemnom
Charbel Nemnom is a Senior Cloud Architect with 21+ years of IT experience. As a Swiss Certified Information Security Manager (ISM), CCSP, CISM, Microsoft MVP, and MCT, he excels in optimizing mission-critical enterprise systems. His extensive practical knowledge spans complex system design, network architecture, business continuity, and cloud security, establishing him as an authoritative and trustworthy expert in the field. Charbel frequently writes about Cloud, Cybersecurity, and IT Certifications.