At Microsoft Ignite in November 2021, Microsoft announced that Azure Sentinel is now called Microsoft Sentinel.
In this article, we share the top best practices for deploying Microsoft Sentinel in your organization.
Microsoft Sentinel makes it easy to collect security data across your entire hybrid organization from devices, users, apps, servers, and any cloud. Using the power of artificial intelligence and machine learning, Sentinel helps ensure that real threats are identified quickly and frees you from the burden of traditional security information and event management (SIEM) solutions by automating the setup, maintenance, and scaling of infrastructure.
Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution. It delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.
So what are the top best practices that you want to be aware of when designing and deploying Azure Sentinel?
Capacity reservations
The first best practice is to set capacity reservations.
Once you understand how much data you are ingesting into Microsoft Sentinel, you can set a capacity reservation, which gives you a discount based on your committed daily ingestion and helps reduce your Sentinel costs.
One important point to note: once you apply a discounted tier (200 GB/day, for example), it's locked in and you cannot move to a lower tier for the next 30 days. So if you need to change it to something lower, you'll have to wait 30 days. However, you can always raise it to a higher tier (300 GB/day, for example) at any time without waiting.
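To illustrate how tier selection works, here is a small Python sketch that picks the largest commitment tier your observed daily ingestion fills. The tier levels below are illustrative (based on the 200/300 GB examples above); check the current Azure pricing page for the actual tiers and discounts.

```python
# Illustrative sketch: choosing a capacity reservation tier from
# observed daily ingestion. Tier levels are examples only; consult
# Azure pricing for the real tiers and discount percentages.
from typing import Optional

TIERS_GB_PER_DAY = [100, 200, 300, 400, 500]

def pick_reservation_tier(avg_daily_ingest_gb: float) -> Optional[int]:
    """Return the largest tier the observed ingestion fills, or None
    if ingestion is below the smallest tier (stay pay-as-you-go)."""
    eligible = [t for t in TIERS_GB_PER_DAY if t <= avg_daily_ingest_gb]
    return max(eligible) if eligible else None

print(pick_reservation_tier(250))  # -> 200
print(pick_reservation_tier(80))   # -> None
```

Remember that once applied, lowering the tier is subject to the 30-day lock-in described above, so size the reservation from a sustained average rather than a one-day spike.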
Log Analytics workspace design
Microsoft Sentinel uses a Log Analytics workspace underneath to store your data.
So let’s talk about the different workspace designs that you can use with Azure Sentinel.
Single-Tenant (single workspace) design
The best practice is to use one single security workspace in your tenant. Note the emphasis on a security workspace: you may have multiple workspaces, some holding operational data such as performance metrics for your Azure or on-premises resources, but you want one dedicated security workspace that brings in all your security data.
The advantage of this design is that it gives you a single pane of glass. You can consolidate all your security logs into one place and query all that information very easily, which allows you to join different data sets together.
It's also supported by all the features in Azure Sentinel. You can apply Azure Log Analytics role-based access control (RBAC) to control data access in the workspace, and there is also RBAC for Azure Sentinel itself, to control who can access Sentinel resources such as incidents and data connectors.
Now, obviously, there are some disadvantages to this design decision, because you may need to break out data to meet certain governance standards.
For example, a compliance standard may require data to reside in a specific region, and a single Log Analytics workspace may not allow that: if you deploy the workspace in a US region, your EU data now flows to the US, which may be out of compliance with your policy. Also, because everything flows into a single region, there may be bandwidth costs for sending data across regions to that central workspace. This is something you want to be aware of.
Single-Tenant (regional workspace) design
You can also use regional workspaces to support data governance requirements, which will help you meet those regulations.
Obviously, you won't have any cross-region bandwidth costs, and you can also separate workspaces to add even more data control. For example, to meet a regional compliance requirement, you may want only EU users to see your EU data sources, so you can put those sources in a workspace in that region.
The disadvantage is that you lose some of that single pane of glass (we'll talk about multiple workspaces in a bit, and there are things you can do to offset this), and you have to deploy your analytics rules to each Sentinel workspace. If you want the same analytics rule in both the US region and the EU region, you have to create it twice.
Multi-Tenant (multiple workspaces) design
So what if you have multiple tenants that you want to manage?
Here the same best practice applies: put a single workspace in each tenant.
So imagine you have five tenants: you would have five workspaces, one in each. The obvious disadvantage is that analytics rules must be deployed multiple times across those workspaces. If you have one analytics rule, you need to deploy it to each tenant, so that's five deployments. You can, however, automate this with Infrastructure as Code (IaC), which makes it much easier to deploy and manage.
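As a rough sketch of what that automation looks like, the loop below builds one rule-deployment payload per tenant workspace. The payload shape and the tenant/workspace names are hypothetical, purely for illustration; a real pipeline would push each payload through ARM/Bicep templates or the Sentinel REST API.

```python
# Sketch: fanning one analytics rule definition out to many tenant
# workspaces (IaC style). Payload shape and names are illustrative,
# not the real ARM schema.

rule = {
    "displayName": "Suspicious sign-in burst",
    "severity": "High",
    "query": "SigninLogs | summarize count() by bin(TimeGenerated, 5m)",
}

tenant_workspaces = {
    "contoso": "contoso-security-ws",    # placeholder tenants
    "fabrikam": "fabrikam-security-ws",
}

def build_deployments(rule: dict, workspaces: dict) -> list:
    """Produce one deployment payload per tenant workspace."""
    return [
        {"tenant": tenant, "workspace": ws, "rule": rule}
        for tenant, ws in workspaces.items()
    ]

deployments = build_deployments(rule, tenant_workspaces)
print(len(deployments))  # one payload per tenant
```

The key point is that the rule is defined once and stamped out programmatically, so a change to the rule only has to be made in one place.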
Microsoft has also introduced many features in the last year to support multi-workspace designs by leveraging Azure Lighthouse. You can now see incidents across multiple workspaces, hunt across multiple workspaces, and use cross-workspace queries, workbooks, and playbooks without connecting directly to each specific tenant.
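Cross-workspace queries rely on KQL's workspace() function. The helper below builds such a union query as a string; the workspace names are made-up placeholders.

```python
# Sketch: building a cross-workspace KQL query with the workspace()
# function. Workspace names here are placeholders.

def cross_workspace_query(table: str, workspaces: list) -> str:
    """Union one table across several Log Analytics workspaces."""
    parts = [f'workspace("{ws}").{table}' for ws in workspaces]
    return "union " + ", ".join(parts)

kql = cross_workspace_query("SecurityAlert", ["us-security-ws", "eu-security-ws"])
print(kql)
# union workspace("us-security-ws").SecurityAlert, workspace("eu-security-ws").SecurityAlert
```

A query like this is what lets a regional or multi-tenant design claw back some of the single-pane-of-glass experience.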
But again, Microsoft recommends keeping one single security workspace per tenant.
Data connectors
Now let's look at some best practices around data connectors.
Today there are more than 60 connectors, and Microsoft is planning to add more soon, so keep an eye out for new connectors.
Microsoft always recommends following the order below when enabling data connectors:
1) Enable first-party connectors first, mainly because it's very easy. You open the connector page, click enable (assuming you have the right permissions to that data source, such as Microsoft Defender for Endpoint or Microsoft Defender for Office 365), click apply, and you are done.
2) Many of these connectors are free: Office 365, the Azure activity log, Azure AD, and the first-party security alerts from the Microsoft Defender stack (Microsoft 365 Defender / Azure Defender) are all free to ingest.
3) Next, because Azure Sentinel lives in Azure and is quick to enable, you can use Azure Policy to configure Azure diagnostic logs for any of your services, such as SQL or Storage. You create a policy and assign it in Azure, and it will configure all matching resources to send their logs to the Sentinel workspace.
4) Next, start connecting other cloud sources such as AWS and SaaS applications. Again, it's easy to configure: go to that cloud application (assuming you have the right permissions) and click connect on the Azure Sentinel data connectors page.
5) Next, deploy your Windows and Linux agents in Azure. This can be done with Azure Policy; there's already a built-in policy that makes it easy. Take that policy, apply it to your environment, edit it as needed, and the agent will be deployed and report into Microsoft Sentinel quickly.
6) Next, deploy Windows and Linux agents on-premises and in other clouds to cover the machines you want to collect data from. This agent works on any machine, whether it's a virtual machine in Google GCP, AWS, or Azure, or a machine on-premises.
7) Next, deploy your Common Event Format (CEF) collection. You deploy a Linux machine as a CEF collector and configure sources such as firewalls (Fortinet or Palo Alto, for example) to send their data to that collector.
8) Next, integrate your Threat Intelligence (TI) feeds into Azure Sentinel. This can be done through open standards such as STIX/TAXII (Trusted Automated eXchange of Indicator Information) feeds or through the Microsoft Graph Security API.
9) Lastly, if you have vendors that you work with, ask them to integrate directly with Microsoft Sentinel; it's always easier to bring in data from third-party solutions through a direct integration.
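For step 3 above, what an Azure Policy assignment ultimately stamps onto each resource is a diagnostic setting pointing at the workspace. The sketch below builds such a settings payload as a plain dict; the field names are simplified and the resource ID is a placeholder (the real Azure Policy definition and diagnosticSettings resource have a richer schema).

```python
# Sketch: the kind of diagnostic-settings payload an Azure Policy
# assignment would apply to each resource (step 3). Field names are
# simplified for illustration; see the Azure Monitor docs for the
# real diagnosticSettings schema.

def diagnostic_setting(resource_id: str, workspace_id: str, categories: list) -> dict:
    """Enable the given log categories and route them to a workspace."""
    return {
        "resourceId": resource_id,
        "workspaceId": workspace_id,
        "logs": [{"category": c, "enabled": True} for c in categories],
    }

setting = diagnostic_setting(
    "/subscriptions/.../providers/Microsoft.Sql/servers/db1",  # placeholder ID
    "sentinel-security-ws",                                    # placeholder workspace
    ["SQLSecurityAuditEvents"],
)
print(setting["logs"])
```

The value of the policy approach is that this setting is applied automatically to every matching resource, including ones created later, instead of being configured by hand.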
Microsoft has also announced a public preview of the new Azure Monitor agent, which will allow you to do additional filtering, so keep an eye out for it. There will be more you can do with Microsoft Sentinel through this new agent.
Difference between Azure log types
It’s important to know about the difference between a couple of Azure log types that you can connect to Azure Sentinel.
As you know, we have Azure activity logs and Azure diagnostic logs. The two are similar, but they differ in some very important respects:
- Azure activity logs record activity at the platform level. Think of activity logs as the way you find out what happened through the Azure portal: if an administrator signs in and creates a new resource, that is recorded in the Azure activity log.
- Diagnostic logs, on the other hand, record anything that happens within Azure resources themselves, such as Azure Firewall or Web Application Firewall (WAF). For example, traffic passing through Azure Firewall, or rule matches on Azure WAF, are all logged in diagnostic logs.
In short, think of activity logs as platform-level logs and diagnostic logs as resource-level logs.
It’s important to understand that distinction because you should at least enable and connect both to Azure Sentinel (Log Analytics workspace) to get a full picture of what’s happening within your Azure environment.
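In the Log Analytics workspace, the two log types also land in different tables: AzureActivity for platform-level activity, and AzureDiagnostics for most resource-level diagnostics (some services use their own dedicated tables instead). The tiny sketch below encodes that mapping:

```python
# Sketch: where the two Azure log types land in a Log Analytics
# workspace. AzureActivity holds platform-level activity; most
# resource-level diagnostic logs land in AzureDiagnostics (some
# services use dedicated, resource-specific tables instead).

LOG_TABLE = {
    "activity": "AzureActivity",      # platform level: portal/ARM operations
    "diagnostic": "AzureDiagnostics", # resource level: e.g. firewall traffic
}

def table_for(log_type: str) -> str:
    """Return the workspace table a given log type is queried from."""
    return LOG_TABLE[log_type]

print(table_for("activity"))    # AzureActivity
print(table_for("diagnostic"))  # AzureDiagnostics
```

Knowing the table names matters once you start writing KQL detections, since that is how you address each log type in a query.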
Analytics rules
The next best practice is analytics. Rules are simple to deploy using the templates that Microsoft provides for your environment.
First and foremost, make sure the Fusion rule is enabled. There is one rule enabled by default, the Fusion machine-learning rule, which is enabled in every Sentinel instance. Fusion uses machine learning to determine whether two alerts are part of the same attack kill chain.
Then you can enable Microsoft incident creation rules. These rules create incidents automatically any time you receive a security alert from sources such as Microsoft Defender for Endpoint, Azure AD Identity Protection, Microsoft Defender for Office 365, Microsoft Defender for Cloud, etc.
Next, you can scroll through all the built-in 'Rule templates', look for ones that are relevant to your environment and data sets, and quickly enable those.
You can also check the GitHub community. Microsoft maintains a list of detections that might not yet be built into Azure Sentinel as templates, so there are additional resources there you might want to look at.
And then lastly, you can build custom detections for any use cases that are still missing.
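For those custom detections, a scheduled analytics rule boils down to a KQL query plus scheduling and trigger settings. Here is a rough sketch of such a rule definition as a dict; the property names loosely follow Sentinel's scheduled-rule schema but are simplified, and the query itself is only an example.

```python
# Sketch: a custom scheduled analytics rule as a plain dict. Property
# names loosely follow Sentinel's scheduled-rule schema but are
# simplified; deploy real rules via the portal, ARM/Bicep, or the API.

custom_rule = {
    "displayName": "Excessive failed sign-ins per account",
    "severity": "Medium",
    "query": (
        "SigninLogs"
        " | where ResultType != 0"
        " | summarize Failures = count() by UserPrincipalName"
        " | where Failures > 20"
    ),
    "queryFrequency": "PT1H",            # run hourly (ISO 8601 duration)
    "queryPeriod": "PT1H",               # look back over the last hour
    "triggerOperator": "GreaterThan",
    "triggerThreshold": 0,               # any result row raises an alert
    "enabled": True,
}

print(custom_rule["displayName"])
```

Keeping custom rules as declarative definitions like this makes it easy to version them in source control and deploy them with the same IaC pipeline as the built-in templates.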
Data Visualization (workbooks)
The next best practice is Data Visualization.
Once your data is in Azure Sentinel, you will want to monitor it: how much data is coming in, what types of events you are seeing, and so on. The best practice is to use the built-in workbooks as a starting place.
You can then customize those workbooks as a next step if you have to, but the built-in workbooks give you a really good starting point. Microsoft recently released a new workbook called 'Security Operations Efficiency', which is great because it shows how many incidents were created over time, how quickly you are closing them, and how they were classified: false positive or true positive. You can see what kinds of incidents you are getting, how they are being triaged, and how fast.
This will really help you to manage your Security Operations Center (SOC) efficiently.
You can build your own custom workbooks from scratch if you want.
Lastly, if you have users who don't log into the Azure portal, such as a CISO or CIO, you can use Power BI to connect to Log Analytics, pull the data in, and build dashboards outside of Azure if you need to.
SOAR in Microsoft Sentinel
So, moving on to the last best practice: Security Orchestration, Automation, and Response (SOAR).
Microsoft Sentinel uses Azure Logic Apps to help respond to incidents. The team at Microsoft has built a nice integration where you can automatically run playbooks as part of an analytics rule.
You can also run playbooks manually to triage or enrich data during an investigation, and there are many templates in the Microsoft Sentinel GitHub repository that you can use as a great starting point.
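Structurally, a playbook is just a Logic App workflow: a trigger (for example, the Sentinel incident trigger) plus a set of actions. The sketch below builds a minimal workflow skeleton as a dict; the trigger and action names are placeholders, and real playbooks follow the Logic Apps Workflow Definition Language schema.

```python
# Sketch: the skeleton of a Logic App playbook, as a dict. Trigger and
# action names are placeholders; real playbooks follow the Logic Apps
# Workflow Definition Language schema and typically start from the
# Sentinel connector's incident trigger.

def playbook_skeleton(trigger_name: str, actions: list) -> dict:
    """A trigger plus an ordered set of named actions."""
    return {
        "triggers": {trigger_name: {"type": "ApiConnectionWebhook"}},
        "actions": {name: {"type": "ApiConnection"} for name in actions},
    }

pb = playbook_skeleton(
    "When_a_Sentinel_incident_is_created",                 # placeholder trigger
    ["Post_message_to_Teams", "Add_comment_to_incident"],  # placeholder actions
)
print(sorted(pb["actions"]))
```

Thinking of a playbook as trigger-plus-actions makes it easier to read the community templates on GitHub and to decide where your own enrichment or notification steps should go.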
Check out how to create an automation Logic App playbook to monitor Azure AD emergency accounts with Azure Sentinel.
Finally, I highly encourage you to contribute and share your playbooks back with the GitHub community.
Microsoft Sentinel provides you with SIEM-as-a-service and SOAR-as-a-service for your SOC, which gives you a complete view across the organization; putting the cloud and large-scale intelligence from decades of Microsoft security experience to work. Following the best practices outlined in this article will help you eliminate security infrastructure setup and maintenance and provide you with scalability to meet your security needs—all while reducing costs and increasing visibility and control.
Additional resources I highly encourage you to check:
- Video recording: learn how to implement and manage Azure Sentinel effectively.
- Learn more about Azure Sentinel, check the official documentation from Microsoft.
- Learn about Analytics Rules, and check the official documentation from Microsoft.
- Learn about Playbooks, and check Azure Sentinel's GitHub page, with content contributed by the community and Microsoft.
The power of Azure Sentinel comes from the ability to detect, investigate, and remediate threats. To do this, you must first ingest data in the form of alerts from different security providers, such as Azure Security Center or other Microsoft security services, as well as other third-party solutions.
Thank you for reading my blog.
If you have any questions or feedback, please leave a comment.