Azure Files offers shared storage for applications using the standard SMB 3.0 protocol. Microsoft Azure virtual machines and cloud services can share file data across application components via mounted shares, and on-premises applications can access file data in a share via the File storage API.
In this article, we will describe how to automatically move files between different Azure file share tiers and optimize storage costs.
Introduction
Applications running on Azure virtual machines can mount a file storage share to access file data, just as a desktop application would mount a typical SMB share. Any number of Azure virtual machines or roles can mount and access the File Storage share simultaneously.
Microsoft also introduced the Azure File Sync service, which allows you to centralize your file shares in Azure Files while maintaining the flexibility, performance, and compatibility of an on-premises file server. For more information about Azure File Sync and how to get started, please check the following step-by-step guide.
At Microsoft Ignite 2019, the Azure Files team announced new storage tier options for standard storage accounts, in addition to the premium tier, to optimize cost and performance for your workload. The three new tiers are called ‘Transaction Optimized‘, ‘Hot‘, and ‘Cool‘.

Azure Files tiers overview
At the time of this writing, Azure Files offers four different tiers of storage called premium, transaction optimized, hot, and cool to allow you to tailor your shares to the performance and price requirements of your workload:
> Premium: Premium file shares are offered on solid-state disk (SSD) storage media and are useful for IO-intensive workloads, including hosting databases and high-performance computing (HPC).
> Transaction optimized (formerly known as standard): Transaction-optimized file shares are offered on rotational hard disk (HDD) storage media and are useful for general-purpose file shares. Low transaction charges make this tier ideal for larger sets of files with high churn. This tier of storage has historically been called “standard”; however, this refers to the storage media type rather than the tier itself (the hot and cool tiers are also “standard” tiers because they are on standard storage hardware).
> Hot: Hot file shares are useful for most general-purpose workloads, including for lifting and shifting an on-premises file share to Azure, and especially with Azure File Sync.
> Cool: Cool file shares offer cost-efficient storage optimized for online archive storage scenarios. This tier is more useful for lightly used file shares, where data is to be stored for long-term access without compromising the capability of instant online access to the data.
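As a quick sketch of how these tiers are applied in practice, the standard tiers can be set per share when you create it with the Az.Storage PowerShell module. The resource group, storage account, and share names below are placeholders:

```powershell
# Sketch: create two standard file shares with different access tiers.
# Resource group, storage account, and share names are placeholders.
New-AzRmStorageShare -ResourceGroupName "rg-storage" -StorageAccountName "mystorageacct" `
    -Name "hotshare" -AccessTier Hot -QuotaGiB 1024
New-AzRmStorageShare -ResourceGroupName "rg-storage" -StorageAccountName "mystorageacct" `
    -Name "coolshare" -AccessTier Cool -QuotaGiB 1024
```

You can also change the tier of an existing standard share later (for example with Update-AzRmStorageShare and the -AccessTier parameter), although the approach in this article moves the files themselves between shares.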
Azure Files Lifecycle Management
One of the features most requested by customers is the ability to manage the data lifecycle of Azure Files automatically, in much the same way as we do for Azure Blob storage today. Unfortunately, this option is not yet available natively on the platform, and the Azure Files team is aware of this highly demanded request.
Many readers and followers have asked me whether there is a way to move files older than X number of days or years from an Azure file share to a blob container, or to move files from a hot to a cool Azure file share tier based on a certain policy; for example, if a file has not been modified in the last 90 days, it can be moved from the hot to the cool file share tier.
The good news is that, after looking deeply at these different scenarios, there is a way to move files from an Azure file share to a blob container, or between different Azure file shares, based on the “CreationTime“, “LastAccessTime“, “LastWriteTime“, “ChangeTime“, and “Last-Modified” times. Microsoft has updated the List Directories and Files REST API to return the SMB timestamps:
CreationTime : 2022-02-15T18:12:04.2915728Z
LastAccessTime : 2022-02-15T18:12:04.2915728Z
LastWriteTime : 2022-02-15T18:12:04.2915728Z
ChangeTime : 2022-02-15T18:16:21.0029861Z
Last-Modified : Tue, 15 Feb 2022 18:16:21 GMT
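As a minimal sketch, you can retrieve these timestamps yourself with the List Directories and Files REST API. The account and share names below are placeholders, and `<SAS>` stands for a valid SAS token with list permission (the full runbook later in this article shows how to generate one and parse the response):

```powershell
# Sketch: list files in a share together with their SMB timestamps.
# "mystorageacct" and "myshare" are placeholders; replace <SAS> with a valid SAS token.
$uri = "https://mystorageacct.file.core.windows.net/myshare?comp=list&restype=directory&include=timestamps&<SAS>"
$response = Invoke-RestMethod -Uri $uri -Method Get
# The response is XML; each <File> entry carries CreationTime, LastAccessTime,
# LastWriteTime, ChangeTime, and Last-Modified properties.
```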
In this article, we will share with you how to automatically move files between different Azure file share tiers, so you can optimize costs and activate the data lifecycle of your Azure Files in a very similar way to Azure Blob storage.
For this article, we will make use of the AzCopy tool, a command-line utility that you can use to sync, copy, or remove files to or from a storage account. We will also use Azure Container Instances to simplify and automate the move between the different tiers, by creating a runbook in Azure Automation that runs AzCopy inside a container. In this way, we can run the container on a simple schedule to move the data and only get billed for the time the container was used.
Warning! Please note that the AzCopy tool does not automatically delete the source files after we sync or copy them to the desired storage account (file share or blob container). Hence, we need to run the AzCopy remove command after the AzCopy sync operation is completed. I want to mention that the AzCopy sync operation is thread-blocking, so it is safe to run the AzCopy remove command once the sync operation finishes. Microsoft has decided not to add a “--delete-source” flag for now; they should never be in the business of deleting a customer’s data (you can find more details here).
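As a sketch of that sync-then-remove sequence (the account names, share names, file name, and `<SAS>` tokens below are all placeholders):

```powershell
# Sketch: sync from the hot share to the cool share, then remove a source file.
# URLs and <SAS> tokens are placeholders. azcopy sync blocks until it completes,
# so it is safe to run azcopy remove afterwards.
azcopy sync "https://hotaccount.file.core.windows.net/hotshare?<SAS>" `
            "https://coolaccount.file.core.windows.net/coolshare?<SAS>" --preserve-smb-info
azcopy remove "https://hotaccount.file.core.windows.net/hotshare/report.docx?<SAS>"
```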
Prerequisites
To follow this article, you need to have the following:
1) Azure subscription – If you don’t have an Azure subscription, you can create a free one here.
2) You need at least one storage account. The source and target accounts can be in the same region and subscription, or in different regions and subscriptions.
3) You also need to create two Azure file shares with different tiers (i.e. Hot and Cool), either in the same storage account or across two different storage accounts, in the same region and subscription or in different regions and subscriptions.
4) Lastly, you need to have some files in the first Azure file share (which is considered the primary active share). You can add these files directly in the portal or by mounting the share, or you can sync data from Windows Servers to the file share using Azure File Sync.
Assuming you have all the prerequisites in place, take now the following steps:
Get started
First, we need to create an Azure Automation account, which will help you automate copying and removing files without user interaction. This also preserves the security of your storage account, because access keys are never exposed to users.
Microsoft recommends using Managed Identities for the Automation accounts instead of using Run As accounts. Managed identity would be more secure and offers ease of use since it doesn’t require any credentials to be stored. Azure Automation support for Managed Identities is now generally available.
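If you prefer PowerShell over the portal, the following sketch enables a system-assigned managed identity on an existing Automation account. It assumes a recent Az.Automation module version that supports the -AssignSystemIdentity parameter; the resource group and account names are placeholders:

```powershell
# Sketch: enable a system-assigned managed identity on an existing Automation account.
# Assumes an Az.Automation version that supports -AssignSystemIdentity;
# resource group and account names are placeholders.
Set-AzAutomationAccount -ResourceGroupName "rg-automation" -Name "aa-filetiering" -AssignSystemIdentity
# The identity's object (principal) ID, needed later for role assignments:
$principalId = (Get-AzAutomationAccount -ResourceGroupName "rg-automation" -Name "aa-filetiering").Identity.PrincipalId
```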
Create Automation Account
When you enable a system-assigned managed identity on the Automation account, Azure creates a service principal for it in Azure Active Directory (Azure AD). Next, you must assign the appropriate Azure RBAC role to allow the service principal access to the Azure storage account, at the resource group, subscription, or management group level.
In this example, we’ve assigned the Storage Account Key Operator Service Role to the Automation account (managed identity) at the management group level only, and we’ve assigned the Contributor role at the resource group level where the storage account is created.
Always keep in mind and follow the principle of least privilege and carefully assign permissions only required to execute your runbook.
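The role assignments above can be scripted as follows. This is a sketch only; the scopes, management group and subscription IDs, and resource names are placeholders you must replace with your own:

```powershell
# Sketch: assign only the roles the runbook needs to the managed identity.
# Scopes, IDs, and resource names are placeholders.
$principalId = (Get-AzAutomationAccount -ResourceGroupName "rg-automation" -Name "aa-filetiering").Identity.PrincipalId
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Account Key Operator Service Role" `
    -Scope "/providers/Microsoft.Management/managementGroups/<managementGroupId>"
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscriptionId>/resourceGroups/rg-storage"
```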

Using a managed identity instead of the Automation Run As account makes management simpler. You don’t have to renew the certificate used by the Automation Run As account. Additionally, you don’t have to specify the Run As connection object in your runbook code. You can access resources using your Automation account’s managed identity from a runbook without creating certificates, connections, Run As accounts, etc.
Open the Azure portal, and click All services found in the upper left-hand corner. In the list of resources, type Automation. As you begin typing, the list filters based on your input. Select Automation Accounts.
Click +Add. Enter the automation account name, choose the right subscription, resource group, and location, and then click Create.
Modules
In your list of Automation Accounts, select the account that you created in the previous step. Then from your Automation account, select Modules under Shared Resources.
The good news is that, starting in September 2021, Automation accounts have the Az modules installed by default. You no longer need to import the modules from the gallery as we used to do. Please note that you can also update the modules to the latest Az version from the Modules blade, as shown in the figure below.

The most common PowerShell modules are provided by default in each Automation account. See the Default modules imported on this page. As the Azure team updates the Azure modules regularly, changes can occur with the included cmdlets.
Create PowerShell Runbook
In this step, you create the Runbook that will move files between your file share tiers. PowerShell Runbooks are based on Windows PowerShell. You edit the code of the Runbook directly, using the text editor in the Azure portal. You can also use any offline editor, such as Visual Studio Code, and import the Runbook into Azure Automation.
From your automation account, select Runbooks under Process Automation. Click the ‘+ Create a runbook‘ button to open the Create a runbook blade as shown in the figure below.

In this example, we will create a Runbook to move files from the hot (file share) tier to the cool (file share) tier if a file has not been modified in the hot tier for 90 days. You can be as creative as you want and cover different scenarios; the logic is the same.
Edit the Runbook
Once you have the Runbook created, you need to edit the Runbook, then write or add the script to choose which Azure file shares tiers you want to sync and move the files. Of course, you can create scripts that suit your environment.
As mentioned earlier, in this example we will create a Runbook to synchronize and copy all data from one Azure file share (hot tier) to another Azure file share (cool tier), which can be in a different storage account and region. To maintain a high level of security, we WON’T pass the storage account keys to AzCopy. Instead, we will create a time-limited SAS token URI for each file share (source and destination), and the SAS token will expire automatically after 1 day. So, if you regenerate your storage account keys in the future, the automation process won’t break.
The automation runbook is as follows:
<#
.DESCRIPTION
A Runbook example that automatically copies and moves files between Azure file share tiers based on a customer-defined number of days and schedule.
It does so by leveraging AzCopy with the copy parameter, which is similar to Robocopy /COPY.
Only files that have not been accessed for X number of days are copied to the destination file share and then deleted from the source file share.
In this example, AzCopy runs in a container inside an Azure Container Instance, and the Runbook authenticates with the Automation account's managed identity in Azure AD.
.NOTES
Filename : Move-AzureFileSharesTiers
Author : Charbel Nemnom (Microsoft MVP/MCT)
Version : 1.0
Date : 14-February-2022
Updated : 16-February-2022
Tested : Az.ContainerInstance PowerShell module version 2.1 and above
.LINK
To provide feedback or for further assistance please visit:
https://charbelnemnom.com
#>
Param (
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $sourceAzureSubscriptionId,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $sourceStorageAccountRG,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $targetStorageAccountRG,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $sourceStorageAccountName,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $targetStorageAccountName,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $sourceStorageFileShareName,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[String] $targetStorageFileShareName,
[Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
[Int] $numberOfDays
)
# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process
# Connect to Azure with system-assigned managed identity (automation account)
Connect-AzAccount -Identity
# Set Azure Subscription context
Set-AzContext -Subscription $sourceAzureSubscriptionId
#! Source Storage Account (hot file share)
# Get Source Storage Account Key
$sourceStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $sourceStorageAccountRG -Name $sourceStorageAccountName).Value[0]
# Set Azure Storage Context
$sourceContext = New-AzStorageContext -StorageAccountKey $sourceStorageAccountKey -StorageAccountName $sourceStorageAccountName
# Generate source file share SAS URI Token with read, delete, and list permission
$sourceShareSASURI = New-AzStorageAccountSASToken -Context $sourceContext `
-Service File -ResourceType Service,Container,Object -ExpiryTime(get-date).AddDays(1) -Permission "rdl"
$sourceShareSASURI = $sourceShareSASURI.Split('?')[-1]
# List Directories and Files on the source hot file share
$URI = "https://$sourceStorageAccountName.file.core.windows.net/$($sourceStorageFileShareName)?comp=list&restype=directory&include=timestamps&$($sourceShareSASURI)"
$response = Invoke-RestMethod $URI -Method 'GET'
# Fix the XML response body (normalize the XML declaration)
$fixedXML = $response.Replace('<?xml version="1.0" encoding="utf-8"?>','<?xml version=''1.0'' encoding=''UTF-8''?>')
$doc = [xml]$fixedXML
if ($doc.FirstChild.NodeType -eq 'XmlDeclaration') {
    $doc.FirstChild.Encoding = $null
}
#! TARGET Storage Account (cool file share)
# Get Target Storage Account Key
$targetStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $targetStorageAccountRG -Name $targetStorageAccountName).Value[0]
# Set Target Azure Storage Context
$destinationContext = New-AzStorageContext -StorageAccountKey $targetStorageAccountKey -StorageAccountName $targetStorageAccountName
# Generate target SAS URI with read, write, delete, create, and list permission
$targetShareSASURI = New-AzStorageShareSASToken -Context $destinationContext `
-ExpiryTime(get-date).AddDays(1) -ShareName $targetStorageFileShareName -Permission "rwdcl"
# Construct old date based on the desired number of days
$old = ((Get-Date).ToUniversalTime()).adddays(-$numberOfDays)
$oldDate = $old.ToString("yyyy-MM-ddTHH:mm:ss.fffffffK")
# The container image (peterdavehello/azcopy:latest) is publicly available on Docker Hub and has the latest AzCopy version installed
# You could also create your own private container image and use it instead
# When you create a new container instance, the default compute resources are set to 1vCPU and 1.5GB RAM
# We recommend starting with 2vCPU and 4GB memory for larger file shares (E.g. 3TB)
# You may need to adjust the CPU and memory based on the size and churn of your file share
# The container will be created in the $location variable based on the source storage account location. Adjust if needed.
$location = (Get-AzResourceGroup -Name $sourceStorageAccountRG).location
# Container Group Name
$containerGroupName = $sourceStorageFileShareName + "-azcopy-job"
# Set the AZCOPY_BUFFER_GB value at 2 GB which would prevent the container from crashing.
$envVars = New-AzContainerInstanceEnvironmentVariableObject -Name "AZCOPY_BUFFER_GB" -Value "2"
foreach ($file in $doc.EnumerationResults.entries.file) {
    # If the file's LastAccessTime is older than the cutoff date, move it to the cool tier
    # (ISO-8601 UTC timestamps sort chronologically, so a string comparison is safe)
    if ($file.properties.LastAccessTime -le $oldDate) {
        $sourceFile = "https://$sourceStorageAccountName.file.core.windows.net/$sourceStorageFileShareName/$($file.name)?$($sourceShareSASURI)"
        $targetFile = "https://$targetStorageAccountName.file.core.windows.net/$targetStorageFileShareName/$($file.name)$($targetShareSASURI)"
        # Copy the file to the target file share
        $command1 = "azcopy", "copy", $sourceFile, $targetFile, "--preserve-smb-info", "--preserve-smb-permissions"
        # Remove the file from the source file share
        $command2 = "azcopy", "remove", $sourceFile
        # Create an Azure Container Instance object for $command1
        $container = New-AzContainerInstanceObject `
            -Name $containerGroupName `
            -Image "peterdavehello/azcopy:latest" `
            -RequestCpu 2 -RequestMemoryInGb 4 `
            -Command $command1 -EnvironmentVariable $envVars
        # Create the Azure Container Group and copy the file to the target (cool) file share
        $containerGroup = New-AzContainerGroup -ResourceGroupName $sourceStorageAccountRG -Name $containerGroupName `
            -Container $container -OsType Linux -Location $location -RestartPolicy Never
        # Recreate the Azure Container Instance object for $command2
        $container = New-AzContainerInstanceObject `
            -Name $containerGroupName `
            -Image "peterdavehello/azcopy:latest" `
            -RequestCpu 2 -RequestMemoryInGb 4 `
            -Command $command2 -EnvironmentVariable $envVars
        # Recreate the same Azure Container Group and remove the file from the source (hot) file share
        $containerGroup = New-AzContainerGroup -ResourceGroupName $sourceStorageAccountRG -Name $containerGroupName `
            -Container $container -OsType Linux -Location $location -RestartPolicy Never
    }
}
Write-Output ("")
Save the script in the CMDLETS pane as shown in the figure below.

Then test the runbook using the “Test pane” and fill in all the required parameters to verify it’s working as intended before you publish it.
On the Test page, you need to supply the following 8 parameters manually and then click the Start button to test the automation script.
- SOURCEAZURESUBSCRIPTIONID
- SOURCESTORAGEACCOUNTRG
- TARGETSTORAGEACCOUNTRG
- SOURCESTORAGEACCOUNTNAME
- TARGETSTORAGEACCOUNTNAME
- SOURCESTORAGEFILESHARENAME
- TARGETSTORAGEFILESHARENAME
- NUMBEROFDAYS
Once the test is completed successfully, publish the Runbook by clicking Publish. This is a very important step.
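If you prefer to script this step, the following sketch publishes the runbook and starts it on demand from PowerShell. All resource names and parameter values below are placeholders:

```powershell
# Sketch: publish the runbook and start it on demand.
# All resource names and parameter values are placeholders.
Publish-AzAutomationRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -Name "Move-AzureFileSharesTiers"
$params = @{
    sourceAzureSubscriptionId  = "<subscription-id>"
    sourceStorageAccountRG     = "rg-storage"
    targetStorageAccountRG     = "rg-storage"
    sourceStorageAccountName   = "hotaccount"
    targetStorageAccountName   = "coolaccount"
    sourceStorageFileShareName = "hotshare"
    targetStorageFileShareName = "coolshare"
    numberOfDays               = 90
}
Start-AzAutomationRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -Name "Move-AzureFileSharesTiers" -Parameters $params
```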
Schedule the Runbook
In the final step, you need to schedule the Runbook to run based on your desired time to copy and move the files from the hot file share to the cool file share tier.
Within the same Runbook that you created in the previous step, select Schedules and then click + Add Schedule.
So, if you need to schedule the Runbook to run once a day, then you need to create the following schedule with Recur every 1 Day with Set expiration to No and then click “Create“. You can also run it on-demand if you wish to do so.

While scheduling the Runbook, you need to pass on the required parameters for the PowerShell script to run successfully. In this scenario, you need to specify the following 8 parameters:
- Azure Subscription ID where the active or primary file share is created.
- Source Storage Resource Group name where the primary storage account is created.
- Target Storage Resource Group name where the secondary storage account is created. This could be the same resource group.
- Source Storage Account name.
- Target Storage Account name. This could be the same storage account.
- Source Azure File Share name that you want to copy from (hot tier).
- Target Azure File Share name that you want to move files to (cool tier).
- Number of Days. Customer-defined number of days that the file in the hot tier has not been modified or accessed.
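The portal steps above can also be scripted. The following sketch creates a daily 10:00 PM schedule and links it to the runbook with its parameters; all names and values are placeholders:

```powershell
# Sketch: create a daily schedule and link it to the runbook with its parameters.
# All resource names and parameter values are placeholders.
New-AzAutomationSchedule -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -Name "daily-tiering" -StartTime (Get-Date "22:00").AddDays(1) -DayInterval 1
$params = @{
    sourceAzureSubscriptionId  = "<subscription-id>"
    sourceStorageAccountRG     = "rg-storage"
    targetStorageAccountRG     = "rg-storage"
    sourceStorageAccountName   = "hotaccount"
    targetStorageAccountName   = "coolaccount"
    sourceStorageFileShareName = "hotshare"
    targetStorageFileShareName = "coolshare"
    numberOfDays               = 90
}
Register-AzAutomationScheduledRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -RunbookName "Move-AzureFileSharesTiers" -ScheduleName "daily-tiering" -Parameters $params
```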
The automation script takes those parameters as input as shown in the figure below.

Once done, click OK twice.
Test and monitor the Runbook
In this section, we will test the Runbook on demand and copy the data from the Azure file share hot tier to the cool tier. This scenario simulates the case where a certain number of files were last accessed or modified more than 90 days ago and are no longer used: these files are copied to the Azure file share cool tier and removed from the hot tier automatically.
Browse to the recently created Runbook, and on the overview page click the “Start” button. Enter the required parameters as input and then click “OK“.
The job will kick in, and after a short period of time, you will see the output and logs under the “Output” to verify that the copy job finished successfully as shown in the figure below.

You can also monitor the success or failure of these schedules on the “Jobs” page of Runbooks under Resources as shown in the figure below.

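You can also check job status and output from PowerShell. This is a sketch with placeholder names, and it assumes Get-AzAutomationJobOutput accepts the job ID via its -Id parameter:

```powershell
# Sketch: review recent runbook jobs and their output; resource names are placeholders.
$jobs = Get-AzAutomationJob -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -RunbookName "Move-AzureFileSharesTiers"
$jobs | Select-Object JobId, Status, StartTime, EndTime
# Inspect all output streams of the most recent job:
Get-AzAutomationJobOutput -ResourceGroupName "rg-automation" -AutomationAccountName "aa-filetiering" `
    -Id $jobs[0].JobId -Stream Any
```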
You can see the next run schedule under the “Schedules” page, in my example, the Runbook will run every day at 10:00 PM, and so on…

That’s it, there you have it!
This is version 1.0, if you have any feedback or changes that everyone should receive, please feel free to leave a comment below.
Summary
In this article, we shared with you how to automatically move files between different Azure file share tiers, so you can optimize storage costs and activate the data lifecycle of your Azure Files in a very similar way to Azure Blob storage.
Please note that continuously moving files between different file share tiers will incur additional costs because of the I/O operations (Transactions).
The access tier determines the price and, in some cases, the performance of a file share. Premium file shares are not available in storage accounts created with standard performance; you need to create a premium file storage account for those. The main differentiator among the remaining tiers is the cost of storage at rest versus transactions.
We recommend starting with the Hot tier for Azure File Sync and then adjusting based on your workload needs. Azure Files and Azure File Sync give you the ability to share files without the need to deploy the underlying server infrastructure and provide several benefits when building an Azure-based application.
We hope that Microsoft will develop a data lifecycle for Azure Files in a very similar way to what we have for Azure Blob storage, but until this capability becomes available, we can use Azure Container Instances to simplify and automate the move between the different tiers by creating a runbook in Azure Automation, which will run as part of the container.
Learn more
- How to sync between Azure file share and Azure blob container.
- How to transfer data from one Azure storage account to another account in a different Azure subscription.
- How to sync between Azure blob storage and between Azure file share(s).
- How to sync between two Azure File Shares for Disaster Recovery.
- How Azure Backup Integrates with Azure File Sync – Part I.
- How Azure Backup Integrates with Azure File Sync – Part II.
Do you want to learn more about Azure Storage including Azure Blobs and Azure File Shares? Make sure to check my recently published online course here: Azure Storage Essential Training.
__
Thank you for reading my blog.
If you have any questions or feedback, please leave a comment.
-Charbel Nemnom-
I have tested this script – I am getting an issue that this script is only moving files from the base folder. The script is not moving files from folders or subfolders.
Can you give me a script in which I can move directories/subdirectories as well in cold storage?
Hello Muhammad, thanks for testing this script and providing feedback!
Yes, the limitation that you mentioned is on the API side of Azure Files and not in this script.
Today, the list directories and files operation returns a list of files or directories under the specified share or directory.
It lists the contents ONLY for a single level of the directory hierarchy.
Hopefully, Microsoft will address this limitation in the next version of this API.
I’ll update this article as soon as I have any news.
Great writing Charbel!!
Just like many others, I’m indeed waiting for an integrated solution from Microsoft. Till then I’ll use your solution. Thank you so much!
Cheers, Marco
Thank you Marco for the feedback, much appreciated!