
Sync Between Azure File Share and Azure Blob Container


Updated – 01/02/2022 – Starting with AzCopy version 10.13.0, Microsoft added sync support between Azure Blob <-> Azure Files instead of copy only. The automation tool was updated to support this new scenario.

Updated – 27/12/2021 – The automation tool was updated and tested with the latest Az.ContainerInstance module version 2.1 and above.

Updated – 28/07/2021 – The automation tool was updated to take into consideration the container soft delete feature which is enabled by default for Blob storage with a 7-day retention period. Please check this section for more details.

In this article, we will share with you how to sync and copy between an Azure file share and an Azure blob container.

Introduction

A while ago, we wrote about how to copy data from a storage account in one Azure subscription to a storage account in a different Azure subscription, and how to sync between Azure Blob Storage containers and between Azure file shares.

Suppose you are storing data in an Azure file share, and you have a line-of-business (LOB) application that can read only from a blob container and not from an SMB file share. In another scenario, you are leveraging Azure File Sync (AFS), which syncs to an Azure file share, and you need the same data stored in an Azure blob container. You might also have other scenarios; please leave a comment below.

For these kinds of scenarios, you have a couple of options. At the time of this writing, you could use Azure Data Box Gateway, which can sync with blobs. There are also other tools, such as AzCopy, Azure Batch, and Azure Data Factory, that can help you move data back and forth. However, using these tools comes with some fidelity loss you should be aware of: permissions and timestamps (such as the last modified time) will be lost or changed.

For the purpose of this article, we will use AzCopy, a command-line utility that copies/syncs blobs or files to and from a storage account. We will use Azure Container Instances to simplify and automate AzCopy in a Runbook, which will run as part of the container. In this way, we can run the container on a simple schedule to copy the data and get billed only for the time the container was used.

If you are new to the AzCopy tool, make sure to check Microsoft's get started document here. The good news is that Microsoft added sync support between Azure Files and Azure Blob to AzCopy starting with version 10.13.0. Thus, we will leverage AzCopy's sync support to sync data from an Azure file share to an Azure blob container instead of copying it.
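Once you have AzCopy 10.13.0 or later installed, you can try the same sync manually from any shell before automating it. This is a sketch only; the storage account, share, and container names, as well as the SAS tokens, are placeholders you must replace:

```
azcopy sync "https://<storageaccount>.file.core.windows.net/<sharename>?<SAS>" "https://<storageaccount>.blob.core.windows.net/<containername>?<SAS>" --recursive=true
```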

Prerequisites

To follow this article, you need to have the following:

1) Azure subscription – If you don’t have an Azure subscription, you can create a free one here.

2) You need one or two storage accounts, either in the same region and subscription or in different regions and subscriptions.

3) You also need at least one container in blob storage and one Azure file share, either in the same storage account or across two different storage accounts.

4) Last, you need some files in the Azure file share, or you can sync on-premises servers to the Azure file share with Azure File Sync.
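If you still need to create the container and the file share from prerequisite 3, here is a quick PowerShell sketch; the resource group, storage account, container, and share names are placeholders:

```
# Sketch only: replace the placeholder names with your own values
$ctx = (Get-AzStorageAccount -ResourceGroupName "<rg-name>" -Name "<storageaccount>").Context
New-AzStorageContainer -Name "<containername>" -Context $ctx
New-AzStorageShare -Name "<sharename>" -Context $ctx
```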

Get started

First, we need to create an Azure Automation account, which will help you automate the synchronization and copy process without user interaction. This will also respect the security access of your storage account without exposing access keys to users.

Create Automation Account

In this step, I will create an Azure automation resource with a Run As account. Run As accounts in Azure Automation are used to provide authentication for managing resources in Azure with the Azure cmdlets. When you create a Run As account, it creates a new service principal user in Azure Active Directory (Azure AD) and assigns the Contributor role to the service principal at the subscription level.

Open the Azure Portal, and click All services found in the upper left-hand corner. In the list of resources, type Automation. As you begin typing, the list filters based on your input. Select Automation Accounts.

Click +Add. Enter the automation account name, choose the right subscription, resource group, and location, and then click Create.

Sync Between Azure File Share and Azure Blob Container 1

Updated – 12/11/2021 – You can now create an Azure automation account with a Managed Identity. Microsoft recommends using Managed Identities for Automation accounts instead of Run As accounts. A managed identity is more secure and easier to use since it doesn’t require any credentials to be stored. Azure Automation support for Managed Identities is now generally available. When you create a new Automation account, a system-assigned identity is enabled. We highly recommend moving away from the Run As account to Managed Identity; we kept both documented in this article as a reference.

When you create an Automation Account with Managed Identity, it creates a new service principal user in Azure Active Directory (Azure AD) by default. Next, you must assign the appropriate (Azure RBAC) Contributor role to allow access to Azure Storage for the service principal at the resource group level. If you have two different subscriptions and two different resource groups, then you must assign the RBAC Contributor role for the service principal on the source and target resource group.

Always keep in mind and follow the principle of least privilege and carefully assign permissions only required to execute your runbook.
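As an illustration, the RBAC assignment for the system-assigned managed identity could be scripted as follows. This is a sketch rather than the article's exact procedure; the automation account and resource group names are placeholders:

```
# Sketch only: assign the Contributor role to the Automation account's service principal
# at resource group scope. Repeat for the target resource group if it differs.
$sp = Get-AzADServicePrincipal -DisplayName "<automation-account-name>"
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Contributor" `
 -ResourceGroupName "<storage-rg-name>"
```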

Import modules from Gallery

In the next step, you need to import the required modules from the Modules gallery. In your list of Automation Accounts, select the account that you created in the previous step. Then from your automation account, select Modules under Shared Resources. Click the Browse Gallery button to open the Browse Gallery page. You need to import the following modules from the Modules gallery in the order given below:

  1. Az.Accounts
  2. Az.ContainerInstance
  3. Az.Storage

Sync Between Azure File Share and Azure Blob Container 2

Updated – 12/11/2021 – Starting in September 2021, Automation accounts have the Az modules installed by default. You don’t need to import the modules from the gallery as shown in the figure above. Please note that you can also update the modules to the latest Az version from the Modules blade, as shown in the figure below.

Update Az Modules

At the time of this writing, AzCopy is still not part of the Azure Automation Runbook. For this reason, we will be creating an Azure Container instance with AzCopy as part of the container so we can automate the entire synchronization and copy process.

What you should know

If you are using the Az.ContainerInstance PowerShell module version 2.0 or later, you might be facing the following issue:

New-AzContainerGroup : Cannot process argument transformation on parameter 'ImageRegistryCredential'. Cannot convert value "peterdavehello/azcopy:latest" to type "Microsoft.Azure.PowerShell.Cmdlets.ContainerInstance.Models.Api20210301.IImageRegistryCredential[]".

After a long investigation, this is a known issue after Microsoft updated the Az.ContainerInstance PowerShell module to version 2.0 and later.

Updated – 27/12/2021 – The script below has been updated and tested with the latest Az.ContainerInstance module version 2.1 and above.

Create PowerShell Runbook

In this step, you can create multiple Runbooks based on which set of Azure file shares you want to sync/copy to the Azure blob container. PowerShell Runbooks are based on Windows PowerShell. You directly edit the code of the Runbook using the text editor in the Azure portal. You can also use any offline text editor such as Visual Studio Code and import the Runbook into Azure Automation.

From your automation account, select Runbooks under Process Automation. Click the ‘+ Create a runbook‘ button to open the Create a runbook blade.

Sync Between Azure File Share and Azure Blob Container 3

In this example, we will create a Runbook to copy all the file and directory changes from a specific Azure file share to a specific blob container. You can also be as creative as you want and cover multiple Azure file shares, blob containers, directories, etc.

Edit the Runbook

Once you have the Runbook created, you need to edit the Runbook, then write or add the script to choose which Azure File Share you want to sync and copy data to the Azure blob container. Of course, you can create scripts that suit your environment.

As mentioned earlier, in this example, I will create a Runbook to read and check all the files and directories in a specific Azure file share, and then copy the data over to a specific blob container. To maintain a high level of security, I will NOT use the storage account keys; instead, I will create a time-limited SAS token URI for each service individually (file share and blob container), and the SAS token will expire automatically after 60 minutes. So, if you regenerate your storage account keys in the future, the automation process won't break.

If you have soft delete enabled on blob storage (which is now the default), you should add the "--overwrite=ifSourceNewer" option to the "azcopy" copy command; otherwise, it overwrites identical/unchanged files by default, and every overwrite creates a soft-deleted copy that can rapidly balloon your storage costs. The script was updated to take the container soft delete feature and the latest Az.ContainerInstance PowerShell module (version 2.1 and above) into consideration.

Please note that you can also update the parameter section and copy between storage accounts across different subscriptions.

The script is as follows:

<#
.DESCRIPTION
A Runbook example which continuously checks for file and directory changes in recursive mode
for a specific Azure file share and then syncs the data to a blob container by leveraging the AzCopy tool,
which runs in a container inside Azure Container Instances, using a Service Principal in Azure AD.

.NOTES
Filename : Sync-FileShareToBlobContainer
Author   : Charbel Nemnom
Version  : 2.2
Date     : 13-January-2021
Updated  : 20-April-2022
Tested   : Az.ContainerInstance PowerShell module version 2.1 and above

.LINK
To provide feedback or for further assistance please visit:
https://charbelnemnom.com 
#>

Param (
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $AzureSubscriptionId,
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $storageAccountRG,
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $storageAccountName,    
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $storageContainerName,
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $storageFileShareName
)

# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process

# Connect to Azure with system-assigned managed identity (automation account)
Connect-AzAccount -Identity

# SOURCE Azure Subscription
Set-AzContext -Subscription $AzureSubscriptionId

# Get Storage Account Key
$storageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $storageAccountRG -AccountName $storageAccountName).Value[0]

# Set AzStorageContext
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey

# Generate Container SAS URI Token which is valid for 60 minutes ONLY with read, write, create, and delete permission
# If you want to change the target (BlobContainer -> AzureFileShare), then make sure to update the -Permission parameter to (rl)
$blobContainerSASURI = New-AzStorageContainerSASToken -Context $destinationContext `
 -ExpiryTime (Get-Date).AddSeconds(3600) -FullUri -Name $storageContainerName -Permission rwldc

# Generate File Share SAS URI Token which is valid for 60 minutes ONLY with read and list permission
# If you want to change the target (BlobContainer -> AzureFileShare), then make sure to add (rwldc) to the -Permission parameter
$fileShareSASURI = New-AzStorageShareSASToken -Context $destinationContext `
 -ExpiryTime (Get-Date).AddSeconds(3600) -FullUri -ShareName $storageFileShareName -Permission rl

# Choose the following syntax if you want to Sync instead of Copy
$command = "azcopy","sync",$fileShareSASURI,$blobContainerSASURI,"--recursive=true","--delete-destination=true"

# Choose the following syntax if you want to Copy only
# The copy command consumes less memory and incurs less billing costs because a copy operation doesn't have to index the source or destination prior to moving files.
# $command = "azcopy","copy",$fileShareSASURI,$blobContainerSASURI,"--recursive=true","--overwrite=ifSourceNewer"

# Container Group Name
$jobName = $storageAccountName + "-" + $storageFileShareName + "-azcopy-job"

# Set the AZCOPY_BUFFER_GB value at 2 GB which would prevent the container from crashing.
$envVars = New-AzContainerInstanceEnvironmentVariableObject -Name "AZCOPY_BUFFER_GB" -Value "2"

# Create Azure Container Instance Object and run the AzCopy job
# The container image (peterdavehello/azcopy:latest) is publicly available on Docker Hub and has the latest AzCopy version installed
# You could also create your own private container image and use it instead
# When you create a new container instance, the default compute resources are set to 1vCPU and 1.5GB RAM
# We recommend starting with 2 vCPU and 4 GB memory for large file shares (E.g. 3TB)
# You may need to adjust the CPU and memory based on the size and churn of your file share
$container = New-AzContainerInstanceObject -Name $jobName -Image "peterdavehello/azcopy:latest" `
 -RequestCpu 2 -RequestMemoryInGb 4 -Command $command -EnvironmentVariable $envVars

# The container will be created in the $location variable based on the storage account location. Adjust if needed.
$location = (Get-AzResourceGroup -Name $storageAccountRG).location
$containerGroup = New-AzContainerGroup -ResourceGroupName $storageAccountRG -Name $jobName `
 -Container $container -OsType Linux -Location $location -RestartPolicy never

Write-Output ("AzCopy job $jobName was submitted to Azure Container Instances")

Save the script in the CMDLETS pane as shown in the figure below.

Sync Between Azure File Share and Azure Blob Container 4

Then test the script using the “Test pane” to verify it’s working as intended before you publish it.

Once the test is completed successfully, publish the Runbook by clicking Publish. This is a very important step.

Schedule the Runbook

In the final step, you need to schedule the Runbook to run based on your desired time to copy the changes from Azure file share to Azure blob container.

Within the same Runbook that you created in the previous step, select Schedules and then click + Add schedule.

So, if you need to schedule the Runbook to run every three hours, create the schedule with Recur every 3 Hours and Set expiration set to No. You can also run it on-demand if you wish to do so.

Sync Between Azure File Share and Azure Blob Container 5

While scheduling the Runbook, you can configure and pass the required parameters for the PowerShell Script.

In this example, we need to specify the Azure Subscription ID, Resource Group Name, Storage Account Name, Azure Blob Container Name, and the Azure File Share Name that I want to copy over. The sample script takes those parameters as input.

Sync Between Azure File Share and Azure Blob Container 6

Once done, click OK twice.

Test the Runbook

In this quick demo, I will test the Runbook and request on-demand storage sync to copy the data from an Azure file share to an Azure blob container. This scenario simulates when a user adds or modifies files directly in Azure File Share and/or Azure File Sync, and then copies the data to the Azure blob container automatically.

Monitor the Runbook

You can monitor the success or failure of these schedules using the “Jobs” page of Runbooks under Resources. You can also see the next run schedule, in my example, the Runbook will run every 3 hours, and so forth…

Sync Between Azure File Share and Azure Blob Container 7

That’s it, there you have it!

This is still version 2.2; if you have any feedback or changes that everyone should receive, please feel free to leave a comment below.

How it works…

When the runbook runs for the first time, a new container will be created and then terminated. The container performs the copy batch job, which is not meant to run for a long time; it runs to complete the copy command and then stops. To prevent a constant restart on completion, I added "-RestartPolicy Never" to the container group, which means it doesn't restart when finished. This is a great way to run a batch copy job.

When the runbook runs for the second time and onward, it will start the existing container instead of creating a new one, and then run the command with refreshed file share and blob container SAS URI tokens, which are valid for 60 minutes only. In this way, you get billed only for the time the container was used, and you make sure you don't expose your storage account access keys.

Please note that you may need to increase the SAS URI Token expiry time based on the amount of data you have to copy. The SAS must be valid throughout the whole job duration since we need it to interact with the service. I would suggest padding the expiration a bit just to be safe.
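For example, the fixed 3600-second window in the script above could be replaced with a padded value derived from the expected job duration; the duration values below are illustrative only:

```
# Illustrative durations: pad the SAS expiry beyond the expected job run time
$expectedJobMinutes = 120
$paddingMinutes = 30
$expiry = (Get-Date).AddMinutes($expectedJobMinutes + $paddingMinutes)
$blobContainerSASURI = New-AzStorageContainerSASToken -Context $destinationContext `
 -ExpiryTime $expiry -FullUri -Name $storageContainerName -Permission rwldc
```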

There’s more…

The previous steps described how to automate and sync from an Azure file share to an Azure blob container using the AzCopy tool. Another useful scenario is to reverse this process. In other words, you could copy or sync from Azure Blob storage to Azure file shares and vice versa.

  • Azure Blob (SAS or public) <-> Azure Files (SAS)

To make this happen, you need to specify the source as a Blob URL and the destination as a File URL as shown in the following example:

azcopy sync "https://[storageaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "https://[storageaccount].file.core.windows.net/[filesharename]/[path/to/directory]?[SAS]" --recursive

For more details, please check the following step-by-step guide on how to copy from Azure Blob storage to Azure file share.

Summary

In this article, we showed you how to sync from an Azure file share to an Azure blob container using the AzCopy tool running in a container. In this way, we can run the container with sync and copy jobs on a simple schedule and get billed only for the time the container is used.

Originally, if you deleted some files from the Azure file share, they were not deleted from the blob container automatically, because the copy command is a copy job and not a synchronization solution.

Starting with AzCopy version 10.13.0, Microsoft added sync support between Azure Blob <-> Azure Files in addition to copy, and the automation tool was updated to support that scenario. So, if you delete some files from the Azure file share, they will automatically be deleted from the blob container too.

The sync command differs from the copy command in several ways as follows:

  • By default, the --recursive flag is true, and sync copies all subdirectories. Sync copies only the top-level files inside a directory if the --recursive flag is false.
  • If the --delete-destination flag is set to true or prompt, sync will delete files and blobs at the destination that are not present at the source.
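To illustrate, the Runbook's $command line could be adjusted as follows to sync only top-level files while still pruning destination-only blobs; this is a sketch built on the script's existing variables:

```
# Sync only top-level files; remove destination blobs that no longer exist at the source
$command = "azcopy","sync",$fileShareSASURI,$blobContainerSASURI,"--recursive=false","--delete-destination=true"
```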

Do you want to learn more about Azure Storage including Azure Blobs and Azure File Shares? Make sure to check my recently published online course here: Azure Storage Essential Training.

We hope you find this guide useful.

Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-

About the Author
Charbel Nemnom
Charbel Nemnom is a Senior Cloud Architect, Swiss Certified ICT Security Expert, Certified Cloud Security Professional (CCSP), Certified Information Security Manager (CISM), Microsoft Most Valuable Professional (MVP), and Microsoft Certified Trainer (MCT). He has over 20 years of broad IT experience serving on and guiding technical teams to optimize the performance of mission-critical enterprise systems with extensive practical knowledge of complex systems build, network design, business continuity, and cloud security.


96 thoughts on “Sync Between Azure File Share and Azure Blob Container”


  1. Hello Eduar,
    Yes sure, you can do that.
    After you generate the file Share SAS URI Token, you need to update the $fileShareSASURI variable to add the entire file share and folder structure.
    Hope it helps!

  2. Hello Jannik, thanks for the comment and feedback!
    You should check the permissions that you gave for the Automation Account.
    You can now create an Azure automation account with Managed Identity.
    You must assign the appropriate RBAC role (i.e. Contributor) to allow access to Azure Storage for the service principal at the resource group level.
    If you have two different subscriptions and two different resource groups, then you must assign the RBAC Contributor role for the service principal on the source and target resource group.
    Hope it helps!

  3. Failed
    Cannot bind argument to parameter ‘SubscriptionId’ because it is an empty string.

    My only code for subscriptionid as stated on your code is:

    Select-AzSubscription -SubscriptionId $AzureSubscriptionId

  4. Hello Lew, thanks for the feedback!
    I have updated the script, please use this line code instead:

    # SOURCE Azure Subscription
    Set-AzContext -Subcription $AzureSubscriptionId

    Let me know if it works for you.

  5. If I removed the last part of the script, I don’t encounter the error about SubscriptionID.

  6. Hello Lew, please check the permissions that you gave for the Automation Account.
    You can now create an Azure automation account with Managed Identity.
    You must assign the appropriate RBAC role (i.e. Contributor) to allow access to Azure Storage for the service principal at the resource group level.
    If you have two different subscriptions and two different resource groups, then you must assign the RBAC Contributor role for the service principal on the source and target resource group.
    Please also make sure that the managed identity has Read permissions only at the subscription level.
    Hope it helps!

  7. If you are using Managed Identities, do you still need the SAS token? I am assuming you do because of the ACI, but am wondering if it could leverage a managed identity.

  8. Hello Erik, thanks for the comment and great question!
    Yes, you are right, I am using SAS token for short time because of the ACI.
    AzCopy does support Managed Identity, and recently Microsoft added managed identity support for ACI (in preview).
    When this article was written, neither the Automation Account nor ACI supported managed identity.
    In this case, the Azure Automation account has a Managed Identity and you need to enable Managed Identity for the ACI too (reference).
    Then, you need to make sure to set the appropriate RBAC roles for both managed identities in the resource group.
    Last, you need to update the script to remove the SAS token section, and then use azcopy login --identity before you start copying/syncing.
    This is a great option if you plan to use AzCopy inside of a script that runs without SAS token or user interaction, and the script runs from ACI or VM.
    Note: Currently you can’t use a managed identity in a container group deployed to a virtual network.
    Hope it helps!

  9. Hello Sir, first of all, great article, congratulations !!
    I am trying to implement your run book to copy files from a blob container into a file share, however, I’m getting the following error, would you please help me out?

    System.Management.Automation.ParameterBindingException: A parameter cannot be found that matches parameter name ‘Subcription’.

  10. Hello Rodolfo, thanks for the comment and reporting about this issue!
    In fact, the error is from my side, I misspelled the parameter ‘Subscription’, and the ‘s’ was missing.
    I updated the script, please copy it again and it should work now :)

  11. Hello,

    It seems the "--delete-destination=true" option doesn’t work. I can still see the file there even after giving the right permissions.

    What else could it be?

    Thanks!

  12. Hello George, thanks for the comment!
    Could you please describe the full scenario that you are trying to do?
    Are you syncing between Azure file share and Blob container, or the opposite?
    I believe you already gave the service principal contributor access to the destination storage account, right?
    Thanks!

  13. Hi Charbel,

    I have found the culprit.
    I added the option "--overwrite=ifSourceNewer" where the sync option is, and that was causing issues with the script.

    your solution is great and is a major component in my logic apps SFTP solution. I have created a logic app to check for files over SFTP from one of our customers and it copies the files directly to our Azure files share. The limitation with this is that there is no module for doing the opposite – upload to customer’s servers from our Azure file share. Unfortunately, only the Blob storage module is supported. This is where your solution can sync from our Azure file to Blob storage and let the logic app check for any new files after the sync has completed.

    Way more elegant this way.

    Thanks again!

  14. Hello George, thanks for the comment and feedback!
    I am happy to hear that my solution helped you to complement your Logic App solution.
    Just a quick note, the option/parameter "--overwrite=ifSourceNewer" with AzCopy Sync is NOT supported; it works only with AzCopy Copy.
    Cheers,

  15. Hello Charbel, I am testing with one command ie azure context command as below:

    Param (
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()]
    [String] $AzureSubscriptionId

    )
    # SOURCE Azure Subscription
    Set-AzContext -Subscription $AzureSubscriptionId

    But getting error as specify valid subscription/tenant id, could you please help me here?

  16. Hello Kumar, please check the permissions that you gave for the Automation Account (service principal with Managed Identity).
    You must assign the appropriate RBAC role (i.e. Contributor) to allow access to Azure Storage for the service principal at the resource group level.
    If you have two different subscriptions and two different resource groups, then you must assign the RBAC Contributor role for the service principal on the source and target resource group.
    Also, make sure that the managed identity has Read permissions only at the subscription level, so you can set the Context.
    Hope it helps!

  17. Hi, and thanks for the excellent guide!

    I’m getting this error message when trying to test the runbook:
    “System.Management.Automation.CommandNotFoundException: The term ‘azcopy’ is not recognized as the name of a cmdlet, function, script file, or operable program.”

    What could I be doing wrong? Unfortunately, I haven’t found anyone else with this exact problem when searching.

  18. Hello Gustav, thanks for the feedback!
    In regards to your issue. Can you confirm that you follow exactly what is documented here in this article?
    Can you reverify the syntax that you copied including single quotes and double quotes?
    Thanks!

  19. Hi,

    Thank you for this guide, it seems that it would do exactly what I need but I can’t get past a specific error.

    I have it authenticating, getting keys, and even creating the container, although I had to add some reformatting to the command as it was replacing & with \\u0026 and ‘ with \\u0027, which is something to do with JSON conversion I think.

    The error I am getting is from the container itself,
    Error: Failed to start container, Error response: to create containerd task: failed to create container 56da5c60e0a96dd1718bab1f561bf5480c04a894a1dc1718c30024bf8fa9ea28: guest RPC failure: failed to create container: failed to run runc create/exec call for container 56da5c60e0a96dd1718bab1f561bf5480c04a894a1dc1718c30024bf8fa9ea28 with exit status 1: container_linux.go:380: starting container process caused: exec: “azcopy sync stat azcopy sync no such file or directory: unknown

    I thought it was the command itself as it looks like it can’t see either the source or destination, but the command works perfectly outside of a container, so then I assumed it might be the container image being used, so I tried another image with the exact same result.

    I think it is something to do with the way the command is being passed to the container being miss formatted or something now, but I have no idea how to proceed.

  20. Hello Richard, thank you for the comment and for sharing your experience!
    As you described, it looks like the issue is related to the container image and not to the command.
    The command is correct.
    What I suggest is, to try the following 2 images instead of the latest and see which one works.

    # Version 10
    $container = New-AzContainerInstanceObject -Name $jobName -Image "peterdavehello/azcopy:10" `
    -RequestCpu 2 -RequestMemoryInGb 4 -Command $command -EnvironmentVariable $envVars

    # Or Version 10.14
    $container = New-AzContainerInstanceObject -Name $jobName -Image "peterdavehello/azcopy:10.14" `
    -RequestCpu 2 -RequestMemoryInGb 4 -Command $command -EnvironmentVariable $envVars

    Let me know if it works for you.
    Thanks!

