Reduce DPM Storage Consumption by enabling Deduplication on Modern Backup Storage?


In this article, we will share with you how to reduce DPM storage consumption by enabling deduplication on Modern Backup Storage.

Updated 29/11/2016: [Script updated to create a simple storage pool for DPM.]

Updated 09/02/2017: [Announcing backups of SQL Server 2016 and SharePoint 2016 with DPM 2016 Update Rollup 2. DPMDB can also be hosted on SQL Server 2016].

Updated 10/07/2017: [Over the last couple of months, we received several comments about DPM and Dedup. We’ve updated this post by answering all those questions. Please see the Frequently asked questions (FAQ) section at the end of this post].


Modern Backup Storage (MBS)

Microsoft has made Windows Server 2016 and System Center 2016 generally available (GA). In a previous post, we covered how to install DPM 2016 on Windows Server 2016 and SQL Server 2016; please check that article for an overview of what's new in DPM 2016.

In System Center Data Protection Manager 2016, Microsoft introduced a new backup option called Modern Backup Storage (MBS). MBS achieves up to 50% storage savings through ReFS technology, and backups up to 3 times faster through ReFS block cloning, which uses allocate-on-write rather than the copy-on-write used by volume snapshots in DPM 2012 R2. MBS also achieves much more efficient storage utilization through Workload-Aware Storage.

Workload-Aware Storage enables you to configure DPM to store your backups on high- or low-performance volumes depending on the workload. DPM 2016 uses Modern Backup Storage by default; you can still use the legacy disk storage technology from DPM 2012/R2, but I advise you to start using Modern Backup Storage.

How DPM MBS Works

DPM leverages Windows Server 2016 ReFS capabilities to provide Modern Backup Storage. When you add a volume, DPM formats the storage as an ReFS volume and stores the backups in multiple VHDXs, each 1.2 GB in size. Suppose you are backing up a SQL database with 10 blocks: DPM places the VHDX in a common chunk store on the ReFS storage volume.

On the next recovery point, DPM creates a ReFS clone pointing to the original VHDX and to the common chunk store. When some blocks change for the backup, DPM transfers the new blocks and writes them into the cloned VHDX using allocate-on-write; ReFS writes the new blocks into the chunk store, and the new cloned VHDX points to these blocks of new data.

Data Deduplication Overview

Data Deduplication, often called Dedup for short, is a feature introduced in Windows Server 2012/R2 and enhanced in Windows Server 2016. Data Deduplication helps reduce the storage cost of redundant data. When enabled, it optimizes free space on a volume by examining the data for duplicated portions, removing redundancies without compromising data fidelity or integrity.

A quick overview of the Data Deduplication enhancements in Windows Server 2016:

1) You can now use larger volumes, up to 64 TB, as opposed to 10 TB in Windows Server 2012 R2.

2) Windows Server 2016 runs multiple threads in parallel, using multiple I/O queues on a single volume, resulting in increased performance, as opposed to the single-threaded job and single I/O queue per volume in Windows Server 2012 R2.

3) You can use files up to 1 TB in size; such files are good candidates for Dedup.

4) Virtualized backup awareness, via the new "Backup" usage type for configuring Dedup on DPM storage, which is our focus in today's article.

DPM and Dedup

Using Dedup with DPM can result in large savings on the storage backend. The amount of space saved varies with the type of data being backed up (SQL databases, virtual machines, virtual desktop environments, etc.). The largest savings come from backing up VMs, where storage savings typically range from 50% to 90%.

As discussed earlier, Modern Backup Storage (MBS) relies on ReFS technology; however, Microsoft does not support data deduplication on ReFS volumes in Windows Server 2016. Dedup is also not supported on Storage Spaces Direct (S2D), because S2D relies on ReFS. The following screenshot shows "Configure Data Deduplication" disabled for an ReFS volume.


Reduce DPM Storage

In this section, we will configure data deduplication for DPM Modern Backup Storage.

In this example, we have DPM 2016 running on Windows Server 2016 in a virtual machine on a Windows Server 2016 Hyper-V host, storing backup data in VHDX files. Those VHDXs are stored locally on the Hyper-V host on a separate NTFS parity volume. Please note that the same applies to newer versions of DPM (2019 or 2022) running on Windows Server 2019 or Windows Server 2022.

Please note that you can also store the VHDX files in a shared folder on an SMB 3.1 Scale-Out File Server (SOFS) with Storage Spaces and data deduplication enabled.

The Cluster Shared Volume (CSV) on the Scale-Out File Server must be formatted with NTFS in order to enable data deduplication.

Step 1: Set up dedup on NTFS volumes

First things first: make sure you have installed the Data Deduplication feature and rebooted your host.
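If the feature is not yet installed, it can be added with a single PowerShell command on the host; a minimal sketch (run from an elevated prompt, and reboot before continuing):

```powershell
# Install the Data Deduplication role service on the host, then reboot
Install-WindowsFeature -Name FS-Data-Deduplication -Verbose
Restart-Computer -Force
```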


Since in this example we are using a local NTFS volume rather than Storage Spaces, we start by formatting the volume with 64 KB allocation units and large NTFS File Record Segments (FRS), which work better with Dedup.

Open Windows PowerShell and run the following command:

Format-Volume -DriveLetter Q -FileSystem NTFS -NewFileSystemLabel "NTFS Dedup" -AllocationUnitSize 64KB -UseLargeFRS -Force -Verbose


In this example, I am using a 9 TB NTFS volume. But you can go ahead and use larger volumes as needed, up to a maximum of 64 TB.

Step 2: Enable dedup on NTFS volumes

In the next step, we need to enable Dedup on each NTFS volume that will store the VHDXs. In Windows Server 2016, Microsoft introduced a new usage type called "Backup", which minimizes the number of PowerShell commands needed; we can enable Dedup for a backup target with just one command:

Enable-DedupVolume -Volume Q: -UsageType Backup -Verbose
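To confirm the setting took effect, you can query the volume's Dedup configuration; a quick verification sketch (for the Backup usage type, MinimumFileAgeDays should report 0 so that backup files are optimized immediately):

```powershell
# Verify Dedup is enabled on Q: with the Backup usage type
Get-DedupVolume -Volume Q: | Format-List Volume, Enabled, UsageType, MinimumFileAgeDays
```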


Step 3: Prepare DPM Storage

DPM 2016 allocates storage using VHDX files residing on the deduplicated volume; in this example, it's 9 TB. We will create 12 dynamic VHDX files of 1 TB each on the Dedup volume and attach them to DPM. Note that this overprovisions the storage by 3 TB, to take advantage of the savings produced by Dedup. As Dedup produces additional storage savings, new VHDX files can be created on the same volume to consume the saved space.

For SC DPM 2016, Microsoft tested the DPM server with up to 80 x 1 TB VHDX files attached. This is not a hard limit, however; some users have reported testing up to 150 x 1 TB VHDXs attached to a single DPM server. But please remember to spread the VHDXs across multiple virtual SCSI controllers of the DPM VM so the VM stays performant.

A quick reminder: a VM supports a maximum of 4 SCSI controllers, and each controller supports up to 64 virtual hard disks, for a total of 256 VHDXs. It would be interesting to test 255 x 1 TB VHDXs as DPM storage, reserving one disk for the OS.

Open Windows PowerShell and run the following commands to create 12 virtual hard disks, shut down the DPM server, add 3 SCSI controllers, and attach the created virtual hard disks to the DPM server (in this example, 4 VHDX files are attached to each SCSI controller).

# Variables
$VM = "DPM2016"

# Create 12 x 1 TB dynamic virtual disks
1..12 | % {New-VHD -Path "Q:\DPM-MBS$_.vhdx" -Dynamic -SizeBytes 1TB -PhysicalSectorSizeBytes 4096 -AsJob}
# Wait for the disk-creation jobs to finish before attaching the VHDXs
Get-Job | Wait-Job

# Shut down the VM and add 3 SCSI controllers (4 in total)
$VMScsi = Get-VMScsiController -VMName $VM
If ($VMScsi.Count -ne 4) {
    Stop-VM -Name $VM -Passthru
    1..3 | % {Add-VMScsiController -VMName $VM -Verbose}
}

# Attach 4 virtual disks to each SCSI controller
1..4  | % {Add-VMHardDiskDrive -VMName $VM -Path "Q:\DPM-MBS$_.vhdx" -ControllerType SCSI -ControllerNumber 1}
5..8  | % {Add-VMHardDiskDrive -VMName $VM -Path "Q:\DPM-MBS$_.vhdx" -ControllerType SCSI -ControllerNumber 2}
9..12 | % {Add-VMHardDiskDrive -VMName $VM -Path "Q:\DPM-MBS$_.vhdx" -ControllerType SCSI -ControllerNumber 3}

# Start the DPM VM if it is not already running
If (Get-VM -Name $VM | ? {$_.State -ne "Running"}) {
    Start-VM -Name $VM -Passthru
}


Step 4: Configure Storage and Enable Modern Backup Storage

The following steps are performed inside the DPM virtual machine. We will first create a Simple Storage Space that aggregates all 12 x 1 TB disks, and then add the volume to DPM 2016 as Modern Backup Storage.

You may wonder whether a Simple Storage Space is a single point of failure in the guest. Remember that those VHDXs reside on a Scale-Out File Server (SOFS) or on Storage Spaces Direct (S2D): you maintain resiliency at the storage level, not inside the guest, as that is not supported by Storage Spaces. A Simple Storage Space is also suggested inside the guest so that you can extend the DPM volume(s) if needed in the future.

In this example, we are using parity storage on the host level.

Login to the DPM server and run the following commands:

# Variables
$Pool1 = "DPM Storage Pool"
$vd1 = "Simple DPM vDisk01"

New-StoragePool -FriendlyName $Pool1 -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True)

Get-StoragePool $Pool1 | Get-PhysicalDisk | Sort Size | FT FriendlyName, Size, MediaType, HealthStatus, OperationalStatus -AutoSize
Get-StoragePool $Pool1 | Get-PhysicalDisk | Where MediaType -eq "Unspecified" | Set-PhysicalDisk -MediaType HDD

# Create Simple Storage Space virtual disk
New-VirtualDisk -StoragePoolFriendlyName $Pool1 -FriendlyName $vd1 -ResiliencySettingName Simple -UseMaximumSize `
 -ProvisioningType Fixed -MediaType HDD -Interleave 256KB -NumberOfColumns 1

# Format the volume with NTFS and a 64 KB allocation unit size
Get-VirtualDisk -FriendlyName $vd1 | Get-Disk | Initialize-Disk -Passthru | `
 New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -AllocationUnitSize 64KB `
 -FileSystem NTFS -NewFileSystemLabel "DPM Modern Storage" -Confirm:$false

# Check Partitions and Volumes
 Get-Partition | ? Size -gt 1TB | FT -AutoSize
 Get-Volume | ? FileSystemLabel -eq "DPM Modern Storage"  | FT -AutoSize

Here is the result in Server Manager:


Updated 29/11/2016: One important point to mention here: in the script above, I created the simple Storage Spaces virtual disk with NumberOfColumns equal to 1, because when a virtual disk is created without specifying NumberOfColumns, Storage Spaces sets the property automatically according to the number of physical disks used to create the virtual disk, and it cannot be changed after the virtual disk has been created.

If, for example, NumberOfColumns is 8, the only way to extend the simple fixed virtual disk is to add 8 more 1 TB VHDXs to the storage pool. For this reason, I specified NumberOfColumns equal to 1, so we can add one disk at a time in the future as needed.
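To illustrate, here is a hedged sketch of growing the DPM volume by one disk at a time, using the pool and virtual disk names from the script above (run inside the DPM VM after attaching a new 1 TB VHDX; the partition-selection logic is an assumption and may need adjusting in your environment):

```powershell
# 1. Pool the newly attached VHDX
Add-PhysicalDisk -StoragePoolFriendlyName "DPM Storage Pool" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $True)

# 2. Grow the simple virtual disk to the new maximum supported size
$max = (Get-VirtualDiskSupportedSize -StoragePoolFriendlyName "DPM Storage Pool" `
    -ResiliencySettingName Simple).VirtualDiskSizeMax
Resize-VirtualDisk -FriendlyName "Simple DPM vDisk01" -Size $max

# 3. Extend the data partition into the newly available space
$part = Get-VirtualDisk -FriendlyName "Simple DPM vDisk01" | Get-Disk | Get-Partition | Where Type -eq "Basic"
$size = Get-PartitionSupportedSize -DiskNumber $part.DiskNumber -PartitionNumber $part.PartitionNumber
Resize-Partition -DiskNumber $part.DiskNumber -PartitionNumber $part.PartitionNumber -Size $size.SizeMax
```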

Next, open the DPM console and browse to Management | Disk Storage and then click on +Add as shown in the following screenshot.


By default, DPM will format the volume with ReFS and add it to the storage pool so it can deliver all the new savings of Modern Backup Storage. You can also give the volume a friendly name for easy recall later, as shown in the next screenshot.


Please note that if you are upgrading from DPM 2012 R2 and you have protection groups created with that version, you will also see an option to “Add disks” to be used for those protection groups as shown in the above screenshot.

After adding the volume to DPM, the next step is to configure Workload-Aware Storage (WAS). This feature enables you to associate workloads with volumes, so when you configure protection groups, DPM will proactively select these volumes to store the associated workloads.

In this example, we need to back up Hyper-V virtual machines; this can easily be done with PowerShell, as shown in the next screenshot. We will specify the DatasourceType as Hyper-V.


Please note that you can also associate volumes with FileSystem, Client, Exchange, SharePoint, SQL, VMware, All, SystemProtection, Hyper-V, and Other workloads.
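For reference, the association shown in the screenshot can be scripted with the Update-DPMDiskStorage cmdlet; a minimal sketch (the friendly name is an example from this setup, and the volume index assumes the newly added volume is the first one returned):

```powershell
# List the volumes DPM knows about, then tag one for Hyper-V workloads
$volumes = Get-DPMDiskStorage -Volumes
Update-DPMDiskStorage -Volume $volumes[0] -FriendlyName "HyperV_VMs_MBS" -DatasourceType HyperV
```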

Step 5: Configure Protection Group with MBS

In the final step, we will create a protection group and start backing up our virtual machines using Modern Backup Storage and get the benefits of storage savings using both technologies (MBS and Dedup).

Open the DPM console and browse to Protection, then click New. In the Create Protection Group wizard, select Servers, then select all the virtual machines you want to protect. In this example, I am backing up a Hyper-V cluster, as shown in the following screenshot.


Then select the protection method you need (short-term to disk, to Azure, or long-term using tape), then give the protection group a name. Click Next.


Select the short-term retention goal you need and then specify the recovery point schedule. In this example, it's scheduled every day at 6:00 PM.


The next step is to review the disk storage allocation. Here you can see the total data size (2.5 TB in this case) and the disk storage to be provisioned on DPM (5 TB); you can also review the data size for each VM and the space to be provisioned in DPM storage.

Since we configured the Hyper-V volume's DatasourceType to store the virtual machines we need to back up, DPM will automatically select this particular volume as the target storage. However, you can also change the target storage for a particular data source and back it up to another volume if needed, by using the drop-down box shown in the following screenshot.


Click Next and then select the replica creation method. I will select Now to happen automatically over the network.


Click Next and make sure to select Run a consistency check if a replica becomes inconsistent. Click Next, and then click on Create Group as shown in the next screenshot to start backing up your workload using Modern Backup Storage.


Step 6: Optimize DPM backup and Dedup scheduling

Please be aware that backup operations, including the DPM Consistency Check (CC), and deduplication operations are I/O intensive. If they were to run at the same time, the overhead of switching between these operations could be costly and result in less data being backed up or deduplicated on a daily basis. Microsoft recommends configuring separate, non-overlapping deduplication and backup/consistency check windows.

To do this, set the DPM backup window and DPM Consistency Check (CC) window so they do not overlap the Dedup window. Open the DPM Management Shell inside the guest and run the following commands:

# Set the DPM Backup and Consistency Check (CC) windows
$StartTime = "16:00"
$Duration = 12

$PGs = Get-DPMProtectionGroup -DPMServerName $env:COMPUTERNAME

Foreach ($PG in $PGs) {

    $MPG = Get-DPMModifiableProtectionGroup -ProtectionGroup $PG

    Set-DPMBackupWindow -ProtectionGroup $MPG -StartTime $StartTime -DurationInHours $Duration
    Set-DPMConsistencyCheckWindow -ProtectionGroup $MPG -StartTime $StartTime -DurationInHours $Duration

    Get-DPMBackupWindow -ProtectionGroup $MPG
    Get-DPMConsistencyCheckWindow -ProtectionGroup $MPG

    # Commit the changes to the protection group
    Set-DPMProtectionGroup -ProtectionGroup $MPG
}


In this example, DPM is configured to back up virtual machines and run the DPM Consistency Check (CC) window between 4:00 PM and 4:00 AM.

The final step is to configure deduplication to run on the host for the remaining 12 hours of the day, from 4:00 AM to 4:00 PM.

A 12-hour deduplication window starting at 4:00 AM, after the backup/consistency check window ends, would be configured as follows on each individual Hyper-V host or SOFS/S2D cluster node:

# Disable default dedup schedule
 Set-DedupSchedule * -Enabled:$false

# Dedup start time and duration (the 12 hours left after the DPM backup window)
 $dedupStart = "4:00am"
 $dedupDuration = 12

# On weekends, Garbage Collection (Saturday) and Scrubbing (Sunday) run at the normal start time,
# so the weekend Optimization job starts one hour later with a shortened duration.
# Once the GC/Scrubbing jobs complete, the remaining time is used for weekend optimization.
 $shortenedDuration = $dedupDuration - 1
 $dedupShortenedStart = "5:00am"

# If the previous command disabled the priority optimization schedule, re-enable it
 if (Get-DedupSchedule -Name PriorityOptimization -ErrorAction SilentlyContinue) {
     Set-DedupSchedule -Name PriorityOptimization -Enabled:$true
 }

# Set weekday and weekend optimization schedules
 New-DedupSchedule -Name DailyOptimization -Type Optimization -DurationHours $dedupDuration `
 -Memory 50 -Priority Normal -InputOutputThrottleLevel None -Start $dedupStart -Days Monday,Tuesday,Wednesday,Thursday,Friday

New-DedupSchedule -Name WeekendOptimization -Type Optimization -DurationHours $shortenedDuration `
 -Memory 50 -Priority Normal -InputOutputThrottleLevel None -Start $dedupShortenedStart -Days Saturday,Sunday

# Re-enable and modify Garbage Collection and Scrubbing schedules
 Set-DedupSchedule -Name WeeklyScrubbing -Enabled:$true -Memory 50 -DurationHours $dedupDuration `
 -Priority Normal -InputOutputThrottleLevel None -Start $dedupStart -StopWhenSystemBusy:$false -Days Sunday

Set-DedupSchedule -Name WeeklyGarbageCollection -Enabled:$true -Memory 50 -DurationHours $dedupDuration `
 -Priority Normal -InputOutputThrottleLevel None -Start $dedupStart -StopWhenSystemBusy:$false -Days Saturday

# Disable background optimization
 if (Get-DedupSchedule -Name BackgroundOptimization -ErrorAction SilentlyContinue) {
     Set-DedupSchedule -Name BackgroundOptimization -Enabled:$false
 }

# Get updated dedup schedule
Get-DedupSchedule | FT -AutoSize

The output will look something like this:


It's very important to note that whenever the DPM backup window is modified, you must modify the deduplication window along with it so they don't overlap.


DPM Modern Backup Storage (MBS) and Dedup are better together: with MBS you can save up to 50% of storage, and with Dedup you can save another 50%-90%.

Here are the storage-saving results after two weeks of running DPM 2016 with Modern Backup Storage and Dedup enabled on an NTFS volume in Windows Server 2016.


We hope this post has been informative for you, and we would like to thank you for reading!


Learn more

Do you want to learn more about System Center Data Protection Manager and how to create a hybrid-cloud backup solution? Make sure to check my recently published book: Microsoft System Center Data Protection Manager Cookbook.

With this book (over 450 pages) on your side, you will master the world of backup with System Center Data Protection Manager and Microsoft Azure Backup Server deployment and management by learning tips, tricks, and best practices, especially when it comes to advanced-level tasks.


Frequently Asked Questions (FAQ)

Q: Does Dedup on Microsoft Azure Backup Server (MABS) work the same way as described in this post with DPM?

A: Yes, Dedup with MABS is fully supported using the same steps described in this post. MABS inherits the same functionality as DPM.

Q: Is it required to store the 1TB VHDX files on a Scale-out File Server (SOFS) or Storage Spaces Direct (S2D)?

A: No. You can simply enable Dedup on the Hyper-V host's drive that holds the VHDXs for the MABS/DPM server; the VHDXs can be stored locally on the Hyper-V host on an NTFS volume with Dedup enabled. In fact, you can even leverage your existing NAS/SAN over iSCSI or FC and map the volume to the Hyper-V host, though note that with this option you lose VM mobility, since the storage is mapped directly to the Hyper-V host. You can also use a Hyper-V cluster with traditional SAN storage and place the VHDX files on a Cluster Shared Volume (CSV) formatted with NTFS and Dedup enabled.

Q: Does the volume/drive have to be dedicated to DPM/MABS storage? What if it holds other VM VHDXs?

A: No, it’s not required to have a dedicated drive for DPM/MABS storage. You can put other VM VHDXs as well, but please note the following two caveats that you need to take into consideration when planning your backup storage:

a) Dedup on NTFS in Windows Server 2012 R2 and Windows Server 2016 supports only General Purpose File Servers, Virtual Desktop Infrastructure (VDI) workloads, and virtualized backup applications such as DPM/MABS. Dedup does not support general-purpose VM VHDXs.

b) If you share the drive with other use cases, this will impact backup performance and increase the I/O load on the disk. Plan accordingly.

Q: What is the recommended or optimal size for the VHDXs that get attached to the DPM/MABS VM?

A: Microsoft supports files up to 1 TB for Data Deduplication. Windows Server 2016 helps you scale up, with full support for 64 TB volumes and full use of files up to 1 TB. For this reason, I created several 1 TB VHDX files on a single NTFS volume.

Q: Can I set the ‘NumberOfColumns’ property through a GUI-initiated Storage Spaces Virtual Disk?

A: No, you can set the 'NumberOfColumns' property only with Windows PowerShell while creating the virtual disk, as described in this article. Please remember to set 'NumberOfColumns' equal to 1, so you can add one or more VHDXs at a time in the future as needed.

Q: If you have multiple VHDXs attached to the DPM/MABS server, couldn’t you just select a single one, create the Storage Space Virtual Disk, and then be able to expand it with the additional disks post-creation?

A: Yes, you can do that. But remember that a single VHDX supports a maximum of 1 TB of data, so the Storage Spaces virtual disk would start at 1 TB. You would need to be quick about adding additional disks post-creation, before DPM storage runs out of disk space.

Q: Can you elaborate on the DPM Consistency Check (CC) schedule, Dedup Garbage Collection, and Dedup Scrubbing? What are the defaults, and what are the best practices around these components?

A: As mentioned in this article, backup jobs, including Consistency Check (CC), and deduplication operations are I/O intensive. If they run at the same time, the overhead of switching between these operations can be costly and result in less data being backed up or deduplicated on a daily basis. In this case, set the DPM backup/CC schedule windows so they do not overlap the Dedup window. Please refer to Step 6: Optimize DPM backup and Dedup scheduling for more information.

Q: Is the VM that's running DPM a Generation 1 or Generation 2 Hyper-V VM?

A: The VM running DPM is a Generation 2 Hyper-V VM.

Q: In the article, you used a separate NTFS parity volume to hold the VHDXs that are attached to the DPM server. Can you explain what you mean by the "Parity" term? Could it just be a dedicated LUN attached to the Hyper-V host?

A: Parity is a resiliency setting used by Storage Spaces, similar to RAID-5 or RAID-6. You could have a single Hyper-V host with local Direct-Attached Storage (DAS) and then create Storage Spaces with a "Parity" virtual disk. The Parity setting is more storage efficient than Mirror but less performant, and a parity volume is recommended for backup and archive workloads. You could also use a dedicated LUN attached to the Hyper-V host with a RAID-5 or RAID-6 volume.

Q: How does the protection group's short-term goal backup schedule relate to the DPM backup window mentioned in Step 6? In that step, every protection group's DPM backup window is set to the same time and duration. Why is that? You also set the Consistency Check window to the same values; why?

A: I've set the DPM backup window and Consistency Check (CC) window so they do not overlap the Dedup window on the Hyper-V host. If they were to run at the same time, the additional I/O overhead would impact the storage, and switching between these operations could be costly and result in less data being backed up or deduplicated. You should schedule these operations with minimal or no overlap with the Dedup schedule.

Q: Is there a recommendation or best practice to set the Consistency Check (CC) Window to the same time and duration as the DPM Backup Window? Is there also a best practice for the recommended duration?

A: When Dedup is enabled on the Hyper-V host, it's recommended to set the DPM backup window and Consistency Check (CC) window so they do not overlap the Dedup schedule on the host. In this example, we set backup jobs and CC to run at the same time (4:00 PM until 4:00 AM).

CC is scheduled to start at 4:00 PM and run for a maximum duration of 12 hours. This ensures that DPM checks for inconsistent replicas only at the specified time (4:00 PM) each day, running a consistency check if it finds one, which frees the storage I/O for Dedup to complete during the day.

Q: Why is the disk storage to be provisioned on DPM twice the size of the total disk size on the host?

A: In this example, we provisioned 12 x 1 TB of logical storage for DPM while having 9 TB of actual disk space on the Hyper-V host, which is not twice the total disk size. As mentioned in this article, Dedup storage savings are very high for VHDX files, with approximate savings in the 60-90%+ range, so we can store more data than the actual 9 TB of storage we have.

In this example, we've provisioned roughly 35% more than the physical capacity (9 TB + 35% ≈ 12 TB). This lets us store up to 12 TB of logical data on top of the 9 TB volume. If the savings percentage resulting from deduplication is high enough, all 12 VHDX files will be able to reach their maximum logical size and still fit in the 9 TB volume (there may even be additional space to allocate more VHDX files for the DPM server to use in the future).
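The break-even arithmetic is straightforward: 12 TB of logical VHDX capacity fits on a 9 TB physical volume as long as Dedup saves at least a quarter of the data, which is well below the 60-90% savings typically seen for VHDX files:

```powershell
$physicalTB = 9    # actual NTFS volume size
$logicalTB  = 12   # total provisioned VHDX capacity (12 x 1 TB)

# Minimum Dedup savings rate needed for all VHDXs to fit when fully grown
$minSavings = 1 - ($physicalTB / $logicalTB)   # 0.25, i.e. 25%
```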

Q: Why is it suggested to disable the Background Optimization deduplication schedule?

A: The Background Optimization deduplication job always runs at low priority and pauses deduplication when the system is busy, to minimize the impact on system performance. It's recommended to disable this dynamic behavior and instead have Dedup run in a specified schedule window, which minimizes the impact on backup performance.

And as always we would love to hear your feedback and results. Please add your comment to this post and let us know about your experience with these configurations and, of course, any questions you may have.

Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-


11 thoughts on “Reduce DPM Storage Consumption by enabling Deduplication on Modern Backup Storage?”


  1. When creating the Storage Pool and Virtual disk in step 4, is there a reason that you format it as NTFS? Or is it just because it’s simple and DPM is going to re-format it as ReFS anyway?

  2. Hello Steve,

    DPM will not see the volume you want to add unless it’s formatted either with NTFS or ReFS. And in either case, DPM will reformat the drive with ReFS.


  3. Hi Charbel, I followed the process you provided and everything worked great but I haven’t been able to add a new disk to the Virtual Disk. I was able to add the disk to the Storage Pool and then tried to extend the Virtual Disk but that failed. I wonder if you have the process or a script to add a new physical disk into the Storage Pool and Virtual Disk.

  4. Hello Gomez,

    Please note that in the article above, I created the simple Storage Spaces virtual disk with NumberOfColumns equal to "1", because when a virtual disk is created without specifying NumberOfColumns, Storage Spaces sets the property automatically according to the number of physical disks used to create the virtual disk, and it cannot be changed after the virtual disk has been created. If, for example, NumberOfColumns is 8, the only way to extend the simple fixed virtual disk is to add 8 more 1 TB VHDXs to the storage pool. For this reason, I specified NumberOfColumns equal to 1, so we can add one disk at a time in the future as needed.

    Hope this helps!


  5. Hi Charbel, thanks for getting back with me. I did setup everything the way you did and it works better than I expected. The setup includes the number of columns to be 1 but I can’t find a way to extend the Storage Pool. Here is what I tried, I created a new 1TB disk and attached it to the iSCSI controller. This created a new Storage Pool called Primordial. Then, I ran this command to add the disk to the Storage Pool: Add-PhysicalDisk -PhysicalDisks (Get-StorageSubSystem -FriendlyName “Windows Storage on xxxx” | Get-PhysicalDisk | ? CanPool -NE $false) -StoragePoolFriendlyName “DPM Storage Pool01”. The disk was added to the Storage Pool successfully but things don’t look right. The Storage Pool Free Space went from Free Space 0 Bytes to Free Space 1,023 GBs. Also the new disk Media Type is Unknown even after running command (Get-StoragePool “DPM Storage Pool01” | Get-PhysicalDisk | Where MediaType -eq “Unknwon” | Set-PhysicalDisk -MediaType HDD). So, at this point, I ran the Add-PhysicalDisk command and that added the disk to the Virtual Disk but the capacity never changed. I tried to Extend command but that didn’t work either. So, my question is, what is the process to add a new virtual disk? Should it be added to the Storage Pool first? Should it be added to the Virtual Disk first? Should the disk be added to the Virtual Disk or should the Virtual Disk be extended? I think I understand the process fairly well but unfortunately, I haven’t been able to add a new disk that extends into the Storage Pool and the Virtual Disk.

  6. Hello Gomez,

    Here is the script that will extend the simple virtual disk for DPM Storage Pool.

    Before you run the script, make sure you created 1TB disk(s) and attached it to the SCSI controller.

    # Expand Simple Storage Space
    $Pool1 = "DPM Storage Pool"
    $vd1 = "Simple DPM vDisk01"
    $Pooldisks = Get-PhysicalDisk | Where {$_.CanPool -eq $True}
    Add-PhysicalDisk -PhysicalDisks $Pooldisks -StoragePoolFriendlyName $Pool1
    Get-StoragePool $Pool1 | Get-PhysicalDisk | Sort Size | FT FriendlyName, Size, MediaType, HealthStatus, OperationalStatus -AutoSize
    Get-StoragePool $Pool1 | Get-PhysicalDisk | Where MediaType -eq "Unspecified" | Set-PhysicalDisk -MediaType HDD
    Get-StoragePool $Pool1 | Get-PhysicalDisk | Sort MediaType | FT FriendlyName, MediaType, @{l="Size(GB)";e={$_.Size/1GB}} -AutoSize

    # Before Resizing the Existing Virtual Disk
    Get-VirtualDisk | FL NumberOfColumns

    Get-VirtualDiskSupportedSize -StoragePoolFriendlyName $Pool1 | FT @{l="VirtualDiskSizeMin(GB)";e={$_.VirtualDiskSizeMin/1GB}}, @{l="VirtualDiskSizeMax(GB)";e={$_.VirtualDiskSizeMax/1GB}}

    Resize-VirtualDisk -FriendlyName $vd1 -usemaximum

    Get-VirtualDisk $vd1 | Get-Disk | Update-Disk

    Get-VirtualDisk $vd1 | Get-Disk | fl *

    # Extend The Volume
    $Volume = Get-Volume -FileSystemLabel HyperV_VMs_MBS
    $Partition = $Volume | Get-Partition
    $Disk = $Partition | Get-Disk
    $size = (Get-PartitionSupportedSize -DiskNumber $Disk.Number -PartitionNumber $Partition.PartitionNumber)
    Resize-Partition -DiskNumber $Disk.Number -PartitionNumber $Partition.PartitionNumber -Size $size.SizeMax

  7. Charbel, thanks a lot for putting this together. I ran into some issues during the first few runs but got most of those resolved. The only one I couldn’t resolved was this command: Resize-VirtualDisk -FriendlyName $vd1 -usemaximum. Powershell doesn’t recognize the -usemaximum parameter. So, since it didn’t add or extend the virtual disk, I added the disk with the Add-PhysicalDisk command but was never able to extend it. And even though I can see that the new disk belongs to the virtual disk, the virtual disk still shows the original size. That’s the only thing I am working on and I think once I figured that one out, I should be able to use it in production. Again, thanks a lot for putting time on this.

  8. Hi Charbel,
    I installed DPM2016 on a vm that has the disks on a CSV, which is located on a SAN. If I want to take advantage for the Modern Backup features, I have to enable dedup and all the stuff on the CSV or it’s ok if I do on the VHD that I add to DPM?

  9. Hello Michael,

    Please note that you need to enable Dedup on the “CSV” where all VHDXs reside for DPM.
    Please note that CSV should be formatted as NTFS with Large FRS as described in this post.

    Hope this helps!


