Abaxio Managed Security



White Paper: Keeping the “Hot D/R Site Hot”


The success of an organization’s business continuity plan hinges on the speed of disk scanning. Scanning a partition for block-level changes must complete quickly enough for the data to be transferred off-site before the next cycle begins. Reducing this cycle time determines how “hot” the hot D/R site remains.

A 2015 poll of more than 2,000 executives and IT professionals revealed that fewer than half (45.5 percent) believed they were “very prepared” to recover their IT assets in the event of a disaster, while 50.5 percent were only “somewhat prepared.” Meanwhile, 64 percent stated that business continuity or disaster recovery is a compliance requirement for their organization. Why the lack of preparedness? Aside from the (possibly false) assumption that maintaining a secondary D/R site is cost prohibitive, the main reason is a technical barrier: especially with large data sets, the backup cycle simply cannot complete in time to keep up with new changes. The cycle can be shortened somewhat by other factors – bandwidth, for example – but even the fastest network bogs down with gigabytes or terabytes of data running across it.


Despite these formidable obstacles, recent field testing at Abaxio proves the limitations can be overcome. The required speed can be achieved through a combination of “deep” deduplication and delta-file change cataloguing. With ordinary deduplication, the scanning process skips files that have not been modified since the last backup, which significantly reduces the backup cycle time. But small changes made to large files negate the advantage of this method, since the entire file must still be processed. Deep deduplication is 8-10x more efficient than ordinary deduplication: it scans for changes at the block level rather than the file level. Combined with compression ratios of 70% or higher, the result is a reduction in cycle time that begins to approach what effective BC/DR demands. The process is identical whether the system is physical or virtual (VMware® and Hyper-V included), and its functionality is application and operating system independent.
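To illustrate the idea behind block-level change detection, here is a minimal sketch in Python. It is not Abaxio’s implementation; the 4 KB block size and the use of SHA-256 fingerprints are illustrative assumptions. Each fixed-size block is hashed, and only blocks whose hashes differ from the previous cycle would need to be transferred:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size; real products may differ


def block_hashes(data: bytes, block_size: int = BLOCK_SIZE) -> list[str]:
    """Fingerprint each fixed-size block of a partition image."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]


def changed_blocks(old: list[str], new: list[str]) -> list[int]:
    """Indices of blocks that differ since the last backup cycle."""
    return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]


# A small edit inside a large file touches one block, not the whole file.
v1 = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
v2 = b"A" * 4096 + b"X" * 4096 + b"C" * 4096
print(changed_blocks(block_hashes(v1), block_hashes(v2)))  # [1]
```

Note how a file-level scheme would re-transfer all three blocks because the file as a whole changed, while the block-level comparison isolates the single modified block.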


As confirmed by our own field testing, deep deduplication improves the speed of the backup cycle by 8-10x. Cataloguing adds another dimension entirely to shortening backup cycle times. File-change cataloguing significantly reduces the duration of subsequent backup sessions involving large files of over 5 GB. This is achieved by keeping a detailed journal of file content changes in between backup sessions. When a new backup starts, there is no longer a need to re-scan the entire partition, since the catalogue provides the exact on-disk address of every change since the previous cycle. The feature is available on Windows (all versions starting from Windows 7) and applies to multiple data sources: Files and Folders, Hyper-V, Microsoft Exchange, Microsoft SQL, and Microsoft SharePoint.
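The journal concept can be sketched as follows. This toy Python class simply records the byte ranges written between sessions, so the next backup reads only those ranges instead of re-scanning the partition; the class name and interface are illustrative assumptions, not the product’s API:

```python
class ChangeJournal:
    """Toy journal of (offset, length) ranges written between backups."""

    def __init__(self) -> None:
        self.ranges: list[tuple[int, int]] = []

    def record_write(self, offset: int, length: int) -> None:
        """Called on each write; notes exactly where the disk changed."""
        self.ranges.append((offset, length))

    def drain(self) -> list[tuple[int, int]]:
        """At backup start: hand over pending ranges and reset the journal."""
        pending, self.ranges = self.ranges, []
        return pending


journal = ChangeJournal()
journal.record_write(0, 4096)
journal.record_write(1_000_000, 8192)
print(journal.drain())  # only these two ranges need to be read and sent
print(journal.drain())  # [] — journal restarts for the next session
```

On Windows, this role is played by a filesystem-level change journal, which is consistent with the Windows-only availability noted above.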


Prioritization ensures the most important data is protected and available first. Abaxio field testing also shows that the ability to prioritize during both backup and recovery significantly improves a solution’s effectiveness. Stakeholders can fine-tune priorities, which drive fast, efficient restores beginning with the initial backup. As in triage, the most important data is backed up first and is thus kept in a ‘hotter’ state, ready for recovery ahead of less important data. This works well not only for restores but also during data migrations: the most pressing data can be moved to the new system first, maintaining more effective business continuity.
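The triage ordering described above amounts to a priority queue over backup items. A minimal Python sketch, assuming a simple numeric tier assigned by stakeholders (lower number = more critical); the dataset names are hypothetical:

```python
import heapq


def backup_order(items: dict[str, int]) -> list[str]:
    """Return dataset names in triage order (lower tier = backed up first)."""
    heap = [(tier, name) for name, tier in items.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]


# Hypothetical datasets with stakeholder-assigned priority tiers.
datasets = {"web-assets": 3, "customer-db": 1, "mail-store": 2}
print(backup_order(datasets))  # ['customer-db', 'mail-store', 'web-assets']
```

The same ordering applies symmetrically to restores and migrations: draining the queue moves the most critical data first.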


We have also discovered the advantages of delta “slicing” during recovery. Depending on circumstances such as file size and file type, the deduplication process employs one of three techniques: a static, dynamic, or complete slicer.
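The white paper does not detail how a slicer is chosen, so the following Python sketch is purely illustrative: the thresholds, file-type lists, and the mapping of circumstances to slicers are assumptions, shown only to make the idea of per-file strategy selection concrete. The 5 GB figure echoes the large-file threshold cited earlier:

```python
# Hypothetical dispatch; all thresholds and type lists are assumptions.
LARGE_FILE = 5 * 1024 ** 3          # 5 GB, matching the threshold cited above
APPEND_HEAVY = {".log", ".jrn"}     # assumed mostly-append file types


def pick_slicer(size_bytes: int, extension: str) -> str:
    """Select a delta-slicing strategy from file size and file type."""
    if extension in APPEND_HEAVY:
        return "static"             # fixed boundaries suit append-heavy files
    if size_bytes >= LARGE_FILE:
        return "dynamic"            # adaptive boundaries for big mutable files
    return "complete"               # small files: re-slice the whole file


print(pick_slicer(10 * 1024 ** 3, ".vhdx"))  # dynamic
print(pick_slicer(4096, ".txt"))             # complete
```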


Take all the technologies mentioned above – deep deduplication, compression, file-change cataloguing, prioritization, and delta slicing – and multiply their effect on the cycle time by a factor of four. That’s because maintaining a hot D/R site requires two-way communication between the primary and hot D/R locations (see Fig. 1).


Abaxio field testing has proven time and again that the methods described above reduce the cycle time of backup jobs by a factor of 8-10x or more (depending on the particular client configuration) versus solutions employing traditional technologies. Faster cycle times reduce storage requirements while increasing system and network reliability. Bandwidth use becomes more efficient, which helps defer the need for network upgrades in the face of increasing data loads. As a result, the cost to deliver a truly hot D/R site has decreased over the past fifteen years to the point that it is now viable and within reach of organizations of almost any size and budget. The proof, from a risk management perspective, is that insurance underwriters have begun to take notice. In an industry first, Abaxio has partnered with several A+ rated carriers, including AIG, Beazley, and Lloyds, to package its services with up to $20M of coverage for its clients as a guarantee against any breach, hack, or network downtime. Expect to see this trend continue in the coming years…