Azure Storage is one of the foundational building-block services of Azure. All Azure and Microsoft services store data, directly or indirectly, using Azure Storage. It is a hyperscale service that holds more than 120 trillion objects and serves over 19 million transactions per second, and it is highly available, durable, secure, and accessible from any platform.

This article covers the following:

  1. How do I secure Azure Storage?
  2. How do I protect myself against accidental data loss?
  3. How do I plan disaster recovery?

Azure storage security overview and Secure your data in the cloud with Microsoft Azure Storage are two excellent resources on this topic for further reference.

How do I protect myself against accidental data loss?

Human error is one of the top five causes of data loss; below are some techniques that can help you prevent, and be better prepared for, such situations.

Accidental deletion or overwriting of data within blob storage

You can take regular snapshots of a blob. A snapshot is a read-only copy of the blob taken at a point in time (a short code sketch follows the list below).

  • It is easy to access a snapshot version using its date-time stamp in the URI
  • You can copy from a snapshot back to the base blob or to a destination blob
  • A blob cannot be deleted while it has at least one associated snapshot (this protects you from accidental deletion at the blob level)
  • If you are using premium storage, the number of snapshots per blob is limited to 100
  • Snapshots can result in additional data storage charges to your account, so housekeeping of old snapshots should be managed carefully
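
As an illustration, here is a minimal sketch using the azure-storage-blob (v12) Python SDK; the connection string, container, and blob names are placeholders:

```python
# A minimal sketch using the azure-storage-blob (v12) Python SDK.
# Connection string, container, and blob names below are placeholders.
from azure.storage.blob import BlobClient

CONN_STR = "<your-storage-connection-string>"

blob = BlobClient.from_connection_string(
    conn_str=CONN_STR, container_name="documents", blob_name="report.pdf")

# Create a read-only, point-in-time snapshot of the blob.
snapshot = blob.create_snapshot()
print("Snapshot date-time stamp:", snapshot["snapshot"])

# Read that exact version later via a client scoped to the snapshot.
snapshot_client = BlobClient.from_connection_string(
    conn_str=CONN_STR, container_name="documents", blob_name="report.pdf",
    snapshot=snapshot["snapshot"])
data = snapshot_client.download_blob().readall()

# Restore by copying the snapshot back over the base blob
# (source is in the same account; a SAS may be needed in other setups).
blob.start_copy_from_url(snapshot_client.url)
```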

How do I plan disaster recovery?

It is critical for every customer to prepare a disaster recovery plan designed to address their particular business requirements. The guidance below, however, offers a high-level starting point when dealing with Azure Blobs.

Azure Storage region-wide outage

Wait for recovery: No action is required on your part, but you will not be able to access the data while the service is being recovered.

Copy Data from Secondary: Enabling Read-Access Geo-Redundant Storage (RA-GRS) is important if you have a High Availability (HA) requirement to withstand a region-wide outage. You will have read access to your data from the secondary region and will have to copy the data from one storage account to another. This can be achieved using AzCopy, Azure PowerShell, or the Azure Storage Data Movement library.
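
For an ad-hoc copy (AzCopy is usually the better tool for large volumes), here is a minimal Python sketch, assuming RA-GRS is enabled on the source account; the account name, key, container names, and connection string are placeholders:

```python
# A minimal sketch, assuming RA-GRS is enabled on the affected account.
# Account name, key, container names, and connection string are placeholders.
from azure.storage.blob import BlobServiceClient

# RA-GRS exposes a read-only secondary endpoint: <account>-secondary.blob.core.windows.net
source = BlobServiceClient(
    account_url="https://mystorageacct-secondary.blob.core.windows.net",
    credential="<source-account-key>")
target = BlobServiceClient.from_connection_string("<target-account-connection-string>")

src_container = source.get_container_client("documents")
dst_container = target.get_container_client("documents")  # assumed to already exist

for item in src_container.list_blobs():
    data = src_container.get_blob_client(item.name).download_blob().readall()
    dst_container.upload_blob(name=item.name, data=data, overwrite=True)
```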

Write to two storage accounts: An alternative is to always write to two different storage accounts in different regions (and different subscriptions if required).

Asynchronous implementation (using an Azure Function): This approach may incur additional egress cost as data moves between regions. The options include:

Blob based trigger: fires as soon as a new blob is uploaded to the storage account. To use blob storage triggers and bindings with Azure Functions, you need a general-purpose storage account; they will not work with a blob-only storage account. This approach is suitable when you do not need to synchronize delete or update operations on the blobs.
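
A minimal sketch of such a blob-triggered function, using the Azure Functions Python v2 programming model, might look like the following; the app setting names, container name, and path are placeholders:

```python
# A minimal sketch of a blob-triggered Azure Function (Python v2 programming model)
# that mirrors each newly uploaded blob into a second storage account.
# App setting names ("PrimaryStorageConnection", "SecondaryStorageConnection")
# and the container name are placeholders.
import os

import azure.functions as func
from azure.storage.blob import BlobServiceClient

app = func.FunctionApp()

@app.blob_trigger(arg_name="newblob", path="documents/{name}",
                  connection="PrimaryStorageConnection")
def replicate_blob(newblob: func.InputStream):
    # Upload the incoming blob's content to the same container name
    # in the secondary-region storage account.
    secondary = BlobServiceClient.from_connection_string(
        os.environ["SecondaryStorageConnection"])
    container = secondary.get_container_client("documents")
    # newblob.name arrives as "documents/<blobname>"; keep only the blob name.
    blob_name = newblob.name.split("/", 1)[1]
    container.upload_blob(name=blob_name, data=newblob.read(), overwrite=True)
```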

Queue based trigger: fires every time the application adds, deletes, or updates a blob, by having the application put a message on an Azure Service Bus queue with the operation name and a unique blob reference. An Azure Function or a background process with custom code can then be triggered from this queue to synchronize the blobs. One downside is that this requires changes to your application.
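
The producer side of this pattern could look like the following Python sketch; the queue name, connection string, and message fields are illustrative assumptions:

```python
# A minimal sketch of the producer side, assuming an Azure Service Bus queue
# named "blob-sync"; the queue name, connection string, and message fields
# are illustrative assumptions. A separate function or worker consumes the
# queue and replays each operation against the secondary storage account.
import json

from azure.servicebus import ServiceBusClient, ServiceBusMessage

def enqueue_blob_change(operation: str, container: str, blob_name: str):
    # operation is "add", "update", or "delete"
    message = ServiceBusMessage(json.dumps({
        "operation": operation,
        "container": container,
        "blob": blob_name,
    }))
    with ServiceBusClient.from_connection_string("<servicebus-connection-string>") as client:
        with client.get_queue_sender("blob-sync") as sender:
            sender.send_messages(message)

# Called by the application after each blob write or delete, e.g.:
# enqueue_blob_change("delete", "documents", "report.pdf")
```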

Synchronous implementation:

Application: Your application explicitly writes and updates both storage accounts. This requires changing your application and is only suitable for scenarios where an asynchronous approach is not an option.
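
A minimal Python sketch of such a dual write, assuming two storage accounts in different regions; the connection strings and container name are placeholders:

```python
# A minimal sketch of a synchronous dual write, assuming two storage accounts
# in different regions; connection strings and the container name are placeholders.
from azure.storage.blob import BlobServiceClient

primary = BlobServiceClient.from_connection_string("<primary-connection-string>")
secondary = BlobServiceClient.from_connection_string("<secondary-connection-string>")

def write_blob(container: str, name: str, data: bytes):
    # Write to both accounts; if the second write fails, the caller must decide
    # whether to retry, roll back, or queue the blob for a later sync.
    for client in (primary, secondary):
        client.get_container_client(container).upload_blob(
            name=name, data=data, overwrite=True)

write_blob("documents", "report.pdf", b"...payload...")
```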

References:

Security

Blob Snapshots

Disaster Recovery