By Oussama El-Hilali, VP of products at Arcserve
Most organizations believe that if they’ve backed up their data at some point in time, they’ll be safe when an unforeseen disaster or interruption strikes. That isn’t always the case, and the result can be harmful and costly data loss, especially for midsize and large enterprises. The Ponemon Institute, for example, found that the global average cost of a data breach is $3.6 million, or $141 per data record. A data breach is one of the more extreme ways a company can lose data, but the figure illustrates the colossal impact that data protection can have on an organization’s bottom line.
Causes of Data Loss (Including Some You May Overlook)
Cyber-attacks are the first thing that comes to mind when we think of data loss. Ransomware actors have continued to threaten companies and hold their data ransom; distributed denial-of-service (DDoS) attacks can bring productivity to a screeching halt; and other forms of malware continue to infiltrate and spread across company networks. Hackers will always find a new vulnerability to exploit or a new way to infect a network because their livelihood depends on it.
While these high-stakes attacks can have a serious impact on an organization, it’s important to not overlook some of the simpler, perhaps even more frequent, ways a company can experience data loss. Sometimes hardware breaks, employees make design or coding errors, and even natural disasters can cause disruptions that lead to the accidental loss or deletion of important, highly sensitive corporate information.
Whether the event is large or small, there’s a high chance your systems will suffer interruptions and failures at some point, even if you prepare for every scenario under the sun. Data loss is a reality today, but that doesn’t mean there’s nothing you can do about it. There are some fundamental steps a company can take to mitigate the impact of these events and minimize the data loss that occurs.
Where to Start: Change Your Perspective
It’s important to first change your mindset when it comes to traditional backup and recovery processes. Successful recovery is solely dependent on two factors:
1. Having information backed up as often as possible.
2. Having the ability to restore those backups in as little as a few minutes.
If your backups are too old, or if for some reason they are corrupted or can’t be restored, your organization will be in trouble. If you don’t know what data is being backed up, how often it’s being backed up, and what your ability is to restore that data, then you’re at risk.
You can get a handle on your backups and your ability to restore them by using recovery time objective (RTO) and recovery point objective (RPO) metrics. The RTO is the maximum amount of time you can allow for restoring systems from your backups, while the RPO measures how much data, expressed as a span of time, you’re willing to lose during the recovery process. For example, if you have an RPO of 17 hours, then you’re willing to lose up to 17 hours’ worth of data, which means your most recent usable backup can never be more than 17 hours old.
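To make those metrics concrete, here is a minimal sketch in Python of how an RPO and RTO check might look, assuming you can read the timestamp of your newest backup and have timed a rehearsal restore; the targets and function names are hypothetical, not part of any particular product.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical targets -- the values are assumptions for illustration only.
RPO = timedelta(hours=17)   # maximum acceptable data loss, measured in time
RTO = timedelta(hours=2)    # maximum acceptable time to restore service

def meets_rpo(last_backup_time: datetime) -> bool:
    """True if the newest backup is recent enough that a failure right now
    would lose no more data than the RPO allows."""
    data_at_risk = datetime.now(timezone.utc) - last_backup_time
    return data_at_risk <= RPO

def meets_rto(measured_restore_time: timedelta) -> bool:
    """True if a rehearsed restore completed within the RTO."""
    return measured_restore_time <= RTO

# A backup taken 20 hours ago violates a 17-hour RPO.
print(meets_rpo(datetime.now(timezone.utc) - timedelta(hours=20)))  # False
print(meets_rto(timedelta(minutes=45)))                             # True
```

The point of the sketch is that both numbers only mean something if you measure them: the age of your newest backup tells you your real RPO, and a timed restore drill tells you your real RTO.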
Moving Forward: Four Steps for Changing Your Backup Strategy
While every company’s backup and data recovery needs are different, there are four steps every organization can take to shore up its plans and achieve disaster avoidance.
1. Create and understand your risk profile: Business continuity managers must make it a priority to deeply understand the company’s level of risk. This includes evaluating how long the organization can withstand being down and the various threats it’s most likely to face (whether that be a ransomware attack or a failed server). Understanding the internal and external threats to the organization can help business continuity managers know which systems are most important to recover because of the potential impact they can have on the overall business.
2. Consider automating workloads to streamline the process: While all of this may sound time-consuming, it doesn’t have to be. Automating workloads that eat up a lot of the IT team’s time or are prone to human error is one way to cut down on both the hours spent and the mistakes made (a minimal example of this kind of automation follows this list). Automation can be a little tricky initially, but it’s well worth it in the end. When getting started with automation, begin with less critical systems so you can work out the bugs and overcome any unanticipated issues that may arise.
3. Define your business-critical RPOs: Different parts of the business require different RPOs. There may be some areas of your company’s ecosystem where any data more than an hour old is useless, and other parts of the business where much older information is still business-critical. Some business continuity managers don’t take the time needed to fully understand how the age of data impacts the overall recovery process (a sketch of per-system RPO tiers also follows this list).
4. Check out new technologies to help the team maintain and execute the plan: Many companies are apprehensive about using disaster recovery technology, but it has come a long way. In the past, customers would need to call their disaster recovery as a service (DRaaS) provider to manually fire up a virtual machine in the cloud to transfer workloads, which could be very expensive. Today, customers can access their cloud and initiate the failover process automatically. Advances in DRaaS have also improved RTOs, making it an excellent choice for protecting semi-transactional and near-transactional databases.
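As referenced in step 2, here is a minimal sketch of what automating a routine backup workload might look like, using only the Python standard library; the paths, archive name, and retention count are hypothetical, and a real deployment would lean on your backup product’s own scheduling and verification features.

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths and retention policy -- adjust for your environment.
SOURCE = Path("/var/lib/app-data")      # data to protect
DEST = Path("/mnt/backups/app-data")    # where archives are written
KEEP = 14                               # number of archives to retain

def run_backup() -> Path:
    """Create a timestamped archive of SOURCE and prune the oldest copies."""
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(DEST / f"app-data-{stamp}"), "gztar", SOURCE)

    # Retention: keep only the newest KEEP archives so the destination
    # doesn't fill up silently.
    for old in sorted(DEST.glob("app-data-*.tar.gz"))[:-KEEP]:
        old.unlink()
    return Path(archive)

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```

Scheduled with cron or a similar job scheduler, a script like this takes a repetitive manual task, and the human error that comes with it, off the IT team’s plate, which is exactly the kind of workload to automate first before moving on to more critical systems.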
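And as referenced in step 3, a simple way to make per-system RPOs explicit is to write them down as data. The sketch below is illustrative only: the system names and targets are assumptions, and the scheduling rule is a rough approximation rather than a formal method.

```python
from datetime import timedelta

# Hypothetical RPO tiers -- every system name and target here is an assumption.
RPO_TIERS = {
    "orders-db":           timedelta(hours=1),   # data more than an hour old is useless
    "customer-portal":     timedelta(hours=4),
    "analytics-warehouse": timedelta(hours=24),  # much older data is still business-critical
}

def max_backup_interval(rpo: timedelta, backup_duration: timedelta) -> timedelta:
    """Rough rule of thumb: if a failure hits while a backup is still running,
    the last *completed* backup is roughly one interval plus one run older than
    the failure, so the schedule interval should be at most RPO minus run time."""
    return rpo - backup_duration

for system, rpo in RPO_TIERS.items():
    interval = max_backup_interval(rpo, backup_duration=timedelta(minutes=15))
    print(f"{system}: back up at least every {interval}")
```

Keeping the targets in one place like this also gives the business continuity manager something concrete to review with each part of the business, rather than a single blanket backup schedule.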
It’s most important for companies both large and small to recognize that disasters are inevitable. Being aware of this is the first step to mitigating their impact, which can, in turn, help safeguard the profitability and customer loyalty of the business. Understanding risk, prioritizing mission- and business-critical systems and data, and investing in technologies that help maintain the process from start to finish are the best steps any company can take to make sure it has the right backups at hand to achieve disaster avoidance.