Philip Bridge, President of Ontrack, explains what businesses must do to minimise the risk of data loss from human error
The complexity of managing today’s virtual IT environments, combined with the sheer amount of data that streams through corporate networks, requires diligent IT administration. Unfortunately, humans are not infallible. Teams are one accidental deletion or failed backup away from losing (or losing access to) sensitive information.
The consequences of human error are wide-ranging. It can cause intellectual property to fall into the wrong hands, put the organisation at increased security risk or even result in crippling regulatory fines. It is imperative that you don’t gamble with your data and instead invest in robust technology risk management policies.
Accidents leading to a data disaster are more prevalent than many think. One survey by Spanning, a Kaseya company, found that the accidental deletion of information was the leading cause of data loss, responsible for 41% of cases, far above malicious hacking (Trends in SaaS Data Protection across the US and UK, Winter 2015-2016 Data Protection Survey).
Even when an attacker from outside the organisation is behind a breach, failed data backups caused by human error could mean the company is without vital event log information that would point to where the attack originated.
The biggest risks
So, what are the most common accidents that lead to data loss and security vulnerability?
*Failure to document and execute established IT, retention and backup procedures. Examples might include moving a test server into production without informing IT that its data is not being backed up, or IT administrators decommissioning, on the basis of inaccurate documentation, a Storage Area Network (SAN) that is still in production. Whatever the cause, the result is the same: data loss and employee embarrassment. The number of times the delete key is mistakenly pressed is astonishing. It is important that organisations do their due diligence and ensure the data they delete is truly no longer of value.
*Neglecting to keep software up-to-date. A common failing is not installing patches when they become available. Days are busy and resources are stretched. However, failing to update security patches can leave systems and networks open to evolving security threats.
*Not backing up effectively. In a survey, we found that whilst three in five (60%) businesses had a backup in place at the time of loss, it was not working properly. Failure to establish and follow backup procedures, or to test and verify backup integrity, is a guaranteed recipe for data loss.
*Failure to test IT security policies effectively. Even the smallest failure can have devastating results, including critical data loss and huge expense. It is important to restrict IT administrator passwords to required users and to change them when an IT administrator leaves the company. Don’t take chances. Some of the worst data loss cases we have seen are the result of a disgruntled employee with a live password intentionally deleting large amounts of critical company data.
So, what should IT departments do to ensure the best chance of an effective resolution if disaster strikes?
*Firstly, don’t panic and rush into action. If data loss happens, it is important not to restore data to the source volume from backup, because any write to that volume risks overwriting the very data you are trying to recover. Nor should you create new data on the source volume, as it could be corrupted or damaged.
*Be confident in the skills and knowledge you have within your team. IT staff must educate the C-suite to stop them making decisions that could do more harm than good. When faced with a possible data loss event, the volume should quickly be taken offline, because lost data on a live volume can be overwritten at a rapid pace. The volume should not be formatted in an attempt to resolve corruption.
*Have a plan. Staff should follow established processes and ensure data centre documentation is complete and frequently revisited to ensure it is up to date. IT staff should not run volume utilities or update firmware during a data loss event.
*Finally, know your environment and the data within it. IT staff must understand what their storage environment can handle and how quickly it can recover. Knowing what data is critical or irreplaceable, whether it can be re-entered or replaced, and the costs for getting that data up and running to a point of satisfaction are important. Staff must weigh up the costs and risks when determining what is most urgent – getting their systems up and running quickly or protecting the data that is there.
Too many organisations do not invest sufficient resources into understanding or developing policies based on threats and risk. Add common IT oversights into the mix and you’ve got a compelling story for the prevalence of data loss today. Prioritising hardware upgrades, rigorously testing and validating IT network processes, investing in skilled and experienced professionals, and enlisting a data recovery expert are fundamental precautions every business decision-maker must consider.