Whether caused by a hurricane, a virus or a storage failure, statistics show that most small to mid-sized businesses will experience at least one instance of system downtime a year. Once a year doesn’t seem like much, but consider this: Aberdeen Group estimates that an hour of downtime costs a mid-sized business an average of $74,000. Then factor in results from a Harris Interactive survey, which found that IT managers estimate an average of 30 hours for recovery.
This publication is for small to mid-size business owners and financial/operational decision makers responsible for overall corporate welfare. If your reputation or revenue will suffer from a business failure, this is for you.
For many small to mid-sized companies, an initial data protection strategy often consists of the utilities that come with the operating system or environment, or one of the “default” data protection products for those platforms. As these companies grow, so do their on-premise data protection demands and their expectations. Often the original data protection solutions prove inadequate, and the companies look for a new application. As part of this search they include cloud-focused products that can also aid in offsite data movement and disaster recovery (DR). The challenge for the small to mid-sized company is finding a solution that addresses both its on-premise data protection and off-premise DR needs.
Originally, data protection was focused on backup windows and getting backups done before users came to work in the morning. And disaster recovery (DR) used to mean getting those backups (initially tapes, but now disk backups as well) stored in a safe place off-site. But as the overall speed of business picks up and more companies rely on their computer systems to run their businesses, they are starting to realize the cost of downtime. Recovery therefore becomes the focus, and the need for a ‘real’ DR solution is recognized by more of these companies, including those in the mid-market space and smaller.
Testing at least once per month is important to maintain engineering best practices, to comply with stringent standards for data protection and recovery, and to gain confidence and peace of mind. In the midst of disaster is not the time to determine the flaws in your backup and recovery system. Backup alone is useless without the ability to efficiently recover, and technologists know all too well that the only path from “ought to work” to “known to work” is through testing.
Chances are that you are dissatisfied with your current backup solution; half of those surveyed by NetApp said the same. You probably have a pile of tapes or disks that represent your faithful copying of files using a traditional backup software utility. But when disaster strikes and an entire server goes down, getting a working server back online quickly, without a lot of running around, isn’t going to happen. And chances are the recovery process is so tortured and involved that you are lucky to complete a full recovery test procedure even once a year.
The options for the safe protection of data were once very costly to build and complex to maintain. Today, there are new alternatives to choose from, and many companies are considering the cloud as a safe and secure way to remotely back up and protect their data. The cloud has become a popular trend, especially among small and mid-sized companies, since businesses can outsource this aspect of their operations and eliminate the day-to-day hassle, while still ensuring the security of their data.
Many disaster recovery solutions, including those from Axcient, Datto, Zenith and Unitrends, categorize themselves as turnkey “appliances,” touting ease of use and fully automated capabilities. Yet not all of these solutions fit the true definition of an appliance, and they are apt to cause frustration when claims of simplicity do not match reality.