Highly Available Data: Why High Availability Is Not Just for Apps
High availability is a buzzword in IT today. Usually, it refers to applications and services that are resistant to disruption. But it should also apply to your data. Here’s why.
What is it?
High availability refers to the ability of an application, service or other IT resource to remain constantly accessible, even in the face of unexpected disruptions.
In an age when a cloud service that fails for even a few hours can significantly impact a business's ability to maintain operations, and when vendors typically guarantee certain levels of uptime via service-level agreements (SLAs), maintaining high availability is crucial.
Unlike in the past, when users expected infrastructure to fail from time to time, downtime is unacceptable in most contexts today.
In reality, virtually every type of service or resource will fail occasionally. 100 percent uptime is not a realistic goal; even the best-managed services go down sometimes. But uptime on the order of 99.99 percent or higher is now standard. (AWS's S3 storage service, for instance, is designed for 99.99 percent availability; its famous "11 9s" figure refers to durability of stored objects, not availability.) That's the type of high availability that organizations strive for today.
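To make those percentages concrete, a quick calculation shows how little downtime each "number of nines" actually allows per year. This is illustrative arithmetic only; real SLAs define their measurement windows and exclusions precisely.

```python
# Rough downtime budgets implied by common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in ["99.9", "99.99", "99.999"]:
    availability = float(nines) / 100
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines}% uptime allows about {downtime_minutes:.1f} minutes of downtime per year")
```

At 99.99 percent, the budget is under an hour per year; at "five nines," barely five minutes.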
High availability for data
In most cases, when people talk about high availability, they’re thinking about applications and services. Using automated server failover, redundant nodes and other strategies, they design systems that allow applications and services to continue running even if part of their infrastructure fails.
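The failover idea described above can be sketched in a few lines: probe a list of redundant nodes and route traffic to the first one that passes a health check. This is a minimal illustration, not a production failover system; the node names and the health-check function are hypothetical placeholders.

```python
def first_healthy(nodes, is_healthy):
    """Return the first node that passes a health check, else None."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None

# Example with a simulated health check: the primary is down,
# so the redundant replica takes over automatically.
nodes = ["db-primary.example.com", "db-replica.example.com"]
down = {"db-primary.example.com"}
active = first_healthy(nodes, lambda n: n not in down)
print(active)  # db-replica.example.com
```

Real deployments put this logic in a load balancer or cluster manager rather than application code, but the principle is the same: no single node's failure should take the service offline.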
Yet the concept can and should be extended to data. After all, without data to crunch, many applications and services are not very useful. If you plan a high availability strategy that addresses only your applications, you fall short of ensuring complete business continuity.
Achieving data high availability
What does all of this look like in practice? All of the following considerations should factor into a data high availability strategy:
- Servers that host data need to be resilient against disruption. As noted above, you can achieve this with redundant servers, automated failover, or both.
- Databases should be architected so that the failure of one database node won't make the database inaccessible. Databases should also be able to restart themselves automatically if they do crash, to minimize downtime.
- Since you almost certainly rely on the network to access data, network availability and redundancy are important components of the strategy as well.
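The automatic-recovery point in the list above applies to clients too: applications should tolerate a brief database outage by retrying with backoff instead of failing outright. Here is a minimal sketch of that pattern; `connect` stands in for any real database driver's connect call, and the flaky connection is simulated.

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Retry a connection with exponential backoff.

    A sketch only: `connect` is any callable that raises
    ConnectionError on failure and returns a connection on success.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * 2 ** attempt)

# Simulated outage: the first two attempts fail, the third succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("node unavailable")
    return "connected"

print(connect_with_retry(flaky, base_delay=0.01))  # connected
```

Paired with server-side auto-restart, this kind of client-side retry turns a momentary crash into a pause rather than an outage.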
Highly accessible data
Data high availability can be taken a step further, too. In addition to keeping your data infrastructure and services up and running, you can build an even more effective strategy by ensuring that your data is highly accessible.
Highly accessible data is data that you can work with readily. It's quality data that is consistent and available in the formats you need. It's data that is compatible with whichever tools you use for analysis and interpretation.
By aiming for high data accessibility as well as high availability, you ensure not only that you can always reach your data, but also that the data is ready to use.
Check out our white paper: An Introduction to High Availability for Power Systems Running IBM i