Why Data Integrity Must Be Built on a Foundation of Availability
The immediate availability of information was not always as important as it is today. In times past, the flow of information was generally slower, and to a great extent, so was the overall pace of the world.
Today, data flows from one system to another in real time, with latency measured in milliseconds. Decision-makers and autonomous systems are increasingly reliant upon the real-time availability of data to operate at optimal effectiveness.
To compete in this kind of world, businesses must strive to achieve maximum levels of availability. When an application or system is down temporarily, it can undermine data integrity and hamper the organization’s ability to function as a well-informed, data-driven operation.
Why Data Availability Matters More Than Ever
Imagine you’ve just gone to the ATM at your local bank to make a withdrawal. After punching in your PIN, you’re informed that the system is down; you can’t make a withdrawal or even perform a balance inquiry. If this happens on occasion, it’s an annoying inconvenience. If it happens repeatedly, or on an occasion when you have an especially urgent need for cash, then it might be a reason to go shopping for a different bank.
When data is unavailable, it can disrupt transactions, as happened in the case of our hypothetical visit to the ATM, but it can also impact mission-critical systems such as fraud detection. Financial services companies routinely analyze credit card activity in an attempt to identify anomalies that might signal fraud. A spate of online purchases, especially for digital products that don’t need to be shipped to a physical address, will often merit closer scrutiny. Multiple card-present transactions in widely dispersed locations, likewise, could indicate a security problem.
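One of the heuristics described above, card-present transactions in widely dispersed locations, can be sketched as an "impossible travel" check: if two consecutive in-person transactions imply a travel speed no traveler could achieve, the card merits scrutiny. The sketch below is a minimal, hypothetical illustration (the function names and the 900 km/h threshold are assumptions, not any vendor's actual fraud logic), and it only works if transaction data is available in time to run it.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_impossible_travel(txns, max_speed_kmh=900):
    """Flag consecutive card-present transactions whose implied travel
    speed exceeds max_speed_kmh.

    txns: list of (timestamp_in_hours, latitude, longitude) tuples.
    Returns a list of (t1, t2) timestamp pairs that look suspicious.
    """
    flags = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(sorted(txns), sorted(txns)[1:]):
        hours = max(t2 - t1, 1e-6)          # guard against zero elapsed time
        speed = haversine_km(la1, lo1, la2, lo2) / hours
        if speed > max_speed_kmh:
            flags.append((t1, t2))
    return flags
```

The point of the sketch is the dependency, not the math: if replication lag or an outage keeps recent transactions out of the analytics store, `flag_impossible_travel` never sees the second transaction and the fraud goes undetected.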
Credit card fraud is a $29 billion problem, and it’s continuing to grow. Given those numbers, it’s understandable that high data availability is critical to financial services companies’ efforts to turn the tide on fraud. AI and machine learning algorithms are constantly at work looking for anomalies, but if they don’t have timely access to the data, then they can’t detect fraudulent activity. It follows that without high data availability, credit card issuers can’t act promptly to stem the flow of fraudulent transactions.
AI, Machine Learning, and Data Integrity
The previous examples illustrate the importance of data availability in financial services. We’re generally accustomed to prompt and reliable information from the systems that handle our banking transactions, but what about wider business use cases?
Businesses are increasingly reliant on AI and machine learning (AI/ML) to automate and streamline processes. Many companies are using these technologies to make recommendations with respect to inventory and supply chain decisions: “Should this inventory be held or liquidated, and at what price?”; “Is it cost-effective to pay extra fees to expedite this order?”; “Do we need to ship more batteries and flashlights to our stores in Florida?”
Those kinds of decisions depend on high levels of data integrity. When a hurricane or similar natural disaster is expected, for example, retailers like Wal-Mart use advanced algorithms to determine likely demand for specific products and ship inventory to the affected locations in advance of the severe weather event. In cases like this, information delayed is information denied. That’s poor data integrity.
Irrespective of AI/ML, a retailer needs to understand what its inventory levels look like and where that inventory is located. To make good decisions, decision-makers need to be able to trust that data. If it’s inaccurate, incomplete, or lacking full context, they will quickly lose faith in it.
Proactive Steps Toward High Data Availability
How can you ensure that your business has the protection it needs to keep running 24/7 and eliminate the risk of lost data? Reducing the downtime of systems and applications is an important step, of course, but what happens on the inevitable occasions when those systems are unavailable?
This is where a proactive approach to data availability can make a critical difference, ensuring the integrity of data used to drive accurate business decisions in an increasingly fast-paced world where the timeliness of information matters.
High data availability begins with real-time replication of data from a production server environment to a backup server. Data replication is only one piece of the puzzle, though. For example, if you are concerned with the availability of an IBM i server, the programs, data areas, data queues, IFS, user profiles, device configurations, spool files, triggers, constraints, and other elements of that source system must be promptly and accurately replicated to ensure that the recovery server is always ready to take over operations when needed.
Next, mechanisms must be in place to allow for reliable role swaps between the primary production system and its backup. This requires continuous auditing of the replicated data to ensure that synchronization is accurate and complete. Unauthorized changes must be instantly and automatically healed at the record level, without requiring resynchronization of entire objects.
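The auditing-and-healing idea described above can be sketched in a few lines: compare per-record digests between the primary and the replica, and repair only the records that differ, rather than resynchronizing the entire object. This is a simplified, hypothetical illustration using in-memory dictionaries (the function names are assumptions), not the actual mechanism of any replication product.

```python
import hashlib
import json

def record_digest(record):
    # Stable hash of one record's contents (key order normalized).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def audit_and_heal(primary, replica):
    """Audit a replicated record set against the primary and heal it in
    place, record by record. Returns the keys that were repaired.

    primary, replica: dicts mapping a record key to a record (a dict).
    """
    healed = []
    for key, record in primary.items():
        # Repair records that are missing or whose contents have drifted.
        if key not in replica or record_digest(replica[key]) != record_digest(record):
            replica[key] = dict(record)
            healed.append(key)
    # Remove records that no longer exist on the primary.
    for key in list(replica):
        if key not in primary:
            del replica[key]
            healed.append(key)
    return healed
```

The design point is granularity: because the audit works at the record level, healing a single divergent record touches only that record, so the replica stays available and synchronized without a full resync of the object that contains it.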
These failover mechanisms must be fully auditable and easily tested so key stakeholders can have complete confidence in their ability to protect and maintain high availability at all times. Finally, failover systems must be easy to monitor and manage, offering easy access to replication statistics, system status, and other key data points that reflect the health and readiness of those systems to deliver high availability when needed.
As the global leader in data integrity, Precisely helps small, medium, and large enterprises to achieve high levels of data integrity, with solutions for integration, data quality and governance, location intelligence, and data enrichment.
To learn more about the impact of high availability, read our whitepaper, An Introduction to High Availability for Power Systems Running IBM i. This paper presents an overview of IBM System i high availability for companies that are beginning to explore this powerful business continuity technology. It also takes a look at the cost of planned and unplanned downtime, with a brief overview of disaster recovery strategies.