Modern Data Architectures Provide a Foundation for Innovation
At Precisely’s Trust ’23 conference, Chief Operating Officer Eric Yau hosted an expert panel discussion on modern data architectures. Featured panelists included Sanjeev Mohan, Principal at SanjMo and former Gartner Research VP; Atif Salam, CxO Advisor & Enterprise Technologist at AWS; and Precisely Chief Technology Officer, Tendü Yogurtçu, Ph.D.
The group kicked off the session by exchanging ideas about what it means to have a modern data architecture. Atif Salam noted that as recently as a year ago, the primary focus in many organizations was on ingesting data and building data lakes. Today, they’re doing everything they can to prevent their data lakes from turning into data swamps. That calls for federating data products into distinct data domains so that businesses can get timely insights.
Building on that thought, Sanjeev Mohan noted that the building blocks of modern data architecture remain more or less the same, but today's businesses are focused on optimization. They're concerned with overall cost in the face of significant economic headwinds, and they want to accelerate their time to value so they can drive faster and better decisions.
Tendü Yogurtçu offered that organizations today are viewing data as a product, and they are working to streamline access to both data and metadata for faster insights that help business users deliver tangible value. From Yogurtçu's perspective, this conversation about data products requires an increase in the speed of access to data at scale, a focus on developing trust in data products, and better collaboration between business users and the IT department.
Modern Data Architecture: Challenges and Benefits
Salam stated that to be effective, a modern data architecture must make the right data available to the right users at the right time, and at the right price point. Conventional architectures simply cannot deliver that because they're too rigid: they don't lend themselves to scale, and they don't support innovation at the speed and scale that today's businesses require.
Cloud adoption is key to creating a modern data architecture environment because it offers cost efficiencies, rapid deployment, and agility. Salam noted that organizations are offloading computational horsepower and data from on-premises infrastructure to the cloud. This provides developers, engineers, data scientists and leaders with the opportunity to more easily experiment with new data practices such as zero-ETL or technologies like AI/ML.
In Mohan's experience, modern data architectures are increasingly segmented into very specific, specialized tools, prompting businesses to focus on determining the optimal approach for integration and interoperability. The solution is to treat metadata as a first-class citizen in the modern data stack, on equal footing with the data itself. Today, every product is gathering its own metadata without overarching standards, which puts organizations at risk of creating islands of metadata. Unifying metadata into a common plane is required to enable business use cases that deliver faster time to value, greater agility, and lower costs.
As the scale and scope of data continue to increase, that creates new challenges with respect to compliance, governance, and data quality. To create more value from data, organizations must take a very proactive approach to data integrity. That means finding and resolving data quality issues before they turn into actual problems in advanced analytics, C-level dashboards, or AI/ML models.
Data Observability and the Holistic Approach to Data Integrity
One exciting new application of AI for data management is data observability. This capability monitors data to proactively identify trends and patterns that could indicate potential issues before they impact downstream applications and analytics. Data observability also helps users identify the root cause of problems in their data. Yogurtçu spoke to the combined power of data observability, data quality, and data governance working together through shared metadata, which has become more critical than ever.
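To make the idea concrete, here is a minimal sketch (not any vendor's actual implementation) of the kind of pattern monitoring a data observability tool performs: a hypothetical check that flags a daily row count deviating sharply from its recent history, so a pipeline problem is caught before it reaches dashboards or models.

```python
from statistics import mean, stdev

def detect_anomalies(daily_row_counts, window=7, threshold=3.0):
    """Flag indices whose row count deviates from the trailing
    `window` days by more than `threshold` standard deviations.
    A toy stand-in for automated data observability checks."""
    anomalies = []
    for i in range(window, len(daily_row_counts)):
        history = daily_row_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on flat history
        if abs(daily_row_counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A sudden drop in ingested rows on day 8 is flagged automatically.
counts = [1000, 1010, 990, 1005, 995, 1002, 998, 1001, 120, 1003]
print(detect_anomalies(counts))  # -> [8]
```

Real observability platforms apply the same principle across many signals at once, such as volume, freshness, schema, and distribution, and feed the results back through shared metadata.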
This powerful use of AI and machine learning was called out by Mohan as an increasingly important factor in defining a scalable approach to data integrity. Today’s enterprises are processing more data sources, at a higher volume and velocity than ever before. To get trustworthy results, organizations must find new ways to inject intelligence and automation into their data integrity processes.
Words of Advice from the Experts
The panelists in the Trust ’23 session offered some words of advice for organizations struggling to keep up with the rapid pace of change. Sanjeev Mohan recommends frequent and ongoing experimentation. “Get your hands dirty,” he says. “With very little experimentation, you can start deriving new intelligence and new insights from your corporate data. There is no other option but to get your hands dirty and start experimenting.”
Atif Salam offers his own words of wisdom: “First, establish an enterprise strategy and a chief data office to execute that strategy. Identify the right owners for your data – and they should be in business – and get top-down support for your enterprise strategy.” That includes a long-term view funded by a multi-year investment, he says.
Tendü Yogurtçu’s advice is to “always, always start with the business case and desired outcomes, and make data integrity a priority, not an afterthought.” She also emphasizes the need to automate as much as possible. Without that, it can be virtually impossible to develop and maintain data integrity at scale.
Over two virtual days, Trust ’23, the Data Integrity Summit, brought together global data leaders, analysts, and experts to share trends, challenges, and opportunities in the industry. Watch the full Modern Data Architectures session and learn more.