Data Quality Approach


Traditional Data Management practices have failed.

Organizations have spent extensive time and money on disciplines in the realm of Data Governance, Data Cataloging, and Metadata, yet these efforts have not scaled or met expectations without a focused approach to Data Quality.

Current trends show that data is generated faster than it can be consumed. In this climate, businesses need to shift their thinking from traditional data management practices toward a “Data Quality First” approach.

What is a “Data Quality First” approach?

As your organization grows, you must be aware of where your internal practices land within the data maturity life cycle, as this can drastically enhance or hamper overall growth. DQLabs defines this data maturity life cycle in four stages:

1. No data management practices implemented; processes are people-dependent and rely on offline tools and methodologies.

2. Individual or departmental efforts have been tried, using either custom or off-the-shelf solutions, with no centralized focus.

3. A Data Quality, Data Catalog, or Data Governance solution has been implemented and an effort toward centralized management practices is underway; however, processes remain hard to streamline and real ROI has not materialized.

4. A centralized data governance team exists, with technology and processes for Data Governance, Data Catalog, Data Lineage, Data Quality, DataOps, and Data Science, yet a modern collaborative data workspace with a focus on Data Quality is still a struggle.

Here’s the bad news: irrespective of which category you belong to, most organizations have no idea what percentage of their data is bad. This is because most Data Quality efforts are undertaken in conjunction with other disciplines such as Data Governance, Data Catalog, or Data Lineage, without a primary focus or an actionable effort directed at Data Quality itself.

That is why we preach a Data Quality First approach: an approach that focuses on understanding data in its business context via automated processes such as semantic discovery and classification. This is further fueled by proven machine learning technology that helps organizations measure, monitor, and remediate data quality issues in a practical way and derive immediate value. If you are high in the maturity cycle, the good news is that a Data Quality First approach still takes into account your existing business glossaries and governance practices, and integrates seamlessly to automate an actionable data quality process.

Using this approach, you can benefit from trustworthy, actionable data and answer questions such as:

  • What data is important and what is not?
  • What data needs to be improved or remediated?
  • What data can be used in reporting and analytics?
  • What data pipeline changes are needed to address schema/data deviation or drift?
  • What models can be built to enrich customer experiences?
  • Can I trust this report I am signing off on and am I as compliant as I think I am?
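The schema-drift question above can be made concrete with a simple comparison check. Here is a minimal sketch (the column names, types, and the helper itself are illustrative, not DQLabs functionality): a pipeline records the schema it expects as a column-to-type mapping and diffs it against what it actually observes.

```python
def detect_schema_drift(expected: dict, observed: dict) -> dict:
    """Compare an expected column->type mapping against what a pipeline sees."""
    return {
        "missing_columns": sorted(set(expected) - set(observed)),
        "new_columns": sorted(set(observed) - set(expected)),
        "type_changes": {
            col: (expected[col], observed[col])
            for col in set(expected) & set(observed)
            if expected[col] != observed[col]
        },
    }

# Hypothetical schemas: a source renamed 'signup_date' to 'signup_ts'.
expected = {"customer_id": "int", "email": "str", "signup_date": "date"}
observed = {"customer_id": "int", "email": "str", "signup_ts": "timestamp"}

drift = detect_schema_drift(expected, observed)
# drift flags 'signup_date' as missing and 'signup_ts' as new
```

A real platform would derive the expected schema automatically rather than hand-coding it, but the diff-against-expectation shape of the check is the same.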


Data Observability is Irrelevant without Context

With today’s rapid growth in data, a focus on data quality is more critical than ever. Data Quality has existed as a practice since the first bit of digital data, and yet after decades gaps remain in the marketplace because the focus on business impact is often forgotten. We quite often get lost in what to measure or how to measure instead of focusing on the real purpose of what Data Quality was meant to be.


In fact…

Data Quality is never about the “Data” but all about “Business”.

As we navigate this space, it is also very important to separate noise from true value. If you have not yet read the six guiding factors for navigating the noise in data quality, take a moment to read Part 1, The Noise in Modern Data Quality, of this series. For everyone else, let’s continue to explore the new kid on the block: Data Observability.

First of all, is Observability new? Not really!

Observability originated in control theory and spread from industrial processes to software. It centers on the idea of understanding, observing, or inferring the internal state of a system from its external outputs. To succeed with Observability, you must be able to observe the behavior of a system and derive its health from it. It is also important to understand the difference between observability and monitoring: monitoring is best suited for known failure modes, while observability is for debugging using the insights inferred. Splunk’s comparison of Monitoring vs. Observability is a useful further read, but the main goal is the ability to notice (observe) something from system behavior.
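The monitoring-vs-observability distinction can be sketched in code. This is a hypothetical illustration, not any particular tool's API: a monitor asserts on a known failure mode, while observability emits raw signals so that unknown failures can be debugged later.

```python
# Monitoring: a predefined check for a known failure mode.
def monitor_null_rate(null_count: int, total: int, threshold: float = 0.05) -> bool:
    """Alert when a column's null rate exceeds a fixed, pre-agreed threshold."""
    return (null_count / total) > threshold

# Observability: expose rich external outputs so internal state can be inferred.
def emit_signals(rows: int, null_count: int, distinct: int, load_seconds: float) -> dict:
    """Record signals without judging them; debugging decides what they mean."""
    return {
        "row_count": rows,
        "null_rate": null_count / rows,
        "distinct_ratio": distinct / rows,
        "load_seconds": load_seconds,
    }

# The monitor fires only on the failure mode it was written for...
assert monitor_null_rate(null_count=80, total=1000) is True
# ...while the emitted signals support questions nobody anticipated.
signals = emit_signals(rows=1000, null_count=80, distinct=950, load_seconds=2.3)
```

The signal names and thresholds here are invented for the sketch; the point is the shape of the two practices, not the specific metrics.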

Data Observability and Data Monitoring

OK, so why is observability relevant?

During the last few years, we have seen many transformations in the “modern data stack” space: a set of tools and technologies that enable analytics, particularly for transactional data. It started with the enormous scalability and elasticity of cloud data warehouses and extends to modern data pipelines that let us carry large amounts of data from multiple sources and transform it directly inside the warehouse (ELT) with ease. This trend has now morphed into ETLT (extract-transform-load-transform, i.e., the best of both ELT and ETL), ELTG (with a governance layer), and the merging of data lakes and warehouses into a Lakehouse or Unified Analytics Warehouse. With such complexity in mind, yes, it is relevant to have a system that observes and funnels up things to debug, so as to eliminate and prevent downtime and provide insights into the health of data sources.

However, when you extend Observability to Data Quality or, as I like to call it, “Business Quality”, you are no longer talking just about “data” but about “business”, aka “data with context”. Data in its raw form, without any application of a use case (i.e., how an organization uses and governs it, how it collects and handles it, or its functional value), has no “context”. Take a very simple example: measuring total records (“volume”, one of the pillars of observability). If a million rows suddenly drop from your data, yes, you should be aware of it in the context of understanding the health of your data sources. The problem comes when you extend observability to the broader umbrella of data quality, which poses some unique challenges. In this example, the drop in records, once debugged, could turn out to reflect a strategic leadership decision to close a couple of unprofitable retail stores. Consider another example: an email column used for marketing is treated very differently than an email column used for risk and authentication. Same email, same format, but a different use case, so the broader application of data quality measurements and their underlying purpose are very different. This poses a big challenge.
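The email example can be sketched as context-dependent rules. This is a hypothetical illustration (the rule functions and the blocklist are invented for the sketch, not DQLabs functionality): the same value passes one use case's quality check and fails another's.

```python
import re

# A deliberately simple format check; real validators are stricter.
EMAIL_FORMAT = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def check_marketing(email: str) -> bool:
    """Marketing context: format is enough; a bounce just wastes one send."""
    return bool(EMAIL_FORMAT.match(email))

def check_authentication(email: str) -> bool:
    """Risk/auth context: stricter; disposable domains weaken identity checks."""
    if not EMAIL_FORMAT.match(email):
        return False
    domain = email.rsplit("@", 1)[1].lower()
    return domain not in {"mailinator.com", "tempmail.dev"}  # illustrative blocklist

email = "user@mailinator.com"
# Same value, same format, different verdicts depending on the use case:
assert check_marketing(email) is True
assert check_authentication(email) is False
```

The measurement itself is trivial in both cases; what differs is the purpose attached to it, which is exactly the context a pure observability signal lacks.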

Now apply that across petabytes of data sources, various monitors, and pipelines, and the time needed to debug each of these “alerts” becomes daunting. Extended into benchmarking to discover anomalies, it is bound to create even more false positives and negatives. To me, just throwing anomaly detection at your data doesn’t create value. Why? In the world of outlier detection, even with today’s most advanced algorithms, model accuracy means nothing without some ground truth to compare the model’s predictions against. In other words, the expectation matters even before the measurement is taken; without an expectation there is no basis for drawing conclusions. This is the challenge when you stretch observability beyond its limits into the wonderful world of Data Quality, where there are not only health checks but further levels of checks across all the objective and subjective dimensions of Data Quality. This requires understanding the context.
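The role of expectation can be sketched with the volume example from earlier. This is a minimal sketch with invented numbers: a baseline of historical daily row counts supplies the expectation, and only deviations from that band are flagged.

```python
from statistics import mean, stdev

def volume_alert(history: list[int], today: int, sigmas: float = 3.0) -> bool:
    """Flag today's row count only when it falls outside the expected band.

    Without `history` (the ground truth / expectation) there is no basis
    for calling any count anomalous."""
    mu, sd = mean(history), stdev(history)
    return abs(today - mu) > sigmas * sd

# Hypothetical baseline of daily row counts.
history = [1_000_000, 1_020_000, 980_000, 1_010_000, 995_000]

# A sudden million-row drop is flagged...
assert volume_alert(history, today=0) is True
# ...but whether it is a data-quality incident or two closed retail stores
# still requires business context to decide.
```

Even this crude three-sigma rule embeds an expectation; the harder, more valuable step is attaching the business meaning that tells you whether a flagged deviation is actually a problem.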

If you speak to any expert or customer who has gotten their hands dirty in Data Quality, the first thing you will hear is this:

“Measurement was never the challenge; making measurements purposeful is the one and only challenge.”

So, for anyone setting out to build an “actionable” data quality assessment, starting without an ultimate purpose will only create more data chaos. A true Modern Data Quality platform must go beyond observability into inferring metadata, using artificial intelligence and machine learning (AI/ML) to understand context and semantics, expanding into knowledge graphs, and bringing new practices through automation to simplify data quality processes. This is an exciting time to be in Modern Data Quality, as we are just scratching the surface of where we can innovate.

To conclude: if you are applying data observability to the most basic health checks of your data sources and pipelines, it is still relevant. But if you are going to stretch observability across all the subjective and objective dimensions of Data Quality / Business Quality without context, then it is irrelevant, and it is time to think differently about your data culture.



Raj Joseph,

President & CEO DQLabs, Inc.


AI & Big Data Expo North America 2022

We’re going to the world’s leading AI & Big Data event series, taking place in Santa Clara and virtually in a hybrid format on October 5–6, 2022.

DQLabs is proudly sponsoring this grand event. We would love for you to stop by our stand, #158, to learn how the DQLabs Modern Data Quality-centric Observability Platform harnesses the combined power of Data Quality and Data Observability, enabling data producers, consumers, and leaders to achieve decentralized data ownership and turn data into action faster, easier, and more collaboratively.

Come join us and explore the latest innovations in AI & Big Data and their impact across industry sectors including manufacturing, transport, supply chain, government, legal, financial services, energy, utilities, insurance, healthcare, retail, and more!

This technology event is for the ambitious enterprise technology professional seeking to explore the latest innovations, implementations, and strategies to drive their business forward.

Don’t miss the opportunity to explore this innovative technology. Stay tuned for more updates!
