The Data Quality Journey

What is an Informonster?

The Informonster is a metaphor for how our best intentions, when it comes to healthcare data, can lead to unintended consequences ranging from distraction to disaster. The general principle of collecting and leveraging the data we produce in providing care to people seems straightforward. Why, then, is having confidence in our data quality so elusive? How do you avoid being destroyed by the Informonster you created?

Is there really a data quality crisis in healthcare?

We have been using software to augment healthcare for decades. Historically, we focused on using software for scheduling and billing for services, use cases that operate in silos. The way we have traditionally defined, collected and used data, while not optimal, provides adequate support for these silos. In recent years we have expanded the use cases for healthcare software to include population health, value-based care, artificial intelligence and machine learning. These initiatives, intended to enhance our ability to make good decisions, depend on algorithms, and algorithms require access to current, complete, accurate and structured data for every patient. Healthcare does not organically produce data that supports this level of clinical understanding, which is why many initiatives of this type fail.

What is the impact of poor-quality patient information?

What we know about a patient is the foundation of healthcare information technology. It is the driver of everything of significance that we do collectively as an enterprise or an industry, including:


Appropriately diagnosing and treating patients

Providing useful clinical decision support

Cost-effective use of healthcare resources

Staffing and other logistical concerns

Value-based care and quality measure reporting

Population metrics

Public health reporting

Clinical research

Machine learning initiatives

The minute we do anything beyond the scope of a single human provider, the data we collect and leverage dominates our ability to understand, evaluate and act.

The Data Quality Survey

Clinical Architecture conducted a survey across healthcare segments. In this survey respondents were asked about the impact of poor-quality patient data on organizational objectives. They were also asked to rate the quality of the patient data in their enterprise.

Impact of Poor-Quality Patient Data on Enterprise Objectives? (chart: High Impact, Moderate Impact)

Rate the Quality of the Patient Data in Your Enterprise? (chart: Poor Quality, Mixed Quality, High Quality)

While this survey might not be a comprehensive representation of the industry, it does reflect the reality we have found in working with clients: patient data that is not refined is not reliable.

What can we do to improve the quality of the patient information?

It all begins by accepting three fundamental truths.


Patient data is not a single manageable entity, but rather a collection of information for thousands, hundreds of thousands, or millions of patients. We cannot improve the quality of patient data without addressing and refining the data for each individual patient.


No one system knows everything about a patient. We must share data across the silos of healthcare to enrich and establish a more complete picture of each patient.


We must accept data quality improvement as an operating principle and not just a project we tackle from time to time. This means having resources, best practices and measures that focus on the stewardship of the patient data, reference data and master data in our information ecosystem.

When faced with these truths, it is easy to feel overwhelmed. The effort involved in continually improving the quality of patient data, coupled with the sheer volume of that data, is an obstacle that stops many enterprises from even trying. However, Clinical Architecture has helped many of the largest healthcare enterprises overcome these obstacles through the application of best practices and software designed to transform data at scale.

One of the barriers to leveraging data across an enterprise is the lack of semantic interoperability: the codes used to represent patient data elements (diseases, medications, lab tests, procedures, etc.) differ from one system to the next. To make sense of what is happening with patients across these systems, these elements must be normalized, or mapped, to a standard set of codes. This was traditionally done manually, which was both expensive and time-consuming. As a result, organizations often attempted the minimal amount of work for a given use case (e.g., mapping for antibiotic utilization) or focused on high-volume elements (e.g., mapping the top 10% by frequency). The problem with this approach is that one-time projects provide limited value, cannot be leveraged elsewhere and typically have to be redone. Enterprise data harmonization can be achieved cost-effectively by using industry best practices and intelligent software that constantly monitors, manages and facilitates semantic mapping.
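The mapping step described above can be sketched in a few lines of code. This is a minimal illustration, not Clinical Architecture's implementation; the source-system names, local codes and the LOINC-style target code are all hypothetical, and the key design point is that unmapped codes are routed to human review rather than silently dropped.

```python
# Minimal sketch of semantic normalization: mapping local codes from two
# hypothetical source systems to a shared standard code. All codes and
# system names below are illustrative placeholders.

from dataclasses import dataclass


@dataclass(frozen=True)
class Observation:
    source_system: str
    local_code: str
    value: float


# Per-system mapping tables: local code -> standard code.
# In practice these maps hold many thousands of entries and are curated
# continuously rather than built once per project.
CODE_MAPS = {
    "lab_system_a": {"GLU-01": "2345-7"},  # glucose (LOINC-style code)
    "lab_system_b": {"BG": "2345-7"},      # same concept, different local code
}


def normalize(obs: Observation):
    """Return (standard_code, obs); standard_code is None when unmapped,
    so unmapped codes can be queued for review instead of being lost."""
    std = CODE_MAPS.get(obs.source_system, {}).get(obs.local_code)
    return std, obs


observations = [
    Observation("lab_system_a", "GLU-01", 5.4),
    Observation("lab_system_b", "BG", 6.1),
    Observation("lab_system_b", "XYZ", 1.0),  # no mapping yet
]

mapped = [normalize(o) for o in observations]
standardized = [o for code, o in mapped if code is not None]
needs_review = [o for code, o in mapped if code is None]
```

Once both source systems resolve to the same standard code, observations that looked unrelated can be analyzed together, which is the point of normalization.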

Reference data management is another issue that impacts large-scale analytics efforts and regulatory compliance. Keeping up with industry standards can be challenging. The terminologies we use to comply with interoperability, public health reporting and quality measure reporting requirements are constantly changing. Understanding how the terminologies are changing and how those changes impact your enterprise can be daunting. Our Symedical® platform provides seamless updates to these terminologies. In addition, the platform monitors how the terminologies change and how those changes impact your utilization of them, and streamlines your ability to manage that impact. The goal is to eliminate the risk of noncompliance and the associated penalties in a manner that minimizes expense and optimizes the deployment of your valuable human resources.
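To make the change-impact idea concrete, here is a minimal sketch, with made-up codes and statuses, of the kind of check such monitoring performs: flagging codes an enterprise actively uses that a new terminology release has retired, or that the standard no longer recognizes at all.

```python
# Minimal sketch of terminology change-impact detection.
# Release snapshots map code -> status; the releases and the codes in use
# below are hypothetical examples, not real terminology content.

old_release = {"1234": "active", "5678": "active"}
new_release = {"1234": "active", "5678": "retired"}  # 5678 retired this cycle

codes_in_use = {"1234", "5678", "9999"}  # 9999 is unknown to either release

# Codes we rely on that the new release has retired: candidates for remapping
# before the old release falls out of compliance.
retired_in_use = {c for c in codes_in_use if new_release.get(c) == "retired"}

# Codes we rely on that the new release does not recognize at all.
unrecognized = {c for c in codes_in_use if c not in new_release}
```

Running a check like this on every release, rather than per project, is what turns terminology maintenance from a recurring scramble into routine stewardship.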

When considering data quality, we have to address the elephant in the room: spreadsheets. When it comes to creating and managing the data that we use to leverage our patient data (facilities, providers, groupings, value sets), we are guilty of using spreadsheets scattered across our networks, desktops and shared drives. This data is master data, and it plays a critical role in our ability to leverage what we know about our patients. It is therefore important to manage this information in a single source of truth that provides transparency and access controls and integrates easily with reference data, patient data and our other IT investments.

Cultivating data quality requires a commitment to understanding, managing and refining all of the information in your healthcare data ecosystem.

How does Clinical Architecture help clients turn their data into a high-quality strategic asset?

Every enterprise has a unique collection of data sources, existing systems and strategic objectives. Our approach to client engagement begins with an in-depth discussion and thorough analysis of the organization’s data issues and needs. The Clinical Architecture team works collaboratively with our clients to address these needs with a combination of thought leadership, software solutions and professional services.

We have worked with health systems, payers, population health vendors, content vendors, life sciences and public health organizations to pragmatically improve their data quality and unlock amazing and, sometimes unexpected, benefits.

Using our approach and software suite for semantic interoperability, one client was able to normalize patient data for over 150 facilities in less than 90 days. This saved them millions of dollars in direct costs and provided incredible benefits. More importantly, this was not just a one-time project. This organization now actively manages enterprise normalization of data in real-time, so the data is always ready, always valuable.

Clients have used our Symedical platform to get timely updates to standards like RxNorm, SNOMED, LOINC, ICD-10 and CPT. Our subscription portal has hundreds of assets that are used across healthcare. This makes staying current and managing change as simple as clicking a button.

Many of our clients use our Adaptive Workflows to manage their master data. One client was able to represent all of their facilities and resources across their enterprise and normalize how those facilities and resources were represented in ancillary applications. The resulting data allowed them to optimize clinical staffing which resulted in savings and improved patient outcomes.

Once our clients understand and control the quality of the information across their enterprise, they are able to realize the full value of their technology investments and significantly increase the effectiveness of their human resources.

We have helped several of the largest US health systems, including CommonSpirit Health, Montefiore Medical Center and UK Healthcare; the Centers for Disease Control and Prevention; the Defense Health Agency; the Association of Public Health Laboratories; and others. We can help your organization as well.

It’s time to begin your journey to better quality data.

We know the way.