It’s Time to Get “PIQI” About Your Data

March 12, 2024

Speakers:

Ryan Fung, Product Architect, Smile Digital Health; Rick A. Moore, Ph.D, Founder/CEO, Moore Than Consulting Group, LLC; Charlie Harp, CEO, Clinical Architecture

An engaging panel discussion on the transformative potential of a healthcare data quality framework called “PIQI” (Patient Information Quality Improvement). This panel explores the importance of standardized methodologies for assessing and reporting data quality, enabling data owners, users, patients, and policy makers to evaluate and communicate findings effectively.

There’s more to learn about PIQI

The Ultimate Patient Healthcare Data Quality Measurement Tool

Data Quality in Healthcare: PIQI Framework Podcast Series

Visit the PIQI Framework Website

Transcript

Stephanie Broderick (00:03):
Good afternoon everybody. My name is Stephanie Broderick, I’m the EVP of Strategic Initiatives for Clinical Architecture. I hope everybody is having a great HIMSS so far. We have a great session today with three experts in healthcare data quality. The title of our session is “It’s Time to Get ‘PIQI’ About Your Data”, a little pun there, a play on words on PIQI, and you’ll learn more about what PIQI is as we move forward. I’m going to go ahead and have my illustrious panel introduce themselves. So Rick, let’s start with you.

Rick A. Moore, Ph.D (00:40):
Sure, thanks, Stephanie. I’m the former Chief Information Officer at NCQA, the National Committee for Quality Assurance. That’s probably where most people will know my background. I spent 14 years there and worked with a lot of folks in developing the measures that you’re all trying to measure. And in my current role I have my own consulting company. I do a lot of consulting with vendors and health plans and the like. So I now get to see all the problems that I created while I was at NCQA with these measure definitions and specs. So I’m looking forward to talking about data quality. Charlie?

Charlie Harp (01:11):
I’m Charlie Harp, I’m the CEO and founder of Clinical Architecture. I’m a simple country programmer. I’ve been working in healthcare for about 35-ish years and I’m excited to be here today with these fine gentlemen to talk about data quality.

Ryan Fung (01:28):
And I’m the least qualified person here compared to these two. I’ve spent 16-plus years in life sciences as a senior principal data scientist, and was also one of the main architects behind decentralized clinical trials. Now I’m working at Smile as their product architect, overseeing their product strategy, data interoperability, and data quality.

Stephanie Broderick (01:48):
I think that makes you pretty qualified. So today we’re going to be talking about why we’re talking about data quality and why it’s so important. We’ll talk about what the current state of data quality is from these guys’ experience. We’ll talk about the ROI of high data quality and why organizations should invest in ensuring that their data is of high quality, and then we’re going to talk about how we can measure data quality. So, Rick, I want to start with you. Why are we talking about data quality?

Rick A. Moore, Ph.D (02:26):
Follow the money, right? I think most people know that data quality, and the downstream impact of it, particularly for quality measurement or things like value-based care, is going to start mattering a whole lot more than it has. And the more you have to get clinical data, which I think is where it’s headed with things like ECDS, does that sound familiar? Electronic Clinical Data Systems, right? That data that plans don’t traditionally get today is going to become more and more prominent in their world. So I think the plans are now starting to see data quality issues as paramount to the downstream impacts of value-based care. So that’s, to me, where it’s starting to get a little more intense focus.

Stephanie Broderick (03:06):
So there was an article that came out right after the end of ViVE that was talking about AI, but it was talking about needing to have data quality in order to have good AI. And it made the statement that we are data rich, but insight poor. Charlie, what are your thoughts about that?

Charlie Harp (03:29):
I think that we’re living in a world we created. A few years ago, when healthcare realized that longitudinal data was valuable, we solved that problem by no longer deleting anything. And so now we have this freight train of data that we’ve just been accumulating over years and years and years. And when you give that data to a human being, human beings have the fantastic quality of being able to tune out the noise. Ask my wife about that. She’s very good at tuning out the noise when I start talking. Software is not. There is very little of what software considers to be noise. So with all this preponderance of data, when we ask software to make decisions, whether it’s traditional analytics or artificial intelligence, the noise affects what it thinks and what it does. I think that’s part of the problem: we have so much data but so little information that all these systems we have are just overwhelmed. So to address that specific question, I would say that’s really what it’s about. The question is, how do we eliminate the noise? How do we reduce the noise, and how do we determine what the signal in the noise is, so that the software can actually do something useful?

Stephanie Broderick (04:48):
And Ryan, how about you?

Ryan Fung (04:50):
I totally agree. At Smile we aggregate a lot of information, a lot. But information is just data. So how do you take that data and create information, and ultimately knowledge, out of it? And that’s the key: you need to have clean data. So looking at data structure, all the different formats, is one thing. You can codify that and bring it in, and systems today can validate that, but then you need another system to say, well, let me look at the data point itself to see if it actually makes sense. Could someone have a blood pressure of 350 over 200 and still be alive? No. But yet that’s the data that’s there. So how can you trust that? And then when you look at code sets and how things are encoded, a lot of times providers are focused on providing care, not on data entry. They’re usually attuned to, what am I used to, what do I remember as a code? I’m going to add it in there and then move on to the next patient. So a lot of times you get these inconsistencies, right? And if that data’s driving your AI mechanisms, and they’re training off of that data, you’re going to get that garbage-in, garbage-out effect.
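Ryan’s blood pressure example illustrates the gap between structural validation and plausibility checking: a value can parse and conform to a schema yet be clinically impossible. A minimal sketch in Python (the function name and bounds are illustrative assumptions, not clinical guidance or any vendor’s actual implementation):

```python
def is_plausible_bp(systolic: float, diastolic: float) -> bool:
    """Return True if a blood pressure reading is physiologically plausible.

    The bounds below are illustrative only, not clinical guidance.
    """
    return (
        40 <= systolic <= 300      # systolic within a survivable range
        and 20 <= diastolic <= 200  # diastolic within a survivable range
        and systolic > diastolic    # systolic must exceed diastolic
    )

# A reading of 350/200 is a perfectly valid pair of numbers to a schema
# validator, but it fails the plausibility check.
print(is_plausible_bp(120, 80))   # True
print(is_plausible_bp(350, 200))  # False
```

The point is that this second layer of checking has to live somewhere beyond format validation, which only confirms the field holds a number at all.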

Rick A. Moore, Ph.D (06:01):
I want to build on that just a second, Steph. So back to Charlie’s point, we looked in the mirror and we are the enemy. To Charlie’s point, we built what we have got. And I think the incentives for years, decades, to Charlie’s point, were focused on billing and optimizing the billing mechanism around that. And back to an earlier speaker on the panel here at 11, talking about the clinical care being the focus and building the systems around that construct. Then building the measurement system off of that as opposed to what we’re currently experiencing is claims-based measurement pushed down into the EMR in a way that wasn’t necessarily designed for that. And that’s what I think, to Charlie’s point, we built that and that’s what we’re experiencing right now.

Stephanie Broderick (06:41):
Right, right. Good. So Ryan, you guys are a repository, so you’ve got a lot of data that is coming into your platform. What are you seeing as the current state of data quality today?

Ryan Fung (06:54):
Messy, really messy. And that’s the challenge that we’re trying to solve. First off, how do we identify the issues? Because that’s the hardest part. Once you identify them, the next part is, how do I fix them? There are a lot of regulatory requirements you have to comply with. Can I go and modify the data? Probably not without consent, or without different pieces at play. So how do you bring together multiple solutions so that you can actually take data that is not quality data but is usable, and transform it in a way that we can use it for studies or even for personalized medicine? That is a very hard task, but we first need to evaluate the data, do that discovery, and that’s a piece that we’re missing right now.

Stephanie Broderick (07:42):
Yeah. Charlie, Clinical Architecture has embarked on its second annual data quality survey. I haven’t seen the results of year two, but can you talk about what we saw in year one in terms of people’s assessment of data quality?

Charlie Harp (08:00):
I mean, when we did the data quality survey last year, we asked people a couple of very pointed questions. The first one was, does data quality matter? And almost universally people said yes, and this is across the different sectors of healthcare: payer, provider, life sciences. Data quality matters. The next question we asked was, how’s your data quality? And almost universally they said bad. And we dove even deeper. We said, well, what about medications? They’re okay. Labs? Eh, they’re okay. Allergies? Terrible. So we went through all these different categories and they basically said, our data’s really bad. And we said, well, what about taking data from somewhere else? And they said, no, no, their data’s much worse than ours. But then if you ask them, would you participate in data sharing? They said, yeah, sure. They’re willing to accept the data, but I don’t think they’re willing to integrate it.

(08:56):
And I think that what we’re really dealing with in this industry is an incredible need for high quality data, because we’re getting to the point where our ability to deal with the data using humans as an intermediary is becoming a bottleneck. It’s becoming a barrier. Humans can’t scale. And if you give bad data to an artificial intelligence, what do I always say? You get artificial stupidity. And the problem is artificial stupidity will scale. If you start giving it bad data, it’s bad prompting. So I think that we all kind of agree. I’ll let Rick talk, but I think that the data quality in our environment is bad.

Rick A. Moore, Ph.D (09:40):
Yeah, I’d have to agree on your points, Charlie. So when you’re down at the clinical level, when we were at NCQA, we did, you may have heard of it, the data aggregator validation program. And that was sort of our first, I would call it, data collection effort around this phenomenon Charlie’s describing. So why are the millions of CCDs that are traveling around good enough for care but not good enough for quality? In other words, for measuring quality. Well, because, as Charlie was saying, the fidelity the data has to have to be usable inside of a quality measure is orders of magnitude higher: this date timestamp has to be here, only these codes can be used, and only these things can be within that segment. When you’re doing quality measurement as a use case, as opposed to care, that’s where you get these disconnects.

(10:28):
So we looked at the data inside these CCDs coming across, and about 20% of it, raw off of the systems, is usable at what I’m going to call the measurement level, because the rest is missing the elements it needs to be a usable record. So this construct of data quality, what I like to call usability for the other use cases, then manifests itself in the problems that we’re all dealing with today. Going back to what I said before and what our colleagues said earlier: if we design the systems for optimizing care, the quality part of it should happen naturally. That is, if the systems are designed in such a way that you get the data quality designed in, not inspected in, which is what we’re doing now. We’re inspecting it in. Yeah.

Ryan Fung (11:11):
And typically what happens is the physicians are looking at that data, but they don’t look at the details of it at the code level. They go back to the human-readable side, look at the SOAP notes or the diagnostic report, and reread everything, the stuff that’s not currently computable by a computer, to basically ask, okay, is this correct? So there’s a human factor that comes into play. So although the data quality may be good enough for care, it’s because there’s supplementary information being taken into account that’s not being computed today. To the computer, it’s a blob of data. That’s all we know.

Rick A. Moore, Ph.D (11:46):
And this is where technologies would come in, I think, to expedite this process of getting out of the blobs, to your point, Ryan, and make it less of a, I’m going to call it, burden on the providers to go through this and try to put the structured data in the right spots. Because that’s what we’re telling them to do now, and that’s not a scalable solution.

Charlie Harp (12:03):
The thing I always say, and it’s that in healthcare we deal with a lot of uncalibrated uncertainty, which is a nice way of saying that we don’t know what the hell is going on.

Ryan Fung (12:16):
Ultimately we’ve got to build the trust again. We all know that the data quality is bad, and because of that, a lot of organizations don’t trust other organizations’ data, because they’ll say, oh, your data’s going to pollute mine because it’s bad. It’s true, it can multiply. So how do you build that trust back? And that’s, I think, where hopefully Charlie can talk a little bit today about how we can build a framework for evaluating that data, so we can build that trust.

Stephanie Broderick (12:48):
So one of the conversations that we had, we talked about the ROI of high data quality and we talked about data as currency. Rick, can you talk a little bit more about that?

Rick A. Moore, Ph.D (13:00):
Well, going back to the opening comments about why we’re even talking about this: because it is now about how that currency, how well you’re performing, is going to be measured against this poor data quality. So this goes back to the trust factor. If I’m a clinician, I’m putting my information in the EMR, but yet my measures don’t reflect what I’m practicing. The trust is totally gone. So if there was ever a problem between a plan and a provider in terms of trust, this is going to exacerbate that problem, if you will. So the more accurate the data can become, the less burdensome I think it will be for the providers collecting it and putting it in. And then the payers will be able to essentially maximize their currencies, so to speak. Same for providers. So if you’re going to get these value-based contracts, you’re going to have to have some measure of trust. And then, what Charlie and I talked about offline was, how do you measure the level of that quality? If I’m going to go out and buy data from a particular source, how do I know I’m getting my biggest bang for my buck? We don’t talk about data quality in the same construct, in the same ways; we all have our different definitions. I know you’ve got a framework you’re talking about putting out there for that. Yes.

Charlie Harp (14:04):
Yeah, I’ll be talking about it in more detail, at 2:45 in the healthcare interoperability showcase.

Stephanie Broderick (14:11):
But that’s a good segue. Charlie, can you just give a high level overview? I know it’s detailed, but a high level overview?

Charlie Harp (14:21):
Want me to stay out of the weeds? Is that what you’re trying to say?

Stephanie Broderick (14:23):
Yes.

Charlie Harp (14:23):
I can try to stay out of the weeds. So one of the things that we started looking at, when you look at data quality, the challenge we have is, once again, uncalibrated uncertainty. We know things are bad, but we don’t know how bad. And there’s an old saying: if you care about something, you measure it. And really quick, before I go into the non-weeds: when you talk about an ROI for data quality, I think it’s interesting. When you look at healthcare data, there are people that pay billions of dollars to get access to healthcare data. But if you were to ask somebody how much they’re willing to pay to get data from something like an HIE, it’s a very different value proposition, in part because they know what the data looks like and they know it’s going to be a challenge to take that data and put it to work, as opposed to the people that buy data in bulk thinking it’s going to be this amazing asset.

(15:22):
When you think about uncalibrated uncertainty, the first question you have to ask yourself is how do I decide what’s the bar for data quality? And that’s really different depending upon the use case. So the first question you have is, what’s my evaluation criteria? Because if I am using the data for some anecdotal purpose, I’m probably fine with whatever I can get. But if I’m really trying to do something important like taking care of a patient or feeding AI or determining what oncology drug to give somebody, I really want to have good information. I want the bar to be pretty high to make sure the data’s good. The next question is, how do I take the data from all these different formats and schemas and normalize it in a way where I know what I’m talking about when I point to a particular field or element of that data? And I’m not talking about a schema or a record, I’m talking about data, the actual data around a patient.

(16:22):
The next thing, really, is when you’re looking at the qualitative issues: qualitative issues don’t exist in a vacuum. There’s a taxonomy of quality issues. Something’s missing, something’s not populated, something’s incomplete, something’s implausible. You can’t have a hemoglobin A1C of 10,000. So there are all these different categories of qualitative issue. And understanding the type of issue gives you insight into where it’s probably coming from. So if you take this taxonomy of issue types, and you can very concretely identify the elements, and you know what your evaluation criteria are, you can essentially come up with a library of the types of things you’re going to check and how they align to the taxonomy. And if you come up with that library: is it a valid LOINC code, is it a valid date, is the date in the past relative to the date of birth?

(17:18):
Things like that. Then we can share that library, at least conceptually. And so the PIQI framework, which stands for the Patient Information Quality Improvement framework, is about a shareable, standard way of looking at patient data. Not data writ large, but patient data specifically. A way to look at it and measure it, so if I show you the results for your patient data feed, I can compare you to everybody else. You can see what kinds of issues you have, and you can gain insight into how to resolve them. And really it’s just a step for us. I mean, it’s like when you buy a house: you get the house inspected so you know just what you’re getting into. We need a common way of framing and understanding what the data looks like so we can decide together what’s usable and what’s not usable.
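The library of checks Charlie describes, each tied to a category in a taxonomy of quality issues, can be sketched in a few lines of Python. All names, thresholds, and categories here are hypothetical illustrations of the concept, not the actual PIQI implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    """One entry in a shareable check library."""
    name: str
    category: str  # taxonomy category, e.g. "missing" or "implausible"
    test: Callable[[dict], bool]  # returns True when the record passes

# Illustrative library: presence checks plus one plausibility check.
# LOINC 4548-4 is Hemoglobin A1c; the 2-20% bounds are illustrative.
CHECKS = [
    Check("loinc_code_present", "missing", lambda obs: bool(obs.get("loinc"))),
    Check("date_present", "missing", lambda obs: bool(obs.get("date"))),
    Check("hba1c_plausible", "implausible",
          lambda obs: obs.get("loinc") != "4548-4"
          or 2 <= obs.get("value", 0) <= 20),
]

def evaluate(obs: dict) -> dict:
    """Run every check and group failures by taxonomy category."""
    failures: dict = {}
    for check in CHECKS:
        if not check.test(obs):
            failures.setdefault(check.category, []).append(check.name)
    return failures

# The hemoglobin A1C of 10,000 Charlie mentions fails the plausibility check.
print(evaluate({"loinc": "4548-4", "value": 10000, "date": "2023-06-01"}))
```

Because each failure carries its taxonomy category, the same report structure can be compared across data feeds, which is the sharing and benchmarking idea behind the framework.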

Rick A. Moore, Ph.D (18:17):
Yeah, I want to build on that real quick, because you mentioned ROI. So taking Charlie’s concept, imagine if you had the ability to score a data source that you want to purchase for use in Medicare Stars or HCCs. Take your pick.

(18:33):
This scoring methodology would say what I said earlier. So let’s say you’ve got a raw data set that’s got 20% usability for that use case. Using the scoring model, you could say, okay, I’d like to get it to 80%, because then you’re getting a bigger bang for your buck. And I think this is where the industry is starting to mature in understanding this. If I’m going to go after a dataset, I’m going to start scoring it in a certain way, and I know I’m going to pay on a sliding scale: if I’m going to buy it at the 20% level, I want to pay 20 cents on the dollar. If I’m going to get it at 80%, I’ll pay 80 cents on the dollar. So I think there’s a way to get this same taxonomy, or a similar taxonomy, out into the ecosystem where we can all start talking about this in a similar way, and that’s an issue. We’re not doing that now.
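Rick’s sliding-scale idea is simple proportional pricing: pay for a dataset in line with its measured usability for a given use case. A toy sketch (the linear pricing rule and function name are illustrative assumptions, not a market standard):

```python
def price_per_record(list_price: float, usability_score: float) -> float:
    """Pay in proportion to usability: 20% usable means 20 cents on the dollar.

    usability_score is a fraction between 0.0 and 1.0, e.g. the share of
    records that pass the checks required by the target use case.
    """
    if not 0.0 <= usability_score <= 1.0:
        raise ValueError("usability_score must be between 0 and 1")
    return list_price * usability_score

# Rick's two price points, against a $1.00 list price per record:
print(price_per_record(1.00, 0.20))  # 0.2
print(price_per_record(1.00, 0.80))  # 0.8
```

The interesting part is not the arithmetic but the precondition: buyer and seller first have to agree on a common scoring methodology, which is exactly what a shared taxonomy of checks would supply.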

Stephanie Broderick (19:10):
Ryan, do you have any thoughts about that type of a framework?

Ryan Fung (19:13):
I totally agree. I think even from a data processing perspective, you wouldn’t believe how many hours are spent taking data through a cleaning pipeline. I’m talking about not just looking for mandatory fields, but looking for duplicates, figuring out how we match records together, everything from normalization to enrichment of the data. These are things someone typically has to do before data can be used for analysis. And so when we’re acquiring data, it would be good to know, as an acquirer of data, how much effort I would need to put in to process that data so that I can actually use it downstream. If I had to invest a lot of money into it, I might as well look for a different data source, right?

Stephanie Broderick (20:00):
Yeah. So what do you guys say to the people who think that FHIR solves everything, that it’s the panacea that solves everything?

Ryan Fung (20:11):
I’ll chime in here. So coming from the FHIR side,

Charlie Harp (20:15):
So FHIR side chat,

Ryan Fung (20:16):
FHIR side chat now, right? So FHIR’s a structure. It has some validation conformance that looks at the data structure itself, and also some data quality, but it’s limited when it comes to analyzing the actual content within the data. So FHIR is great. You can use it for interoperability, you can use it for storing, but when it comes to the actual data element, there’s nothing stopping me from putting a thousand in for my A1C reading. It doesn’t prevent me from doing that. I can put it in. So it’s about understanding the data quality, and that’s a piece FHIR doesn’t do. A data structure standard for sharing information? Great. Check, check, check, check. When it comes to the data itself, though, FHIR doesn’t do that. That’s why we partner with companies like Clinical Architecture that can.

Charlie Harp (21:07):
So I’m an analogy guy. I have an analogy for you. I started building interfaces back in 1987, so I’ve been doing this a while. I kind of think of these schemas, these formats, like modes of transport. Let’s say HL7 is like a bus with assigned seats. Don’t be offended if you’re an HL7 wonk. And so we had HL7 for a long time and it worked perfectly fine. It shipped people around. And then we had CCDAs, which are like a train. So now we have a train with assigned seats. FHIR is like a Boeing 737. It’s a container, but just because it’s a 737, that doesn’t attest to the quality of the individuals riding the plane. And I’ve ridden some planes with some very interesting people over the years. So the idea is people get caught up in the hype and the excitement, and don’t get me wrong, FHIR is exciting. There are a lot of cool things about FHIR that allow us to share information and expect certain things. It’s like a contract, like HL7 was, like CCDAs are, though they’re more like guidelines than rules. But it doesn’t solve the problem of who’s on the plane. The trick is, how do you get really high quality people on the plane, get them off the plane, and put them to work?

Rick A. Moore, Ph.D (22:31):
I’d echo the same concept. I think FHIR is a great door opener. Accessibility from an API perspective, I think all of that’s the right direction, but it’s just going to, what is that old saying? If we shine the light on the data quality issues, FHIR is really going to shine a bright light. So yes, it’s got some schema protocols and things of that nature, but this data conundrum we’re talking about here, it’s still going to be there. So if you go into this transformation thinking it’s going to solve the data quality problem, no, you’re still going to have to solve that problem. FHIR’s not going to do it for you.

Stephanie Broderick (23:05):
So beyond FHIR, and even beyond a framework, are there other things you guys think we could be doing to move the needle on the quality of healthcare data?

Ryan Fung (23:20):
Well, I’ll let Charlie go first.

Charlie Harp (23:21):
Well, there are things, but they’re scary things, like eliminating terminologies. But that’s a whole other thing. I won’t go there. I think that some of it is awareness. One of the challenges we have is that documentation is a burden, and it’s one of those things where, if you’re going to ask somebody to do the work to document something well and give them a nice way to do it, they need to see the benefit of it. I think the problem is we have this big documentation burden. We say, do this documentation, do this work. But I don’t know that the people who do that work see the benefit of it. They don’t see that it takes something off their plate. And, for example, if you’ve got a quality gauge on a data stream and you identify that the quality is bad, you can’t remediate all quality issues.

(24:16):
You can do mapping, which is fine, but there are quality issues that you cannot remediate, because you’d be changing the data. The way to remediate them is to go back to the source and say, hey, this patient’s not a diabetic, or, you didn’t document that this patient has heart failure. And to have that kind of communication channel, and to have the people who are putting the data in feel the benefit of better data, that’s kind of the challenge. People have to see the benefit, let’s call it the personal ROI, for me to care about the quality of the data I put in. Because if you ask most people who deal with that, it’s in the note. And the note is what I look at, and I know I’m taking good care of my patient. They don’t always understand all the whirligigs and machines in the background that get them paid and make sure the patient gets their foot exam, and all those things that happen because of software. They just need to understand the potential of improving the quality of the data.

Rick A. Moore, Ph.D (25:13):
I just, I’ll build on Charlie’s point. I think it’s an exciting time for the industry. I think we’re solving some of the accessibility problems. I think the fact that data has been siloed for so long, these problems have been buried for so long that it’s just taken us a little bit longer than other industries to figure out, hey, like I said earlier, we’ve looked in the mirror, we’ve seen the enemy and it’s been us because there are a lot of things that we could be doing in our own systems to solve these problems more methodically as opposed to retrospectively where we do it at the front. And I think technologies now are starting to catch up, AI included, where we’re going to start to see the burden reduction that we’ve been asking for for a long time. These technology firms are going to start looking at this data in a way, to Charlie’s point, they’re going to start scoring themselves or we’re going to start scoring them in ways that are going to start shining a light in areas that we couldn’t see before.

(26:02):
So the accessibility issue, I think FHIR and API mandates are going to open the doors. Why? Because patients, all of us, are going to start demanding to be digital like everything else in our lives. How many of us would put up with errors in our finances? My check didn’t get deposited today? You’re going to find out real quick. But in healthcare, we put up with a lot of data quality issues that are impacting our lives daily that nobody says anything about. How many times have your records been mixed up with someone else’s because we don’t have a master patient index for the country? Things like that, that we could be fixing, those are data quality issues.

Ryan Fung (26:36):
And Smile, I mean, that’s what we do. We break down those silos so that we can bring in the data as a single source and master the data, but we still need the data quality. My data is only as good as the data being put in.

Stephanie Broderick (26:50):
So yesterday in the HIE and Interoperability Pre Symposium, Mariann Yeager, who is the president of the Sequoia Project, gave an estimate that data quality in healthcare is at least a decade long problem. I’m going to end with before we go on to questions with, I really hope that that’s not true. I really hope that something like a framework like this, scoring the data sources, looking at data as currency will actually really move the needle and that we can get there much faster. Any questions? Yes.

Audience Speaker 1 (27:28):
Thank you. This was a fabulous talk. Very interesting. I’m old enough, going back to the interfacing in 1987, to remember Larry Weed at Emory in 1970, in a black-and-white video recording of his grand rounds. And one of his conclusions was that data capture was really important, so important that we shouldn’t be asking clinicians to do all of it. I’m just curious about your thoughts on that. I’m seeing Rick smiling and shaking his head.

Rick A. Moore, Ph.D (28:00):
Yeah, this is where I think the technologies are moving so much faster and better. Even last year I was in an environment where ambient technology was running in the background. We did a scenario where patient-provider discussions are happening, and the notes are done by the time the meeting’s over, all done with technology, all accurately done, all structured. So I think we’re at a point where this is actually realistic and doable now. So yes, I think we’re getting there. Others?

Ryan Fung (28:25):
I totally agree. I mean, think about your PCPs today. If they want to order something for a patient, how much paperwork this person has to do and how do they send it in? A lot of the smaller places they still fax in their data to the lab and then wait for a fax coming back with a report. If they’re lucky, they’ll get an email, but nothing that goes directly into their systems. So how do we reduce that burden? Now, on top of that, what if it requires prior authorization? That’s another form that you have to go fill out first. You got to find out if it needs it, then you got to find the form, then you got to fill it out, then you got to submit it, and then you got to wait a few days later, then you’ll get a yay or nay. So how do we reduce the burden on the practitioners and get more timely response?

Charlie Harp (29:06):
I think there’s another issue too. And that issue is, I always ask people this question: who’s responsible for your data? Who’s the steward of your data? Well, it’s not you. You’re not allowed to do it. You’re not qualified to take care of your own data. Rick, what are you talking about? I think that when you look at it, people think, oh, my provider’s going to do it. Your provider has a note. They get six minutes with you when they get to see you. They experience significant time famine, so they’re not going home at night and curating your medical record for accuracy. Your insurance company isn’t doing it. They have millions and millions of members. You’re not allowed to do it because you’re not qualified to do it. And at the end of the day, you don’t have a single medical record.

(29:54):
You have 73 of them. And so even if we solve the data capture problem, and with ambient technology there are ways to capture data, and that’s why I said terminology is an artifact from the 1970s that we still use in healthcare; there are other ways we can capture and leverage data that we need to start thinking about. But we also need to think about how we take this data that we’re accumulating and put it into a construct that’s aware of time, because right now we just move data forward into a big homunculus of episodic data. And then, how do we distill that into what’s relevant right now? Because that’s what’s valuable to the people who are interacting with the patient. What’s going on with you right now? Yes, there are 73 different codes in your history that say you’ve experienced variations of diabetes, but what’s going on with you right now? Until we do that, flooding everybody with a bunch of historical data does not make the problem easier. It just makes them turn around and read the note.

Rick A. Moore, Ph.D (30:59):
I’ll agree with that construct. And I’ll add an analogy. I know you’re good at analogies. So think about all of you who have credit cards. How many times do you go look at your credit bureau and check the accuracy of the credit rating that you have? Anybody? Right? Similarly, I think we’re going to get there with data for our own healthcare at some point, right? Open up the accessibility of getting all of our 73 records to a point where we can actually start seeing what’s being written about us. So we should become the curators of our own data. We should be looking at that more intently, but we don’t get it. How many of you get access to your records? All 73 of them.

Charlie Harp (31:33):
What’s ironic is we’re not qualified to manage our clinical data, but where does a lot of it come from? That piece of paper we fill out by hand when we get in the doctor’s office.

Stephanie Broderick (31:44):
Over and over and over again.

Rick A. Moore, Ph.D (31:46):
Exactly.

Stephanie Broderick (31:49):
All right. Any other questions?

Audience Speaker 2 (31:55):
What are some of your thoughts around, I think the idea here is consolidating our own patient data into a singular record. Some may argue that consolidating all that data creates a lot of security risks: cybersecurity, PHI, data leaks, things like that. What are some thoughts on how to protect that? So if we move towards a future of consolidating our own patient data, what are ways we can secure that data and also prevent malicious use of it, from things like payers and providers taking advantage of it?

Rick A. Moore, Ph.D (32:29):
Great question. So how many of you have an application called Mint or have heard of the application called Mint? It’s similar to what you’re describing, for your finances. You can go to all your different financial institutions, connect the dots, same kind of security issues, right? No different. I mean, your financial data is probably just as important to you as your clinical data, maybe even more so. I still think that patients being empowered to do some of the things that Charlie’s been talking about here is a very important part of the puzzle. When we get access to our own information, we know ourselves better than anybody. But to Charlie’s point, we haven’t been empowered yet, I don’t think, as consumers to do what you’re describing. So I don’t think that the fear, and this is where we get into this whole thing about master patient identification, there’s this fear that someone’s going to find information out about me because there’s an ID at the government level, all that kind of stuff. We have to get over that. We, all of us, have to get over that because otherwise this problem’s going to continue to exacerbate itself. I think the technologies are there, but I agree with you. We’re going to have to figure out a way to keep the bad actors from being bad.

Charlie Harp (33:31):
And I think in some ways privacy is a myth. I mean, I’m serious. I’m convinced. How many of you have been sitting in your house talking about something and all of a sudden an ad for it appears on your phone? I think that the real question is how do we mitigate what happens with the data? And the government does come out and say, you can’t do this and you can’t do this. But there are certain data sharing agreements and things that happen in back rooms, and I’m sure that my data has been shared a hundred times over without my knowledge. And I also think that, to Rick’s point about not having a national identifier, are there risks with having a national patient identifier? There are privacy risks, there are potential legal risks, but at the same time, I don’t know that there’s any more risk than us moving all this data around erroneously because we don’t know who belongs to what. If I were being a smart aleck, I would say blockchain is the answer, but I’m not going to say that. I think that what we need to do, well, first of all, I’m not too worried about my data today because I know what the quality is.

(34:41):
Ask me that question when the data quality is good, maybe I’ll be more concerned. But I think we need to continue to figure out what is PHI? And I think we also need to figure out models for separating out and identifying information from clinical data records. And maybe blockchain is the answer, I don’t know.

Rick A. Moore, Ph.D (34:57):
But you’re smart as a patient to be concerned about, if I’m going to be connecting to an app, what am I giving away? So be very careful with that.

Charlie Harp (35:04):
Well, but here’s the thing, and we asked this question years ago. If you put your data into a data repository and they have clinical decision support and you think about an AI or a platform as if it were an individual, do you consent to giving that platform access to your data? And if you’re in a situation where you have an illness and there’s a clinical trial that comes out that could save your life, do you consent to being asked if you’d like to be a subject in a clinical trial? So it’s one of those things where, and I think it’s different for different individuals that have different concerns about what their health record says about them and who might find out about it. I think if we take certain precautions to make sure that people don’t get exploited or excluded because they have a certain health condition or a certain thing going on, you don’t want to throw out something good to get rid of something that you perceive as bad, I guess. And it’s tricky. I mean, if I had the answer, I wouldn’t be sitting with these jokers.

(36:06):
The morning talk show circuit.

Ryan Fung (36:09):
I’d be in St. Lucia.

Stephanie Broderick (36:12):
All right, I think we’ve got time for one more question.

Audience Speaker 3 (36:20):
Hi there. My name is Adi. What is an entity or an organization that’s the current gold standard, that’s doing data quality right?

Charlie Harp (36:29):
Clinical Architecture.

Ryan Fung (36:33):
Smile Digital Health.

Rick A. Moore, Ph.D (36:36):
Wait, do we pay you for that?

Audience Speaker 3 (36:37):
No, from a provider standpoint or people that are actually consuming and using the data?

Charlie Harp (36:44):
I mean, I think the problem is it’s endemic. I don’t know. I think the system, like Rick said, we have seen the enemy and the enemy is us. I don’t know that it’s a question of whether or not an organization has decided we’re going to have bad quality. I think the issue is the systems we have in place, and I don’t mean to besmirch anybody out there, but if you look at the systems we have in place today, none of them really deal with time. It’s all episodic because healthcare is historically a transactional environment. You come in, you tell me what’s wrong with you, I treat you, I send you on your way. I store that episode. And if you think about it, episodic data is like one of those old flip books where it’s not a video, it’s a bunch of pictures.

(37:37):
And when I flip through the book, I see what happened at each episode. That is at the core of the data quality issue. That, and the fact that you have people that are being asked to do detailed documentation and they have absolutely no time. So a lot of what they do is say, what did I say last time? Yeah, you still have those problems, and they just carry it forward. And then they write the details in their note, which is not easily accessible by a compute platform. I don’t know that there’s anybody that’s doing anything magical or special. I know that there are a lot of organizations out there that are trying to improve their quality, but we’re dealing with uncalibrated uncertainty. And frankly, that’s why I wanted to build a framework, because we can’t address the problem if we don’t even know what the nature of the problem is or the scope of the problem is.
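Charlie’s point that you can’t address a problem whose scope you haven’t measured can be illustrated with a minimal scoring pass over patient records. The checks and field names below are invented for illustration; they are not the actual PIQI specification, just a sketch of the idea of turning record-level checks into a reportable quality number:

```python
# Hypothetical data quality score: the fraction of simple checks that pass
# across a set of patient records. Checks and fields are illustrative only.
def completeness_checks(record: dict) -> dict[str, bool]:
    """Run a few example checks against one patient record."""
    return {
        "has_birth_date": bool(record.get("birth_date")),
        "has_coded_problem": bool(record.get("problem_codes")),
        "med_has_dose": all("dose" in m for m in record.get("medications", [])),
    }

def quality_score(records: list[dict]) -> float:
    """Average pass rate across all checks and all records (0.0 to 1.0)."""
    results = [ok for r in records for ok in completeness_checks(r).values()]
    return sum(results) / len(results) if results else 0.0
```

A score like this gives a data owner and a data user the same yardstick, which is the kind of shared, standardized reporting the panel is advocating.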

Rick A. Moore, Ph.D (38:29):
Yeah, I can echo what Charlie’s talking about. When I was at NCQA and we did the pilot around this whole construct of aggregation and how well that was going in terms of HIEs and the data quality conundrum, basically the HIEs are saying, well, we’re getting it passed to us, so we’re just only passing on what we’re getting. So they’re passing the buck. So if you go back to the source, the EHRs, then you start to say, well, why aren’t the EHRs designed in such a way to capture these validation issues? Input validation, right? Because when we talk about a thousand reading on an HbA1c or what have you, and it’s out of tolerance, how is that possible? How are the EHR vendors, and again, I’m not trying to be disparaging, but how is it possible that we don’t know to put these input validations in the front end? How is that possible after I’ve been in this business for 30 years? We shouldn’t be having these problems. So I think what we’re talking about here is shining a light on those data quality issues at the source, because I’m sure if we did that for providers, they would understand better the downstream impacts that they’re having. And if we incentivize it correctly, they might actually put that data in better, and the vendors may design for that purpose, not just coding for billing, because that’s where I think we’re at today. Fair?
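The front-end input validation Rick describes, rejecting an out-of-tolerance HbA1c at capture time rather than letting it flow downstream, can be sketched roughly as follows. The plausibility ranges and field names here are hypothetical illustrations, not clinical reference ranges or any vendor’s actual rules:

```python
# Sketch of front-end input validation for captured lab values.
# Ranges are illustrative plausibility bounds, not clinical reference ranges.
PLAUSIBLE_RANGES = {
    "hba1c_percent": (2.0, 20.0),    # a reading of 1000 should never be stored
    "systolic_bp_mmhg": (50.0, 300.0),
}

def validate_result(field: str, value: float) -> list[str]:
    """Return a list of validation issues; an empty list means plausible."""
    issues = []
    low, high = PLAUSIBLE_RANGES.get(field, (None, None))
    if low is None:
        issues.append(f"no plausibility range defined for '{field}'")
    elif not (low <= value <= high):
        issues.append(f"{field}={value} outside plausible range [{low}, {high}]")
    return issues
```

Catching the thousand-point HbA1c at the point of entry, with a prompt back to the person typing it, is far cheaper than discovering it later in an aggregated record.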

Ryan Fung (39:38):
Fair. Took the words right out of my mouth.

Charlie Harp (39:41):
You’re totally wrong. I promised there’d be a fight, so we’ve got to…

Rick A. Moore, Ph.D (39:45):
Wrong. Yeah, we’re supposed to arm wrestle up here. All right.

Stephanie Broderick (39:47):
All right. We are at time. You guys have been a great panel. Thank you so much. And you guys have been a fantastic audience.

Ryan Fung (39:54):
Thank you.

Charlie Harp (39:55):
Thanks, you guys. Thanks for coming.