The Informonster Podcast
Episode 15: CommonWell Health Alliance and the Mission to Bring People and Data Together
January 12, 2021
On this episode of the Informonster Podcast, Charlie Harp meets with CommonWell Health Alliance Executive Director Paul Wilder and Director of Product Liz Buckle to discuss the organization’s mission: providing universal access to data at the point of care, for the right clinician and the right patient. While the goal is simple, getting there is incredibly complex. Charlie explains why Clinical Architecture decided to join the Alliance, and Paul and Liz talk about how they’ve been able to connect and share data through their growing nationwide network with the help of their members and initiatives like the CommonWell Connector™ program. The group also shares insights on how federal regulations have affected the push toward universal interoperability and the importance of ensuring data quality along the way.
Hi, I’m Charlie Harp, and this is the Informonster Podcast. Today, I’m going to be talking to the folks from CommonWell Health Alliance. With me I have Paul Wilder, the Executive Director of CommonWell, and Liz Buckle, the Director of Product at CommonWell. If each of you could take a few minutes to introduce yourselves and tell us a little bit about yourselves; Paul, you first.
Thank you very much, Charlie. It’s a pleasure to be here. Again, it’s Paul Wilder; I’ve been the Executive Director here at the CommonWell Health Alliance for about a year, approaching the anniversary of my start next week, actually. My previous life was in health information technology, primarily in New York State, working with the health information system that we have over there, which is a network of networks across the regions and the state. Most of my career before that was in healthcare as well, primarily radiology and cardiology informatics, with a brief respite out of healthcare for about two years, during which I immediately realized I had to come back. That’s part of my usual story of how I got to where I am, which, if I have one minute, I can explain, Charlie, if that’s okay with you.
So I was outside of healthcare for a brief period of my life, and my first daughter was born; she’s now 11 years old. And everything was fine. She was about five weeks early, so a slight risk due to many, many factors, but we got home. Everything’s good. She had a little jaundice when she was born. Third day of life, we wind up at the pediatrician, you know, because that’s what you do when you have a newborn, and the jaundice has returned. We don’t notice this because it’s a gradual thing going on day by day, and I’m not a clinician, I’m a health IT person, right? So he says, “Hey, little bit of jaundice. Did you have that before when you were at the hospital?” We’re like, “Yeah, and it went down,” and he goes, “Ooh, you know, really that’s not supposed to ever go that direction. Once jaundice abates for a newborn, it doesn’t come back unless there’s a problem. So you need to go back to the hospital.” So you have this nice little bundle of joy, your first one, three days old, and this guy says, “Go to the hospital again, now.” Now, we didn’t go back to our original hospital. We went back to the one that’s close to our house, which was walking distance. We live in an urban setting, so with our stroller we walk down there, and they ask us a bazillion questions to which we have no answers. And being a person that was formerly in healthcare and did some informatics and exchange stuff, I say, “Why don’t you go check the system, or at least call the people right across there? The hospital she was born in is right across the Hudson River, three miles away; give them a call and get the records.” They can’t get anything. So I call, and they say, “Your records are between floors.” Literally, the printed copies of whatever the heck it was were on a cart in an elevator somewhere. And no one knew which cart it was on, which elevator, or what floor it was on, which is a travesty.
What happens then is a bazillion extra tests, and tens of pages of additional lab results, because you’ve got a three-day-old newborn, which they don’t like to make mistakes on. I get to hear the screams of my daughter getting a spinal tap on her third day of life, which is not a fun experience, and she was put under UV lights for about a week. Perfectly fine, healthy child, no big deal. But the inefficiency of that process, the trauma to the parents, the potential trauma to the patient (fortunately, a patient that won’t remember it, but you get the idea) was horrendous. It was at that moment I said I’ve got to get back to healthcare, and someone’s got to fix this interop stuff, because there’s a big gap between what we expect as patients, consumers, and clinicians, and the reality of where we are, even as we’re moving into electronic health records and trying to operate them better. So here I am doing that on a national scale. You know, I started here, like I said, a year ago, and I’m thrilled to be able to do it at this scale, with this kind of alliance, with this mission-based focus for patients, providers, systems, payers, and all the members of the Alliance that we get to meet with every day. Thank you.
Paul, thanks for sharing that story. I think a lot of us have some kind of intersection with healthcare, especially as we get older and our parents get older and we go through things like, you know, having children. We intersect with the inefficiency that’s kind of baked into the way healthcare has operated for decades. It’s interesting that that is something that kind of triggered you to go back and try to focus on fixing the problem. Thank you for sharing that. Liz?
Hi! So, I started out working with one of CommonWell’s member companies, actually one of the founding members, and began by working on their interface implementations, kind of the old-school HL7 Version 2 world, trying to get out of the point-to-point space and move towards a more centralized model, which really led us down the path of CommonWell. I moved into product management, overseeing all of their interoperability services and solutions. And I have now been with the Alliance for a little over a year, working with Paul on all of the Alliance’s key initiatives and trying to figure out how we can solve this big data quality burden.
Great. Thanks, Liz. It’s funny. When Clinical Architecture joined CommonWell, what struck me is that CommonWell is made up of these folks that actually have the need to share and move data. And to me, it’s kind of a no-brainer. People do things more effectively when they do them together, instead of, you know, falling into our traditional role in healthcare, where everybody toils away in their individual silos. Coming together, making agreements, and making things work better. But people have asked us, “Clinical Architecture, why did you join CommonWell? You’re not a health system. You are not an EHR vendor.” But to me, it’s one of those things where, when I see an organization doing something that we need to do, I want to do what I can, and I want to do our part, because we kind of live in that interoperability and data quality space; that’s what we do. So for me, it’s kind of a no-brainer to be part of something like CommonWell. But one of the things I would ask you guys, Paul, if you would, is to tell us what you see as the vision and the mission for CommonWell, for those listening who may not be familiar with it.
Yeah, thank you for that. The mission is simple; the way to get there is often complex, right? The mission is universal access at the point of care, for a clinician, for the patient standing in front of them, or sitting in front of them, or lying down in front of them. All the available information that is collected throughout the entire ecosystem, all the places where I’ve been seen and had services performed on me, had maintenance, et cetera, available at that point of care. It has launched some secondary missions, which is to use the same data and make it available in other settings, such as for claims adjudication for payers, for operations use cases like quality measurement, and for coordination of care, and things like that. But the core mission started out as: Make sure the right information is available at the right time for the right condition about the right patient. And to do that, two things have to happen. One, you need scale, right? You need to have an environment of participating providers and their vendors willing, ready, and able to meet the mark. And I believe we have that. We have 110 million people registered to allow the exchange of their records through our framework today. The second part of that scale is the patient part, right? So I can get the providers engaged, but I also need the patients, and that 110 million is the proof point of that; and growing. It’s actually growing a good five to 10% every month or quarter, depending upon what’s going on. But it’s doubling basically every year. We are at that critical mass of connectivity to get to that mission. The part we’re really attacking now, and the reason why I’m really excited to do this podcast and talk to others about it, is that now we’re moving to, “Okay, we have quantity. Do we have quality?” And the difference between those two words makes a big difference. And you, of course, given what your company does, are very well aware of that.
And I think it’s important to realize that we’re not saying we have bad quality. I think that’s one of the things people assume when you start adding a measurement rubric or some sort of way of evaluating where you are. It’s knowing where we are to know, one, is it worth improving? How far are we? Maybe we’re perfect, our data is a hundred percent, and anything we do to try to improve it is infinite cost and effort and not worth it. Or, most likely, where are the areas we could improve, as an Alliance, as a group of members, and potentially in how the providers themselves are using those products, to make sure the quality of the data exchanged throughout the system is at the highest level possible for providers to use and rely on?
One of the things that I encounter all the time, and it’s one of the things that amazes me to this day, is that we live in a time when, on this, you know, glass rectangle I keep in my pocket, I can check the weather, and I can navigate from point to point and have it route me around, you know, accidents. We have all this information; we’ve become so accustomed to having technology assist us and be aware of things. So when people intersect with healthcare, the contrast just makes it that much worse. It feels so broken that if I go to see one doctor, the other doctor doesn’t have that information, or if I go somewhere several times, they don’t incorporate that information into what’s going on today. A lot of people, when you talk to them, you know, the consumers, the people that aren’t in healthcare IT, really struggle with that problem. And I think when it comes to the quality, you said it really well: quality and quantity are not the same thing. And I think one of the things we struggle with in healthcare is this thing that I call “uncalibrated uncertainty,” where sometimes we have data that isn’t really meant to do what we’re trying to do with it. It’s not that the data’s bad; we’re trying to use it for an unintended purpose. It conflates things, so that when we try to apply technology like machine learning and analytics, it adds a little extra bit of confusion. So a big part of solving that problem is us recognizing it, because if you value something, you measure it, right? You look at how you can work together to take actions to make it better, instead of, you know, trying to do it in a silo, because if we’re truly going to share data across all those silos, no single silo can fix it. Right?
Yeah. And I think, interestingly, the idea of quantity in a treatment scenario where humans are reading the documents or reading the exchange actually does have a bit of quality embedded in it, because a lot of it is, “Do I have enough data points available to me that the likelihood that a key piece of information is missing is low?” Right? So if I’ve seen five primary care doctors in my 20 years of being an adult in medicine (well, 25 or 27 for me, whatever it is, and I’ve actually probably seen about eight personally), as long as one of them gets the right piece of data in the right place, that I’m either hypertensive, or have diabetes, or whatever, that data point is likely to get through. But what we’re looking for now is, now that we have so much quantity, we need better tools to go through the data, right? We need suggestion algorithms that offer, not necessarily a diagnosis, but suggestions: “Hey, you might want to be aware that in the fourth document that you haven’t opened yet, it said that this person has a history of near diabetes, right? And today the record you have in front of you doesn’t show that, but I thought you might want to go look at that,” and the computer can help make that suggestion. The problem with bad data, or bad-quality data and documents, is that the computer can’t do that, right? If it’s not codified the right way, it can’t guess; it can’t make a conjecture and offer that suggestion to say, “You human, you MD, DO, the person that’s caring for this patient, here’s some advice you might want to know about.” The quantity of stuff coming through now is so large that they can’t read it all. So now that we have the quantity, we have to ask, “Okay, what are the tools that are needed to make the data more actionable?” And if the quality isn’t there, it’s really hard to build those tools. That’s why we’re here now, right? Now we have the data probably filled in enough.
Let’s make sure it bubbles up to the top, and the right workflows can be put in place to help provide better care for patients and for the population as a whole.
Absolutely right. As you look at it, one of the challenges that we have in healthcare is that we’ve got a shrinking provider population. Providers, I think, experience time famine; there’s not enough time. So while the quantity is good, they don’t have the time to sift through it all, because 80% of what we capture about patients is not intended for software to understand; it’s intended for a human to understand. But the human doesn’t have the time to go through the 1,800 pages of healthcare records that get accumulated for a single patient. So you’re absolutely right. We need to find ways going forward to capture the data in a way that the computer can understand, because the computer manufactures time through computational effectiveness. If we can put it in a format the computer can understand and have it do some of that heavy lifting, that’s good. If we can make sure that the data we’re getting there is the right data and not the wrong data, that obviously helps too. So with the work that CommonWell has been doing, Liz, what are some of the things that have been accomplished, that people should know about, since you guys started?
It’s pretty incredible to see just how far CommonWell has come since its inception, you know, a little over seven years ago. We now are exchanging data with over 17,000 provider organizations; we’re in all 50 states, DC, and Puerto Rico. Paul mentioned a little bit earlier that just about a month ago we hit over 100 million unique individuals enrolled in our Record Locator Service, which is huge, right? That’s, you know, one third of the US population. And I wanted to talk just briefly, so that folks who are listening can get a better understanding of what our network is, what it looks like, and what the services are that we offer. There are three different ways that we connect and exchange data. We obviously have our core membership, our service adopters that adopt and deploy CommonWell services to their clients. We also connect directly to Carequality organizations and other external organizations through collaboration agreements. And then finally, we have this cool program that is relatively new, called the CommonWell Connectors. Those Connectors are interoperability intermediaries; they’re certified service adopters on our network, but they’re able to connect smaller organizations that may not have the technical capacity, or time, or ability to invest in connecting directly to the network. We have three of those certified Connectors actively connecting now. So it’s been a pretty awesome period of time over the past several years. I’ve watched it grow now for many years, and we have, I think, about 80 members across the healthcare continuum. From a forward-looking perspective, I think, you know, Paul brings a really fresh take, and we’re continually brainstorming back and forth a variety of new use cases that we can bring to the Alliance. We’ve already brought three or four through the approvals process this year.
We’re continuing to work on what’s next, what’s at the forefront of what we can be doing, especially when we look at having the transactional volume, having the quantity. What can we use? How can we use that to further our mission of allowing this data to be wherever it needs to be, when it needs to be, in the right hands?
It’s been pretty exciting to see the progress you guys have made over the last seven years. A lot of your members are also clients or associates of Clinical Architecture, and just as an aside, they all seem to be pretty excited about what you guys are doing. They really see where you’re going and look forward to the collaboration, the innovation, and the types of things that can happen when a bunch of smart people get together and work hard to solve a problem that plagues all of them; that’s my anecdotal feedback on that. So one of the things that obviously impacts how we do these things in healthcare is health policy, regulations, and things that are coming out of the federal government. When you think about those things, like, you know, TEFCA and the information blocking rules, what’s your perspective on that and how it’s going to impact us going forward?
So I’ll take that. The policy work going on right now, or the policies in place, are important to make sure there’s a level playing field and that access is as easy as possible, right? Especially when we start talking about making sure patients have safe, secure, and easy access to their data. And I think that’s where information blocking is really trying to hit the mark, which is to say, “We have it available, but it needs to be a little bit easier for it to be impactful for myself or anybody else to access the data as a consumer.” I will say, though, that from an effort perspective, CommonWell and our partners at Carequality and others have really been ahead of the game here. You know, the regulations come out, the information blocking rules and others, when we’ve already built a very robust system to do all this stuff, right? So we are a hundred percent in favor of making sure that information blocking never occurs. And you can make that a regulation, and they did. I would say the organizations and vendors in our network would say that was unnecessary, because they were already on that mission and on that path. But that being said, I do think at the edge it is important to have those rules of the road. More importantly, what it did is actually set a different target for us. We were doing something we call “query-based document exchange,” and we often abbreviate that QBDE, because everybody abbreviates something. That’s a document-based form of exchange, where you have, you know, an encounter or visit’s worth of data warehoused in a thing that passes around, like a fax with a whole bunch of data in it. And it’s good; it actually works relatively well. But the new target is set towards FHIR. You know, that’s a different standard for getting access to more discrete data, coded data like your procedures, and diagnoses, and your labs, and your meds, versus a full document that houses all that stuff.
So we’re actually excited to have that almost forcing function to accelerate our work on that side of the world. And we are working on that right now. So all the things that Liz talked about with the record locator: now we’re not a record locator just for documents, but we’re working on being a record locator for individual discrete data elements that will come out of a FHIR resource. So we’re excited about the rule, because it’s directly in line with what we want to do, and it now aligns the industry towards the next target, which we expect to lead on from here forward as well.
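To make the distinction Paul is drawing a bit more concrete, here is a minimal sketch of what pulling discrete, coded data out of a FHIR payload can look like, as opposed to handing a clinician a whole document to read. The resource shapes loosely follow FHIR conventions, but the bundle content, codes, and values below are invented for illustration; this is not CommonWell’s actual API or record locator.

```python
import json

# A tiny, hand-made FHIR Bundle standing in for what a discrete query might
# return. The resource shapes follow FHIR, but the content is invented.
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Condition",
                  "code": {"coding": [{"system": "http://snomed.info/sct",
                                       "code": "38341003",
                                       "display": "Hypertension"}]}}},
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org",
                                       "code": "4548-4",
                                       "display": "Hemoglobin A1c"}]},
                  "valueQuantity": {"value": 6.1, "unit": "%"}}}
  ]
}
"""

def discrete_codes(bundle):
    """Pull (resourceType, code, display) triples out of a Bundle --
    the kind of coded, computable data a document wraps up in prose."""
    out = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        for coding in res.get("code", {}).get("coding", []):
            out.append((res.get("resourceType"),
                        coding.get("code"),
                        coding.get("display")))
    return out

for rtype, code, display in discrete_codes(json.loads(bundle_json)):
    print(rtype, code, display)
```

Because each fact arrives as a coded resource rather than prose buried in a document, software can filter, compare, and surface individual data points, which is what makes the kind of suggestion tooling discussed earlier feasible.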
Having that makes a lot of sense. I think if you go back and look at the different regulations over the last 20 years, I don’t know that they magically fix anything, because a regulation can’t magically fix things, but it does act as a motivator and a driver for people to put their heads together and figure out how they can adhere to, hopefully, not just the letter of the law, but the spirit of the law. And I think you’re right: a lot of the stuff that we’ve done historically is about sharing documents, and the spirit of interoperability is sharing more than just a physical piece of paper. Where you guys are going with FHIR, sharing data that is portable and computable, could lead to some pretty amazing things happening in healthcare, really improving our ability to, you know, reduce the number of stories like the one you told about your daughter at the beginning of this podcast. I also think what’s interesting is that when you create these connections, these thoroughfares to share information, today they might be sharing “X,” but in the future, once those connections are established, and you have people that are forward-thinking and innovating new ways to accomplish better things, just having those pipelines increases the likelihood that you’ll be able to use them for something interesting, something new, as you evolve over time. So I think it’s very exciting, and I do think that the fact that the federal government focuses on standardization and interoperability, and sees that there’s a problem, is good, because it does help us move things in a particular direction.
Yes, it surely creates a level of focus. For the industry as a whole, I’m not going to say there’s profit made from things not being standardized, but the reality is that it’s efficient for each organization to work with the thing that it likes best, right? And that doesn’t necessarily mean that’s the thing that works for the community as a whole; it’s just the thing you work best on. By focusing everyone, that now becomes the thing that is easiest for you and everybody else to work with, right? And now it becomes useful, both internally and externally: for the internal parts of the products, where you interoperate amongst business lines and amongst the products you create, as well as for how you go between products and industries. So, you know, it’s a win-win when we start thinking about a common target, and the regulations did do that, right? They set a standard: “You may be talking this, and you may be talking that, but we decided this is the one we’re going to talk from here on. Let’s focus, and not create an environment of non-homogeneity that we can’t get out of.” So we think that’s exciting. Does anybody love regulation? No. It’s not the most exciting thing in the world, but it does have a purpose when there’s something broken at the edge, right? The decisions are easier to make, and you know what you’re targeting. If you know the other person, the other exchange entity on the other side, is going to be hitting the same thing, you’re not wasting your effort; there’s more focus.
I think you’re right. When you think about things like interoperability and sharing data, I always think it’s interesting that when you’re receiving data, you know, sometimes there are trust issues, but you care about the data you’re receiving. When data’s going out, that’s you doing work, and some people might feel like, “Hey, I worked hard to create that data. Why do I have to share it?” Or, “Me giving that data introduces some kind of risk,” and people start to get protective. And so sometimes you need something that forces you to prioritize that, because even if you might want to do it, with all the stuff you have going on, it’s very easy to deprioritize something that benefits the future. Does that make sense? Or do you think I’m off base there?
I don’t think so at all. I think that that’s well-stated and accurate.
The other thing, too, is, you know, I always make a joke about standards, where I say that standards are kind of like the Pirate’s Code in Pirates of the Caribbean: they’re more like guidelines. The other nice thing about organizations like CommonWell is it’s almost like, “Yeah, there’s a standard, but here’s how we choose to interpret the standard.” So it’s almost like there’s a standard on the standard, to help make sure that people are doing things in a way that’s compatible, because, you know, we’ve all dealt with HL7 in the past, and you can follow the HL7 2.x standard and still have something that you can’t consume because it’s not your flavor of the standard. And that’s another thing that I like about CommonWell: it’s people coming together and saying, “This is how we’re going to do this.”
I think that’s right. I think I was in a room, you know, going back about 10 or 12 years ago, right at the beginning of Meaningful Use, or maybe in the middle of the first phases, Meaningful Use Stage 1, working on getting to Stage 2. There were a bunch of vendors in the room, like a precursor to the EHRA, trying to work on interoperability and how we could work with HIEs. And it was a great conversation, but one vendor was very honest, and I learned a lot from what he described. He said, “Look, we are perfectly happy adopting a standard, and don’t mind if it’s forced upon us. The problem is I’m not moving unless everybody else moves, right? So until that happens, whether it’s efficient or not, I am going to, unfortunately for the efficiency of the ecosystem as a whole, make money on the professional services that bridge the gap between the way I work and the way someone else works. What I would prefer is that everybody in this room agrees to a standard, and if not, force it on us so that everything’s more efficient.” And then he said, and this is the interesting part from a commercial perspective that I think is important for vendors to think about: “It will likely make me, my company, more money when I don’t have that revenue, because the amount of time I’ll spend on the help desk side and the break-fix side, having to work with interfaces that change, break, and don’t interoperate anymore, all those downstream costs should decrease. So I’m happy to move, but I can’t do it unless other people in this room do it, because then I don’t get the advantage of having gone to that standard in the first place.” And that’s definitely where [inaudible] said, it’s FHIR, USCDI, Version 3 of FHIR; run. And I think that changes a lot of the variability. It makes it easier and faster to be able to interoperate, to iterate from there, and brings down costs overall.
No, I think that’s a great point. I think it’s one of those things that… You know, I’ve been in healthcare for 30 years, developing systems in all different segments, and it’s always astounded me how often we all solve the same exact problem in our own little silo. And, you know, at Clinical Architecture, one of the things we’ve tried to do is put some tooling and technology in place so that, you know, we can accumulate solutions and solve problems in a standard way, so that other people can leverage that and not feel like they have to reinvent the wheel. Because that allows for the uplift of healthcare. It allows people to focus on specific problems that are directly in their wheelhouse and not have to all go out and solve the same problem. But you’re right. We’ve kind of been stuck in that rut, and sometimes you’re afraid to… I mean, in some ways I would argue that what you just talked about, it’s not possible unless everybody does it. You all have to come to agreement for anybody to step away from the old way of doing it, or the person who steps away is liable to fail for the very reasons you stated.
That’s right. Yeah. It’s hard to adopt a standard. It’s actually a first mover’s disadvantage, because you’re the one that has to make it work in practice, and that’s not normally rewarded. So thank you.
So, Paul, one of the things you talked about earlier is you kind of talked about the shift from quantity to quality, and I was wondering if you or Liz could talk about the CommonWell Data Quality Project.
Yeah, I can take that one. So we have been working over the course of the past year to put some smart people together in a room and do something similar to what Paul’s talking about, right? How do we solve for something that, you know, everyone is addressing in their own unique way, by no fault of their own? We just go in the direction that makes the most sense at the time. And so we did that. We brought some, you know, smart people together with experience in the data quality area, and we came up with this use case that we’re calling the CommonWell Data Quality Framework; it’s a collaborative effort. We worked with your team, Charlie, a few members of your team who’ve been excellent to work with, the VA, an HIE system, and MEDITECH, among other members within the Alliance, to figure out how to mitigate the challenges associated with incomplete or inconsistent (or, I guess, what we would call poor) clinical data, but really to define that. What does that actually mean? And to figure out a way to incrementally improve the quality of data on the network. So we came up with this framework, and we framed it around four key pillars: improving things like miscoded data, data that’s not in the right place, and syntax violations. So, for example, we want the ability to evaluate conformance to required fields: required codes, required code sets versus the use of local codes. Bringing together that technical component, but also layering on top of it the business aspect. Why is this clinically important? Why should a provider care? Why should a hospital administrator care? And we talked about documents and the usefulness of documents moving to FHIR; really, whether it’s documents or FHIR resources doesn’t matter when you’re thinking about quality.
It’s all about how the data gets into the system, how it gets wrapped up, how it gets transformed, how it gets ingested; you know, extra bold emphasis on that last step. And it’s a problem that has to be addressed either way. As we mentioned at the top, the Alliance has definitely been focused on ramping up, on broad network availability, and on getting to that critical mass, and now that we feel like we’re getting to that point, it is time, I think, to start addressing, you know, what’s inside the envelope. And so this framework is a bit of a game changer, in my opinion. It jumps right into evaluating production data. I’ve worked with test systems in the past, and it’s easy for someone like me, or someone like Paul, or someone like you, Charlie, to go in and make this beautiful record with everything in the right place, using all the right codes and code sets. But in reality, that’s not what’s happening. And we know that anecdotally, and we know that through our own experiences. And so we wanted to skip that step altogether and move right to evaluating what’s happening in the real world, analyzing it against our four key pillars, and digging into what’s going on through a clinical lens and through a business lens. And so what we’ve done is presented an incremental, phased approach to doing this. I think Paul touched on this earlier: we kind of don’t know what we don’t know. And so the first step, and really the only step that we intend to take until we can prove it out and move forward, is allowing the data to speak for itself, moving into an observation and education phase, where we do pull back the curtain and take a look at, you know, a subset of the data that’s moving across our network to figure out what the problem areas are. In my experience, we do make some assumptions around what the most prevalent issues are, and they’re probably right most of the time, right? And that’s great.
And I think that there’s definitely been a lot of other industry initiatives that have worked to move us forward, and we participate in those gladly. But I think when you do pull back the curtain and start evaluating what’s really going on from a production standpoint, it gives more teeth to the output. So how can I drive real meaningful change? Anecdotes are great, but when you have the true output from the analysis of, “This is my health system and 30% of the time my RxNorm code is missing.” That says a lot, you know? That’s really powerful. So it’s a flexible approach. Probably in our top three (I don’t know, Paul can say for sure, right?) key initiatives moving into 2021 for us to drill into exactly what’s going on, and truly understand what’s going on in our network, and allow our members the same opportunity.
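[Editor's note] The kind of production analysis Liz describes, "30% of the time my RxNorm code is missing," boils down to checking each entry for a required standard code and reporting the missing-code rate per source system. Here is a minimal sketch of that idea in Python; the record shape, the `has_rxnorm_code` helper, and the source-system names are illustrative assumptions, not part of the CommonWell Data Quality Framework itself.

```python
# Hypothetical sketch: check medication entries for a standard RxNorm
# coding, then report the missing-code rate per source system.
# The entry structure loosely follows a FHIR CodeableConcept; everything
# else here is illustrative, not CommonWell-specific.
from collections import defaultdict

RXNORM = "http://www.nlm.nih.gov/research/umls/rxnorm"

def has_rxnorm_code(entry):
    """True if the medication entry carries at least one RxNorm coding."""
    codings = entry.get("code", {}).get("coding", [])
    return any(c.get("system") == RXNORM for c in codings)

def missing_code_rates(records):
    """records: iterable of (source_system, entry) pairs.
    Returns {source_system: fraction of entries missing an RxNorm code}."""
    totals, missing = defaultdict(int), defaultdict(int)
    for source, entry in records:
        totals[source] += 1
        if not has_rxnorm_code(entry):
            missing[source] += 1
    return {s: missing[s] / totals[s] for s in totals}
```

For example, seven properly coded entries and three local-code-only entries from one hypothetical hospital would yield a 0.3 missing rate, the "30%" figure from the example above.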
No, very well said, Liz. I think that, you know, the quality issue in healthcare information is kind of what puts the “T” in the “Trusted Exchange Framework” acronym, right? It’s one of those things where, if we go back 20 years, we were still very siloed in healthcare, and I know I’ve used this a lot on this podcast, but we had systems that purged data when people left, so it was very episodic. It was not a longitudinal way of managing data. We had more providers, and the providers seemed to have more time. It was a very different world than the world we live in today. In the world we live in now, and this is a question I have for you guys, in my experience, people are still reluctant to take information that comes from somewhere else and just incorporate it into what they’re doing in their environment. And I think that some of that is this idea that, “Now that I’ve built this longitudinal record for Fred Smith, I don’t want something that I don’t trust to come in and break that data or introduce something that might not be correct.” And I think that part of what you guys are doing around quality can go a long way toward making that less of a concern for people trying to integrate data that came from elsewhere into the data they’ve been curating and managing for that patient in their own system. Do you feel like that reluctance is still out there a little bit?
Yeah, I do. I can completely understand where that reluctance is coming from. You know, when you talk about ingesting data into your system, at least from what I’ve seen, the majority of the time this is happening through some manual process. And I think that manual process exists mostly because there is so much reluctance: “Is this accurate? I don’t want my problem list to be altered. This is my evaluation of my patient, and I don’t want this outsider information coming in to alter what I have as my source of truth.” And I get that. This is one of the key things that our partners at the VA have drilled in on: the data ingestion side is a risk if the data is not good enough. And I think we’re in this interesting place where now we want to move so much to automation and allow for data ingestion to happen without so many clicks. Who wants to go in and do a bunch of clicks in the EHR, right? That’s the opposite of what clinical staff has time for or wants to do, and we hear that over and over and over again. But how do you marry the reluctance to incorporate external data, when in fact it could be significantly beneficial to the clinician at the point of care and beyond, with moving towards automation? And the only answer is resolving the consistency and completeness concerns of the data that we exchange today.
I think that’s right. I also think it’s important to think about the workflow. I was on an “industry task force something,” you know, on a similar topic, and someone was talking about ingesting data. And I said, “Well, just to take a devil’s advocate position: if all the data’s available over an interop connection, why are you ingesting it? It’s available in the EHR with connections to the outside world.” And that devil’s advocate position was not taken well by the clinicians on the call, who said, “But if I relied on it, I have to ingest it.” And I said, “Hold on; I agree. The thing is, we have to be careful on the vendor side with how we build products. There’s a difference between data I’ve seen, data I took action on, and data that’s available.” If we just think of ingest as “Is that data good enough to ingest?”, that does mess up a lot of the flows of “Did I make that decision? Did I make that call, or did someone else make that call?” But if I relied on a piece of data to help me make a decision, I had better keep it, so that in the event that data goes away, for whatever reason, because that server’s turned off, that provider practice is no longer in business, or [inaudible] something doesn’t work, I need to remember that I used some piece of data, right? Maybe in my system, I had no evidence of diabetes. I didn’t have A1C levels, I didn’t have all the things that I would need to know that, and the patient wasn’t quite clear because, for whatever reason, they were unable to explain their H&P the way we’d expect them to. But the data I got from “Bob’s Primary Care Practice” clearly had a history showing diabetes, and I need to understand that for the care of the person in front of me. That data I should probably remember in my own system, so that for the rest of the care I do, I can remember, “Oh yeah, that’s right. I don’t have primary data on this, but I did have primary data from someone else about that. And here’s where I got it from.
That’s who I call if I have any questions.” Right? It’s not just interop. Some of interop is about having the breadcrumbs to know who to work with next. And I think that’s where we have to be careful when we talk about ingest: the path from data quantity through data quality leads to a quality workflow for how you operate on that data.
No, I think that’s a good point. I think one of the things we’re still challenged with today is that a lot of the systems we use in healthcare are geared more towards episodic data than longitudinal data. They also weren’t designed to receive external data, so they don’t have good mechanisms for firewalling things based on provenance, on where the data came from. We end up with this issue where, for us to accept data from elsewhere, we might get 4,000 indications that the patient was taking Coumadin, or we might get 40 different codes that indicate they had diabetes with some comorbidity. And then we run the risk of having this thing that’s really geared to be episodic become a junk drawer of information where it’s almost impossible to find anything. So, number one, I think we need to look at how our systems work so that we can firewall data just like you described, where you don’t have to slam the data in to be able to take advantage of it. And the other thing is, I tend to look at information as something in motion. You have the history of where it’s been, and you have the current state: what you know. So if I look at everything across the history of this thing, I can surmise a current state of what’s going on. “Yeah, they had a broken arm 22 years ago, but it’s not still broken. But they still have diabetes. That’s something that’s persistent.” And then you can also look at information and look to the future, which I think is really exciting.
Or you can look at the trend in the data and say, “Well, they’re not diabetic today, but they’re headed in that direction if we don’t do something.” And so part of what I think we need to do to really take advantage of the ability to share data, and do meaningful things with that data, is rethink how we store and think about data in terms of the history, the current state, and where we’re going, and how we manage things like data problems.
Does that make sense, or –
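[Editor's note] Charlie's history-versus-current-state framing can be sketched as a tiny resolver that treats persistent conditions differently from episodic ones. The condition names, the persistence set, and the one-year episodic window below are illustrative assumptions, not anything from CommonWell or the EHR vendors discussed.

```python
# Illustrative sketch: derive a "current state" from a longitudinal event
# history. Persistent conditions (like diabetes) stay active regardless of
# age; episodic ones (like a broken arm) age out of the current problem
# list. The persistence set and window are assumptions for illustration.
from datetime import date

PERSISTENT = {"diabetes mellitus", "hypertension"}

def current_problems(events, as_of, episodic_window_days=365):
    """events: list of (condition, onset_date) tuples from the full history.
    Returns the set of conditions presumed still active as of `as_of`."""
    active = set()
    for condition, onset in events:
        if condition in PERSISTENT:
            active.add(condition)           # chronic: still current
        elif (as_of - onset).days <= episodic_window_days:
            active.add(condition)           # episodic, but recent enough
    return active
```

With this toy resolver, a 22-year-old fracture drops out of the current state while diabetes persists, mirroring the broken-arm example above.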
No, it does! I absolutely agree. And the example you gave with the broken arm is an example I use all the time. And it’s funny, when you tell these stories and you kind of have to frame the idea, I once sat back and said, “But even that broken arm is important when the person has pre-arthritic conditions later on.” That could be nerve damage. The difference is the patient says, “My arm, like, something’s not right.” With no history, you might say, “Oh, you might be developing arthritis. Do you have other pains around there?” But if they say, “You know, I had a break there five years ago,” or the data says that, well, now we have a different path we’re going to take in terms of investigation, and we think about whether there was nerve damage that wasn’t present at the initial break but might be there now, manifesting itself through other causes that triggered it. So all these data points are important now and potentially later. Still, a lot of them are noise, and that’s why we look to data quality to help us build better tools to do the right things with that data. So we have CommonWell. The mission is the right data for the right patient in front of the right provider at the right time. Now it’s micro-segmentation of that data, right? Actionable pieces, and then it’s operationalizing it with the EHRs and other tools. What do I store? What do I know where to get if I want it later? And what do I ignore completely because it’s not important? It’s really hard. Humans are bad at this. I mean, look at my phone today. I back up the pictures on my phone to three different locations: I have a Google Photos, an Amazon, and a OneDrive location. What’s my problem? When I curate that stuff and delete something from one, it still exists in all the others, right? I’ve created that garbage bin of stuff, and I can’t get out of it, just from taking pictures of my kids.
And we do the same thing with this kind of data. It’s normal for us to accumulate data and things and not know how to purge it all. So this is a normal problem, but we do have to work on it for the benefit of patients and the population in general.
Well, absolutely. And to get to the point where we want to be as an industry and solve some of these high-class problems, we’ve got to get through the basic problems; Liz mentioned this earlier. With interoperability, just as a basic example, if you can’t move the data from point A to point B, you can’t solve any of the other problems relating to the data. Once you move it, if it doesn’t have the right syntax and isn’t packaged and formatted in a way people can understand, you can’t solve the next problem, which might be semantic interoperability. It’s like a pyramid: we have to solve these foundational problems to be able to get to the high-class problems of doing some of the things we’re talking about. I totally agree. And I think that the work CommonWell is doing moves us a long way in that direction. So I really appreciate all the stuff that you guys and your members are doing to help make that kind of progress in healthcare. We have a ways to go to get to the point where we’re like some of these other things we look at on our phones every day, and I think we’ll get there, but to get there, we’re going to need organizations and groups like the CommonWell Health Alliance to make it happen. So I appreciate what you guys do. All right. Before we wrap up today, is there anything you guys would like to share or anything else you’d like to say?
Nope. Nothing from me.
Well, first of all, thank you; I do appreciate the time. Every conversation where we get to educate others about the things we do, we always learn in the process, you know? Being a great teacher is also about being a great student, right? CommonWell is not quite here to solve the problem of interoperability anymore, because I think we have interoperability, at a base level, licked in some respects. It’s now about advancing that to the next layer: how we have actionable data at the right time, and that micro-segmentation, and quality is important. What CommonWell does at its core is match patients across disparate systems so that we can collect that data together. Without that record locator capability that Liz talked about, you’re not going to be able to find that patient’s data. And so what we do at our core is make sure that it’s available. The next step is for all of us to make sure that the data we’re getting is actionable and has the right provenance, as you mentioned, so that we know how to go back to it, ignore it, or store it, but keep it generally available for someone else who might need things we weren’t aware were important at the time. So interoperability is important today, and the data is going to stick around to be important later, too, for new care paths that are yet to be developed. I’m super excited to be here. I think our members are as well, as are our customers, and I greatly appreciate the time you spent with us today.
It’s been my pleasure. So, Paul Wilder, Liz Buckle from CommonWell Health Alliance, thank you so much for your time today. If anybody out there wants to check out the CommonWell Health Alliance, it’s CommonWellAlliance.org, check it out. Be part of the collaborative solution, if you can. Thanks again, everybody, for listening. This is Charlie Harp and this has been the Informonster Podcast.