The Informonster Podcast
Episode 3: Randy Woodward Shares His Experience Regarding Software Augmented Healthcare
November 4, 2019
Charlie welcomes Randy Woodward, RN and Senior Clinical Architect as a guest on the podcast to discuss topics relating to applied automation in healthcare IT, including RPA vs AI, patterns of success and omens of failure, and the value automation brings to the industry. Charlie and Randy also discuss the nature of healthcare, and how that relates to the best perspective one should have on automation. Those who are interested should stay tuned at the end of the episode to hear Randy discuss how Clinical Architecture’s products work relevant to automation.
Sure. Thank you, Charlie. Happy to be here with you on the Informonster podcast. A little bit about my background: I’ve been with Clinical Architecture for a little over three months now, but I’ve got a pretty long history working with this company going back 10 years. I’ve been working in healthcare as an information technology professional for over 20 years. That’s how I started my career: working with hospital systems and provider organizations on quality improvement, population health and analytics to support safety, quality, and data research. Throughout my career, I’ve worked with individual hospitals and research organizations, very large health information exchanges and very large healthcare systems. Most recently, I was director of enterprise solutions architecture and big data for Ascension healthcare. After that, I took a year off, went back to school, earned an RN degree and joined Clinical Architecture recently to help apply both my clinical and technology knowledge to our customers and our products.
Very excited to have you on the team. So, uh, anybody that knows me knows that I’m a big proponent in applied automation, and doing things in healthcare that can take some of the burden off of human beings and the clinicians and the other folks that toil in healthcare sometimes and in the process improve many facets of the way we interact and use data in healthcare. Why don’t you share a little bit about the things that you’d like to focus on today and we’ll take it from there.
Sure. I wanted to talk about robotic process automation, or RPA, and artificial intelligence or, as I prefer to call it, augmented intelligence, because it’s never just organic from the machine. People always teach machines what to do and how to do it. So what is RPA, in simple terms? Robotic process automation, sometimes called RPA or “bots” for short (you know, short for robots), is the use of software to automate the work that a human would normally perform. The types of work that are good candidates for RPA range from simple repetitious tasks to complex workflows that may involve many people and many systems. People have been using software to automate tasks for decades. A simple example of that is the use of macros to perform a series of actions in a spreadsheet. I think most people can relate to that. RPA could be that simple, but it’s evolved to handle more complex workflows, make decisions, branch to other workflows, and invoke other automated systems when needed.
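That spreadsheet-macro idea can be sketched in a few lines of Python. The field names and cleanup steps below are hypothetical, just to show software performing the same repetitive per-row work a person would do by hand:

```python
# A minimal sketch of the "spreadsheet macro" idea: the same repetitive
# per-row cleanup a person would do by hand, performed by software.
# The field names and rules here are illustrative, not from any real system.

def normalize_row(row):
    """Trim whitespace and standardize casing, as a recorded macro might."""
    return {key: value.strip().title() for key, value in row.items()}

def run_bot(rows):
    """Apply the same steps to every row, like replaying a macro."""
    return [normalize_row(row) for row in rows]

rows = [
    {"first_name": "  jane ", "city": "indianapolis"},
    {"first_name": "RANDY",   "city": " carmel "},
]
print(run_bot(rows))  # every row comes out consistently formatted
```

The point is only that the bot never gets bored or skips a row; anything more sophisticated (branching, invoking other systems) builds on this same loop.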
RPA may even recognize when the bot is encountering a condition that it wasn’t designed to handle. Now, artificial intelligence is a complementary technology and a very, very broad term. It can encompass many different computer science technologies and statistical methodologies that implement machine learning or reasoning. These tools range from software algorithms that can respond to multiple variables, recognize signal from noise and make decisions to perform functions. Other forms of AI include machine learning, or ML, which uses statistical methods and models that have been trained by lots of data, usually big data, to recognize patterns and make predictions. Another form of AI you may have heard of is neural networks, or deep learning, used to recognize relationship patterns, such as “who a doctor is likely to refer a patient to” and “which doctors are influencers for other doctors.” There are many other forms of AI, but one thing that’s common to all of AI is that it’s ultimately designed by humans and it’s often trained with datasets.
So if you have good data and a good design, it’s more likely that your AI models will make accurate predictions and bad data will let your AI make really fast bad decisions.
So when you think about the ways you’ve tried to leverage automation and AI in the places that you’ve been, have you seen any particular patterns of success or any omens of failure that you’d like to share?
Yeah, absolutely. First, I’d like to talk a little bit about why healthcare would be interested in using RPA and AI in the first place, just in general, and then get into some specific examples and throw a few use cases that I’m familiar with out there.
So what can RPA and AI do to add value for healthcare? You know, there are many use cases that range from administrative functions such as back office, invoice and claims processing and validation to interoperability and even clinical decision support.
The reasons for healthcare systems to use RPA and AI are the same as those for other industries and include benefits such as increased productivity, better reliability of the processes, timeliness of the functions and data, as well as some continuous improvement, which I think is one of the more interesting benefits. So when I talk about productivity and efficiency, what I mean is that machines are very fast, and software can be used to perform repetitious tasks more quickly than a person or even a whole team of people. In some cases, this decreases the cost for an organization to do that type of work. The productivity, efficiency and scalability of machines also allow organizations to perform greater volumes of the same type of work, which in some cases could actually be a revenue center for an organization. That might be a service that they provide to other healthcare organizations.
Or other business units.
Absolutely. When I’m talking about highly reliable processes, I mean that RPA and AI software that has been professionally engineered, validated and tuned can run continuously without failure. That can be confirmed with standard tools that monitor software or service execution and make sure that the system is up and running, further supported by monitoring error queues with a combination of software and human operators. RPA processes can be expected to send some work to error queues; not every transaction or document or thing that an RPA system encounters is going to meet the parameters of the system.
It’s important that any system like that understands when it has failed, right? And then knows to call for help.
Yeah, absolutely, and those problems can occur because the source data might be formatted poorly, the document may be incorrect or not correctly formatted, or, you know, the file may have been truncated or otherwise cannot meet the expected standard or layout. There can be a number of novel and unique variations in source data and processes that the system itself recognizes as something that should be reviewed by a human. Next, low latency. This is kind of related to the high reliability. Software can run 24-7 instead of human work hours. I’ve never really understood why bank machines don’t run over the weekend, but hey, I’m not in that industry, so I will have to leave that one.
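The error-queue pattern described here can be sketched simply: records the bot can validate are processed, and anything malformed is routed to a queue for a human. The validation rules and field names below are made up for illustration:

```python
# A sketch of the error-queue pattern: transactions the bot can't safely
# handle go to a queue for human review instead of being processed blindly.
# The validation rules and field names are illustrative only.

def is_numeric(text):
    """True for simple decimal strings like '7.2'."""
    return text.replace(".", "", 1).isdigit()

def process_feed(records):
    processed, error_queue = [], []
    for record in records:
        # Route anything missing an identifier or carrying a
        # non-numeric value to the error queue for a human to review.
        if "patient_id" not in record or not is_numeric(str(record.get("value", ""))):
            error_queue.append(record)
        else:
            processed.append(record)
    return processed, error_queue

records = [
    {"patient_id": "A1", "value": "7.2"},  # well formed
    {"value": "6.8"},                      # missing identifier
    {"patient_id": "A3", "value": "N/A"},  # non-numeric value
]
done, errors = process_feed(records)
```

The human reviewing `errors` is exactly the combination of software and operators described above; insights from that review are how the rules get refined over time.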
You can’t fix all industries, Randy, you gotta one at a time, one at a time.
Anyway, low latency results in quick completion of work and it avoids bottlenecks in downstream workflows, which I think is very important in healthcare. Timeliness is critical in some situations. AI systems can also have continuous improvement, where the AI improves in accuracy and precision over time with human feedback. When humans review things that go well or, more importantly, things that have been thrown into exceptions, cause problems or are just wrong, they can gain insights that help them refine the software to fix the problems, or enhance it to do more things, automate more challenging workflows and make more nuanced decisions.
So when it comes to leveraging these tools, these ML tools, these RPA tools, can you think of areas where you’ve seen people think they were appropriate, where they weren’t appropriate?
A problem that I see often is that people become overly enamored with what AI and ML can do. They sound very sexy and almost magical, and I try to remind people that magic is not real. There are limitations to what AI and ML can do. For example, you can’t feed an ML system complete unstructured garbage and expect it to launch a rocket to the moon for you. That’s extreme, but more realistically, you do still have to clean your data, have some reasonable parameters, and have a very clear understanding of what you can and can’t do with your data and ML software. AI, just like any other tool, has to be used appropriately. There’s a big danger with things like machine learning and automated statistical methodologies when people may not completely understand which methodologies are appropriate for the data they’re working with and the types of questions they’re trying to answer.
I agree, and I think when I look at healthcare from an analytics perspective, not necessarily big data analytics or machine learning or artificial intelligence, just good old-fashioned analytics, right? One of the biggest challenges we have in healthcare is uncertainty. As humans, uncertainty in the data is crippling, and I would say that when it comes to machine learning and AI, they don’t solve uncertainty. Their Achilles heel is uncertainty, because we as humans can look at data and say, yeah, that just doesn’t look right. Typically AI and machine learning is looking at data and using it as a model in some cases, so it could learn something the wrong way and that could change everything it does downstream. I remember a friend of mine who had a younger sibling, and when the child was young, they did an experiment with the kid where they made him believe that the word “except” meant to include something. Whenever they were going out to dinner, the kid would run in and say, “Everybody except me?” And they’d go, “Yeah, everybody except you.” That’s an example of machine learning: you can teach the machine something, and if you teach it the wrong thing, or if it learns the wrong thing on its own, that colors how it’s going to give advice. And that’s one of my biggest challenges. Like you said, we tend not to live in the AI space, or in the ML space, other than what we do with collective reasoning. And that’s really to help promote consistency, right? And to help make sure that you’re not doing the same work over and over again. That’s to me the whole idea of collective reasoning. But I do feel like where we play a role, when you talk about data quality, normalization and harmonization, is to prepare the data for those types of things, so you don’t end up with artificial stupidity instead of artificial intelligence, or artificial uncertainty as opposed to good old-fashioned uncertainty.
And so to me, I agree with you. I think people go in with expectations that it’s going to magically somehow solve all their problems and it can do some really cool things, don’t get me wrong, but it is not a magic bullet and it doesn’t solve the data quality problem in healthcare.
Yeah, that’s right. And I want to say again how important it is to have very accurate and large volumes of high quality data that can be used to train AI models. If your training data includes a lot of wrong answers, like you said, you’re going to teach the model the wrong behavior or to predict the wrong thing. The other thing that I’ve seen with AI is incomplete data. And I’ll give you an example. If you are trying to build a model that would predict which patients are likely to be readmitted to a hospital, after a hospital discharges them, who’s going to be readmitted within 30 days? You may have training data for patients that you’ve discharged that came back to your hospital. What you may be missing are the patients that left your hospital. They were sick again and wanted to go to somebody else’s hospital. So you’re missing a lot of the signal just because it’s unavailable in your dataset.
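The missing-signal problem in that readmission example can be shown with a tiny made-up dataset: patients readmitted elsewhere look like non-readmissions in your own records, so the rate your model is trained on understates the truth.

```python
# Illustration of incomplete training data: readmissions to another
# hospital are invisible in your own dataset, so the observed rate the
# model learns from is lower than the actual rate. Numbers are made up.

discharges = [
    {"readmitted_here": True,  "readmitted_elsewhere": False},
    {"readmitted_here": False, "readmitted_elsewhere": True},   # invisible to you
    {"readmitted_here": False, "readmitted_elsewhere": False},
    {"readmitted_here": False, "readmitted_elsewhere": True},   # invisible to you
]

# Rate visible in your own data vs. the true rate across all hospitals.
observed = sum(d["readmitted_here"] for d in discharges) / len(discharges)
actual = sum(d["readmitted_here"] or d["readmitted_elsewhere"] for d in discharges) / len(discharges)
print(observed, actual)  # prints 0.25 0.75
```

A model trained only on `observed` labels would systematically under-predict readmission risk, which is exactly the gap a health information exchange or consolidated dataset is meant to close.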
That’s true. One of the things that I’ve always thought, because we do have tools and things, and I’ve worked with things like this for decades, where you curate knowledge and create rules so that you can do a kind of automation. I kinda call it artificial experience. You’re taking knowledge that somebody may or may not have, you’re analyzing a particular circumstance, and you’re giving them advice based upon what you see in that circumstance. Right? I feel like when you curate that, somebody has made a decision: when I see these things align, you should tell somebody. One of the things I’ve always thought, when it came to AI and machine learning in particular, is that machine learning is a good way to discover patterns in the first place.
When you’re doing some kind of discovery, when you’re ingesting data from all over the place, you can go in and use something like machine learning to expose a pattern that you would never have seen had human beings toiled over the data, just because of the computational efficiency of software, right? A good way to think about it is that the kind of thing we do, which is deterministic automation, is coupled with machine learning as the thing that helps us identify the patterns that we then turn into deterministic automation. I think sometimes people think about it the other way. We’ve actually interacted with folks who started with, “Well, we’ll use the deterministic stuff and we’ll use it to train the machine learning stuff.” And to me that didn’t make a whole lot of sense. I might be missing something or not. So have you seen any situations, in the stuff that you’ve done in the past, where machine learning, or automation, RPA, AI, used appropriately and tuned appropriately, is a worthwhile endeavor?
Almost always. Again, with the caveat that you have good data to help train the model. But there’s a kind of famous example of bad data that trained a really good model that was used by a police force in a large city to predict where crimes were likely to happen. It was a good idea: they would proactively send police officers to these areas to either catch people very early on during the crime or, even better, provide a presence in those areas and hopefully deter the crime from happening in the first place.
I saw that movie.
Yeah. So it was a good idea, but it had some very negative consequences. It didn’t always accomplish the goals they were trying to achieve. There were people that suffered untoward effects, there was profiling that was part of that experience, and crimes happened in places that were not predicted, places where humans did a much better job of predicting.
And that’s the other issue with machine learning: what you’re really doing is relying on the fact that you’re providing huge volumes of data and that, within that data, the patterns that are relevant will expose themselves just by sheer volume, right? Not taking into account that the data might be incomplete or the data might be wrong. And honestly, healthcare is unpredictable, because it’s not like manufacturing, it’s not like finance. What’s that old line from Jurassic Park? “Life finds a way,” right? Human beings in healthcare create unpredictability, and it’s one of the reasons why we practice healthcare, right? We don’t have health engineers where you go in, they put you on the table, they adjust you and send you on your way. We have healthcare practitioners that do their best. They use their intuition, they use their knowledge, and every now and then we encounter something that totally surprises us. We realize that what we’ve done for the last 10 years was completely wrong, or just not right. Applying machine learning and AI to something that obeys fixed laws and known information is one thing, but healthcare still is not that, and I think that causes us to stumble. And the challenge when you apply a technology and it stumbles, for good reasons, is that sometimes people throw the baby out with the bath water. They say, “Oh, well obviously this technology doesn’t work,” instead of realizing that either we’re not feeding it right or we’re not applying it to the right problem. It could be perfectly good technology. I think NLP is like that. I don’t think broad NLP, as it’s typically been applied, is right for healthcare. I think the approach we’ve taken is a little more appropriate because it’s much more focused. That doesn’t mean NLP isn’t a good technology. It’s just not the right solution to the problem people were applying it to.
Yeah, I’d agree with that. I’d say there are appropriate use cases, even in healthcare, for traditional NLP, which I would describe as trying to understand an entire block of unstructured narrative text, and for our NLP tool, SIFT, which looks for very specific things, pulls them out and wraps structure around them so they can be used in other systems.
Part of the challenge of that, too, is that the way we operationalize intelligence in healthcare is based upon all these pre-coordinated terminology sets. One of the big gaps you have to leap over, from NLP to this structured, discrete world, is that the things being said may not have been pre-coordinated by somebody sitting in a room somewhere. So you need to have kind of a different approach. We’ve noodled over that for a while at Clinical Architecture, and that’s the subject of a whole other podcast. But to really take advantage of a lot of these things that people want to use, like machine learning and AI, I think we need to enable those technologies by both providing quality data and providing the kind of ecosystem where these systems can integrate with the humans that are actually providing care in a way that is not disruptive. Because, well, you can automate a lot of things, you can move a lot of things, but at the end of the day, people might disagree with me, but I’m a big believer that you need to have human beings involved in the decision-making process. It’s one thing when you’re looking at populations and patterns and trying to discover things, but when you’re actually interacting with a single patient, the software should always be in that support role and not making the clinical decision. The software should also know its limitations: knowing where and how the automation can fail, and making sure the automation knows that it could have failed, so that it doesn’t lead the human that it’s supposed to help down the wrong path without at least something saying, “You know, I could be wrong.” Because once again, in healthcare, the Achilles heel is always going to be uncertainty. There are the things you know, the things you think you know, and all the things that you don’t really know that can disrupt your process when you’re trying to come to a reasonable conclusion or determine the best course of action.
So Randy, one of the things that you had said when we were preparing for the podcast is that you thought it’d be useful for the audience to get an overview of some of the things that we do at Clinical Architecture that would fall into the category of RPA and AI. I’m going to give a disclaimer, because I don’t make it a habit of talking about Clinical Architecture products on the podcast. We’re going to do that, so if you don’t want to hear it, thank you very much and feel free to sign off. Otherwise I’ll turn it over to Randy.
Some examples of RPA and AI from Clinical Architecture, our software, that I’d like to talk about include a variety of tools and platforms that we offer to our clients. The first one is Symedical, our flagship product. It’s our terminology, data management and data quality platform, which automates the following time-intensive and repetitive tasks. First, reference terminology acquisition and curation: our content portal offers many public and commercial reference datasets. These data have been acquired from their various publishers, validated and made available to Symedical clients automatically, thus avoiding time-intensive work by their own staff. Symedical also performs change detection analysis, so that clients can easily see what’s new, retired, deleted or changed in the terminologies that they rely on. Symedical also uses RPA and AI to automate the activity of mapping terminologies. Our AI algorithms apply a sort of cognitive reasoning to the source and target terminologies. The AI is domain specific and rationalizes the mapping process in much the same way that a human would. Symedical can be tuned and configured by our clients to automate or assist a human with this work, depending on their preference and comfort level. Symedical can also recognize when a mapping is not appropriate for the cognition engine and will route these terms to a workflow for a human to review, and will often provide information to help the human quickly make a decision and complete the mapping. In some cases, like we said earlier, this yields insights from the human teams that are fed back into Symedical to further tune and improve the AI for future mappings. And Symedical also monitors its own performance and provides a dashboard and a wealth of information to the platform administrators that help the organization prioritize their data quality work so that they can focus on the things that are most impactful.
The next product and platform I’d like to talk about has a variety of names. I think the official one is Advanced Clinical Awareness. It’s a sophisticated rules engine that can leverage Symedical clinical ontologies and finely tuned rules. It’s another RPA and AI system that can be used for an ever-growing list of use cases. A few examples that I’m familiar with are these. One might be finding patients with undocumented chronic disease conditions. For example, the system can review an entire patient record and assert to a provider that a patient may have undocumented diabetes because they have a high lab value for their hemoglobin A1C test; they’re taking Metformin, which is a diabetic medication; they’re not pregnant; and they don’t have documented polycystic ovary syndrome, those last two being reasons that somebody might have a high A1C or take Metformin without being diabetic. It’s a pretty smart platform. That’s a lot of work and a lot of reasoning that can be performed, and it allows the doctor to consider all of the relevant information available so that they can take the action that they decide is appropriate.
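The shape of that kind of deterministic rule can be sketched as a simple function. The thresholds, field names and condition strings below are made up for illustration; the real Advanced Clinical Awareness rules are curated by clinicians against coded terminologies, not hard-coded like this:

```python
# A sketch of a curated deterministic rule, not the actual product logic.
# Field names, thresholds, and condition strings are hypothetical.

def flag_possible_undocumented_diabetes(patient):
    has_high_a1c = patient.get("hba1c_pct", 0) >= 6.5
    on_metformin = "metformin" in patient.get("medications", [])
    pregnant = patient.get("pregnant", False)
    has_pcos = "polycystic ovary syndrome" in patient.get("problems", [])
    documented = "diabetes" in patient.get("problems", [])
    # Assert only when the evidence lines up AND the alternative
    # explanations (pregnancy, PCOS, already documented) are ruled out.
    return has_high_a1c and on_metformin and not (pregnant or has_pcos or documented)

patient = {"hba1c_pct": 7.1, "medications": ["metformin"], "problems": []}
print(flag_possible_undocumented_diabetes(patient))  # prints True
```

The key design point is that the rule only asserts a possibility to the provider; the human still makes the clinical decision.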
Well, and when you think about why we built the awareness engine, the inference engine or whatever we call it, depending upon the day of the week, is we had a situation where it might not just be the doctor, it could be data stewards, it could be the patient. But the bottom line is data quality is critical. And the more we aggregate data from multiple places, the more we increase the risk that we’re going to encounter something that’s problematic. And so for me, it’s really just a question of how do you get through all this data and look for things that need to be fixed or removed. If you try to have human beings do it, it would take forever. We’d never get it done. So deploying some type of deterministic artificial intelligence to do that, or RPA, to me is the only way that you could do it. But you do have to have somebody with knowledge curating the things to look for so that the automation can do what it’s designed to do.
Yeah, that’s right. And I think the challenge is even bigger now that healthcare systems, hospitals, clinics, outpatient practices and specialists are working together and trying to combine their data, which live in different systems that speak different languages. That just compounds the problem. Another example of how I’ve used the Advanced Clinical Awareness platform in the past is to identify patient cohorts. So imagine, for example, that you need to find all the heart failure patients based on data in your EMR. First, you’d probably look for the patients that have a documented problem based on a diagnosis code, but you may miss some patients. Next, you might look for medications they’re taking that are specific to heart failure patients, and that might give you some additional insights, but you could still miss some. The gold standard, I would probably say, is to find their left ventricle ejection fraction percentage from the narrative parts of a radiology report from the patient’s echocardiogram.
Our tools can automate all of that and even segment the patients by their most recent LVEF value into more serious or less serious heart failure patients. This is much more complex than that spreadsheet macro we discussed earlier, right?
Just a little bit, just a little bit.
A couple of other quick examples I want to mention with the same platform. One is identifying gaps in documentation for things like HCC capture, and this can yield tens or hundreds of thousands of additional dollars to practices for work that they’re already doing to care for patients who have conditions of varying severity, and multiple conditions, that may not be documented completely and accurately year over year. So that’s a potential for RPA to increase an organization’s revenue for work that they’re already doing. Another example, the last one I’ll give here, is identifying a patient’s level of risk by calculating something like their Charlson comorbidity index score.
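The Charlson score itself is essentially a weighted sum over documented conditions, which makes it a natural candidate for this kind of automation. The weight table below is a small illustrative subset; real implementations use the full published Charlson weights plus age adjustments:

```python
# A toy Charlson comorbidity index: each documented condition carries a
# weight, and the score is their sum. This weight table is a small
# illustrative subset, not the complete published index.

CHARLSON_WEIGHTS = {
    "myocardial infarction": 1,
    "congestive heart failure": 1,
    "diabetes": 1,
    "moderate or severe liver disease": 3,
    "metastatic solid tumor": 6,
}

def charlson_score(conditions):
    """Sum the weights of the conditions we recognize; ignore the rest."""
    return sum(CHARLSON_WEIGHTS.get(condition, 0) for condition in conditions)

print(charlson_score(["diabetes", "congestive heart failure"]))  # prints 2
```

The hard part the platform automates is not this arithmetic but reliably finding the conditions across coded and narrative data in the first place.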
There are many more examples of how the Advanced Clinical Awareness system can add value, but those are just a few. There is one more product that I wanted to talk about. It’s Pivot, our newest product, and I’m very excited about what it can do for healthcare systems and what they can do with it. Pivot is a system that can ingest a variety of clinical data formats such as HL7, CCDs and FHIR, just to name a few. The platform can then invoke Symedical, the clinical inference engine or Advanced Clinical Awareness tool, whatever we’re calling it today, as well as SIFT, which is our NLP tool, and it uses all of those components to validate the source data: to check for correct syntax, so is the message or document structured correctly? It can validate semantics: are things within the source data within the expected vocabularies?
Does it conform to those terminologies and ontologies? What about the quality of the data values themselves? Are they reasonable? Are numbers really numbers? Do those numbers fit within reasonable ranges for the thing that they’re associated with? Once Pivot has validated and cleaned the data, it can then enrich the data. An example of that would be pulling out that LVEF percentage and putting it into a structured format. It can also translate terminologies, so think of that situation where we’re combining data from many different EMRs and other systems that all speak those different languages. If we want to combine data, we need to make it all speak the same language, so translation is important. Also, Pivot can deduplicate. It can consolidate, for example, a current medication list. So if you have four different health systems sending data for the same patient, deduplication is something you might want to do.
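That consolidation step can be sketched simply: the same drug reported by different systems collapses to one entry once the names are normalized. Here normalization is just lowercasing, which is a stand-in; a real pipeline would first translate each feed to a common terminology, as just described:

```python
# A sketch of consolidating a current medication list from several feeds.
# Normalization here is just lowercasing as a stand-in; a real pipeline
# would map names to a common terminology before comparing them.

def consolidate_meds(feeds):
    seen, merged = set(), []
    for feed in feeds:
        for med in feed:
            key = med.strip().lower()
            if key not in seen:   # keep only the first occurrence
                seen.add(key)
                merged.append(key)
    return merged

feeds = [
    ["Metformin 500mg", "Lisinopril 10mg"],
    ["metformin 500mg", "Atorvastatin 20mg"],  # duplicate from another system
]
print(consolidate_meds(feeds))
```

Without the translation step first, the same drug coded differently by two systems would slip past this deduplication, which is why the two capabilities go together.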
Finally, Pivot can transform source data into one or more standard formats for output and consumption. These capabilities can streamline things for organizations. They can help with data acquisition, validation, enrichment and distribution. So one example might be that PCO that I mentioned earlier: they want to quickly access patient data from all of the places and settings that care for that patient and consolidate it into a common platform in a database. Pivot can help them quickly acquire that data, clean it, normalize it, and make it ready for use and analytics right away. As you can see, the possibilities for RPA and AI are endless. I’m very excited about these technologies and how they can add value to healthcare by reducing costs, creating revenue in some cases, supporting clinical decisions and better patient outcomes, and, I think, in the end, happier patients and happier providers too.
Great, Randy. Thank you very much. Well, that’s it for this episode of the Informonster podcast. Thanks to Randy Woodward for sharing his thoughts about and experiences with RPA and AI in healthcare, as well as his shameless plugs for our Symedical products. We hope that this was informative and thought provoking, and as always, I appreciate you taking the time to listen. Remember, if you have any feedback or ideas for future episodes, please send them to email@example.com. So until the next episode of the Informonster podcast, stay classy and try not to get devoured by your Informonster. This is Charlie Harp signing off.