
The Informonster Podcast

Episode 14: The Impact of Logica and How the Healthcare IT Industry Can Come Together – Part 2

December 14, 2020

Charlie Harp is joined by Logica’s Chair of the Board, Stan Huff, Chair of the Nursing LOINC Subcommittee, Susan Matney, along with Clinical Architecture’s very own Chief Informatics Officer Shaun Shakib and EVP of Client Services Carol Macumber to discuss the history and impact of Logica on the Healthcare IT Industry. In this second part of the Informonster Podcast’s first two-part series, they discuss the importance of information modeling’s interaction with terminology, how a standard by itself can only be effective if we all agree to follow it, treating decision support like a medical device, how Logica is changing, and how decision support can ease provider burden in the trenches with open and reliable data.




I’m Charlie Harp, and this is the Informonster Podcast. This episode is Part Two of our two-part series on Logica Health. If you have not listened to Part One, I suggest you go back and find it. This part will make a lot more sense if you do. This episode picks up where we left off, with Stan Huff and Susan Matney from Logica Health, and my Clinical Architecture colleagues, Carol Macumber and Shaun Shakib.

Stan and Susan, can I ask, if we could go back a little bit and… is my audio coming through or is it really light?

You’re good.

Yeah, we can hear you.

So maybe if we could back up a little bit. You’ve mentioned information modeling and terminology, and how those are both important components to achieving interoperability, but we have a wide variety of folks that listen to this podcast, so maybe, you know, some of the fundamentals: why wasn’t it enough, for example, when SNOMED was licensed for free use? Why didn’t that alone solve the interoperability challenge? What is information modeling? How does it interact with terminology, and why is that important to interoperability?

So I’ll share the experience. So, you know, you go back to the early nineties and, you know, Version Two of HL7 had come out; version 2.3.1. And so you had people using the HL7 standard, and, for those who are familiar, you know, you have different kinds of messages, and those messages are made up of segments and each segment has fields in it, et cetera. But what you have, basically, if you think about that, is a data structure, if you will. It’s the form, or I don’t know a better word than structure, that then needs to be bound to particular meanings. And back in the early nineties, for a lot of people using HL7, there was no common terminology at all. So I think that’s the same era when the phrase was coined, you know, “If you’ve seen one HL7 interface, you’ve seen one HL7 interface.” For every single interface, you had to map manually between the codes that were being used in one system and the codes that were being used in the other system. Now, when we started making LOINC codes, working with Clem McDonald and Regenstrief, we were struck with the realization that for things to be truly understood and interoperable, they needed to be in the context of a data structure. In other words, you had the OBX segment and it provided the context for what the meaning of the code was. So, you know, you could think that the total meaning, for instance, is, if you’re in an order record, the code means, “This is something that I want you to come and do. I want you to come and draw the blood and do the testing on this.” If it’s in a result record, then the LOINC code is the name of the measurement that you made. You know, a glucose level, or a red blood count, or a white blood count.
And that led, you know, basically to the understanding that, to get to a truly computable meaning, you needed to have a structure and you needed to know where the codes were used in the structure, because the structure adds context to the meaning; and those two things taken together can get you to a pretty computable meaning. But that evolved, from sort of thinking about and recognizing that, to saying, “Well, what if we wanted to think about that structure not in the context of just an HL7 OBX segment, but we wanted to think about it as a logical model, if you will, that presented the logical structure of the information, but then could be interpreted to many different syntaxes?” It could be interpreted, or translated if you will, into an HL7 Version Two style of information exchange, or it could be placed in a CDA document, or it could be placed in a CDISC message, or it could be placed in a FHIR transaction using a FHIR API. You know, information modeling really is not – you can’t separate it really into two parts. I mean, there are two parts that you can talk about. You can talk about the structure and the terminology, but those two things have to be considered at the same time in order to make something that is explicitly defined and computable. Because if you’re not careful, you can get into a situation where the same data can be expressed either as the meaning of the code, or as a separate element in the data structure. And so a common example is that, you know, there are LOINC codes that mean, basically, you know, a serum glucose done by a testing laboratory or a glucose level that’s done on a home glucometer. And there are at least three codes in LOINC that pertain to that.
There’s a general glucose level that doesn’t have any mention of the measure, or excuse me, the methodology by which it’s done, and then there are LOINC codes that say, “This is a glucose that was done by glucometer,” or, “Done by a test strip,” where the LOINC code itself contains that meaning. You have to think about those things so that you’re not saying, you know, “Glucose done by glucometer with a method of glucometer,” (laughs) and getting that kind of redundancy and confusion, you know, in the representation. So, I mean, that’s a little bit at least about, you know, what we mean by information modeling and terminology binding. It’s the idea that you create a structure, and the structure, usually, you know, you think of as fields, and a field has some name like “Test Ordered” or “The Test That Was Resulted,” and then you have a value for that field, and then you have a field that contains the actual measured value, and that information taken together, the combination of the coded structures and terminology, really brings together the information model and the terminology.
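To make the structure-plus-terminology idea concrete, here is a small illustrative sketch in Python. It is not part of any HL7 or LOINC tooling; the field layout is simplified and the segment content is a hypothetical example. It shows how the structure supplies the context that turns a bare code into a computable meaning:

```python
def parse_obx(segment: str) -> dict:
    """Parse a simplified, pipe-delimited HL7 v2 OBX (result) segment."""
    fields = segment.split("|")
    # OBX-3 carries the observation identifier as code^display^system
    code, display, system = (fields[3].split("^") + ["", ""])[:3]
    return {
        "context": "result",   # an OBX segment names a measurement that was made
        "code": code,          # e.g. a LOINC code
        "display": display,
        "system": system,
        "value": fields[5],
        "units": fields[6],
    }

def code_meaning(code: str, segment_type: str) -> str:
    """The same code means different things depending on where it appears."""
    if segment_type == "OBR":  # order record: a request to perform the test
        return f"please come and perform test {code}"
    if segment_type == "OBX":  # result record: the name of the measurement made
        return f"measured value of test {code}"
    raise ValueError(f"unknown segment type: {segment_type}")

# Hypothetical example segment (contents are illustrative):
obx = "OBX|1|NM|2345-7^Glucose [Mass/volume] in Serum or Plasma^LN||95|mg/dL"
result = parse_obx(obx)
print(result["code"], result["value"], result["units"])  # 2345-7 95 mg/dL
print(code_meaning(result["code"], "OBR"))
```

Stripped of the structure, “2345-7” is just a string; it is the segment it sits in that says whether it is an order or a measured result.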

You know, it’s interesting when you go back in time. So my first job in healthcare was developing interfaces for SmithKline Beecham Clinical Lab in 1989. And that was back when I was writing interfaces for different pieces of equipment in the lab, and I was also creating ASTM lab messaging interfaces, which, you know, they were OBX records and OBR records, and also a lot of these custom serial interfaces. And what a lot of people don’t realize is, you know, back then, when you were getting a reference lab feed from someplace or pushing reference lab data, you didn’t really map it, because what you were ultimately doing was putting it on a piece of paper to put in front of somebody. So the human was the computer that was taking in whatever was being provided, and they were doing the cognitive lift to figure out what that meant. And, you know, where we are today is, you know, we’re really trying to let the computer help the human. Let the computer help kind of merge the data into a pattern, so that we can do things that a human can’t reasonably do or can’t easily do. A lot of that lift, from back when we were putting things on paper in front of a provider to where we’re doing things like population health and analytics and patient portals, is a big part of why it’s important that we have these agreements. And really that’s what it is, right? We are forging agreements amongst ourselves that this is how we’re going to say that, because a standard by itself is kind of like the pirate’s code in Pirates of the Caribbean; it’s more like guidelines. People have to come together and say, “This is how we’re going to agree to leverage this standard, whether it’s a terminology or whether it’s a syntax.” Do you think that’s a fair statement, or do you think I’m off base?

I think it’s totally right. This is Susan. I mean, they are… They are patterns. We started with ASN One (Abstract Syntax Notation One), and now we’re on the Clinical Element Modeling Language, which we have aligned with FHIR. We have more elements than FHIR because we’re doing a hundred percent of a wound assessment. We’re not big on letting you just assess 80% of it. We have those patterns at the highest level, and then we extend them or constrain them based on what type of clinical care it is. Is it a procedure? Is it a medication administration? Is, you know, is it an order? All of those have specific patterns.
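One way to picture the pattern-then-constrain approach Susan describes is with plain classes. This is a toy sketch, not CEML or FHIR; the class names, fields, and the placeholder code are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """Highest-level pattern: something observed, named by a code."""
    code: str                     # terminology binding, e.g. a LOINC code
    value: str                    # the measured or assessed value
    method: Optional[str] = None  # optional: how it was measured

@dataclass
class WoundAssessment(Observation):
    """A constrained/extended pattern: extra elements required so the
    assessment is complete, not just 80% of it."""
    location: str = ""
    stage: str = ""

# Hypothetical instance; "wound-code" is a placeholder, not a real code.
wound = WoundAssessment(code="wound-code", value="present",
                        location="sacrum", stage="stage 2")
print(isinstance(wound, Observation))  # the specific pattern still
                                       # conforms to the general one
```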

So, I mean, one thing that I just like to express, I guess, you know, as part of this, is why; more about the why. I said it in general, that we want to create this healthcare ecosystem where you have a standards-based ability to create applications that are shareable across healthcare providers. Going back to things that we’ve been doing at Intermountain Healthcare, starting with Homer Warner, and Reed Gardner, and Al Pryor, Paul Clayton, a whole bunch of pioneers in the informatics arena, and I would mention specifically also Scott Evans, who did the actual development of a lot of things, we (I say the collective, “we.” I’m a very small part of that) created hundreds of decision support applications that did things like suggest the best antibiotic, detect ventilator disconnects in the ICU, guide therapy of patients who are on chronic anticoagulation, and guide people in ordering the right kind of blood product based on what was going on with the patient, whether you use packed red cells or fresh frozen plasma, or other things, et cetera. And over time, I mean, we developed this feeling that we could only provide the highest quality, and most appropriately priced, healthcare if we used advanced decision support in care of the patient. In all of those things that I’ve mentioned, we provided higher quality care, implementing those kinds of algorithms, and almost invariably lowered the cost. And then you think about that as a microcosm of what’s possible in the world. There are statistics that are out there. Very good studies. The one that I’m thinking about is one that was published a couple of years ago in the British Medical Journal. The authors were from Johns Hopkins University, and their estimate was that 250,000 people a year die of preventable medical errors in the United States. That’s sort of hard to grasp.
I mean, as bad as COVID is, you know, we’re now just starting to approach the number of people in the United States who are dying from preventable medical errors. That’s a number that’s six times greater than the number of people who die in automobile accidents. It’s 10 or 12 times the number of people that are affected by narcotic addiction and abuse. It’s an astounding number. There are other studies that show physicians do the right thing about 50% of the time. Those errors and those challenges are not going to go away. Unless we change what we’re doing, that’s an unsolvable problem. Because that’s not… It’s not going to happen by teaching people better in medical school, and it’s not going to happen by the typical “zero-harm” sort of programs, because both of those things assume that somebody knows the right thing to do. And, you know, everybody can’t know everything. People can only know and use some very small area of knowledge where they may be the best expert. The only way for us to start making a dent into that kind of medical error is getting to a point where we can share clinical decision support applications. And the challenge, and this comes back to, you know, interoperability and terminology and the modeling and all that sort of stuff, the challenge that we have at Intermountain is, we did those hundred things, and they haven’t gone anywhere. You can’t install them at some other institution because they don’t have the same infrastructure, they don’t have the same codes, they don’t have the same assumptions. So that’s what we’re trying to create. It’s an environment where, if we figure out a good program for diagnosing and managing pulmonary emboli, if we can establish the infrastructure and the standards, and use those standards as we access and use data in decision support, then that can go to another institution, hopefully with little or no modification, so it’s ready to use.
And likewise, if the Mayo Clinic does something good, creates a good program for diagnosing occult sepsis in the emergency room, or a great algorithm for weaning people from ventilators, or, you know, you go down the list, we can share those. That’s important because no single institution has the resources to create all of the kind of software that would be useful and appropriate for patient care. Intermountain can’t do it alone. Kaiser Permanente can’t do it alone. I mean, you take the hundred things that we’ve done, and, without exaggeration, there are 10,000 things to be done or more; probably even more than that. We’ve done things that are high volume, you know, where there is high patient risk, but you go down the list a little bit and say, “Well, what have you got there electronically that’s going to help with management of Hashimoto’s disease, or management of asthma, or management of fibromyalgia, or helping us determine the right time for a C-section, or other things?” And literally there are 10,000 things like that that we could create. And unless we get to a point where what we create is shareable, we’re never going to realize the potential of the improvements that we can realize in the quality of care, based on those interoperability standards.

Stan, I think you make a great point. I spent 10 years at First Databank, and four years while I was with First Databank also at Zynx, and spent a lot of time in the trenches looking at the different ways people leverage things like evidence-based medicine and artificial experience. And the ability to take what we know today and push it out and make it available to people that may not know that is really important. But I also think the other thing that limits our ability to leverage that is the quality of the information that we present to decision support. I think that’s the other challenge. Data quality, being good, being understandable or interoperable, and being complete, I think, is another thing that we still struggle with as an industry. What are your thoughts on that?

No, absolutely. You know, along with what I would call the technical part of the modeling and creation of value sets and codes, there has to be also a part of this that is based on clinical experts: nurses, and physicians, and respiratory therapists, and pharmacists, and other people who say, “Oh, you know, the kind of information that we need to make an accurate diagnosis of myocardial infarction in the ER, you know, we need a blood pressure, we need a heart rate, we need a pulse oximeter. And then, you know, we need the different cardiac markers or enzymes that are indications of injury to the myocardium.” And there has to be an agreement, because if there’s no agreement, it doesn’t matter if we have a model for it. If the clinicians don’t agree that that’s the data that’s needed to make an appropriate diagnosis, if that’s not the food that the decision support module needs to eat in order to be effective, then we’re no place, you know? Just making the models doesn’t help. And so you need clinicians in at least two ways. One is to say, you know, “What is the data that we need?” And then to agree that it’s been modeled accurately. And I guess the third part is to actually figure out a way that it happens, you know? Sometimes it might be, you know, just teaching clinicians that if you enter the data accurately, we can provide you with more help. But it’s absolutely more than the technology. You have to have the people who know what’s important guide the creation of the proper data elements, and the representation of that information, to know that you’re saying that in a way that is meaningful, and defined, and the way that they would say it, you know, the way that a clinical expert would say it, in order that you have the raw material you need for these higher-level applications and decision support processes to work.

Absolutely. Decision support in healthcare is near and dear to my heart. And one of the things that I think we periodically struggle with, and I’m really curious as to your thoughts on this, is the risk aversion. So, you know, when I’ve done things in the past, like back in my Hearst days, you know, occasionally we would go to somebody and we would talk about leveraging decision support work they were doing in their system, and commercializing it, and taking it out and packaging it. And both in terms of what you hear in the industry about treating clinical decision support like a medical device, and people that are concerned that, if they give advice or if they recommend something, and it gets outside of their ability to control it and somebody uses it, it can come back at them. What are your thoughts on that, relative to the stuff you guys have been doing with these models?

That’s an important question, and I’ve got a jumble of thoughts about this. One is, you think about medicine, and I’ll use my own example. You know, I had prostate cancer, had surgery, and once I knew that I needed surgery, I tried to look at the medical literature and understand, you know, the difference between “the traditional prostatectomy” versus a robotic-assisted DaVinci device. What I found was… I’ll tell you that I’m convinced that the surgical literature is not science. There are so many variables in there: because it’s different surgeons, you don’t know if they’re doing the procedure in exactly the same way; you know, it’s not randomized controlled in terms of what procedure people receive; all of that kind of stuff. But my point in all that is that’s just one example within medicine where we don’t know exactly the right answer. And for some reason, when people think of clinical decision support, they always think about the risk rather than the benefit. So clinical decision support doesn’t have to be a hundred percent accurate to be good for the population. Medications aren’t good for every patient, and we’re learning more and more about that. But if the medication, in fact, you know, saves 10 lives, and maybe it has a complication of morbidity or mortality in one, we saved nine people. We’ve got to do absolutely the best we can, and we have to do real science to understand the impact and to correct things, but we shouldn’t expect that decision support has to be perfect before it’s valuable to use it and to help patients with it. And I think in terms of the legal implications, as the body of evidence grows, it’s going to turn around, and what’s going to happen is that people are going to have bad outcomes where they didn’t follow the decision support logic, or didn’t even use the decision support logic.
And I think we’ll see a change where the claims now are, “I’m not receiving the quality of care, (I’m blocking on the buzzwords. What’s the right legal term?) The appropriate standard of care because you didn’t use decision logic.” And I know that there’ll be a lot of resistance from certain physicians and maybe even physicians groups about that. But in the end, I think what’s going to win out is: Are you providing better care because you use this new tool? And we’ll do what we have to in court to defend the fact that we’re doing the best care that we know how to do, and that involves using these kinds of decision support applications.

No, I agree. I think, like, when I go back to the early days of CPOE, they rolled out a lot of decision support to the providers, and there was a huge pushback because of alert fatigue. These alerts were popping up, they were getting in the way. And I think that one of the lessons there – I mean, there’s a couple of things. One is decision support is good if, like, 85% of the time or better, when a provider is presented with information from decision support, they feel like it’s relevant and they’re happy that they saw it. It gave them some piece of advice or some piece of experience about what’s going on with the patient that, maybe in the chaos of their daily job, they might’ve missed, or there was an article that came out and they weren’t aware of it. The problem we had back then was you’re throwing 23 alerts in front of somebody. And every single time, it’s like, “Yeah, I know,” or, “No, this is irrelevant,” and after a while they just stopped paying attention. A big part of it is us getting better at the precision of decision support and being able to take in the variables, which are much more available now than they used to be, and present something that’s really relevant to that patient and that provider and their experience. The other thing that’s important, when people talk to me about decision support being a medical device, the thing I always struggle with is a medical device is a machine where it’s predictable. You put something in and something comes out, whereas with decision support, you’re dealing with a very complex system. And there are a lot of probabilities: people’s experience, the things that are not in the data, that are in the provider’s head, or in the note that didn’t make it into the codified data. There’s a lot of moving parts.
With decision support, the ultimate processor is the learned intermediary that is looking at everything that they know, getting the advice that they didn’t know, or the help they got from the decision support, or the best practice, or the evidence, and then factoring that into their calculus. So while they’re making those critical decisions about the patient, they’re less likely to fall into that statistic of medical errors because we gave them a little bit of help that they needed in that moment when they needed it.

Yeah. Yeah, no, it’s really interesting. It reminds me of something that, well, a lesson and a warning for the future in exactly the way that you said, you know? We have to make, we have – basically, we have to make the decision support better. What we found, and this really comes from Dr. Scott Evans’ work, is that we never do it right the first time. You know, he did some of the foundational work on drug-lab interactions, drug-drug interactions, checking for allergies of patients, et cetera. The first versions of any of those things that we put in, you know, we would see 30, 40, 50% compliance with the recommendation. And we would quickly look at, you know, the situations where the clinicians didn’t follow it. And in, you know, laboratory alerts, for instance, there would be this alert on a high potassium. And then you go in and look, and a big bunch of those people were people who had chronic renal failure. And if you looked, they’d had that same potassium level for months. Then you improve the logic and say, “Oh, if this is high, if it’s been high for a long time, and they have a diagnosis of chronic renal failure, don’t put that alert up because all you’re going to do is frustrate somebody.” And what we found is that, by doing that and continuously improving it, we got up into compliance being in the high nineties, and clinicians would complain when you took it away because it was providing real value to them. Part of this came to mind when you mentioned your association with First Databank. We changed at one point from the alerts that we were maintaining locally to the things that First Databank was doing, and we had a marked decrease in the sensitivity and specificity of those interactions. They were developed from literature and not on the front line, and it’s hard for somebody just reviewing literature to know the real outcome.
It’s absolutely essential that decision support, especially in the early phases, is something that you can rapidly iterate on. We were literally changing, you know, the logic and changing limits and levels and other things every few days, or once a week, until we got that much more specific and exact sort of representation. And if the thing is a device that needs approval from the FDA, and the exact algorithm you use is part of that approval, it means you can’t change it at the rate that you need to in order to improve it. And yeah, you just really worry about the FDA regulating it like a device, like a physical device, because the whole life cycle, and how you develop it, and how you tune it, has to be different than what you would do for a mechanical device.
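The potassium alert Stan describes amounts to a small rule that gets tuned over iterations. Here is a hedged sketch of the refined version; the 5.5 mmol/L threshold, the three-month stability window, and the input names are assumptions for illustration, not Intermountain’s actual logic:

```python
def should_alert_high_potassium(value_mmol_l: float,
                                has_chronic_renal_failure: bool,
                                months_at_this_level: int,
                                threshold: float = 5.5) -> bool:
    """Refined alert rule: suppress the high-potassium alert when the
    patient has chronic renal failure and the value has been stable
    at that level for a long time."""
    if value_mmol_l <= threshold:
        return False   # not high: no alert
    if has_chronic_renal_failure and months_at_this_level >= 3:
        return False   # chronic, stable elevation: suppress to avoid fatigue
    return True        # new or unexplained elevation: alert

print(should_alert_high_potassium(6.1, False, 0))  # True: new elevation
print(should_alert_high_potassium(6.1, True, 6))   # False: chronic and stable
```

The point is that each suppression clause comes from reviewing the cases where clinicians ignored the alert, which is why the rule has to be easy to change quickly.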

I think there are parallels to, like, a self-driving car; that’s what I think about. Because, you know, I’m driving a car, I’ve got a GPS, I’ve got all the controls on the dashboard, but I’m still driving the car. I’m still behind the wheel. I’m still making those decisions. And I think that that’s the thing for the folks that talk about decision support being regulated in that way; it’s exactly like you said. It’s the kind of thing where, the minute you do that, people will stop using it, because the minute it’s not relevant for the weird cases and they can’t go in and tweak it, just like you said, people are going to say, “I can’t use this. I just can’t use it.” And just like with a self-driving car, you know, I would like one. I’d love to have a self-driving car. I’d like to take a nap between here and Nashville. It’d be great, but I’m always afraid that if I tried that, I just wouldn’t wake up, because the car would crash between here and Nashville. There are going to be scenarios that the person who was testing that self-driving car just couldn’t account for. That, and the appropriateness of the content. You know, when you talk about drug decision support, when that was first rolled out into the ambulatory and inpatient setting, a lot of that stuff was built for retail pharmacy back for OBRA ’90. What I said at the time is, you know, a provider, a physician, is not the same as a pharmacist. They are not going to tolerate the same level of pushback. They’re not going to tolerate the level of ambiguity that goes into something that’s popping up or printing out on a patient info sheet. They’re going to want something. You know, we would roll out 80,000 drug interactions back then, and the comment we would get sometimes was, you know, “You gave me 80,000 drug interactions, and I need to figure out how to turn off 79,950 of them.” We’ve come a long way since then, too, though.

The advantage that First Databank, or all of us, have is scale. You know, if we can share together, we can generate real-world data for tuning those on a population that, you know, no single institution could even dream about having. It comes back to the whole idea of the learning health system. If we represented the data and the outcomes in a computable way, then we could tune First Databank in the same way that we did it locally, but we could do it together, and we could do it on hundreds of thousands or millions of patients instead of on a hundred patients or 200 patients.

We have amazing health systems and facilities here in the US and across the planet, and traditionally we had this approach where, if you look at a First Databank, or a Wolters Kluwer, or an Elsevier, they have these teams of people that are reading the literature, and they’re synthesizing decision support and assistance, and they’re creating these things and distributing them. And that’s great. But one of the things that’s exciting about an organization like Logica, and what you guys are doing, is that the problem with a lot of these big publishers is you have to live in their walled garden to use it. You typically have to use their terminology, you use their structure, you call their API, and maybe you can localize it. And that’s great. But the beauty of having an accepted, and shared, and open platform is that if somebody at the Mayo Clinic, or at Partners, or at Kaiser, or Intermountain comes up with something really awesome, that is great decision support, they can share it, and everybody can take advantage of it. You don’t have to say, “Oh, I’ll have to create a custom interface so I can take advantage of that thing that Intermountain did.” Theoretically, you should be able to just plug it in and use it. And people should be able to get the benefit of that and not have to worry about reinventing that particular wheel or rediscovering that within their silo. Right?

Part of what you’re talking about is kind of, you know, this vision of a shared repository of standards-based things. And I think part of, you know, the barrier to success for something like that is instilling the confidence in the community that the stuff that’s been put out there is of a certain quality. You can’t obtain that without getting people to use it and test it for you.

Absolutely. So I have one other question. It’s kind of a, I wouldn’t say it’s a loaded question, but if you say, “Charlie, I don’t want to answer this question right now,” I’ll cut it right out of the podcast. I have certain feelings about this, and I’m really curious as to what you guys think about it, and that is the push that we’re getting right now in healthcare around machine learning and artificial intelligence. Not in general, because I think machine learning and artificial intelligence, when it comes to identifying patterns and doing discovery, is perfectly cool, assuming you trust the data that you’re pointing it at. But when it comes to people that have tried to implement it, or people who want to implement these types of things for active decision support, I’m really curious what you guys think.

This is my thinking: the AI tools, and machine learning, and all of that are an incredible tool for gaining new knowledge and insight. But the analytic part, if you will, the part where I’m gaining new knowledge, that is half or less of the problem, because what I need, then, is an implementation of that new knowledge that guides future actions and care. So the example I would use is that people did research, and I think it was Google that did this, it was really nice research, but they could basically predict, with almost a hundred percent accuracy, three or four days ahead of time, which people being admitted to the hospital were going to die during that hospitalization. And so that’s an example of predictive analytics and learning, and recognizing what clinical situations, you know, were going to lead to the death of a patient. Well, my next question would be, “Okay, so now you’ve identified these three people, what do I do so they don’t die?” That’s an entirely different question. I need knowledge now that says I’ve got to treat them with a different medication, or I’ve got to put them in an ICU bed instead of in a, you know, in a regular med-surg unit, or I need to do renal dialysis, or I need to do something else. In some sense, you can almost do the analytic parts and have no connection to the real world. And what I mean by that is that you get into an n-dimensional space and all you’re looking at is eigenvectors and other kinds of stuff that, you know, predict an outcome and correlate with the outcomes. But then on the other side of the equation, you have to say, “Okay, but you know, the elements that are contributing to that vector, what are they?
What’s the treatment?” If I’m going to take actions in the real world, you’ve got to tell me the medication I need to order, or the procedure that I need to order, the lab that I need, or the nursing intervention, or the physical therapy intervention, or the respiratory care intervention. And so, you know, there are some people who are thinking, “Hey, we don’t need to do any structuring of this data. We can learn from the data, you know, just by overcoming the errors in the terminology and the model of the data.” They can sort of compensate for the inaccuracies in the learning just by doing more and more patients, but that doesn’t help in treating the patients and changing the behavior of the people. For that, you have to have explicit models, and you have to know how those models tie to actual medicines in the real world, and actual therapies and procedures in the real world, that are going to improve the care of those patients so they don’t die.

Fair enough. I want to address one other thing. And that is, you mentioned Graphite earlier. Tell me about where Logica is going and how you see it changing as we move forward.

So I don’t know the answer to that completely. What Graphite is, is a new company that’s being formed by Intermountain Healthcare and Presbyterian Health in Albuquerque. And we want to recruit other healthcare providers or healthcare organizations. It doesn’t have to be acute-care people. You know, Susan would jump in and say, you know, it’d be great if we had some partners that were skilled nursing home people, and public health people, and all kinds of other people, but the idea is that we want to create that plug-and-play level of interoperability. And it gets back to what we were saying earlier. We have an incredible level of interoperability that’s enabled by FHIR, and LOINC, and SNOMED, and the things that you’re doing all contribute to part of the solution. The thing that we have to have is actually not… not technical. It is a big enough group of consumers, of healthcare providers, people who are taking care of patients, and who are at risk for the cost of taking care of the patients, to say, “We’re going to do this together because we see the value of the shared software; of shared decision support. We think that’s valuable enough that we’ll agree to do it together. We’ll agree together on the way that we’re going to do it. And we’ll back that up by saying, you know, we’re gonna buy things that are certified against the platform. If we build things, we’re going to certify them against the platform.” That creates the marketplace. It’s a realization of the vision of Logica, but it’s doing it at a scale that we haven’t been able to realize in Logica so far. We need millions of dollars, instead of a few hundred thousand dollars, to do the work that needs to be done. But even more than that, we need the commitment of organizations to be a part of a voluntary coalition that really wants to create plug-and-play interoperable applications. And so, going forward, I see Logica being a part of a lot of these things, because the goals are the same.
And for that matter, a lot of the organizations and people are the same. So there are things about the terminology and the models that we just want to do, you know, with Graphite and Logica together. We want the development sandbox to be a combined effort between Logica and Graphite. The same thing with the marketplace. I think Logica has played, and probably will continue to play, a very important role. Maybe that’s the home for where we get the input from the clinicians about what we should do, and what data we should collect, and what’s important to do. That kind of stuff. I see, you know, Logica continuing, and I see a very close relationship between Logica and Graphite, but there are parts of that that I don’t understand. I don’t know, in terms of whether we should have a combined board of directors, for instance, between Logica and Graphite, or what the actual legal relationship might be between the two organizations.

Okay. So Stan, Susan, if somebody listens to this and they’re like, “Gee, I’d really like to find out more about Logica, and how I could get involved in Logica, and where things are going,” where do they go? How do they get in touch with you guys?

We have a website, and they’re always welcome to just send me an email, or send Laurie Herrmann Langford an email. And I don’t know if there’s any way to publish this with it, but I mean, I’m happy to share my email so that people know how to get ahold of me.

We will post your website address, and if you’re really sure you want me to put your email with the podcast, I can do that.

Well, I don’t know how… Everybody that I don’t want to know it already knows my email so (everyone laughs) I don’t have much at risk.

Well, we’ll figure that out. I also want to say that, you know, we’ve been working with you guys for a while now. I know Shaun has had the pleasure of working with you, Stan, and Susan for a while. It’s been a shorter time for me and the rest of Clinical Architecture, but I want to say I really do appreciate the passion, the ingenuity, the dedication to making things better, and the spirit of collaboration we’ve had working with you guys. It’s really been a pleasure.

Thank you. Yeah.

Thank you from us too; we’ve really enjoyed working with you.


Great, great team.

All right. Any final words before I sign off? All right.

I think we’re good. Thank you.

Okay. Well, thank you, Stan. Thank you, Susan. Carol, Shaun, I appreciate it. And to all our listeners out there, thank you for tuning in. This is Charlie Harp, and this has been the Informonster Podcast. Thanks.

Thank you.