



The Road to Precision Medicine – Part III – Sharing Information

February 9, 2015

By: Charlie Harp

If the first pillar of a next generation clinical platform is the ability to understand what it knows, the second pillar is the ability to increase the application's knowledge by adding information from elsewhere. Accumulating meaningful and actionable information from external sources is essential if we are to create clinical platforms that augment the abilities of the provider in a significant manner.

There are several aspects to this accumulation mechanism that can be especially impactful. A well-designed clinical platform should be able to ingest and leverage the following types of information.

  • Patient information from other venues
  • Explicitly curated clinical facts
  • Explicitly curated clinical guidelines, advice or reference information
  • National, regional and local population trend information
  • Other ancillary clinical or administrative information

While all of these types of information are important, this post will focus on acquiring patient information from other venues.  This ability to exchange patient information between systems is commonly referred to as “clinical interoperability”.

What is “Clinical Interoperability”?

Simply put, clinical interoperability is the ability of the applications we use to share information about the patient.  In order to be meaningful, the information we share must be understandable, actionable and trusted.  If this is true, the information can be incorporated into the receiving application and used to help the provider with clinical decisions.

Think of the clinical application as a human assistant to the provider.  The interface is like a conversation between two assistants.  If the assistant on the receiving end is going to supply information to the provider that is critical to the patient's care, you would want to ensure the conversation is complete and the facts discussed are well understood.

When you think of the applications in this way, the manner in which we interoperate today (to a large degree) would seem comical.  Imagine a human assistant calling another human assistant and having the following conversation:

Sending Assistant: "I am calling about Fred Smith."

Receiving Assistant: "Great! We are about to see Mr. Smith."

Sending Assistant: "He is taking a medication called <gibberish> in my language.  He said he is very allergic to Advicor, but let's call it Alticor.  He has a condition that might be similar to 'heart disease'.  There are some other things I can't tell you, because the words I need to explain them are not available over this phone line."

Receiving Assistant: "Yeah… That's not terribly useful."

Sending Assistant: "Hey, I am only required to call you and tell you what I can."

Receiving Assistant: "I am going to have to ignore this conversation."

Sending Assistant: "I understand, I do the same thing when you call me.  Good luck with Mr. Smith."

Obviously, this conversation between humans would never happen this way.  When two providers talk in real life, information is expressed by the sharing provider in the terms they use, and the receiving provider interprets the information into the terms they understand.  During this process they ask questions about things that might be uncertain, and they don't spend a lot of time sharing information that is not relevant to the care of the patient.

I often wonder how much of the information applications receive from other applications is ignored by the receiving application until it has been assessed by a human.  In other words, is this incoming information trusted by default?  I would bet that it isn't.  This is not because the receiver believes the provider that collected the information in the sending application was incompetent or dishonest.  It is because most systems do not follow the golden rule when it comes to interoperability: "Share information with others as you would have them share information with you."  Many systems do what they can to satisfy the letter of the law, which is likely to prove inadequate relative to the spirit of clinical interoperability.

The Semantic Tango

In many environments the current best practice is as follows:

  1. The sender starts with a term from a complex, pre-coordinated local terminology.
  2. They map that term to the closest term they can find in a complex, pre-coordinated standard terminology and send it out in a message.
  3. The receiver then maps the standard term to the closest term they can find in a third complex, pre-coordinated terminology in the target system.
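The steps above can be sketched in a few lines of code. Everything here is an illustrative assumption: the terminologies, codes and mapping tables are entirely made up, but they show how two "closest fit" hops quietly discard specificity.

```python
# A minimal sketch of the two-hop "semantic tango". All codes and
# terms below are hypothetical, invented purely for illustration.

# The sender's local terminology has a fairly specific term.
SENDER_LOCAL = {"L100": "chronic systolic heart failure, NYHA class III"}

# Hop 1: local code -> closest available standard code.
# The standard has no NYHA-qualified term, so specificity is lost here.
LOCAL_TO_STANDARD = {"L100": "S42"}
STANDARD = {"S42": "chronic systolic heart failure"}

# Hop 2: standard code -> closest term in a third terminology,
# which only distinguishes "heart failure".
STANDARD_TO_TARGET = {"S42": "T7"}
TARGET = {"T7": "heart failure"}

def tango(local_code: str) -> str:
    """Follow both mapping hops and return the receiver's term."""
    standard_code = LOCAL_TO_STANDARD[local_code]
    target_code = STANDARD_TO_TARGET[standard_code]
    return TARGET[target_code]

print(SENDER_LOCAL["L100"])  # chronic systolic heart failure, NYHA class III
print(tango("L100"))         # heart failure
```

Each hop looks reasonable in isolation; it is only when you compare the two ends of the chain that the drift becomes visible.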

Even if we do our best to share information by translating to an accepted standard, there are issues that make this process questionable at best.  Not to mention that the mapping is traditionally done by humans at different institutions who do not share a common set of rules for mapping these terminologies.  It is almost inevitable that the information we send will undergo semantic drift in the process.

Is it any wonder that our default position is to distrust the information we receive in this way?  Information that is not trusted requires human intervention to be included.  If the current process requires human intervention, then it is not really helping us to be more efficient.

In order to allow a system to expand its understanding about what is going on with the patient we need to have a mechanism for sharing information without losing or changing it.  The current approach of using a pre-coordinated terminology as a semantic pivot makes this difficult.  Using this approach often requires that we make our information fit into the list of standard predefined terms.  If it doesn’t, we either add information (by selecting a more specific or “narrower” term), lose information (by selecting a less specific or “broader” term) or give up and send free text.

How do we improve the fidelity of shared information?

Some might suggest the answer is to mandate that providers enter patient information using the ordained standards at the point of care.  It doesn't take much mental calculus to see that all this does is force the provider to cope with the same specificity issue, except now they are doing it in their head… which is even more dangerous.

Post-coordination, as discussed in the previous post, allows the data model to embrace differing levels of specificity.  This goes a long way in resolving the issue because it allows for dynamic assembly of information.  It would also accommodate inbound pre-coordinated terms, since most could be rendered into a post-coordinated data set.
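To make this concrete, here is one possible shape a post-coordinated representation could take: an anchor concept plus a set of attribute/value qualifiers. The class name, attribute names and clinical terms are all assumptions made up for this sketch, not an actual standard's model.

```python
# A sketch of a post-coordinated representation: instead of one
# pre-coordinated code, a concept is an anchor plus attribute/value
# qualifiers, so differing levels of specificity can coexist.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PostCoordinated:
    anchor: str                      # base concept, e.g. "heart failure"
    qualifiers: frozenset = field(default_factory=frozenset)

    def is_refinement_of(self, other: "PostCoordinated") -> bool:
        # A term refines another if it shares the anchor and carries
        # at least all of the other's qualifiers.
        return (self.anchor == other.anchor
                and self.qualifiers >= other.qualifiers)

# An inbound pre-coordinated term can be rendered into qualifiers...
specific = PostCoordinated(
    "heart failure",
    frozenset({("chronicity", "chronic"), ("type", "systolic")}),
)
# ...and compared against a less specific assertion without data loss.
broad = PostCoordinated("heart failure")
print(specific.is_refinement_of(broad))  # True
print(broad.is_refinement_of(specific))  # False
```

The point of the sketch is that neither venue is forced to round its term up or down to fit the other's list; each keeps exactly the specificity it actually has.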

Another option would be an intelligent mapping engine that renders pre-coordinated terms into post-coordinated expressions in order to find the best fit (and I know where you can get one…).

The bottom line is that we need to recognize that the reason we should do this is not to satisfy some edict from on high, but rather to provide information that helps the patient, our patient, get better care.

In this post I have been focusing a lot of attention on the core notion of semantic interoperability: the ability to transform concepts from one coding system into another without changing their meaning.  That is because it is most relevant to the mechanisms and approaches we have today.  Even if we are able to get the semantic interoperability mechanism right, that is only part of the puzzle.  There are other aspects of sharing patient data that will be just as relevant once we are good at it.  Let's spend a little time talking about some of those.

Electronic Hearsay

When one venue sends information to another venue there is likely baggage that goes along with it.
This could be:

  • Semantic drift introduced in the mapping
  • Information that the sender had received from another venue
  • Information asserted by the patient based on what they googled the previous week
  • Information introduced by a "work-around" in the source application

This baggage takes the form of misinformation on the receiving end.  This misinformation undermines our ability to trust any assistance the application might render.  In order to mitigate this effect, we need a way to gauge the confidence level of each piece of information, to calibrate uncertainty.

Calibrating Uncertainty

Determining the certainty of each piece of information about the patient will result in better advice for the provider and a natural mechanism that helps us cleanse the patient record of junk information.  The mechanisms we use to calibrate uncertainty will need to examine the information asserted about the patient, assessing contextual contradictions that cast doubt on an assertion and contextual entanglements that reinforce its validity.  Once a piece of information is flagged as uncertain, it should be routed to a human for review and leveraged with caution.
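One toy way to picture such a calibrator: start each assertion at a neutral score, subtract for known contradictions elsewhere in the record, add for corroborating context, and flag anything below a threshold for human review. The codes, weights and threshold below are all arbitrary assumptions for illustration, not a real scoring model.

```python
# A toy confidence calibrator with made-up codes and weights.
# Pairs that cast doubt on each other (contextual contradictions).
CONTRADICTS = {("insulin", "no-history-of-diabetes")}
# Pairs that reinforce each other (contextual entanglements).
SUPPORTS = {("insulin", "type-2-diabetes")}

def calibrate(assertion: str, record: set,
              base: int = 50, threshold: int = 40):
    """Score one assertion against the rest of the patient record.

    Returns (score, needs_human_review)."""
    score = base
    for other in record:
        if (assertion, other) in CONTRADICTS:
            score -= 20   # contradiction casts doubt
        if (assertion, other) in SUPPORTS:
            score += 20   # entanglement reinforces validity
    return score, score < threshold

print(calibrate("insulin", {"type-2-diabetes"}))         # (70, False)
print(calibrate("insulin", {"no-history-of-diabetes"}))  # (30, True)
```

A real mechanism would obviously need far richer context than pairwise lookups, but the shape is the same: score, compare against a review threshold, and route the doubtful cases to a human.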

What should we do with a piece of information if it is determined that it is incorrect or junk?  Should we delete it?  Only if you like deleting over and over again…

Orbiting Junk Data

Since we are living in a “connected” ecosystem, removing the information will not be enough.  Once introduced into the ecosystem, bad information will likely find its way back to a receiving system over and over again.  In order to avoid having the provider delete it repetitively we will need to have mechanisms that shield us from the re-introduction of this data.  This is harder than you might think and might not be possible without the cooperation of the ecosystem itself.
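One conceivable shield, sketched under assumptions: when a human rejects an assertion, keep a tombstone (here, a hash of its normalized content) and silently drop any matching assertion that arrives again. The normalization and fingerprinting scheme is invented for illustration; as noted above, a robust version likely needs cooperation from the wider ecosystem.

```python
# A sketch of shielding against "orbiting junk data": remember what
# was rejected, and filter it out of future inbound feeds.
import hashlib

def fingerprint(assertion: dict) -> str:
    # Normalize key order so cosmetic differences between venues
    # still produce the same fingerprint.
    canonical = "|".join(f"{k}={assertion[k]}" for k in sorted(assertion))
    return hashlib.sha256(canonical.encode()).hexdigest()

tombstones: set = set()

def reject(assertion: dict) -> None:
    """Record a human's decision to delete this assertion."""
    tombstones.add(fingerprint(assertion))

def ingest(assertions: list) -> list:
    """Drop anything a human has already deleted once."""
    return [a for a in assertions if fingerprint(a) not in tombstones]

junk = {"code": "Alticor", "kind": "allergy"}
reject(junk)
# The same junk arriving again from another venue is suppressed.
incoming = [junk, {"code": "Advicor", "kind": "allergy"}]
print([a["code"] for a in ingest(incoming)])  # ['Advicor']
```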

What about that other stuff over there?

When we talk about sharing information we tend to focus on terminological data and results.  No matter how sophisticated healthcare platforms become, there will likely be a need for narrative, unstructured information.  This information has inherent value (some would argue more value than discrete terms) and should also be processed in a meaningful way.  (Processing clinical language into a computable form is a worthy topic that would extend this already overly long post, so I will address it in the future.)

Human Contingency

Even in our utopian future sharing information will not be without exceptions. The mechanism that consumes patient data will need to know when to ask for help from a more sophisticated and expensive mechanism… us.  Since we are counting on this information, the process will have to be quicker than it has been historically.

Sharing Patient Information – Nutshell version

In order to share information in a manner that meets our trust requirement the clinical platform will need to be able to do the following:

  • Support high fidelity semantic interoperability for discrete information
  • Calibrate uncertainty to identify and quarantine junk information
  • Shield the patient record from the re-introduction of junk information
  • Support the ability to consume and use narrative (non-discrete) information
  • Know when it needs help and get that help fast

There is more to say about the 'shared knowledge' pillar when it comes to the other types of information; I will get to those in a future post.  In the next post we are going to talk about "time"…

Please continue to leave your feedback.  I would love to hear your thoughts.
