The Terminology Uncertainty Principle
I am a Star Trek fan (not a Trekkie… I don’t wear a tunic around the office… if I did it would be gold to signify command… but I DON’T). On Star Trek they had a piece of equipment called a ‘transporter’. It was the job of the transporter to teleport a person from point A to point B. It did this by converting a person into a pattern of information and energy (dematerializing them), sending them through a beam, and rematerializing them on the other side, hopefully without turning them inside out.
The idea of interoperability is that you take structured information that is native to one system and convert it to structured information that is native to the receiving system. This is typically done through mapping, where a source code is mapped to the single most appropriate target code. The objective of mapping is to create a picture of the patient in the target system that is as complete and accurate as the picture was in the source system. The challenge is making sure you get it right. The Star Trek writers often had to invent fictional mechanisms to make the show believable. One of these, relating to the transporter, was something called the ‘Heisenberg compensator’. In 1927 the physicist Werner Heisenberg postulated that an experimenter cannot accurately observe a particle without affecting it, and therefore it is not possible to truly know the state of a particle. This assertion went on to be called the “Heisenberg uncertainty principle”. By establishing the Heisenberg compensator, the Star Trek writers were able to conveniently set this aside, allowing the fictional transporter to isolate the state of each particle in the subject’s body and rematerialize them at their target location. How convenient.
Truth is harder than fiction
When dealing with the exchange of patient information in the nonfiction world, we engage in a similar process, and we also worry about turning our subject inside out as a result of uncertainty.
When dealing with interoperability, it is important to be aware of the circumstances that introduce uncertainty into the process and conspire against us.
The interoperability uncertainty principle that I propose is as follows:
When exchanging information between systems, the more you try to read into a surface term the more likely you are to introduce an unintended shift in the meaning of the original information.
Each time we take a patient’s clinical information and convert it to another collection of terminologies, several factors conspire against us. Chief among them are transcription error, contextual ignorance, and granularity.
Transcription error refers to the fact that, due to terminology differences, software faults, and human error, each time you transform an item you run the risk of degrading its meaning. The more items and the more transformations, the more likely a degradation will occur.
Contextual ignorance is the recognition that all terminologies are built on the domain view (prejudices, knowledge, and policies) of the terminology publishers, and that viewpoint is not inherently bound to a given term. This being the case, when a provider selects a term, the selection is based on the term itself, not the context behind it. Therefore, when we convert or map the term to another terminology, we should deal with it in a prima facie manner. To do otherwise presumes contextual knowledge that likely did not exist and could result in transcription error.
By way of example, let’s say that you come across the SNOMED disorder term ‘fracture’ in a patient’s problem list. Before mapping it to the target you consult SNOMED and find that the ‘fracture’ term that was selected was a child of ‘Injuries to the skull’. You have the choice of mapping to a local code of ‘fracture’ or a local code of ‘skull fracture’. Which do you choose?
In another patient’s file you find that they have a severe allergy to ‘sulfa drugs’. When you go to map that to the target terminology, do you look up the ingredients in that class and try to find a class with overlap in the target, or do you just try to find the best match on the term ‘sulfa drugs’? When the admission clerk chose ‘sulfa drugs’, do you think they researched the ingredients and chose that class based on how their terminology provider applied the class to ingredients? Do you think they chose what they thought they heard the patient say? Or did the patient say a single ingredient, and the clerk chose the class based on their clinical knowledge?
The problem in both of these examples is that you have no way of knowing whether the person who selected the code was aware of the context. If this is the case, your best bet is to choose the best prima facie match. This preserves the integrity of the surface information, which is what the provider chose, what the patient saw, and what clinical decision support uses as an entry point.
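To make this concrete, here is a toy sketch in Python of prima facie mapping: match on the surface term the provider actually selected, rather than inferring extra meaning from the source terminology’s hierarchy. The local codes and display names are entirely made up for illustration.

```python
# Prima facie mapping sketch: pick the target whose display text matches
# the surface term itself, without consulting the source hierarchy.
# All codes and terms below are hypothetical.

def prima_facie_map(source_term, target_terms):
    """Return the target code whose display text matches the surface term,
    or None if there is no safe match (better unmapped than guessed)."""
    normalized = source_term.strip().lower()
    for code, display in target_terms.items():
        if display.strip().lower() == normalized:
            return code
    return None

# Hypothetical local terminology: both candidates exist, but the provider
# selected "fracture", so that is what we map -- not "skull fracture".
local_codes = {"L-100": "Fracture", "L-101": "Skull fracture"}
print(prima_facie_map("fracture", local_codes))  # L-100
```

A real matcher would of course normalize more aggressively (synonyms, word order), but the principle stands: the surface term is the input, not the hierarchy behind it.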
Granularity is essentially the degree to which a term specifies the concept it is representing. For example, a drug terminology that describes my prescription as “loratadine” is less granular than one that describes it as “Claritin (loratadine) 10 mg Oral Tablet”. This relationship is also referred to as “broader than” or “narrower than”. When converting between two like terminologies, you often run into a situation where one terminology is more or less granular than the other. When this happens, someone has to make a choice. It is easier to go from a more granular term to a less granular term if the less granular term is the primary defining attribute of the more granular term. Using my previous example, it is not difficult to go from “Claritin (loratadine) 10 mg Oral Tablet” to “loratadine”. It is a perfectly valid target. I lose information in the transaction, but that information is outside the conceptual awareness of the target terminology, which only understands generic drugs. However, if I reverse the flow, going from “loratadine” and selecting “Claritin (loratadine) 10 mg Oral Tablet” because it is the only option I have that represents the primary defining characteristic (loratadine), I have added information that is not based on facts. One could argue that in this case, with loratadine in particular, the guess is harmless. That argument might change if the drug in question were warfarin and I arbitrarily selected a 10 mg strength. Another example is the problem of going from ICD-9 to ICD-10. Both terminologies can represent disorders and both come from the same source, but they express like terms at different granularities, which makes life difficult when you are trying to transition or correlate from one to the other.
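The asymmetry between the two directions can be sketched in a few lines of Python. The drug records below are hypothetical, but they show why mapping down to the ingredient loses detail safely while mapping up fabricates it:

```python
# Granularity asymmetry sketch. The drug record is a made-up illustration.

branded = {"term": "Claritin (loratadine) 10 mg Oral Tablet",
           "ingredient": "loratadine", "strength": "10 mg"}

def down_map(drug):
    """Granular -> less granular: safe, because the ingredient is the
    primary defining attribute of the branded term. Detail is lost,
    but nothing is invented."""
    return drug["ingredient"]

def up_map(ingredient, candidates):
    """Less granular -> more granular: any candidate we pick adds strength
    and form information that was never in the source, so flag it."""
    matches = [d for d in candidates if d["ingredient"] == ingredient]
    if not matches:
        return None
    chosen = matches[0]
    return {**chosen, "granularity_shift": "narrowed", "original": ingredient}

print(down_map(branded))                                      # loratadine
print(up_map("loratadine", [branded])["granularity_shift"])   # narrowed
```

The `granularity_shift` and `original` fields are my own invention, not from any standard; the point is simply that an upward leap should carry a visible warning and the source term along with it.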
This problem with granularity will persist as long as we have more than one system using more than one terminology. The crux of this obstacle is that sometimes you have to make a leap of granularity. Some leaps are safer than others. What is important is that if you have information that is based on a granular leap, you (a) indicate that there was a shift in granularity, (b) let the consumer know the nature of the leap, and (c) preserve the original data for reference. This way, even if you have made a bad leap, you allow the consumer to recover from it.
Design for uncertainty
In a nutshell, the answer to uncertainty is to plan and design for it.
1. Indicate which terms came from elsewhere
2. Preserve the original term for reference
3. Indicate granularity shifts
4. Resist the urge to infer beyond the surface term
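Those four rules can be embodied in the shape of the exchanged record itself. Here is one possible sketch as a Python dataclass; the field names are illustrative inventions of mine, not taken from any standard:

```python
# A record shape designed for uncertainty. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MappedTerm:
    target_code: str                         # best prima facie match locally
    target_term: str
    external: bool = True                    # rule 1: flag terms from elsewhere
    original_code: Optional[str] = None      # rule 2: preserve the source term
    original_term: Optional[str] = None
    granularity_shift: Optional[str] = None  # rule 3: e.g. "broadened"/"narrowed"
    # Rule 4 is behavioral: populate target_* from the surface term only.

item = MappedTerm(target_code="L-200", target_term="loratadine",
                  original_code="RX-1",
                  original_term="Claritin (loratadine) 10 mg Oral Tablet",
                  granularity_shift="broadened")
print(item.granularity_shift)  # broadened
```

With a shape like this, a consumer can always see that a term arrived from elsewhere, inspect the original, and recover from a bad leap.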
For terminology creators, remember a good term adheres to a doctrine of ‘res ipsa loquitur’ – or ‘the thing speaks for itself’.
I hope this has been useful. I am always open to alternate points of view. If you have anything to add or want to argue any of these, please put up your dukes and reply with a comment or an email to me.