Monthly Archives: May 2011

Moving the deck chairs around the Titanic

In a recent post, Tom Beale argued that one of the central planks of good object design is a principle he called FOPP (the fundamental ontology property principle):

As a fundamental principle, all properties defined on any given type in any ontology or information model must have the potential to be true for every instance of this particular type, at some point in its lifetime.

I think, after reflection, I’m going to have to disagree. And not just because of the name; it’s not that I don’t want to have foppish objects. No, it’s more than that. My first issue, which I made as a comment on Tom’s post, is that the definition is still too loose to be of use.

Tom offered, as an example, the property “wingspan” on a class of Animal; this, he claimed, clearly violated the principle. Well, ok, I guess that’s reasonable. So let’s be a bit more challenging. How about the property “legCount”? Is legCount a good one to have on the Animal class? I’m not sure. I sure don’t think Tom’s rule helps me figure it out either. I’m going to start by interpreting “be true” as “apply”. But still – does it have the potential to apply to all instances? Does a snake have 0 legs? Or is it an irrelevant concept on a snake? Does a kangaroo have 2 legs and 2 arms, or 4 legs? I guess we’d need a good ontological definition of legs before we could figure that out. And, btw, we don’t even know what an instance of Animal is – is an instance of Animal a species, or an individual beast? I think that might influence the answer too.
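The ambiguity shows up as soon as you try to write the model down. Here’s a minimal sketch – the class and the representation are invented for illustration, not taken from Tom’s post:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: is legCount a property of Animal at all? And what
# would None mean here - "no legs", or "legs are an irrelevant concept"?
@dataclass
class Animal:
    name: str
    leg_count: Optional[int] = None

snake = Animal("carpet python", leg_count=0)    # or should it be None?
kangaroo = Animal("red kangaroo", leg_count=2)  # or 4, counting the arms?
```

The rule doesn’t tell you which of those choices is right; only the use case does.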

I don’t know the answer, because I think it depends on what you’re trying to do.

And there’s the rub: it depends on what you’re trying to do.

Which is where it suddenly stops sounding so easy (not that it did). So let’s go back to nullFlavor, which is our favourite little poster child for this argument. Now Tom says that he’d like a Quantity class that is used for the following:

  • A – uses Quantities to do statistical analysis;
  • B – represents Quantities in data captured from user application screens or devices in an EHR system;
  • C – uses Quantities to represent lab test reference range data, as found in a typical lab test handbook.

Tom claims that the concept of nullFlavor only applies to #B – because clearly you would never perform statistical analysis on data captured from users. Err. Right. Actually, I asked Tom about this and he said that you’d sanitize the data before analysis (no, but that’s a separate discussion). But still, let’s grant that, and say that therefore we’ll take nullFlavor off this quantity type, so you can define it and use it in the hypothetical absence of uncertainty.

But hang on – what do we do about use case #B now? Well, it pretty much comes down to two different approaches. You can take the HL7 v3 road, and define a “mixin”. That’s a class that extends its parameter type class. When I teach the v3 data types, the concept of a mixin is the single most difficult thing to cover. It’s such a simple concept to describe:

When we use type Quantity here, we’ll just add the nullFlavor to it

Easy to describe… but not at all easy to implement. I’ve never seen quite such a beast. AOP comes close, and C# has something close. Supposedly Ada does too – but I’ll never use that. The rest of us are stuck with real o-o languages where close enough is not good enough (anyone who’s stared cluelessly at an arcane compiler error about parameter type mismatches will know what I mean). In XML it comes naturally – but if you want to model the XML in some class language (UML, say), what are you going to do?

Alternatively, you can do what openEHR does, and wrap the Quantity type in a wrapper class that carries the nullFlavor. See, this is good because it avoids the mystifying questions a mix-in raises, and still means that

When we use type Quantity here, we’ll just add the nullFlavor to it by instead using a thingy that has the nullFlavor and then the Quantity.

All well and good. We now have an implementable solution based on this hypothetical ontologically clean class definition (though we’ve pretty much forgone being able to treat cases which don’t have a nullFlavor and cases which can’t have a nullFlavor interchangeably – this wrapper thing is in the way now). But the clean definition means nothing to me when I implement. Because I can’t go to my customers and say, “Oh, yes, this particular combination of circumstances isn’t something that could potentially be true all the time, so I can’t think about it.” No, indeed. My rule of thumb when programming is that something that happens one in a million cases will arise on a weekly basis across the deployed code base.
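To make the trade-off concrete, here’s a minimal sketch of the wrapper approach. The names (Quantity, NullFlavored) are invented for illustration, not drawn from the openEHR or ISO 21090 specifications:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quantity:
    """The 'ontologically clean' type - no nullFlavor anywhere."""
    value: float
    unit: str

@dataclass
class NullFlavored:
    """Wrapper carrying either a real Quantity or a nullFlavor code."""
    quantity: Optional[Quantity] = None
    null_flavor: Optional[str] = None  # e.g. "UNK" (unknown), "NI" (no information)

# Use case B (EHR data capture): the value may simply be unknown
captured = NullFlavored(null_flavor="UNK")

# Use case C (reference ranges): the value is always present, but the
# wrapper is still in the way if you want to handle both cases uniformly
range_low = NullFlavored(quantity=Quantity(3.5, "mmol/L"))
```

Any code that wants to treat `captured` and `range_low` the same way has to go through the wrapper – which is exactly the interchangeability cost noted above.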

Given Tom’s design, instead of dealing with this cleanroom “Quantity” object, I’m just going to end up holding on to this wrapper class instead, so I can handle the few weird cases where I do positively need this nullFlavor. So much for clean design.

The problem comes from this “doing” thing: clean theory doesn’t make for clean implementations. All you can do is move the deck chairs around the Titanic (or see my second rule of interoperability).

p.s. A note about ISO 21090: there are attributes in there that violate the FOPP principle, and that I would’ve liked to go away (HXIT, for instance). But NullFlavor itself isn’t one of them. Its ubiquitous presence is not an embarrassment to be subtracted away later; instead, it’s a positive statement of value. I understand why people (including Tom) resist that notion, but ISO 21090 does at least offer a single consistent solution to bad data across the whole implementation space. (Some of the particular nullFlavors, on the other hand, clearly violate the FOPP principle, mainly because of modeling laziness.)


What are three things everyone should know about working with HL7 standards?

I got this question through the “Ask an HL7 Question” page. Thanks, Erica.

1. It’s the outcome of a committee process

The HL7 standards are the product of a consensus-based committee process. Lots of people have eyeballed them and argued at length about every single detail. They’re all volunteers, and they have a passionate commitment to making sure that the standards meet the functional requirements that people identify. That’s got its good points – it leads to real quality in those terms. Where it’s not so good is in non-functional requirements, like, “how do you make this thing work”. And it doesn’t lead to a coherent architecture either. The problem with coherent architectures is that they get coherent by rejecting efficient solutions for some things.

So you can place more reliance on the functionality of the standards than you can on the ease of implementing them.

2. Healthcare is really a mess

I see this all the time – Healthcare really is a mess. Everyone is doing everything differently. HL7 can’t change that – we’re not chartered to. So HL7 standards just have to absorb that mess, to allow everyone to do things their own way. What you see in the standard follows from that reality.

No one likes it, but we all get to live with it. Blaming HL7 is like shooting the messenger (yeah, it’s never stopped me before either).

3. It helps to know someone

If you’re going to work with the HL7 standards, you’re going to have questions. The volunteer (or “voluntold”) insiders are imbued with the culture and know the thinking that underlies the standards, but no one has invested the money it would take to make that knowledge explicit. So when implementing HL7 standards, it helps a lot to have access to the insiders. Most of the insiders are pretty open – you can just ask, though the amount of detail you’ll get is somewhat limited. For instance, when someone emails me and asks for a detailed comparison of openEHR archetypes and HL7 templates, along with implementation notes in Java appropriate to their particular circumstances, and explains that they’re not really sure what a RIM or an archetype thingy is, then instead of spending a week or more writing a comprehensive self-explanatory treatise on the matter, I’ll brush them off with references to the various specifications (I get a variation of that question every few weeks).

Simply getting on HL7 email lists and asking questions without context isn’t going to help either – the HL7 email lists are open to all, but exist to run the committees.

If there’s really no one you know, and the economics mean you can’t buy help – I guess you can ask here.

Well, that’s my 3 things. That’s a hard question – I’m not really sure that if I thought about it for another few days, or in another context, I wouldn’t come up with a different list altogether.


Writing a book

Several people have commented to me after reading my blog that I should write a book.

Well, I was. That’s where a lot of this content has come from – from the draft of my book. But books are so last century now. Anyway, I could write and write and write, and then it might not happen, certainly not in a timely fashion. And I might never finish it anyway – there’s a lot of time involved. It’s not like you make a lot of money from a book anyway.

So I’m just getting it all out here now on the blog.

One day, when I think I’ve got enough quality content, and if there are any publishing houses still left, I may turn it into a book. One of the other well known authors in the space has offered to help me out if that day comes (thanks very much).

btw, there are a lot of acknowledgements that I should make. None of the thinking on here has just suddenly appeared – it’s grown over the years through my involvement mainly in the HL7 community, as well as other interoperability communities spanning network protocol toolkits to conformance testing frameworks. My thanks to all that I’ve interacted with. I’m not going to name names, because then I’d miss someone out.

p.s. all these posts in a row? There’s not much to do on these long intercontinental flights…

Healthcare is Special

Healthcare is special. Things that work in other industries won’t work in healthcare.

If I had a dollar for every time I’ve heard that… well, I could be sitting off a beach somewhere, surfing. Though usually, this statement is immediately followed by its denial, that healthcare is not actually special, that every industry thinks it’s special (and, if every other industry thinks it’s special, doesn’t that make healthcare special all by itself?). But for every person who says that, who wishes to claim it’s not true, there’s ten people who, whether they believe it or not, act like it’s true, behave as if its truth is one of the founding principles of their lives.

But is healthcare so special? In fact, just what is healthcare?

There is a wide scope of IT systems and/or applications that may be included under the banner of “Healthcare”:

  • Patient Administration systems
  • Clinical Tracking and Reporting software
  • Clinical Decision Support Systems
  • Financial transactions for payments related to healthcare
  • Population statistics and forecasting software
  • Specialized variants of standard IT infrastructure
  • Patient-centric healthcare data tracking software
  • Bioanalytical programs or frameworks, both in research and in diagnostics

Within this wide scope, several different factors combine to make healthcare different, and potentially special.

Jurisdictional Variability

The first and most important cause of the uniqueness of healthcare is the variability between countries. The most obvious difference between countries with regard to healthcare is the degree of technical sophistication, with rich countries such as the USA and EU countries using high-tech, high-information diagnostics and treatments, whereas these techniques are only slowly becoming available in third world countries.

Another obvious difference is how healthcare is financed. There is huge variation between countries, from relatively commercially based approaches such as the USA’s to highly government-financed systems such as the UK’s. This same variation is seen throughout the whole world, not just in the rich countries. While all countries, whatever their system, share a common interest in tracking and controlling clinical costs, and relating this to population outcomes, the amount of government involvement in healthcare, and the specific details involved, can make a great deal of difference to the kinds of teamwork that are fostered, and the commercial effects that this has on interoperability. For instance, in the UK, the government has long sponsored the notion that each person has their own General Practitioner (GP) who is responsible for their care. For this reason, the government created a specification that allowed for transfer of records between GPs, and this process is gradually catching on. On the other hand, no formal arrangement like this exists in Australia, and while there is sporadic interest in exchange of records between GPs, there is as yet no prospect of a formal process existing, let alone catching on.


Complexity

Another factor that makes healthcare unique is its sheer complexity. Actually, the problem is not so much the complexity of healthcare itself, but that there is no way to cherry-pick the problem space. While it’s a phrase commonly heard in healthcare IT that we should cherry-pick the problem by “picking the low hanging fruit” (hardly an idea unique to healthcare), actual provision of healthcare is not really like that. Imagine, for instance, creating a special clinic that only provided renal services to renal patients, and didn’t provide care for patients with other healthcare problems. The problem is that patients with renal disease often have a number of other significant problems, ranging from depression to cardiac failure. A clinic that couldn’t take these patients would exclude a significant number of patients from care. This would be either a financial problem or a patient care problem, depending on the way the healthcare system is being run. So even specialized clinics must provide general medical care.

The inability to properly control the scope of the problem is a real source of the complexity of healthcare: patients get sick in all sorts of inconvenient and unexpected ways. This is a genuine difference between healthcare and most other domains. For instance, a transportation company does not provide a service that transports anything to anywhere – it can choose to restrict the scope of its service. Generally, healthcare providers do not have this choice. Of course, this is not a problem entirely unique to healthcare; in fact, it’s true to at least some small degree of all industries, even the transportation example above. The more true this is of an industry, the more its problem space will share with healthcare. One obvious example is the defense/intelligence space. Once you start a war, you’re committed to everything that follows, so the scope is extremely hard to limit. There are several overlaps between the informatics and interoperability concerns of healthcare and those of the national intelligence agencies.


Altruism

Another key underlying difference is that there’s always an altruism involved in the provision of healthcare: we’re here to save people’s lives. There’s no way to place a value on a life (and there is much published research on this problem from an economic perspective). Even when we’re just trying to make people’s lives better, we might know how much it costs to make them better, but that is not the same as how much it’s worth. This is true irrespective of how the care is funded. Of course, exactly what “better” means varies wildly from context to context, and even when the meaning is well agreed, the underlying altruism may not always be significant, or even evident; for instance, if a provider refuses to provide care when there won’t be any payment, it’s hard to see altruism at work. But if the provider isn’t paid, they won’t be able to live, and then they won’t be able to provide any care (of course, a surgeon earning millions a year who never provides care at no cost isn’t displaying altruism, but this is exceedingly rare). Note that though altruism is almost always present, it’s usually considered very poor taste to comment on it in operational contexts (and it won’t feature in this blog again!).

A contributing factor with regard to altruism is that healthcare interoperability practitioners are generally underpaid compared to their equivalents in other industries, such as high finance and telecommunications. Even if this isn’t actually true, it’s certainly widely believed, and in this regard, perception is reality. Many people who work in healthcare continue to do so in spite of the apparent gap – they’re in healthcare by choice.

One consequence of this underlying altruism is that things that might otherwise seem simple can be very hard. When a change is proposed, in addition to the question of whether it will save money, and whether it will lead to more profitable business, and whether it’s good and/or interesting for the workers, the question arises as to whether it will be good for patient care. And because the workers are inherently more likely to be driven by altruism to some degree, they’re also (possibly paradoxically) more prepared to consider whether it’s good for themselves or not.

Again, this is not at all unique to healthcare. Most people attribute a degree of altruism to why they do what they do, or more often, to how they do it (for the purposes of this definition, I’m doubtful that lawyers, politicians, and used car salesmen qualify as people). However, altruism is more prevalent in healthcare.

Perverse Economic Incentives

Another feature of healthcare, which is partially derived from the two preceding ones, is the existence of perverse economic incentives in healthcare. These are very commonly encountered, and generally fit one of two patterns: either success leads to less funding due to decreased need, or success leads to uncontained cost due to increased utilization. Of course, perverse incentives exist in all industries where the payment cycles are not properly aligned with the costs, but this problem is particularly pervasive in healthcare because of the previous two issues.


While healthcare is not unique in any single regard, the combination of these factors does make healthcare somewhat unique. In addition, healthcare becomes unique simply because it behaves as if it is unique.

Whether any of these various possible reasons is actually proper justification or not, Healthcare in all its scope and variation is special and different enough from other industries that there are several Standards Development Organizations specially focused on the healthcare domain, and several techniques have emerged initially and distinctly in healthcare, even though their IT characteristics are not restricted to healthcare. Finally, unlike most other industries, the healthcare IT ecosystem is still characterized by a high number of very small specialist companies. In fact, it’s been called a “trillion dollar cottage industry”.

In spite of all this – or perhaps because of it – I love Healthcare Interoperability. As a discipline, Healthcare Interoperability is at the meeting point of IT, Clinical Practice, and Management. In both principles and practice, there’s nothing quite like it. To navigate the stormy waters out there, you need both passion and perspective. And you can be sure of two things: there’s an almost limitless demand for people who know what they’re doing in healthcare interoperability, and you can really make a difference for the better.


Australian HL7 meeting report from Orlando

A requirement of being part of the Australian delegation to the HL7 meeting is that we must submit a report about the meeting to Standards Australia. Then one of the delegation (Heather Grain this time) gathers all our reports into a single report, and it is published on the web.

As far as I know, we’re the only country that does this, and our report serves as the only publicly available summary of HL7 meetings. I know that it’s read by a lot of people from other countries too. You can see the reports here

I’ve just finished writing my report about this meeting. It was a little more challenging than usual – I didn’t feel as though this was a normal meeting. In the fullness of time, it will be made public as part of the larger Australian report at that link.

One of the problems of the larger Australian report is that it tends to get long – it’s pretty hard to summarize a meeting, and there’s plenty going on, and a number of different perspectives to gather together.

Happy reading!

When to use GreenCDA on the wire

GreenCDA is a general approach for using an intermediate, use-case-specific representation to make it easier to produce fully conformant CDA documents. As well as defining this general concept, the greenCDA specification lays down a general exemplar of this approach, using an intermediate modular XML form with accompanying schemas loosely based on the existing CDA implementation guide.

GreenCDA is a useful way to go about implementing CDA – it makes producing CDA easier than writing straight CDA, for an implementer who is not familiar with CDA and who has one specific use case for producing it. There are many variant strategies already in production, using a variety of XML- or object-based technologies, that could be described as greenCDA in this sense – it just makes obvious sense.

However the amount of effort it saves is roughly proportional to the degree to which the use case is fully defined. Let’s take, as an example, the CCD specification. It says (picking a random example):

CONF-145: A problem act (templateId 2.16.840.1.113883. SHALL be represented with Act.

CONF-146: The value for “Act / @classCode” in a problem act SHALL be “ACT” 2.16.840.1.113883.5.6 ActClass STATIC.

CONF-147: The value for “Act / @moodCode” in a problem act SHALL be “EVN” 2.16.840.1.113883.5.1001 ActMood STATIC.

CONF-148: A problem act SHALL contain at least one Act / id.

CONF-149: The value for “Act / code / @NullFlavor” in a problem act SHALL be “NA” “Not applicable” 2.16.840.1.113883.5.1008 NullFlavor STATIC.

CONF-150: A problem act MAY contain exactly one Act / effectiveTime, to indicate the timing of the concern (e.g. the interval of time for which the problem is a concern).

Given this information, we can start simplifying the CDA model – dropping the reviled classCode, moodCode, and nullFlavor attributes, since their values are fixed, and renaming the act to “problemAct”, which is much more recognizable. And so forth. Maybe we restrict effectiveTime to TimingConcern : IVL_TS. Maybe. All this is much simpler for implementers. But when the CCD specification says this:

CONF-153:   The target of a problem act with Act / entryRelationship / @typeCode=”SUBJ” SHOULD be a problem observation (in the Problem section) or alert observation (in the Alert section, see section 3.8 Alerts), but MAY be some other clinical statement.

well, that’s not so useful. There’s nothing you can do to simplify that, unless you make some new rules about what the referenced clinical statement can say – but then you’d be more specific than the CCD specification, and the result could only be used in fewer circumstances.
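The fixed-value simplification can be sketched as a purely mechanical expansion from the green form back to full CDA. This is an illustration only – the element names and the templateId root used here are placeholders, not the actual greenCDA schema:

```python
import xml.etree.ElementTree as ET

def expand_problem_act(green: ET.Element) -> ET.Element:
    """Re-inject the values the CCD template fixes (CONF-146/147/149)."""
    act = ET.Element("act", classCode="ACT", moodCode="EVN")
    # templateId root below is a placeholder, not the real CCD OID
    ET.SubElement(act, "templateId", root="2.16.840.1.113883.x.y")
    ET.SubElement(act, "code", nullFlavor="NA")
    for gid in green.findall("id"):          # CONF-148: at least one id
        ET.SubElement(act, "id", attrib=gid.attrib)
    et = green.find("effectiveTime")         # CONF-150: optional timing
    if et is not None:
        ET.SubElement(act, "effectiveTime", attrib=et.attrib)
    return act

green = ET.fromstring('<problemAct><id root="1.2.3.4"/></problemAct>')
full = expand_problem_act(green)
```

The green producer never sees classCode, moodCode, or the code nullFlavor at all; they only exist in the expanded form.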

In general, greenCDA allows you to trade between usability and re-usability. The more different kinds of CDA you produce (i.e. the more use cases you support), the less useful greenCDA will be, as it will start fractionating your code. For this reason, greenCDA is not a long term solution to making CDA easy to use (and we are starting to consider other strategies for CDA R3).

The next question that arises is when it would be a good idea to actually exchange greenCDA between trading partners instead of exchanging real CDA documents. This is a hotly debated topic in the community at the moment. At present, the official statement is that we are awaiting more implementation experience before making a formal decision, and the debate in the Structured Documents Committee this week only confirmed that we need to await more practical experience before making a decision.

The problem is that ONC might make its own decision – in fact, probably will – before HL7 is in a position to make one based on experience. So what factors are inputs into the question of whether to use greenCDA forms on the wire?

  1. GreenCDA is a mechanism to produce CDA documents. Although the transforms are potentially applicable in reverse, there’s no way to be sure that clinically important information isn’t lost in the transform; the greenCDA specification itself comments on this. If you exchange the greenCDA form, you can be sure that nothing is being lost in the transform. I think this is the strongest reason to exchange greenCDAs – to make it clinically safe to use the simplified form for reading too.
  2. You need substantial control over the community to make this work (and a narrow use case, as discussed above). Since the wire form is use case specific, the document cannot be shared with users that don’t implement that particular use case. But there are communities where the infrastructure to do this is already in place. (in a variation to this, the community may provide a perimeter agent that transforms the greenCDA to full CDA for sharing with the wider community, such as a national EHR system)
  3. It’s evident that the interest in using greenCDA formats is associated directly with use of CDA as a message alternative. CDA is first a document, with author and receiver responsibilities around attesting the *narrative* and displaying it properly. It’s evident that greenCDA exchange is data-centric, not narrative-centric. Yet greenCDA is logically considered “CDA exchange” – just a different form. So a community should only agree to exchange greenCDA if the *document*ish nature of the exchange is well understood and described as part of the agreement. This is important because the nominated author of the document is on the hook for the attested narrative, but this may *never* be built, or built much later. Implementation convenience cannot trump clinical safety.
  4. There’s a portion of the HL7 community who believe that it is important for HL7 to protect users from themselves. These stakeholders naturally hold that user communities cannot be trusted to make appropriate judgements concerning the risks and benefits of greenCDA, so we (HL7) should not allow them to use it at all. Supporting this is the fact that in the few cases I’ve seen, the document aspects of greenCDA trading agreements have been left unsaid. This is where the hotly contested part of the debate arises – to what degree is HL7 responsible for protecting users from misusing its standards (and to what degree can we)? It’s not a subject for me to resolve here, but my personal opinion is there’s no reason to make a ruling in this case.
  5. You have to be really sure of your use case to be sure of making the right trade-off between use and re-use. Of course, as Doug Fridsma often says, the only way to find out whether you’ve got it right is to try it out and see what happens.

Here’s to trying things out 😉

Establishing Compromise

Interoperability is a two step process – first you have to get people to understand each other, and then you have to get them to agree with each other.

I put these in this order for a reason. It’s certainly possible for two people to agree with each other without having any understanding of each other. In fact, this is regularly seen in HL7: it’s called “heated agreement”.

And the problem is, it’s accidental; it’s not based on a proper knowledge of the problem at hand, and while the two (or more) people might actually agree with each other on a particular matter, it’s almost certain that their apparent disagreement is due to speaking different languages – and that the different languages will lead to actual disagreement on the next subject over.

So you have to get actual understanding between people. And often, that’s enough; as soon as people understand each other, they’ll come to agree with each other spontaneously.

But sometimes this doesn’t happen. That’s when the going gets tough.

Mostly this happens when people have already built systems (applications, policy frameworks, books, or even blogs). People can change their minds on a matter pretty quickly (in some cases, years of contention and confrontation can be resolved in just 30 seconds of discussion, when the right words are said). But when they’ve already built systems, and the systems are based on differing solutions to the same problems, it’s quite likely that changing the systems will be very costly; too costly to contemplate.

Either people are going to have to agree to disagree, or they’re going to have to compromise. And compromise means that one or more people are going to be spending a lot of money.

In these cases, it’s hardly surprising that compromise is hard to achieve, and that getting there features bad feelings, brinkmanship, and diplomacy. Compromise is expensive, and therefore valuable.

Or you can do what HL7 increasingly does: build a complicated framework that allows both solutions to be described within the single paradigm, as if there isn’t actually contention that needs to be resolved, or that this will somehow resolve it. This is expensive – but not valuable; it’s just substituting real progress with the appearance thereof.

Note: I think this is one of the “beauties” of v2 – there’s nowhere to hide ongoing disagreements but right there in the mess of segments and fields.

The Wreck of HL7

A wonderful contribution from Jean-Henri Duteau:

The legend lives on from ANSI on down
of the SDO they called “HL7”.
Health Standards, it is said, never gives up her dead
when the skies of November turn gloomy.
With a load of designers twenty-six thousand tons more
than when HL7 started early,
that good SDO was a bone to be chewed
when “Semantic Interoperability” came early.

The SDO was the pride of the American side
coming back from some place in Ann Arbor.
As SDOs go, it was bigger than most
with a crew and CEO well seasoned,
concluding some terms with a couple of projects
when they left fully loaded for Orlando.
And later that night when the supper bell rang,
could it be the north wind they’d been feelin’?

The wind in the wires made a tattle-tale sound
and a wave broke over the railing.
And ev’ry man knew, as the CEO did too
’twas witch of Interoperability come stealin’.
The dawn came late and the breakfast had to wait
when Semantic Interoperability came slashin’.
When afternoon came it was freezin’ rain
in the face of a hurricane west wind.

When suppertime came the old Board chair came on deck sayin’.
“Fellas, it’s too rough t’feed ya.”
At seven P.M. a main hatchway caved in; he said,
“Fellas, it’s bin good t’know ya!”
The CEO wired in he had water comin’ in
and the good SDO was in peril.
And later that night when ‘is lights went outta sight
came the wreck of HL7.

Does any one know where the love of God goes
when the requirements turn the minutes to hours?
The searchers all say they’d have made Normative
if they’d put fifteen more miles behind ‘er.
They might have split up or they might have capsized;
they may have broke deep and took water.
And all that remains is the faces and the names
of the wives and the sons and the daughters.

In a musty old hall in Ann Arbor they prayed,
in the “Health Informatics’ Cathedral.”
The church bell chimed ’til it rang twenty-nine times
for each man on the HL7 Board.
The legend lives on from ANSI on down
of the big SDO they call “HL7”.
“Health Standards” they said, “never gives up her dead
when Semantic Interoperability come early!”


NullFlavor follow up

The last post explains NullFlavor. It’s not a short explanation. But incomplete data quality is like that. Half the explanation is about the problem, and half about the solution.

NullFlavor is controversial. The biggest critic is Tom Beale of openEHR (Tom’s a regular reader of the blog. I look forward to a long comment below ;-)). And there are certainly problems with the way HL7 does it in v3.

Note that there’s not actually much difference in outcome between the way openEHR handles nullFlavor and the way v3 does it – it’s more a matter of philosophy. The only actual model differences are:

  • The use of nullFlavors on interval boundaries
  • v3 allows individual values inside collections to have nullFlavors (but also puts more useful values inside collections)

With regard to philosophy, there’s a lot more difference. Tom won’t put nullFlavor on the base data type because that completely changes the nature of the base data types, and makes them unsuitable for use in a system. It certainly makes them harder to use, but HL7 takes the attitude that this is a matter of semantics – so you just have to do it, and anyway, HL7 is modeling data for interoperability, not for systems. (But this is a head-in-the-sand approach – the datatypes bleed straight into system design, and all sorts of people are trying to use ISO 21090 for system design – which is why Tom and I will eventually get around to publishing a “system implementation profile” for ISO 21090 dealing with issues like this for system design).

openEHR puts the nullFlavor on the element, side by side with its data. The intent is similar. If you tried doing this in HL7 v3, it would look real weird, and implementers would hate it more than they hate the current approach (by a factor of lots). The nullFlavor list is much more restricted in openEHR – only dealing with missing data. Incomplete data is dealt with in different ways.

I think the openEHR approach works well for EHRs. Not so much for v3. I wouldn’t like to see a v3 where we did things that way. Not that I’m real comfortable with how we’ve done things in v3. If I ever did v3 over again, I’d really like to put nullFlavor up against the wall – I just don’t know if I would be able to.

In v2, we didn’t have a systematic approach to nullFlavor. There have been several requests to add it in – one serious project proposal which consumed considerable committee time. But it’s just too big a change to introduce in a standard where we are committed to backwards compatibility.

Really, in a perfect world, nullFlavor wouldn’t be required. But it isn’t a perfect world. And our models sure ain’t perfect.


One of the most obvious design features of HL7 v3 is that every class and data type includes the property “nullFlavor”, which provides a reason why the data is missing, incomplete, or an improper value. This post is my tutorial documentation for NullFlavor.

Note: The way HL7 uses NullFlavor is quite controversial. I’m not going to discuss the controversy here, and I’d ask my many friends who read this blog and disagree about nullFlavor not to comment below. This is just to explain how it’s supposed to work – let’s not complicate matters (I promise to make a post later where everyone can comment to their heart’s content).

Incomplete Data

Incomplete, erroneous and uncertain data is ubiquitous in healthcare. Everywhere you turn, you find poor quality data. There are a number of reasons why incomplete, partial and uncertain data arises. For example:

  • Provisional diagnosis is not confirmed yet
  • Patient is unconscious, or unwilling to provide information
  • Wrong data was entered into the system, which caused some actions. The wrong data must be retained but clearly labeled as wrong
  • Actual value is not an allowable value (i.e. the patient has a healthcare problem for which no existing concept is defined)
  • The patient had an adverse reaction to something, but it’s not certain what stimulus caused it

Of course, this problem is not unique to healthcare; poor quality data can be found in all industries, especially in the enormous piles of existing data that cannot be improved. However there are a few things that distinguish healthcare in this respect:

  • Care – and the associated clinical processes – will go on whether the information is good, available or not
  • In most cases, the logical response to poor data is to improve the business processes, but because of the previous point, this is often not possible in healthcare (nor does it help with existing data, of course)

So when it comes to healthcare interoperability, we are interested in both the degree to which data is missing, incomplete, and/or erroneous, and also why. These three categories represent very different aspects of data unreliability, but they have overlapping impacts on the data itself.

Incomplete or erroneous data arises in many different contexts. For instance, care may be provided to unconscious, unidentified patients, or patients may refuse to provide a variety of different pieces of information. In addition, clinical processes always occur in the absence of some information; the missing information may be slowly filled in, or may never be acquired (e.g. on the basis that the benefit of acquiring the information doesn’t justify the cost). There’s a wide variety of other clinical reasons, and some administrative and infrastructural issues as well. For instance, access to a piece of information may be denied due to security/privacy reasons, the record may have been lost due to some accident or confusion, or the actual value may not be a proper value in the given context.

Sometimes data is lost or damaged by some transfer error, usually when information is transferred to the wrong patient, though there are many other kinds of transfer errors and other reasons for erroneous data (and much work on this matter). In spite of much effort, these errors have only been reduced, not eliminated, and they occasionally lead to wrong decisions being made about how to care for patients. Because this can happen, wrong data needs to be preserved in the record, but clearly labeled as erroneous. On the other hand, real actions may have been taken as a consequence of the wrong data. Though the grounds for the action may have been erroneous, the fact of the action and its consequences are not.

At this point, alert readers will ask, what does “the actual value is not a proper value in this context” mean? This is a concept that arises specifically in the v3 data types, though the problem it represents is ubiquitous. To explain it, we need to establish some definitions:

  • Value Domain: The set of possible values that an attribute can have. The possible values are firstly established by the definitions of the data types, and then may be further constrained where the attribute that uses the type is defined, by fixing the values of the meta-attributes, or making other constraints
  • Proper Value: A subset of the value domain, excluding some values that are allowed by the value domain but are considered “improper”, in order to help implementers differentiate between allowable error conditions and non-erroneous data. The data types define an “improper” value as any value that has a nullFlavor
  • Actual Value: The correct value for the attribute. While the value may not be known, that doesn’t mean that there is no actual value
  • Represented Value: The value as shown in the instance of the data type

Of course, the optimal case is that the represented value is the same as the actual value. However for a variety of reasons, this isn’t always possible. For instance, if the value is not known, the represented value has some kind of nullFlavor, but the actual value is (probably) not. (There’s also an advanced use case: In some cases, the instance may provide enough information to allow the reader to calculate the actual value based on information that the reader of the instance knows – most often used for prescriptions, where the actual value of the prescription depends on body mass or some other biophysical value).
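That advanced use case can be sketched in code – a hypothetical dose whose actual value is derived from body weight. The field names and expression format here are my own invention for illustration, not anything defined by the data types:

```python
# Sketch (assumed names): a dose whose actual value must be derived from
# body weight. The represented value carries nullFlavor "DER" plus an
# expression; the reader computes the actual value once the weight is known.

def actual_dose_mg(expression, weight_kg):
    """Evaluate a simple 'N mg/kg' expression against a known body weight."""
    mg_per_kg = float(expression.removesuffix(" mg/kg"))
    return mg_per_kg * weight_kg

represented = {"nullFlavor": "DER", "expression": "2.5 mg/kg"}

# The receiver knows the patient weighs 40 kg, so the actual value is computable:
dose = actual_dose_mg(represented["expression"], 40)
print(dose)  # 100.0
```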

Sometimes, if the actual value isn’t a proper value, there is a problem with the definition of the proper values, which can and does happen. A typical example is where a concept reference allows a small set of concepts to be referenced, and the correct reference is not in the small set. Another example is “trace detected”, which is a common laboratory measurement result, and “water to 100mL”, which is found in formulations. There are multiple perspectives from which to judge what is a “proper value”; in appropriate contexts, these are proper values in a clinical sense, but here the perspective is an informational one: these values are usually measurements (a number/unit pair), but “trace detected” is not a measurement.

These definitions are specific to the HL7 data types, but the concepts apply in any interoperability context: the notions of value domain, proper value, actual value and represented value arise from the problems that interoperability is trying to solve, though there is wide variation in how the problems are solved.

Handling Incomplete Data

The first question with regard to handling incomplete or partial data is the general framework approach: should it be done by exception, or as a matter of infrastructure?

In the by exception approach, whenever an element is identified that may be unknown, or have some unexpected or improper value, a specific design element is added to the model to cater for this. Consider an Example class that has a Gender attribute, with the human genders M and F as possibilities, plus a GenderException attribute. In 99.9% of cases, the gender of a patient will fall into these two categories. But not all; there are several reasons why a patient may not be either male or female:

  • The information is not available
  • The patient is a newborn with a genetic error, sexually ambiguous, and the clinician and parents have not (yet) chosen what gender will be assigned
  • The patient is undergoing gender reassignment treatment
  • In some social contexts, there are other possibilities than male and female

When these circumstances are encountered, the Gender attribute would be left as null (which requires that it is allowed to be null), and the GenderException attribute would contain some text explaining why it is null.
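A minimal sketch of that “by exception” shape – class and field names here are hypothetical, not from any HL7 model:

```python
# Sketch of the "by exception" approach: a nullable main field plus an
# ad-hoc companion field explaining why it is null.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    gender: Optional[str] = None            # "M" or "F", or None when exceptional
    gender_exception: Optional[str] = None  # free text explaining why gender is None

# The newborn case from the list above:
rec = Example(gender=None, gender_exception="newborn; gender not yet assigned")
```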

The biggest problem with this approach is its unpredictability. It’s really hard for the modelers – the people who build the models above – to account for all the exceptions that arise. This is true whether it’s a standard or just a one off project in a local institution – exceptions are often discovered very late in the development cycle. So these Exception attributes are not always present where they may apply. In some cases – perhaps many cases – the presence or absence of the exception field may not be that important. In the case above, if the gender is null, we know it is unknown, whether we know any reason or not. However as the complexity of the element’s data type grows, the potential interaction between the two attributes grows, and the presence or absence of the exception attribute becomes more important – particularly when both are populated.

Another problem with this “by exception” approach is that handling these cases on an ad-hoc basis produces a variety of different approaches in the exception field – plain text, different sets of codes, some mix of both. The inconsistencies this leads to make it very hard for any of the implementations to take a consistent approach internally when handling the data.

Finally, handling incomplete data with this by-exception, ad-hoc approach tends to produce best-case modeling, where whoever is designing the models takes an optimistic view of the data. People generally work this way: as we learn, we build internal models of use that are flexible and handle failures gracefully, usually through additional, unplanned communication.

A typical case is with clinical models. Ask any clinician to describe what information is needed to correctly represent and communicate a simple clinical procedure – say, a blood pressure measurement – and they’ll automatically tell you that you need systolic and diastolic pressures, posture, and possibly some note about the state of the patient (the longer they think about it, the more they come up with; the openEHR archetype had 19 data elements last time I checked). For 99%+ of blood pressure measurements, the simple 3–4 data element model works fine. And in a paper form, that’s good enough. Say you created a standard form, with slots for systolic and diastolic, and a series of boxes to tick for posture, with the six most common postures. That’s ok because when the patient is a 3 year old child who’s already rather panicky, and gets hysterical when the blood pressure measurement is tried, you’d usually just write some note describing why the measurement failed next to the slots on the paper. But computers don’t work like that: if it’s not described in the model, it most likely won’t be possible. Even if it’s possible in some system, it certainly won’t be possible to transfer between systems. So as well as thinking about how things should be, you also have to think about all the ways that they might not be how they should be.

Dates are another great case in point. Most people know their date of birth, or have a designated fictitious date of birth that they use (say, a refugee is assigned one upon being admitted as a citizen of another country if their original date of birth is unknown). And all sorts of payment forms and workflows depend on knowing the age of the patient, so systems very often make date of birth a mandatory field. But some people don’t know their date of birth, or won’t say what it is (maybe unconscious or just not co-operative). So what happens in these cases? Usually, the clerical staff will just “know” that a special date is the “unknown” birth date. Mostly it’s the date 01-01-01, but systems vary. When migrating data between systems, or setting up data interchange, there’s always confusion around these special dates.
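A sketch of the kind of translation these magic dates force during migration – the sentinel values and helper here are assumptions for illustration, not a standard convention:

```python
# Sketch: translating a "magic" unknown birth date into an explicit marker
# during data migration. The sentinel set is assumed for this example;
# every system has its own.
from datetime import date

UNKNOWN_SENTINELS = {date(1901, 1, 1), date(1, 1, 1)}

def migrate_birth_date(dob):
    """Return (value, null_flavor): the real date, or (None, 'UNK') for sentinels."""
    if dob in UNKNOWN_SENTINELS:
        return None, "UNK"
    return dob, None

print(migrate_birth_date(date(1901, 1, 1)))   # (None, 'UNK')
print(migrate_birth_date(date(1980, 5, 17)))  # (datetime.date(1980, 5, 17), None)
```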

One problem here is that by the time you account for all these exceptions, the model will no longer be simple or well designed. Real life is Yuck. And the effect of not catering for exceptional data may be to prevent communication altogether. What else can you do – invent a legal value?

For these reasons, a consistent design approach is much preferred when designing models of healthcare. The approach taken in the v3 data types is to handle unknown, incomplete, or unreliable data with a single consistent approach where every value may have a proper value, or a nullFlavor of its own. In practice, this means that we declare a type called “ANY” with an attribute called “nullFlavor”, and all the types derive from ANY so that they have the nullFlavor attribute.
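Loosely, the pattern looks like this – a Python sketch, not the normative type definitions (PQ is the v3 physical quantity type, but the shape here is deliberately simplified):

```python
# Sketch of the "everything derives from ANY" pattern: every data type
# inherits a nullFlavor slot, so any value can be proper or exceptional.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ANY:
    nullFlavor: Optional[str] = None  # e.g. "NI", "UNK", "NA", or None

@dataclass
class PQ(ANY):  # physical quantity: value + unit, plus the inherited nullFlavor
    value: Optional[float] = None
    unit: Optional[str] = None

bp = PQ(value=120.0, unit="mm[Hg]")  # a proper value
missing = PQ(nullFlavor="UNK")       # an exceptional value: applicable but not known
```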

The most important feature of ANY is nullFlavor; I’m going to ignore the other attributes here. The nullFlavor attribute is an enumeration which may be null, or have one of the following values:

(Indentation indicates specialisation: each nested flavor is a more specific kind of its parent.)

  • NI (no information): The value is exceptional (missing, omitted, incomplete, improper). No information as to the reason for being an exceptional value is provided. This is the most general exceptional value. It is also the default exceptional value
    • INV (invalid): The value as represented in the instance is not a member of the set of permitted data values in the constrained value domain of a variable
      • OTH (other): The actual value is not a member of the set of permitted data values in the constrained value domain of a variable (e.g., concept not provided by required code system)
        • PINF (positive infinity): Positive infinity of numbers
        • NINF (negative infinity): Negative infinity of numbers
      • UNC (unencoded): No attempt has been made to encode the information correctly but the raw source information is represented (usually in originalText)
      • DER (derived): An actual value may exist, but it must be derived from the provided information (usually an expression is provided directly)
    • UNK (unknown): A proper value is applicable, but not known
      • ASKU (asked but unknown): Information was sought but not found (e.g., patient was asked but didn’t know)
        • NAV (temporarily unavailable): Information is not available at this time but it is expected that it will be available later
      • NASK (not asked): This information has not been sought (e.g., patient was not asked)
      • QS (sufficient quantity): The specific quantity is not known, but is known to be non-zero and is not specified because it makes up the bulk of the material: ‘Add 10mg of ingredient X, 50mg of ingredient Y, and sufficient quantity of water to 100mL.’ The null flavor would be used to express the quantity of water
      • TRC (trace): The content is greater than zero, but too small to be quantified
    • MSK (masked): There is information on this item available but it has not been provided by the sender due to security, privacy or other reasons. There may be an alternate mechanism for gaining access to this information. Warning: Using this null flavor does provide information that may be a breach of confidentiality, even though no detail data is provided. Its primary purpose is for those circumstances where it is necessary to inform the receiver that the information does exist without providing any detail
    • NA (not applicable): No proper value is applicable in this context (e.g., last menstrual period for a male)

Null is not Null

There are a number of things to say about this table. The first and most important thing is that “nullFlavor” is not the same as “null”. A “null” pointer is a familiar concept to programmers: a pointer with a value of 0, that points to nothing. By definition, if a value is null, there is nothing else known about the object. This doesn’t apply to nullFlavor (you can tell by the definitions of the enumerations – particularly invalid, other, unencoded, and derived). In such cases, even if nullFlavor has a value, the other attributes of ANY and its descendants may have values (which also follows logically from how they are defined in UML). For example:

 <CD nullFlavor="OTH" codeSystem="SNOMED">
   <originalText>Burnt Ear</originalText>
 </CD>
This is a coded value, with a nullFlavor, and also a codeSystem and an originalText. (In fact, the discussion of “OTH” in the CD section requires a codeSystem attribute to be present if the nullFlavor attribute has a value.)

So if nullFlavor is different to null, why give it such an appallingly confusing name? Well, though nullFlavor is not the same as “null”, the behavior of types with nullFlavors is very similar to null in SQL and OCL. In addition, the data types were defined by HL7, and in the HL7 context, there is strong internal continuity between the concept of “null” in ISO 21090 and earlier HL7 data type definitions. (And also because “ExceptionalValueStatusFlag” is a pig to say – and nullFlavor is a word that’s used a lot).

For instance, in SQL, given a table like this:

  Key  Name       Count
  1    USA        0
  2    Canada     1
  3    Australia  Null

The SQL statement “Select count(*) from test where count != 0” will return a value of 1, and both the SQL statements “Select count(*) from test where count = null” and “Select count(*) from test where count != null” return 0. This is because null values are never equal in SQL. Like SQL null values, v3 data types that have a nullFlavor are never equal, even if all other attributes have the same value.
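A sketch of that equality rule – illustrative only, not the full ISO 21090 equality definition:

```python
# Sketch of the "null values are never equal" rule: two values compare
# equal only when neither carries a nullFlavor, mirroring SQL NULL semantics.
def values_equal(a, b):
    if a.get("nullFlavor") or b.get("nullFlavor"):
        return False  # like SQL NULL: never equal, not even to itself
    return a == b

known = {"value": 5, "unit": "mg"}
unknown = {"nullFlavor": "UNK"}

print(values_equal(known, {"value": 5, "unit": "mg"}))  # True
print(values_equal(unknown, unknown))                   # False - null never equals null
```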

Note that in select situations, values that have a nullFlavor may be known to be unequal; for instance, two values with nullFlavor PINF (positive infinity) and NINF (negative infinity) are clearly not equal, even if we cannot say that two values that both have nullFlavor PINF are equal. The very specific uses of infinity in healthcare data are discussed in chapter X.

In OCL, null values propagate through an expression. The ISO 21090 data types are expected to work in a similar fashion: in the notional expression (in some OCL-like language) “CD.displayName.language.equal(x)”, if CD is null, rather than causing a ‘Null Pointer Exception’ or similar, the null value propagates through the expression, and the result will be null, whatever the value of x is.

So it’s important to understand that in v3 contexts, a data value that has a nullFlavor is called “null”, and that this is not the same as “null” in a programming language: a data value with ANY nullFlavor may have any other attribute populated as well (though a lot of combinations wouldn’t make sense). Semantically, null in a programming language sense (or xsi:nil=”true” in an XML context) is the same as a value with nullFlavor=”NI” and no other attributes valued: either simply says, “We have no information, and no idea why we have no information”. In fact, the rule in the previous paragraph about how the null value propagates through an expression is no different to saying that logically, when accessing a property that is null, you treat it as if it existed with a nullFlavor (which exact nullFlavor varies, and is driven by the semantics. For instance, if the data value is not applicable, then it may reasonably be inferred that all the attributes of the data value are also “not applicable” rather than merely having no information about their state; however there is a great deal of ambiguity in this area; the data types do not make any rules about which nullFlavors should be used, only that one generally should be used).
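That propagation behavior can be sketched as follows – a toy wrapper, not any real implementation:

```python
# Sketch of null propagation: reading a property of a null-flavored value
# yields another null-flavored value instead of raising an exception, so
# expressions like cd.displayName.language degrade gracefully.
class NullAware:
    def __init__(self, nullFlavor=None, **props):
        self.nullFlavor = nullFlavor
        self._props = props

    def __getattr__(self, name):
        if self.nullFlavor is not None:
            return NullAware(nullFlavor=self.nullFlavor)  # propagate the flavor
        return self._props.get(name, NullAware(nullFlavor="NI"))

cd = NullAware(nullFlavor="NI")
result = cd.displayName.language  # no exception raised
print(result.nullFlavor)  # NI
```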

NullFlavor Enumeration

The second thing to say about this table is that while the values of the NullFlavor enumeration are presented as a simple list, the meaning is more complex – as in a proper terminology, some of the enumeration values subsume others in meaning. So while we can say that OTH <> INV, if the question is, is this value “invalid” (is its nullFlavor “INV”), then this is also true if the nullFlavor is OTH, since OTH implies INV. This is a common feature of enumerations in v3, and also found in healthcare more generally. Another common feature in healthcare is the quality of the definitions: they sort of make sense when you first work with them, but the more data coding you do, the harder it becomes to sift through them. Many of the definitions are particularly opaque until the semantics of the other data types are understood. But even after they are understood, ambiguities remain. For instance, exactly when does something become “not applicable” rather than merely “unknown”?
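The subsumption test can be sketched with a simple parent map built from the table above (the hierarchy mirrors the published NullFlavor vocabulary; the code itself is illustrative):

```python
# Sketch of nullFlavor subsumption: the enumeration is a hierarchy, so
# asking "is this value invalid?" must also match specialisations of INV.
PARENT = {
    "INV": "NI", "UNK": "NI", "MSK": "NI", "NA": "NI",
    "OTH": "INV", "UNC": "INV", "DER": "INV",
    "PINF": "OTH", "NINF": "OTH",
    "ASKU": "UNK", "NASK": "UNK", "QS": "UNK", "TRC": "UNK",
    "NAV": "ASKU",
}

def is_a(flavor, ancestor):
    """True if `flavor` equals `ancestor` or is a specialisation of it."""
    while flavor is not None:
        if flavor == ancestor:
            return True
        flavor = PARENT.get(flavor)
    return False

print(is_a("OTH", "INV"))  # True - OTH implies INV
print(is_a("UNK", "INV"))  # False - unknown is not a kind of invalid
```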

Some of the nullFlavors in the table apply to most contexts in which a data type is used. For instance, it’s easy to see how “unknown” or “not applicable” might be used in most places where a data element could be used; others are rather more specific. For instance, it wouldn’t make sense to use positive or negative infinity anywhere but where a numerical value is expected. This is a feature of a general solution to the problem of incomplete or partial data: handling all the different cases in a single infrastructure provides consistency in one way at the cost of allowing inconsistency in other ways. As a consequence, v3 makes a number of rules about the contexts in which some of the nullFlavors may be used.

So nullFlavor is a very strong design pattern in v3: every data type carries a nullFlavor, and when you are working with data types you must always check the nullFlavor. Just as checking for null becomes a reflexive habit when programming in Java, checking the nullFlavor must become a reflexive behavior when working with the data types. The consistent approach to incomplete or partial data that nullFlavor represents certainly carries a cost, but the payoff, in terms of consistency and clinical safety, is considerable.

Like all standards, the nullFlavor approach has grown in complexity, and some of the particular nullFlavors reflect weird use cases that normal implementers will not encounter. In general, though you must always check for the presence of a nullFlavor, the particular kind of nullFlavor doesn’t really matter in most cases – it’s just one more reason the data is missing. When the workflow makes it matter, then it will make sense to check for particular nullFlavors. However there are three general cases where the type of the nullFlavor does matter:

  • OTH/UNC on coded values
  • TRC/QS/DER on measurements
  • PINF and NINF on interval boundaries

Many implementers look at the NullFlavors above (the explanation is certainly long….) and get filled with an urgent desire to simplify it. That makes sense – it’s a typical response to a mature standard, and this part of v3 might be called more “mature” than others. And feel welcome to simplify it – but my advice is to keep the consistency and ubiquity of the nullFlavor approach, however you handle incomplete or partial data.

End Notes:

  • All v3 classes have nullFlavor as well. There are some pretty subtle things about that, and I don’t think anyone has ever written it up – I’ll try and get to that.