Monthly Archives: March 2013

Australian FHIR Connectathon and Tutorial

We’ll be holding a FHIR connectathon here in Australia as part of the IHIC 2013 – International HL7 Interoperability Conference in Sydney in late October 2013 (around 28-30).

This is an opportunity for Australasian implementers and vendors to get practical experience in FHIR. Here’s why you should consider attending:

  • Find out what all the excitement is about
  • Get a head start building FHIR into your products
  • Get a real sense of what FHIR is good for, and what it isn’t
  • Help ensure that FHIR meets real-world Australasian requirements
  • Be a recognised part of the FHIR community
  • Connectathons are really fun

Realistically, this is likely to be the only connectathon held here in Australia (at least, the only one prior to the publishing of FHIR as a DSTU). The connectathon is focused around actual exchange of content, not theory (there is a place for theory, but the connectathon is not part of it). So it’s really suitable for technical staff, though we have had a few non-technical staff brushing off the cobwebs on their development skills just so that they can participate in the HL7 connectathons.

FHIR connectathons always start with exchanging patient information, and then we build on that depending on the interests of the actual participants. We’ll be putting out a call for expressions of interest in a few months’ time, after which we’ll start clarifying what our actual scenarios will be. However, I expect that MHD (the mobile-friendly version of XDS) is likely to be in scope.

In about July/August I’ll hold a 1 or 2 day tutorial here in Melbourne looking at FHIR in depth, with a focus on implementation issues. This tutorial will start with a general review of FHIR, then look in depth at the FHIR reference platforms, how to use HTTP tools to exercise and debug interfaces, how to convert between v2 and FHIR, and look in detail at various models for implementing a server.

If you might be interested in attending this tutorial, drop me a line at – I’m trying to get a feel for event hosting issues such as facility and cost.

Question: FHIR and un-semantic interoperability


 I did not understand the blog post about un-semantic interoperability.  Can you elaborate?  Will FHIR provide any of this un-semantic interoperability?


Well, the original post on unsemantic interoperability is just pointing out that many people misunderstand the nature of what semantic interoperability is trying to achieve:

We’ve had semantic interoperability in healthcare since we started having healthcare. Since the beginning of healthcare (by whatever definition you care to use), healthcare practitioners have exchanged data using spoken and written words, and the semantic meaning has been clear (well, as clear as it can be given that human knowledge is limited).

So whatever it is that we are doing, it’s not introducing semantic interoperability. In fact, what we are doing is introducing a new player into the mix: computers. And not, in actual fact, computers as such, but the notion that there is something to be gained by having healthcare information processed by persons or devices that don’t properly understand it. So what we are actually doing is seeking unsemantic interoperability.

It’s a matter of perspective. Perhaps, one day, we’ll really be working on true semantic interoperability. But right now, we can only afford to chase a lesser goal, which is exchanging data that can be put to use in some limited, pre-ordained ways.

And FHIR provides lots of this – that’s what it’s good at – getting data that is passably well self-described to be exchanged as easily as possible.

For systems that are really working towards genuine semantic interoperability, FHIR is actually a step backwards (though I’d argue that the easy availability of information that is passably well described is a huge improvement over the non-availability of information that is well described).

Question: Diagnostic reports in CCDA documents


  1. What is the maximum number of clinical lab tests that can be included in any C-CDA summary?
  2. Is the HL7 2.5.1 (or higher) messaging standard used to transfer the lab test results data?
  3. Can the C-CDA currently transfer the results of imaging or other kinds of diagnostic tests?
  4. How are the C-CDA test results displayed in the style sheet for viewing and sharing by physicians and patients?


1. What is the maximum number of clinical lab tests that can be included in any C-CDA summary?

There’s no theoretical limit, but there are practical considerations around how big the CDA document can get. The principal limit relates to how long the document takes to render.

2. Is the HL7 2.5.1 (or higher) messaging standard used to transfer the lab test results data?

Not in CDA. A different form is used, using CDA observations and battery organizers. Converting from v2 to CDA is relatively straightforward in theory, if the v2 messages are well constructed, but the variation in source messages is always a challenge.
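To give a feel for the shape of that conversion (a minimal sketch only – the fields and output layout here are simplified, and a real converter has to deal with many OBX value types, segment repeats, and code system mappings; the function name is mine):

```python
# Minimal sketch: reshape one numeric HL7 v2 OBX segment into a CDA-style
# observation. Simplified for illustration; not a complete converter.

def obx_to_cda_observation(obx: str) -> str:
    fields = obx.split('|')
    # OBX-3 is the observation identifier (code^text^system),
    # OBX-5 the value, OBX-6 the units
    code, text, system = (fields[3].split('^') + ['', '', ''])[:3]
    value = fields[5]
    unit = fields[6].split('^')[0] if len(fields) > 6 else ''
    return (
        f'<observation classCode="OBS" moodCode="EVN">'
        f'<code code="{code}" displayName="{text}" codeSystemName="{system}"/>'
        f'<value xsi:type="PQ" value="{value}" unit="{unit}"/>'
        f'</observation>'
    )

print(obx_to_cda_observation('OBX|1|NM|2345-7^Glucose^LN||5.5|mmol/L'))
```

The "well constructed" caveat matters: the moment OBX-5 carries free text, or the units are non-standard, the neat field-to-attribute mapping above stops being mechanical.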

3. Can the C-CDA currently transfer the results of imaging or other kinds of diagnostic tests?

Yes – see template 2.16.840.1.113883. This is the report (the interpretation), but it can include images as required, using ObservationMedia.

4. How are the C-CDA test results displayed in the style sheet for viewing and sharing by physicians and patients?

Well, that depends on what the source system puts in the narrative (assuming that the document narrative is displayed). If it’s not, then it’s entirely up to the receiving system to decide how it’s displayed, as with a v2 message.



Guest Post: HL7 Language Codes

This guest post is written by Lloyd McKenzie.  I’ve been meaning to get to it since the January WGM, but I’ve been wrapped up in other things (most recently HIMSS).  However, I agree with what Lloyd says.

Question: I have a need to communicate a wide variety of language codes in HL7 v3 instances, but the ISO Data Type 21090 specification declares that ED.language (and thus ST.language) are constrained to IETF 3066.  This is missing many of the languages found in ISO 639-3 – which I need.  Also, IETF 3066 is deprecated.  It’s been replaced twice.  Can I just use ISO 639-3 instead?


The language in the 21090 specification was poorly chosen.  It explicitly says “Valid codes are taken from the IETF RFC 3066”.  What it should have said is “Valid codes are taken from IETF language tags, the most recent version of which at the time of this publication is IETF RFC 3066”.  (Actually, by the time ISO 21090 actually got approved, the most recent RFC was 4646, but we’ll ignore that for now.)  This should be handled as a technical correction, though that’s not terribly easy to do.  However, implementers are certainly welcome to point to this blog as an authoritative source of guidance on ISO 21090 implementation and make use of any language codes supported in subsequent versions of the IETF Language Tags code system – including RFC 4646 and RFC 5646 as well as any subsequent version thereof.

The RFC 5646 version incorporates all of the languages found in ISO 639-3 and 639-5.  However, be aware that while all languages are covered, there are constraints on the codes that can be used for a given language.  Specifically, if a language is represented in ISO 639-1 (2-character codes), that form must be used.  The 3-character variants found in ISO 639-2 cannot be used.  For example, you must send “en” for English, not “eng”.

Question: But I want to send the 3-character codes.  That’s what my system stores.  Can’t I use ISO 639-2 directly?


No.  In the ISO 21090 specification, the “language” property is defined as a CS.  That means the data type is fixed to a single code system.  The code system used is IETF Language Tags, which is consistent with what the W3C uses for language in all of their specifications, and encompasses all languages in all of the ISO 639 specifications plus many others (for example, country-specific dialects, as well as additional language tags maintained by IANA).

Question: Well, ok, but what about in the RIM for LanguageCommunication.code.  Can I send ISO 639-2 codes there?


Yes, though with a caveat.  LanguageCommunication.code is defined as a CD, meaning you can send multiple codes – one primary code and as many translations as you see fit.  You are free to send ISO 639-2 codes (the 3-character ones) or any other codes as a translation.  However, LanguageCommunication.code has a vocabulary assertion of the HumanLanguage concept domain, which is universally bound to a value set defined as “all codes from the ietf3066 code system”.  That means the primary code within the CD must be an IETF code.  So that gives you two options:

  1. Fill the root code with the appropriate IETF code – copying the ISO code most of the time, and translating the 3-character code to the correct 2-character code for those 84 language codes found in ISO 639-1; or
  2. Omit the root code property and set the null flavor to “UNC” (unencoded), essentially declaring that you haven’t bothered to try translating the code you captured into the required code system.
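A sketch of what option 1 looks like in practice (the mapping table shown is an illustrative subset, not the full 84-entry table, and the function name is mine):

```python
# Illustrative subset of the ISO 639-2 (3-character) to ISO 639-1
# (2-character) table. The full table has 84 entries. Codes with no 639-1
# equivalent pass through unchanged, since the IETF language tag rules only
# permit a 3-character code when no 2-character code exists.
ISO_639_2_TO_1 = {
    'eng': 'en',
    'fra': 'fr',
    'deu': 'de',
    'spa': 'es',
}

def to_ietf_language_tag(iso639_2: str) -> str:
    return ISO_639_2_TO_1.get(iso639_2, iso639_2)

print(to_ietf_language_tag('eng'))  # en
print(to_ietf_language_tag('yue'))  # yue (no 2-character equivalent exists)
```

A system that can't (or won't) do even this lookup falls back to option 2: omit the root code and send nullFlavor "UNC", with the captured code as a translation.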

And before you mention it, yes, the reference to IETF 3066 is a problem.  The actual code system name in the HL7 specification is “Tags for the Identification of Languages”, which is the correct name.  However, the short name assigned was “ietf3066”, and the description in the OID registry refers explicitly to the 3066 version.  This is an error, as IETF 3066 is a version of the IETF “Tags for the Identification of Languages” code system, and the OID is for the code system, not a particular version of it.  (There have actually been 4 versions so far – 1766, 3066, 4646 and 5646.)  We’ll try to get the short name and description corrected via the HL7 harmonization process.

Question: But I don’t want to translate to 2-character codes and I don’t want to use a null flavor.  Can’t we just relax the universal binding?


We can’t relax the binding because the HumanLanguage concept domain is shared by both the ED.language property in the abstract data types specification (which ISO 21090 is based on) and the LanguageCommunication.code attribute.  The ED.language is a CS and so must remain universally bound.

In theory, we could split the concept domain into two – one for data types and one for LanguageCommunication.code. The second one could have a looser binding. However, it’s hard to make a case for doing that. There are a couple of issues:

First, having two different bindings for essentially the same sort of information is just going to cause grief for implementers.  You could be faced with declaring what language the patient reads in one code system, but identifying the language of the documentation the patient’s supposed to read in a second code system.

Second, the IETF code system fully encompasses all languages covered by all the ISO 639-x code systems, plus thousands of others expressible using various sub-tags such as identifying country-specific dialects.  In the unlikely situation that you need a language that can’t be expressed using any of those, there’s even a syntax for sending local codes (and a mechanism for registering supplemental codes with IANA if you want to be more official).  So there should never be a situation where you can’t express your desired language using the IETF Language Tags code system.

Question: I don’t really care that I can express my languages in IETF.  I’ve already pre-adopted using ISO 639-2 and -3 in my v3 implementation and I don’t want to change.  Why are you putting constraints in place that prevent implementers from doing what they want to do?


Well, technically your implementation right now is non-conformant.  And implementers always have the right to be non-conformant.  HL7 doesn’t require anyone to follow any of its specifications.  So long as your communication partners are willing to do what you want to do, anything goes by site-specific agreement.

However, the standards process is about giving up a degree of implementation flexibility in exchange for greater interoperability.  By standardizing on a single set of codes for human language, we’re able to ensure interoperability across all systems.  Natively, those systems may use other code systems, but for communication purposes, they translate to the common code system so everyone can safely exchange information.

If the premise for loosening a standard was “we won’t require any system to translate data from their native syntax”, there’d be no standards at all.  Yes, translation and mapping requires extra effort (though a look-up table for 84 codes with direct 1..1 correspondence is pretty easy compared to a lot of the mapping effort needed in other areas.)  But that’s the price of interoperability.

HIMSS 13 – New Orleans

I am at HIMSS 13 in New Orleans for the next few days.

I’ll be at the HL7 Booth (4325) on

  • Monday 4:30 – 6:00 (FHIR Presentation at 4:30)
  • Tuesday 3:30 – 4:30
  • Wednesday 11:00 – 12:30 (FHIR Presentation at 11:00)

The rest of the time, I’ll be hanging out at Booth 4219 (Dynamic Health IT).

Feel free to look me up.

Question: Delphi Code for converting a UUID to an OID

This is sort of turning into a pretty common question – I had no idea that so many vendors still use Delphi: if you are using Delphi, how do you convert a GUID to its OID representation? Well, here’s the code:

The algorithm is simple in concept:

  • Remove the “-” separators from the GUID
  • Treat the resulting string as a hexadecimal number
  • Convert the number to a decimal number
  • Prepend the OID root 2.25 to the number

Reference: see

Although the method is simple, the implementation can be a challenge because of the size of the numbers involved. This is a Pascal implementation. It depends on bignum, from


Function GUIDAsOIDRight(Const aGUID : TGUID) : String;
var
  sGuid, s : String;
  r1, r2, r3, r4 : int64;
  c : integer;
  b1, b2, b3, b4, bs : TBigNum;
begin
  // break the GUID into four 32-bit chunks, lowest chunk first
  // (GUIDToString returns '{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}')
  sGuid := GUIDToString(aGuid);
  s := copy(sGuid, 30, 8);
  Val('$'+s, r1, c);
  s := copy(sGuid, 21, 4)+copy(sGuid, 26, 4);
  Val('$'+s, r2, c);
  s := copy(sGuid, 11, 4)+copy(sGuid, 16, 4);
  Val('$'+s, r3, c);
  s := copy(sGuid, 2, 8);
  Val('$'+s, r4, c);

  b1 := TBigNum.Create;
  b2 := TBigNum.Create;
  b3 := TBigNum.Create;
  b4 := TBigNum.Create;
  bs := TBigNum.Create;
  try
    b1.AsString := IntToStr(r1);
    b2.AsString := IntToStr(r2);
    b3.AsString := IntToStr(r3);
    b4.AsString := IntToStr(r4);
    // combine the chunks: result = r1 + r2*2^32 + r3*2^64 + r4*2^96
    // (MultiplyBy/Add are the bignum unit's multiply and add operations)
    bs.AsString := '4294967296';                    // 2^32
    b2.MultiplyBy(bs);
    bs.AsString := '18446744073709551616';          // 2^64
    b3.MultiplyBy(bs);
    bs.AsString := '79228162514264337593543950336'; // 2^96
    b4.MultiplyBy(bs);
    b1.Add(b2);
    b1.Add(b3);
    b1.Add(b4);
    result := '2.25.'+b1.AsString;
  finally
    b1.Free;
    b2.Free;
    b3.Free;
    b4.Free;
    bs.Free;
  end;
end;

GUIDToString comes from ComObj.
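For comparison, in a language with built-in arbitrary-precision integers the whole algorithm collapses to a couple of lines. Here’s a sketch in Python (the function name is mine; `uuid.UUID` accepts the braces and hyphens directly):

```python
import uuid

def guid_to_oid(guid: str) -> str:
    # Treat the 128-bit UUID as one big decimal number
    # and prepend the 2.25 UUID arc
    return '2.25.' + str(uuid.UUID(guid).int)

print(guid_to_oid('{6BA7B810-9DAD-11D1-80B4-00C04FD430C8}'))
```

Same result as the Delphi routine above, without needing a bignum library.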

Repository Based Exchange

Classically, HL7 has divided exchange of information between two applications into 3 different paradigms:

  • Messaging
  • Services
  • Documents

Messaging and services are closely related – you exchange messages between two applications, and the net effect of the messages is that the applications provide services to each other. In “services” mode, messages are still exchanged between the applications, but the way the messages are described is different, and they are described in the context of the overall service (rather than the other way around).

Document based exchange is primarily different by virtue of being looser – the focus is on the content of the messages, and the content is defined in such a way that while it can be used in messages/services, the content is also able to be stored in a database, or exchanged in other less well described forms such as email. (One natural consequence of this is that when you use documents in a service, there’s going to be some duplication between the service and the document).

The FHIR RESTful paradigm introduces a new way of conceptualising exchange between applications: a repository mediated exchange. Although there’s still a service, and messages are still exchanged, they exist in the context of a repository of content. In this paradigm, one of the applications is considered to have a repository of records that describe the content, and the other application can search, retrieve, and potentially update and delete from the repository.

The interesting thing about this exchange paradigm is that there are multiple ways to use it. Let’s take an example, a diagnostic service (i.e. a laboratory) that needs to contribute content to a clinical EHR. There’s a number of ways to use a repository based interface to build this:

  1. Configure the diagnostic service as the repository of diagnostic records. When the clinical EHR wants to access the diagnostic results, it searches the diagnostic service repository
  2. Configure the clinical EHR as the repository of diagnostic records. Whenever the diagnostic service has a new report, it creates a copy in the clinical EHR. The clinical EHR searches its own copy of the reports
  3. Use a third party repository; the diagnostic service uploads its reports to the repository, and the clinical EHR performs searches on it
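To make the role reversal concrete, here’s a sketch of the interactions involved (the base URLs and the search parameter are illustrative, not normative – the point is that all three models use the same RESTful verbs, and only the location of the repository changes):

```python
# Sketch: the same FHIR RESTful interactions, with the repository role
# assigned to different systems. Base URLs and the search parameter
# are illustrative only.

def search_reports(repo_base: str, patient_id: str) -> str:
    # whichever system plays the consumer searches the repository
    return f'GET {repo_base}/DiagnosticReport?subject={patient_id}'

def submit_report(repo_base: str) -> str:
    # whichever system produces reports pushes each new one to the repository
    return f'POST {repo_base}/DiagnosticReport'

# Model 1: the diagnostic service is the repository; the EHR searches it
print(search_reports('http://lab.example.org/fhir', '123'))
# Model 2: the EHR is the repository; the lab pushes, the EHR searches locally
print(submit_report('http://ehr.example.org/fhir'))
# Model 3: a third-party repository; the lab pushes, the EHR searches
print(submit_report('http://repo.example.org/fhir'))
print(search_reports('http://repo.example.org/fhir', '123'))
```

Notice that nothing in the interface itself says which application is "the" repository – that’s purely a deployment decision.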

Each of these three models works, but each comes with different pros and cons. Here are some:

Model 1:
  • Pros: No synchronising issues between applications (the source is the master)
  • Cons: Reports not available when the diagnostic service is down; the diagnostic service needs to collaborate with regard to access control

Model 2:
  • Pros: Clinical EHR not dependent on the diagnostic service
  • Cons: Must ensure synchronisation is error-free

Model 3:
  • Pros: A specialised repository can out-perform other systems; more potential for re-use
  • Cons: Additional cost to install & manage access control

Ideally, the institution hosting the exchange should be able to pick between the pros and cons for itself. (i.e. the diagnostic service and clinical EHR will be able to run in either mode depending on configuration, but this has its own costs)

I’ve found that this repository based exchange is one of the hardest things to understand about FHIR for someone who’s used to thinking about exchanging messages between applications. In particular, the notion that either side can act as the repository can be quite confusing when you are reading the FHIR spec – I’ve still got to figure out how to explain this properly.

Of course, this paradigm isn’t really new, not even in healthcare – XDS is a repository mediated exchange paradigm which can be run in any of the 3 ways described above. #3 is the default around which XDS was designed, but by merging registry and repository, or by using XDR, you can have the other configurations.