Category Archives: Uncategorized

#FHIR is 5 years old today

Unofficial FHIR project historian Rene Spronk has pointed out that it’s exactly 5 years to the day since I posted the very first draft of what became FHIR:

Five years ago, on August 18th 2011 to be precise, Grahame Grieve published the initial version of FHIR (known as RFH at the time) on his website. The date of the initial version was August 11th – which is the reason for this post today. Congratulations to all involved for helping to create a success – FHIR has gained a lot of interest over the past few years, and a normative version will be published in the near future.

Wow. 5 years! Who would have thought that we’d end up where we are? I really didn’t expect much at all when I first posted RfH back then:

What now? I’m interested in commentary on the proposal. If there’s enough interest, I’ll setup a wiki. Please read RFH, and think about whether it’s a good idea or not

Well, there was enough interest, that’s for sure.

And it’s rather a coincidence, then, that on the 5th anniversary of the first posting, I’ve just posted the ballot version for the STU 3 ballot. This version is the culmination of a lot of work, by a lot of people. Lloyd McKenzie and I have been maintaining a list of contributors, but so many people have contributed to the specification process now that I don’t know whether we’ll be able to keep even a semblance of meaningfulness for that page. I’ll post a link to that version soon, with some more information about it.

p.s. Alert readers will note that the blog post announcing RfH was dated Aug 18th – but it was first posted August 11th.

#FHIR Implementer’s Safety Checklist

One topic that comes up fairly often when I talk to procurers and users of interoperability is ‘clinical safety’. Everyone knows why it’s important, but it’s much harder to pin down what it is, how to measure it, or how to ‘be safe’. With this in mind, the FHIR specification includes an implementer safety checklist. All developers implementing FHIR should run through the content of this safety checklist before and after the implementation process. But the lack of feedback I get about it suggests to me that not many people read it.

With this in mind, I’ll be asking participants in this weekend’s connectathon in Orlando to fill it out. I’m sure we’ll get comments from that. Here’s the safety checklist, with my comments, but first:

Almost all interoperability developments occur in a limited context with one to a few trading partners, and relatively well controlled requirements. In this context, safety consists of testing the known functionality, but all too often, ignoring all the other things that might happen. However, experience shows that over time, new participants and new requirements will creep into the ecosystem, and safety features that appeared unnecessary in a well controlled system turn out to be necessary after all. The safety checks below are mostly chores, and are easily ignored, but a warning: you ignore them at your peril (actually, it’s worse than that – you ignore them at other people’s peril).

Production exchange of patient or other sensitive data will always use some form of encryption on the wire.
This is a fairly obvious thing to say in principle, but it’s extremely common to find insecure exchange of healthcare data in practice. FHIR does not mandate that all exchange is encrypted, though many implementers have commented that it should. There are some valid use cases not to use encryption, such as terminology distribution etc. Implementers should check that their systems are secure.
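As a minimal illustration (the endpoint URL and helper name are hypothetical), a client can guard against accidentally exchanging patient data over an unencrypted connection:

```python
from urllib.parse import urlparse

def assert_secure_endpoint(base_url, allow_insecure=False):
    """Refuse to exchange patient data with an endpoint that is not using TLS.

    allow_insecure exists for the few valid unencrypted use cases,
    such as public terminology distribution."""
    if urlparse(base_url).scheme != "https" and not allow_insecure:
        raise ValueError("Refusing insecure FHIR endpoint: " + base_url)

assert_secure_endpoint("https://fhir.example.org/baseDstu3")  # passes silently
```

A check like this belongs at the point where the endpoint is configured, so insecure configurations fail early rather than at 2am in production.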

For each resource that my system handles, I’ve reviewed the Modifier elements.
In resource definitions, a number of elements are marked as modifying elements. Implementers are not required to support these elements in any meaningful fashion. Instead, implementers are required to ensure that their systems do not inappropriately ignore any of the possible values of the modifier elements. This may be achieved by:

  • Ensuring that these values will never occur through proper use of the system (e.g. documenting that the system only handles human patients)
  • Throwing an exception if an unsupported value is received
  • Ignoring the element that contains the modifier element (so that the value is irrelevant anyway)

Note that applications that store and echo or forward resources are not ‘processing the resources’. Processing the resources means extracting data from them for display, conversion to some other format, or some form of automated processing.

My system checks for modifierExtension elements.

Modifier Extensions are only seen rarely, but when they exist, they mean that an implementer has extended an element with something that changes the meaning of the element, and it’s not safe to ignore the extension. For safety purposes, implementers should routinely add some kind of code instruction like this:

Assert(object.hasNoModifiers, "Object at path %p has unknown modifier extensions")

This should be done for each object processed. Of course, the exact manifestation of this instruction will vary depending on the language. Performing these checks is a chore, so it’s frequently not done, but it should be done for safety purposes. Note that one cheap way to achieve this is to write a statement in the documentation of the application: “Do not send this application any modifier extensions”. Like all cheap ways, this is likely to not be as effective as actually automating the check.
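For instance, assuming resources parsed from JSON into plain dicts, a generic check along the lines of the assert above might look like this (the walker below is a sketch, not part of any FHIR library):

```python
def find_modifier_extensions(node, path="Resource"):
    """Recursively collect the paths of all modifierExtension elements
    in a resource parsed from JSON into plain dicts/lists."""
    found = []
    if isinstance(node, dict):
        if node.get("modifierExtension"):
            found.append(path)
        for key, value in node.items():
            found.extend(find_modifier_extensions(value, path + "." + key))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            found.extend(find_modifier_extensions(item, "%s[%d]" % (path, i)))
    return found

condition = {
    "resourceType": "Condition",
    "code": {"text": "Headache"},
    "evidence": [{"modifierExtension": [
        {"url": "http://example.org/fhir/negated", "valueBoolean": True}
    ]}],
}
paths = find_modifier_extensions(condition)
# paths == ['Resource.evidence[0]'] – processing should stop unless the
# extension found there is understood
```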

My system supports elements labelled as “must-support” in the profiles that apply to my system.

Implementation Guides are able to mark particular elements as ‘must-support’. This means that although the element is optional, an application must be able to populate or read the element correctly. What precisely it means to do this correctly varies widely, so Implementation Guides must indicate exactly what they mean when marking an element as ‘must-support’, and applications that claim to conform need to do whatever is prescribed.

For each resource that my system handles, my system handles the full life cycle (status codes, record currency issues, and erroneous entry status). Many resources have a life cycle tied to some business process. Applications are not required to implement the full business life cycle – they should implement what is needed. But systems need to fail explicitly if the life cycle they expect does not match the content of the resources they are receiving.

A common and important area where applications fail to interoperate correctly is when records are created in error, or linked to the wrong context, and then must be retracted. For instance, when a diagnostic report is sent to an EHR linked to the wrong patient. There are a variety of ways to handle this, with different implications for the record keeping outcomes. Failure to get this right is a well-known area of clinical safety failure.

The FHIR specification makes some rules around how erroneous entry of resources is indicated. Applications should ensure that they handle these correctly.
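As a sketch of what ‘failing explicitly’ might look like, a consumer of DiagnosticReports could guard its processing like this (the handled status set is hypothetical, covering only part of the STU 3 status code list):

```python
# Statuses this hypothetical consumer knows how to process (a subset of
# the STU 3 DiagnosticReport.status codes).
HANDLED_STATUSES = {"registered", "partial", "preliminary",
                    "final", "amended", "corrected"}

def check_report_status(report):
    status = report.get("status")
    if status == "entered-in-error":
        # Retracted record: must not be presented or processed as current data.
        raise ValueError("Report was entered in error and has been retracted")
    if status not in HANDLED_STATUSES:
        # Fail explicitly rather than silently mishandling an unknown life cycle.
        raise ValueError("Unhandled DiagnosticReport.status: %s" % status)
    return report
```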

My system can render narratives properly (where they are used).
The general theory of text vs data is discussed here and here. Resources can contain text, data or both. Systems are not obliged to be able to display the narrative; they can always choose to process the data. But in many cases, it’s a good idea to offer the user a choice to see the original narrative of the resource (or resources, in many cases), particularly for clinical resources. This might be described as ‘see original document’ in user-relevant language.

The FHIR specification makes no explicit requirements in this regard, since the correct behaviour is so variable. Implementers should judge for themselves what is appropriate in this regard.

My system has documented how distributed resource identification works in its relevant contexts of use, and where (and why) contained resources are used.
Many of the clinical safety issues that arise in practice arise from misalignment between systems around how identification and identifiers work. In the FHIR context, this risk is particularly acute given how easy it is to develop interfaces and connect systems together. Any applications that assign identifiers or create resources with an explicit identity should document their assumptions and processes around this. This is particularly important where there is the prospect of more than two trading partners.

The same applies to contained resources: a system should refrain from using contained resources as much as possible, and where it is necessary, document the usage.

My system manages lists of current resources correctly.
One important use of the List resource is for tracking ‘current’ lists (e.g. current problem list). Current lists present a challenge for the FHIR API, because there’s no inherent property of the list that marks it as ‘current’: there may be many ‘medication lists’ in an application, but only a few (or one) of them are the ‘current’ list. What makes a list current is its context – how it is used – not an inherent property of the list. The FHIR API defines an operation that can be used to get a current list for a patient:

GET [base]/AllergyIntolerance?patient=42&_list=$current-allergies

The FHIR specification defines several list tokens for use with this operation, but there’s a long way to go before these concepts are well understood and exchangeable. If the system has current lists, it must be clear how to get the correct current list from the system, and how to tell which lists are not current.
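A client-side sketch of building that query (the base URL is hypothetical; note that the $ in the list token gets percent-encoded):

```python
from urllib.parse import urlencode

def current_list_query(base, resource_type, patient_id, list_token):
    """Build the 'current list' search using the _list parameter."""
    query = urlencode({"patient": patient_id, "_list": list_token})
    return "%s/%s?%s" % (base, resource_type, query)

url = current_list_query("https://fhir.example.org/baseDstu3",
                         "AllergyIntolerance", "42", "$current-allergies")
# → https://fhir.example.org/baseDstu3/AllergyIntolerance?patient=42&_list=%24current-allergies
```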

My system makes the right Provenance statements and AuditEvent logs, and uses the right security labels where appropriate.
Provenance and AuditEvent are two important and related resources that play key roles in tracking data integrity. Provenance is a statement made by the initiator of an update to the data, providing details about the change, while AuditEvent is a statement made by the data custodian about a change made to the data. On a RESTful API, the Provenance statement is made by the client, and the AuditEvent is created by the server. In other contexts, the relationships may not be so simple.

My system checks that the right Patient consent has been granted (where applicable).
Patient consent requirements vary around the world. FHIR includes the ability to track and exchange patient consent explicitly, which is a relatively new integration capability. Various jurisdictions are still feeling out how to exchange consent to meet legislative and cultural requirements.

When other systems return http errors from the RESTful API and Operations (perhaps using Operation Outcome), my system checks for them and handles them appropriately.
Ignoring errors, or not handling them properly, is a common operational problem when integrating systems. FHIR implementers should audit their systems explicitly to be sure that the HTTP status code is always checked, and that errors in OperationOutcomes are handled correctly.
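A sketch of the kind of check being described – always inspect the status code, and surface the server’s OperationOutcome diagnostics rather than swallowing them:

```python
import json

def check_fhir_response(status_code, body):
    """Never ignore errors: raise with the server's OperationOutcome
    diagnostics instead of silently carrying on."""
    if 200 <= status_code < 300:
        return json.loads(body) if body else None
    issues = []
    try:
        outcome = json.loads(body)
        if outcome.get("resourceType") == "OperationOutcome":
            issues = [i.get("diagnostics") or i.get("code", "")
                      for i in outcome.get("issue", [])]
    except (ValueError, AttributeError):
        pass  # body was not a parseable OperationOutcome
    raise RuntimeError("FHIR request failed (%d): %s" % (status_code, issues))
```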

My system publishes a conformance statement with StructureDefinitions, ValueSets, and OperationDefinitions, etc., so other implementers know how the system functions.
While servers have no choice but to publish a conformance statement, the degree of detail is up to the implementer. The more detail that is published, the easier it will be for systems to integrate. Clients should publish conformance statements too, though there is much less focus on this – but the computable definition of system functionality will be just as important.

My system produces valid resources.
It is common to encounter production systems that generate invalid v2 messages or CDA documents. All sorts of invalid content can be encountered, including invalid syntax due to not escaping properly, wrong codes, and disagreement between narrative and data.

In the FHIR ecosystem, some public servers scrupulously validate all resources, while others do not. It’s common to hear implementers announce at connectathon that their implementation is complete, because it works against a non-validating server, and not worry about the fact it doesn’t work against the validating servers.

Use the validation services to check that your resources really are valid, and make sure that you use a DOM (document object model) or are very careful to escape all your strings.
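To illustrate why string handling fails, compare manual construction with letting a serializer do the escaping:

```python
import json

name = 'Jim "Bones" McCoy'

# Fragile: manual string building breaks as soon as the text contains
# quotes, backslashes, or other characters that need escaping.
broken = '{"resourceType": "Patient", "name": [{"text": "' + name + '"}]}'

# Robust: let the serializer handle all the escaping.
patient = {"resourceType": "Patient", "name": [{"text": name}]}
valid = json.dumps(patient)
json.loads(valid)  # round-trips cleanly; json.loads(broken) would fail
```

The same argument applies to XML: use a DOM or writer API that escapes `<`, `&`, and quotes for you.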

Check for implicitRules.
All resources can carry an implicitRules pointer. While this is discouraged, there are cases where it is needed. If a resource has an implicitRules reference, you must refuse to process it unless you know the reference. Remember to check for this.
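A minimal sketch of that check (the known-rules set is hypothetical):

```python
# Rule sets this hypothetical application understands.
KNOWN_RULES = {"http://example.org/fhir/rules/shared-care-protocol"}

def check_implicit_rules(resource):
    """Refuse to process any resource asserting implicitRules we don't know."""
    rules = resource.get("implicitRules")
    if rules and rules not in KNOWN_RULES:
        raise ValueError("Refusing resource with unknown implicitRules: " + rules)
```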

#FHIR and Postel’s Robustness Principle

An important principle in interoperability is Postel’s Robustness Principle:

Be conservative in what you send, be liberal in what you accept

There’s been quite a bit of discussion recently in various implementation forums about robustness of FHIR interfaces, and I think there’s a few things to say about how to develop robust FHIR principles.

Firstly, the FHIR core team endorses Postel’s principle – the pathway to robust interoperability is to be careful to be conformant in what you send, and to be as accepting as possible in what you receive. However, in practice, it’s not necessarily easy to see how to implement like that.

There are also some circumstances where this isn’t what you should do. As an example, when I started writing my reference server, I followed Postel’s law, and accepted whatever I could accept. However, this fostered non-conformant implementations, so at the behest of the community, I’ve been gradually tightening up the rigor with which my server enforces correctness on the clients. For example, my server validates all submitted resources using the formal FHIR validator. Somewhat unfortunately, the main effect that has had is that implementers use one of the other servers, since their client works against that server. This’ll get worse when I tighten up on validating content type codes in the lead-in to the Orlando Connectathon. Generally, if an implementation is used as a reference implementation, it should insist that trading partners get it right, or else all implementations will be forced to be as generous as the reference implementation.

But let’s assume you wanted to follow Postel’s law. What would that mean in practice, using a FHIR RESTful interface?

Reading/Writing Resources

If you’re creating a resource, then you can start by ensuring that your XML or JSON is well formed. It’s pretty much impossible for a receiver to process improperly formed XML or JSON (or it’s at least very expensive), but experience shows that many implementers can’t even do that – I’ve seen this a lot. So for a start, never use string handling routines to build your resources – eventually, you’ll produce something invalid. Always use a DOM or a writer API.

Beyond this:

  • Ensure that all mandatory elements are present
  • Ensure that the correct cardinalities apply
  • Ensure that you use the right value sets
  • Always use UTF-8
  • etc

In fact, in general, if you are writing a resource, you should ensure that it passes the validator (methods for validation), including checking against the applicable profiles (whether they are explicit – stated in Resource.meta.profile – or implicit – from the conformance statement or other contextual clues).
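For instance, many servers expose the specification’s $validate operation; a client might build the request like this (the base URL is illustrative, and the helper is a sketch rather than a library API):

```python
import json

def build_validate_request(base, resource, profile=None):
    """Build a POST to the server's $validate operation, optionally
    against an explicit profile."""
    url = "%s/%s/$validate" % (base, resource["resourceType"])
    if profile:
        url += "?profile=" + profile
    headers = {"Content-Type": "application/fhir+json"}
    return url, headers, json.dumps(resource)

url, headers, body = build_validate_request(
    "https://fhir.example.org/baseDstu3", {"resourceType": "Patient"})
# url == "https://fhir.example.org/baseDstu3/Patient/$validate"
```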

If you’re reading a resource, then:

  • Only check the content of the elements that you have to use
  • Accept non-UTF-8 encoding
  • Only check for modifier extensions on the elements you actually use, and don’t check for other extensions (only look for extensions you know)
  • Accept invalid codes for Coding/CodeableConcept data types (further discussion below)

However, there’s not that much you can be graceful about with the content; generally, if you have to use it, it has to be right.

Using the RESTful API

In practice, when using the API, clients should ensure that they:

  • use the correct mime types for content-type and accept, and always specify a mime type (never leave it to the server)
  • construct the URL correctly, with all escapable characters properly escaped
  • use the correct case for the URL
  • look for ‘xml’ or ‘json’ in the return content-type, and parse correctly without insisting on the correct mime type
  • handle redirects and continue headers correctly

Servers should:

  • accept any mime types that have ‘xml’ or ‘json’ in them
  • only check headers they have to
  • accept URLs where not all the characters are escaped correctly (in practice, ‘ ‘, =, ?, and + have to be escaped, but other characters sometimes aren’t escaped by client libraries)
  • always return the correct FHIR mime types for XML or JSON
  • always return the correct CORS headers
  • ignore case in the URL as much as possible
  • only issue redirects when they really have to

Note: the full multi-dimensional grid of request/response mime types, and the _format header, is long and complex, so we’ve not specified the entire thing. As a consequence, outside of the recommendations above, there are dangerous waters to be encountered.
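On the receiving side, a liberal format check along the lines recommended above might look like this:

```python
def detect_format(content_type):
    """Be liberal in what you accept: look for 'xml' or 'json' anywhere in
    the content type rather than insisting on the exact FHIR mime types."""
    ct = (content_type or "").lower()
    if "json" in ct:
        return "json"
    if "xml" in ct:
        return "xml"
    raise ValueError("Unsupported content type: %s" % content_type)

detect_format("application/fhir+json; charset=utf-8")  # → 'json'
detect_format("text/xml")                              # → 'xml'
```

On the sending side, of course, the server should still respond with the correct FHIR mime types.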

HTTP Parameters

One area that’s proven controversial in practice is how to handle HTTP parameters. With regard to search, the FHIR specification is quite specific: a server SHALL ignore HTTP parameters that it does not understand. This is because there may be reasons that a client has to add a parameter to the request because of requirements imposed by HTTP agents that intercept the request before it hits the FHIR server (these may be clients, proxies, or filters or security agents running on the server itself). In the search API, a server specifically tells a client which parameters it processed in the search results (Bundle.links, where rel = ‘self’), but this doesn’t happen in other GET requests (read, vread, conformance).

For robustness, then, a client should:

  • Only use parameters defined in the specification or in the server’s conformance statement (if possible)
  • check search results to confirm which ones were processed (if it matters)

A server should:

  • ignore parameters it doesn’t recognise
  • return HTTP errors where parameters it does recognise are inapplicable or have invalid content, or where it cannot conform to the requested behaviour
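A sketch of that server-side behaviour (the supported parameter set is hypothetical): filter to the parameters you recognise, and report what was actually processed so it can be echoed in the bundle’s self link:

```python
# Search parameters this hypothetical server supports.
KNOWN_PARAMS = {"patient", "status", "_count", "_list"}

def process_search_params(params):
    """Ignore unrecognised parameters (as the spec requires), and report the
    ones actually processed so they can be echoed in the self link."""
    processed = {k: v for k, v in params.items() if k in KNOWN_PARAMS}
    self_query = "&".join("%s=%s" % (k, v) for k, v in sorted(processed.items()))
    return processed, self_query

processed, self_query = process_search_params(
    {"patient": "42", "x-proxy-token": "abc123"})
# the proxy's extra parameter is dropped; processed == {"patient": "42"}
```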

ValueSet Variance

The points above really deal with syntactical variance. Postel’s Principle is relatively easy to apply in this way. It’s much harder to apply when the underlying business processes vary. Typical examples include:

  • exchanging data between two business processes that use different fields (e.g. they care about different things)
  • exchanging data between two business processes that use text/structured data differently (e.g. structured dosage vs a single text ‘dosage instructions’ field)
  • exchanging data between systems that use different value sets

To grapple with these issues, I’m going to work with the last example; it’s the easiest to understand and apply, though the basic principles apply to the others as well. In this case, we have two applications exchanging data between them, and they support different sets of codes. There are a few different possibilities:

  • A sends B a code it doesn’t know
  • A sends B a code for something which is different to the one B uses
  • Either of those cases, but B edits the record, and returns it to A

The way the CodeableConcept data type works is intimately linked to practical resolutions to these common cases. In order to support them, it has a text representation, and 0 or more Codings.

In HL7’s experience, Postel’s Principle, as applied to the exchange of coded information, says that

  • The source of the information should provide text, and all the codes they know
  • The text should be a full representation of the concept for a human reader
  • It is understood that the codings may represent the concept with variable levels of completeness e.g. the Concept might be ‘severe headache’, but the coding omits ‘severe’ and just represents ‘headache’

Note: there’s a wide variety of workflows that lead to the choice of a concept, and the process for selecting the text and the multiple codings varies accordingly. Since the subtle details of the process are not represented, the most important criterion for the correct choice of text is ‘does a receiver need to know how the data was collected to understand the text?’

  • a receiver of information should retain the text, and all the provided codes
  • When displaying the information to a user, the text is always what should be shown, and the formal codings may be shown additionally (e.g. in a hint, or a secondary data widget)
  • Decision support may choose one of the codes, but the user should always have a path back to view the text when (e.g.) approving decision support recommendations
  • When sending information on, a receiver should always send the original text and codes, even if it adds additional codes of its own
  • When a user or process changes the code to another value, all the existing codes should be replaced, and the text should be updated

Note: this implies that there’s a process difference between ‘adding another code for the same concept’ and ‘changing the concept’ and this change should be reflected in the information level APIs and surfaced in the workflow explicitly. But if there’s no difference…

  • if a system receives an update to a coded element (from UI or another system) that contains different text and codings, but at least one of the codings is the same, then this should be interpreted as ‘update the existing concept’. The text should be replaced and the codings merged

Many, if not most, systems do not follow this advice, and this often has workflow consequences. Note, though, that we’re not saying this is the only way to manage the problem; more specific workflows are appropriate where more specific trading partnership details can be agreed. But the rules above are a great place to start from, and to use in the general case.
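The update rule above can be sketched as follows (plain dicts standing in for the CodeableConcept data type):

```python
def merge_coded_update(existing, update):
    """Apply the update rule: if the update shares at least one coding with
    the existing concept, replace the text and merge the codings; otherwise
    treat it as a change of concept and take the update as-is."""
    def keys(cc):
        return {(c.get("system"), c.get("code")) for c in cc.get("coding", [])}
    if keys(existing) & keys(update):
        merged = existing.get("coding", []) + [
            c for c in update.get("coding", [])
            if (c.get("system"), c.get("code")) not in keys(existing)]
        return {"text": update.get("text"), "coding": merged}
    return update
```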

Beyond this general advice, specific advice can be provided for particular contexts. Here, for instance, is a set of recommendations for RxNorm:

  1. Don’t throw away codes (as suggested above). The system originating data needs to expose RxNorm codes, but has good reason to include any “real” underlying codes it used, if different (e.g. FDB). And downstream proxies, CDRs, EDWs, interface engines, etc. shouldn’t remove codes. FHIR has a way to indicate which was the “primary” code corresponding to what a clinician saw.
  2. Senders should expose valid RxNorm codes at the level of SCD, SBD, GPCK, or BPCK prescribables, not ingredients or dose forms. Namely, these codes should appear in RxNorm and mean the thing you want to say. It’s possible they may not be in the current “prescribable” list at the time you generate a data payload (e.g. for historical records), but active prescriptions should be. Furthermore, the conservative practice is to always use an up-to-date release of RxNorm. (And by RxNorm’s design, any codes that were once valid should be present in all future versions of RxNorm, even if obsolete.) These codes might not be marked “primary” in FHIR’s sense of the word.
  3. Recipients should use the most recent version of RxNorm available, and should look through all codings in a received medication to find an RxNorm SCD, SBD, GPCK, or BPCK. If you find such a code and don’t “understand” it, that’s a client-internal issue and it should be escalated/handled locally. If you *don’t* find such a code, that’s a potential data quality issue; clients should log a warning and use any code they understand, or display the text value(s) to a user.

There’s a whole art form around version management of terminologies. I’ll take that up in a later post.

Dealing with People

One last comment about Postel’s principle: Interoperability is all about the people, and the same principle applies. If you want to interoperate, you need to get on with people, and that means that you need to use Postel’s principle:

Be generous with what other people say, be disciplined with what you say

A community of people who want to interoperate with others – I’d like to be part of that. But no, I already am! The FHIR community has been very good at this over the last few years.


p.s. this post dealt with the RESTful interface, but the basic principles apply in other contexts of use as well.


Underlying Issues for the pcEHR

There’s an enquiry into the pcEHR at the moment. As one of the small cogs in the large pcEHR wheel, I’ve been trying to figure out whether I have an opinion, and if I do, whether I should express it. However, an intersection of communications with many people – in regard to the pcEHR, FHIR, and other things – has convinced me that I do have an opinion, and that it’s worth offering here.

There’s a lot of choices to be made when trying to create something like the pcEHR. In many cases, people had to pick one approach out of a set of equivocal choices, and quite often, the choice was driven by pragmatic and political considerations, and is wrong from different points of view, particularly with regard to long-term outcomes. That’s a tough call – you have to survive the short-term challenges in order to even have long term questions. On the other hand, if the short term decisions are bad enough, there’s no point existing into the long term. And the beauty of this, of course, is that you only find out how you went in the long term. The historians are the ones who decide.

So now that there’s an enquiry, we all get to second-guess all these decisions, and make new ones. They’ll be different… but better? That, we’ll have to wait and see. Better is easier because you have hindsight, and harder because you have existing structure/investment to deal with.

But it seems to me that there are two underlying issues that need to be confronted, and that if we don’t confront them, we’ll just be moving deck chairs around on the Titanic.

Social/Legal Problems around sharing information

It always seemed to me that in the abstract, the pcEHR makes perfect sense: sharing the patient’s information via the person most invested in having the information shared: the patient. The patient is the sick one, and if they choose to hide information, one presumes that this is the same information they wouldn’t volunteer to their treating clinician anyway, so what difference would it make?

Well, the difference between theory and practice is bigger in practice than in theory.

And the thing I’ve heard most often in the last couple of weeks with regard to the pcEHR is that “it’s neither fish nor fowl” – is it a clinical record, or a patient record? I’m sure that the enquiry will be inundated with comments about this, but there’s a deeper underlying question here: what is the clinician’s liability in regard to sharing information (both outgoing and incoming)? If a clinician does not discover a condition because it’s not listed in the pcEHR, and they didn’t ask the patient, would it (ever) be an acceptable defence that they expected it to be listed? Is that something that would come about naturally by osmosis (or something), or are active cultural and legal changes needed here?

I’m not a clinician, but I rather think it’s the second. But there’s probably a Mexican stand-off here – you couldn’t find out whether this would be reasonable till the (x)EHR is a Big Thing, and it won’t ever have any chance of being a Big Thing until this is resolved.

So the enquiry can recommend whatever it wants, but this underlying question isn’t in its scope, so far as I can see – and so it probably won’t make much difference?

Now I raise this – in spite of the fact I’m not a clinician – because it actually frames my second issue, and that’s something I do know about. The way it frames the second issue is that I don’t know whether the pcEHR is just a path for sharing information with the patient, or whether it’s actually intended to grow into the real patient care system that everything else is an extension of (the pcEHR documentation is a dollar each way on this issue, so I’m never sure). If the answer is the first – it’s a one way pipe to the patient, then my second issue is irrelevant. But I’ll still raise it anyway because lots of people are behaving as if the goal is a real healthcare provision system.
Lack of Technical Agreement

The underlying pcEHR specifications for clinical agreement are the CDA document specification stack, consisting of the “Core Information Components”, the “Structured Document Templates” (aka Structured Content Specifications), and the CDA Implementation Guides. Of interest here are the Core Information Components, which say:

The purpose of the [X] Information Requirements is to define the information requirements for a nationally agreed exchange of information between healthcare providers in Australia, independent of exchange or presentation formats.

Then they say:

Note that some of the data elements included in this specification are required for ALL [X], whereas others need only be completed where appropriate. That is, a conformant [X] implementation must be capable of collecting and transferring/receiving all Information Requirement elements.

What this means is that these documents are a minimum required functional set – all parties redesign their systems to do things this way, and we’ll all be able to exchange data seamlessly.

There’s no discussion in these documents about the idea of systems carrying extra pieces of data not discussed by the core information components, but the underlying approach really only works if there aren’t any. The problem is that these are “core” components – and that’s very much a reflection of the process: these are the things everyone agreed to (where “everyone” turns out to mean the people who were talking to NEHTA back then, which is far short of everyone who will implement). That leaves a lot of things out of scope.

So there are problems here in two directions – what if systems don’t support the core components? And what if they have other things?

Now the pcEHR was built on top of these specifications, and some parts of the pcEHR design expected that things would follow the core components – particularly, any part that implied data aggregation, analysis, or extraction. As long as the pcEHR is documents inbound, document navigation, and document view, the conceptual gaps the core information components leave don’t matter.

But as soon as you start wanting to do composite views, summaries, etc, you need to be sure about the data. And deviations from the Core Information Components make that impossible. And, in practice, many of the contributing systems deviate from the core information component specifications by not supporting required fields, adding extra fields, or having slightly different value sets for coded values, etc. There was this expectation that all the systems would be “adjusted” to conform to these core information components. And some were, though sometimes in a strictly notional sense that users will never actually exercise. But many systems never even tried, and it just wasn’t going to be practical to make them.

It probably sounds like I think the core information components are flawed, but I don’t really think they are – I think the issues I have listed should be understood as the limits of agreement within Australia in these regards. Given a much longer development time, a lot more money, and a lot better engagement, we could have added a few more fields, but I don’t think it would’ve made much difference. The problem is the existing systems – they are different. And maybe they could be rewritten, but that would cost a vast amount of money – and what would happen to the legacy data?

So any useful clinical leverage from the pcEHR in terms of using the data is pretty much a non-starter right now. Only the NPDR, where the prescribe and dispense documents are locked right down – only there do we have useful data analysis happening (and so far, we have only a few providers set up to push data to the NPDR. I wonder how the others will go – but prescribing is a fairly circumscribed area, so this might be ok).

I don’t see how the enquiry can make much difference in this area either – this is a generational problem. I guess there can be lots of moving the deck-chairs around, and blaming the symptoms. That’s how these large projects usually go….


Clinical Safety Workshop for Healthcare Application Designers

On November 12 in Sydney, I’ll be leading a “Clinical Safety for Healthcare Application Designers” workshop on behalf of the Clinical Safety committee of the Medical Software Industry Association (MSIA). This is the blurb that went out to MSIA members today:

Ensuring the clinical safety of any healthcare application is vital – but it’s hard. Not only do the economics of the industry work against it, but most of the information available is targeted at clinical users, and often isn’t relevant or useful to application designers. Yet it’s the designers who make the important choices – often in a context where they aren’t aware of all the consequences of their actions, and where feedback, if it comes at all, is unreliable and often unusable.

Attendees at the MSIA clinical safety workshop will work through practical examples (often real ones) to help raise awareness of clinical safety issues in application design, and will come away with a set of tools and plans to mitigate the issues. Topics covered include general clinical safety thinking, along with identification, classification, presentation, and data access, migration, and aggregation issues.

The workshop is sponsored by the MSIA Clinical Safety Committee, and will be led by Grahame Grieve, who has 20 years of experience in healthcare, application design, and information exchange, and who also served on the PCEHR clinical safety work group.

Not all MSIA members are on the distribution list, and some of the target audience track my blog, so I thought I’d announce it here. Attendance is restricted to MSIA members, and it’s free. If you’re not on Bridget’s (the MSIA CEO) email list and you’re interested in attending, send me an email.


Australian FHIR Connectathon and Tutorial

We’ll be holding a FHIR connectathon here in Australia as part of IHIC 2013 (the International HL7 Interoperability Conference) in Sydney in late October 2013 (around the 28th-30th).

This is an opportunity for Australasian implementers and vendors to get practical experience in FHIR. Here’s why you should consider attending:

  • Find out what all the excitement is about
  • Get a head start building FHIR into your products
  • Get a real sense of what FHIR is good for, and what it isn’t
  • Help ensure that FHIR meets real-world Australasian requirements
  • Be a recognised part of the FHIR community
  • Connectathons are real fun

Realistically, this is likely to be the only connectathon held here in Australia (at least, the only one prior to the publication of FHIR as a DSTU). The connectathon is focused on the actual exchange of content, not theory (there is a place for theory, but the connectathon is not part of it). So it’s really suitable for technical staff, though we have had a few non-technical staff brush off the cobwebs on their development skills just so that they could participate in the HL7 connectathons.

FHIR connectathons always start with exchanging patient information, and then we build on that depending on the interests of the actual participants. We’ll be putting out a call for expressions of interest in a few months’ time, after which we’ll start clarifying what our actual scenarios will be. However, I expect that MHD (the mobile-friendly version of XDS) is likely to be in scope.

In about July/August I’ll hold a 1 or 2 day tutorial here in Melbourne looking at FHIR in depth, with a focus on implementation issues. This tutorial will start with a general review of FHIR, and then look in depth at the FHIR reference platforms, how to use http tools to exercise and debug interfaces, how to convert from v2 to FHIR and vice versa, and, in detail, at various models for implementing a server.

If you might be interested in attending this tutorial, drop me a line at – I’m trying to get a feel for event hosting issues such as facility and cost.

HIMSS 13 – New Orleans

I am at HIMSS 13 in New Orleans for the next few days.

I’ll be at the HL7 Booth (4325) on

  • Monday 4:30 – 6:00 (FHIR Presentation at 4:30)
  • Tuesday 3:30 – 4:30
  • Wednesday 11:00 – 12:30 (FHIR Presentation at 11:00)

The rest of the time, I’ll be hanging out at Booth 4219 (Dynamic Health IT).

Feel free to look me up.

Question: How to use FHIR for ABBI


How could you represent the ABBI text data using FHIR resources?

Background: We (the FHIR project team) have had some interaction with the ABBI development team in the past week or so. They are looking for a more formal representation for ABBI than text, and JSON has emerged as a possible / favored candidate. That led them to looking at FHIR to see whether we had anything to offer in this space.


Well, FHIR certainly has a lot to offer – something that is developing quickly, but is on the standards track, that offers both XML and JSON (interchangeably, which isn’t as easy as it sounds), works with REST and the web, and is easy to use. But there’s something missing: FHIR doesn’t have a relevant resource for the core of the ABBI data set. So we had a fruitful interchange with some of the ABBI team, and they sent us some definitions and sample (real) data (yay – sample data – that’s what we like, real data), and we knocked up a straw-man Claim resource that would be suitable for use with ABBI data, that draws on our own experience with claims data, and that is consistent with the way things are done in FHIR.
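That parenthetical “interchangeably, which isn’t as easy as it sounds” is worth unpacking. Here’s a small sketch of one of the problems (this is not FHIR’s actual conversion algorithm, which is schema-driven; the converter and element names below are just illustrative): a naive XML-to-JSON converter can’t tell a singular element from a repeating one without the schema, so the JSON shape changes depending on how many elements happen to be present.

```python
# Sketch: why naive XML<->JSON conversion is lossy. Without a schema,
# a converter only discovers that an element repeats when it sees a
# second occurrence - so the same field comes out as a string in one
# instance and a list in another.
import xml.etree.ElementTree as ET

def naive_xml_to_json(element):
    """Convert an XML element to a dict; repeated child names become lists."""
    result = {}
    for child in element:
        value = naive_xml_to_json(child) if len(child) else child.text
        if child.tag in result:
            # second occurrence: silently switch this field to a list
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

one = ET.fromstring("<Patient><name>Smith</name></Patient>")
two = ET.fromstring("<Patient><name>Smith</name><name>Smyth</name></Patient>")

print(naive_xml_to_json(one))  # {'name': 'Smith'} - a string
print(naive_xml_to_json(two))  # {'name': ['Smith', 'Smyth']} - a list
```

FHIR avoids this by defining the JSON form from the element definitions (cardinalities and all), not by mechanical transformation of the XML.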

Since this is (presently) out of band work done at the behest of some members of the ABBI team, it’s not going to be part of the FHIR spec published at (for now, at least). Instead, it’s published here:

Claim Resource

Covers both initial claims (invoices) and adjudicated claims (EOBs – Explanations of Benefits). UML:


Note: there’s a problem with the representation of subLine – it should be a relationship back to line (something for me to investigate in the tooling).

The XML definition for this is:

<Claim xmlns="">
 <identifier><!-- 0..1 Identifier Id for this claim instance --></identifier>
 <adjudicating><!-- 0..1 Resource(Claim) For adjudication, what invoice is it --></adjudicating>
 <type><!-- 0..1 CodeableConcept Type of claim --></type>
 <period><!-- 0..1 Period Timeframe covered by claim --></period>
 <patient><!-- 1..1 Resource(Patient) Covered party --></patient>
 <billingProvider><!-- 1..1 Resource(Person|Organization) Who is billing --></billingProvider>
 <coverage><!-- 1..1 Resource(Coverage) What coverage is claim against --></coverage>
 <precedingClaims><!-- 0..* Resource(Claim) Other coverages billed --></precedingClaims>
 <netAmount><!-- 1..1 Money Amount to be paid --></netAmount>
 <allowedAmount><!-- 0..1 Money Revised amount to be paid --></allowedAmount>
 <coveredAmount><!-- 0..1 Money Amount paid by coverage --></coveredAmount>
 <nonCoveredAmount><!-- 0..1 Money Left to pay by patient and/or subsequent coverage --></nonCoveredAmount>
 <line>  <!-- 0..* Breakdown of claim -->
  <code><!-- 1..1 CodeableConcept Billing code --></code>
  <modifier><!-- 0..* CodeableConcept Qualifiers for billing code --></modifier>
  <unitAmount><!-- 0..1 Money Amount paid/unit --></unitAmount>
  <unitQuantity><!-- 0..1 Quantity Number of units --></unitQuantity>
  <netAmount><!-- 1..1 Money unitAmount * quantity + sublines --></netAmount>
  <allowedAmount><!-- 0..1 Money Cap on amount allowed to be billed --></allowedAmount>
  <coveredAmount><!-- 0..1 Money Allowed net amount with adjustments --></coveredAmount>
  <service>  <!-- 0..* Healthcare service item is for -->
   <type><!-- 0..1 CodeableConcept Service code --></type>
   <bodySite><!-- 0..* CodeableConcept Where on body was service done? --></bodySite>
   <period><!-- 0..1 Period Start and end date --></period>
   <performer><!-- 0..1 Resource(Agent|Organization) Who delivered service --></performer>
   <location><!-- 0..1 Resource(Location) Where performed --></location>
   <indication><!-- 0..* CodeableConcept Diagnosis or clinical reason --></indication>
   <details><!-- 0..1 Resource(ANY) Details of service --></details>
  </service>
  <adjustment>  <!-- 0..* What's not covered & why -->
   <reason><!-- 0..1 CodeableConcept Reason for adjustment --></reason>
   <amount><!-- 1..1 Money Amount of adjustment (usually negative) --></amount>
  </adjustment>
  <subLine><!-- 0..* Content as for Claim.line Sub-elements of line item --></subLine>
 </line>
 <extension><!-- 0..* Extension  See Extensions  --></extension>
 <text><!-- 1..1 Narrative Text summary of resource (for human interpretation) --></text>
</Claim>
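The comment on line.netAmount above encodes a computable rule: netAmount = unitAmount × unitQuantity + the netAmounts of any sublines. A minimal sketch of that invariant, using dicts shaped like the strawman (the function and field names are illustrative, not any FHIR library’s API; real money handling should use decimal arithmetic, as here, never floats):

```python
# Sketch of the invariant in the netAmount definition:
# line.netAmount = unitAmount * unitQuantity + sum of subline netAmounts.
from decimal import Decimal

def line_net_amount(line):
    """Compute the expected netAmount for a claim line, sublines included."""
    total = (Decimal(line.get("unitAmount", "0"))
             * Decimal(line.get("unitQuantity", "0")))
    for sub in line.get("subLine", []):
        total += line_net_amount(sub)
    return total

# Line 1 from the sample data below: one unit at $386.59, no sublines
line1 = {"unitAmount": "386.59", "unitQuantity": "1", "subLine": []}
print(line_net_amount(line1))  # 386.59
```

A receiver could run a check like this over adjudicated claims to flag lines where the stated netAmount doesn’t reconcile with its parts.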


Note that this is just a strawman for illustration purposes, to see what this would look like in JSON. So let’s see a couple of examples. First of all, let’s get the patient details out of the way. (And also, the formatting of this sucks compared to the standard FHIR representation, but the WordPress editor has defeated me in that regard this time around.)

What the ABBI team provided:

Date of Birth: 4/15/1941
Address Line 1: 415 EAST 71ST ST
Address Line 2:
State: NY
Zip: 10021
Phone Number:
Part A Effective Date: 11/1/2008
Part B Effective Date: 2/1/2009

Other than the Effective dates, which are really coverage information, this goes in the Person resource, and is represented in JSON like this:

  "Person" : {
    "name" : [{
      "family" : "BOURNE",
      "given" : "JACYLN"
    "telecom" : [{
      "system" : "email",
      "value" : ""
    "birthDate" : "1941-04-15",
    "address" : [{
      "line" : "415 EAST 71ST ST",
      "city" : "NEW YORK",
      "state" : "NY",
      "zip" : "10021"
    "text" : {
      "status" : "generated",
      "div" : "..." 

Note that I was going to fill out the narrative in the div properly, but trying to represent html elements in the preformatted text throws the WordPress editor out, so you’ll just have to imagine something proper there.

Moving on, to the core of the data, here’s sample data provided by the ABBI team:

Claim Number: 9427984358334
Provider Billing Address: CL #4685  PO BOX 95000 PHILADELPHIA PA 191954685
Service Start Date: 9/21/2012
Service End Date: 
Amount Charged: $386.60
Medicare Approved: $386.59
Provider Paid: $386.59
You May be Billed: $0.00
Claim Type: PartB
Diagnosis Code 1: V1272
Diagnosis Code 2: 56210

Line number:  1
Date of Service From:  9/21/2012
Date of Service To:  9/21/2012
Procedure Code/Description:  G0105 - Colorectal Cancer Screening; Colonoscopy On Individual At High Risk
Modifier 1/Description:  
Modifier 2/Description:  
Modifier 3/Description:  
Modifier 4/Description:  
Quantity Billed/Units:  1
Submitted Amount/Charges:  $386.59
Allowed Amount:  $386.59
Non-Covered:  $0.00
Place of Service/Description:  24 - Ambulatory  Surgical Center
Type of Service/Description:  F - Ambulatory Surgical Center
Rendering Provider No:  A300070363
Rendering Provider NPI:  1124324181

Line number:  2
Date of Service From:  10/22/2012
Date of Service To:  10/22/2012
Procedure Code/Description:  G8907
Modifier 1/Description:  
Modifier 2/Description:  
Modifier 3/Description:  
Modifier 4/Description:  
Quantity Billed/Units:  1
Submitted Amount/Charges:  $0.01
Allowed Amount:  $0.00
Non-Covered:  $0.01
Place of Service/Description:  24 - Ambulatory  Surgical Center
Type of Service/Description:  F - Ambulatory Surgical Center
Rendering Provider No:  A300070363
Rendering Provider NPI:  1124324181

Represented in JSON following the definitions in the Claim resource above, this looks like this:

 "Claim" : {
    "identifier": {
        "id" : { "value" : "9427984358334" } 
    "type" : {
        "coding" : {
            "code" : "PartB"
    "period" : {
        "start" : { "value" : "2012-09-21" }
    "patient" : {
        "type" : "Patient", 
        "id" : "example-abbi"
    "billingProvider" : {
      "Organization" : {
        "identifier" : [ {
           "label" : "Provider No",
           "identifier" : {
             "id" : "A300070363"
        }, {
           "use" : "official",
           "label" : "Provider NPI",
           "identifier" : {
             "system" : "",
             "id" : "1124324181"
        } ]
        "name" : { value : "MANHATTAN ENDOSCOPY CENTER L" },
        "address" : {
          "line" : "CL #4685  PO BOX 95000",
          "city" : "PHILADELPHIA",
          "state" : "PA",
          "dpid" : "191954685"
    "netAmount" : {
        "value" : "386.60".
        "units" : "$"
    "allowedAmount" : {
        "value" : "386.59",
        "units" : "$"
    "coveredAmount" : {
        "value" : "386.59",
        "units" : "$"

    "line" : [ {
      "code" : {
        "coding" : {
          "system" : "",
          "code" : "1",
          "display" : "Medical Care"
      "unitAmount" : {
        "value" : "386.59",
        "units" : "$"
      "unitQuantity" : {
        "value" : "1"
      "netAmount" : {
        "value" : "386.59",
        "units" : "$"
      "allowedAmount" : {
        "value" : "386.59",
        "units" : "$"
      "service" : [ {
        "type" : {
          "coding" : {
            "code" : "G0105",
            "display" : "Colorectal Cancer Screening; Colonoscopy On Individual At High Risk"
        "period" : {
          "start" : "2012-09-21",
          "end" : "2012-09-21"
        "location" : {
          "Location" : {
             "code" : {
               "coding" : {
                 "code" : "24",
                 "display" : "Ambulatory  Surgical Center"
      } ]
      "code" : {
        "coding" : {
          "system" : "",
          "code" : "2",
          "display" : "Administrivia"
      "unitAmount" : {
        "value" : "0.01",
        "units" : "$"
      "unitQuantity" : {
        "value" : "1"
      "netAmount" : {
        "value" : "0.01",
        "units" : "$"
      "allowedAmount" : {
        "value" : "0.00",
        "units" : "$"
      "service" : {
        "type" : {
          "coding" : {
            "code" : "G8907"
        "period" : {
          "start" : "2012-10-22",
          "end" : "2012-10-22"
        "location" : {
          "Location" : {
             "code" : {
               "coding" : {
                 "code" : "24",
                 "display" : "Ambulatory  Surgical Center"
    "text" : {
      "status" : "generated",
      "div" : "..."

Though there’s plenty of discussion to be had around the exact design of the resource, what elements there should be, etc, this is enough to show roughly what the output would look like using JSON resources as defined by FHIR. Coming to this cold, most readers would probably comment on a couple of apparently spurious levels of nesting, which equate to places for redirection or abstraction – such as code…coding…code, or location…Location. There are good reasons for this nesting, though we’re still looking at cleaning it up so it looks better in the final instance.
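To make the code…coding…code nesting concrete: a CodeableConcept can carry the same concept in several code systems at once (plus free text), and the inner coding level is where that list lives. A minimal sketch of a consumer working with that shape (the helper function is hypothetical, and the data is just the G0105 coding from the example above):

```python
# Why the extra nesting: "coding" can hold multiple codings for one
# concept, so consumers pick the one from the code system they know.
line_code = {
    "coding": [
        {"code": "G0105",
         "display": "Colorectal Cancer Screening; Colonoscopy On Individual At High Risk"}
    ]
}

def first_code(concept, system=None):
    """Return the first coding's code, optionally filtered by code system."""
    for coding in concept.get("coding", []):
        if system is None or coding.get("system") == system:
            return coding.get("code")
    return None

print(first_code(line_code))  # G0105
```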

p.s. Lloyd McKenzie did much of the preparatory work for this post, including the original Claim resource design – thanks.


GPL v3 and Java programs

The GPL v3 includes this definition:

A “covered work” means either the unmodified Program or a work based on the Program.

and states that covered works are also covered under the GPL v3 (which is what makes it a viral license). But the big question is: what is a work that is “based on the Program”? The only clarification that I could find with regard to this is from the FAQ:

If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them. If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program.

“fork and exec” is a unix term. Dynamic linking is not a natural Java concept. So how does that apply to a Java program? I couldn’t find any good information about this (lots of opinion, but nothing reliable). So I sought clarification from the FSF. Here, reproduced in the spirit of free speech, is their response, from Joshua Gay, the licensing & compliance manager at the FSF:

The latest version of Oracle’s Java ProcessBuilder class provides the functionality for you to make *simple* fork and exec function calls that can be run on most operating systems and that are equivalent to those you would expect to find on any UNIX-like operating systems. If your program uses only the *simple* fork and exec functionality provided by the ProcessBuilder class to invoke and communicate with a GPL covered work, then your program and the GPL covered program can most likely be considered separate programs; therefore, the GPL would make no requirements on your program.

Note the emphasis on *simple*. The ProcessBuilder class allows the calling program to pass variables to the invoked program. You could use this functionality to “share data structures”, though presumably passing a file name reference wouldn’t qualify as “sharing data structures” between programs.
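The arm’s-length pattern the FSF response describes can be sketched like this (in Python rather than Java’s ProcessBuilder, but the shape is the same): the GPL-covered tool runs as a separate process, and communication is limited to command-line arguments, exit status, and output. A real invocation would name the actual tool, something like `["java", "-jar", "plantuml.jar", source_file]` – that path is illustrative; here a tiny stand-in command keeps the sketch runnable:

```python
# Fork-and-exec a separate program, passing only a file name - no
# function calls or shared data structures between the two programs.
import subprocess
import sys

def run_external_tool(source_file):
    """Invoke a separate process, communicating only via argv and stdout."""
    result = subprocess.run(
        # stand-in for the real tool: a child process that echoes its argument
        [sys.executable, "-c", "import sys; print(sys.argv[1])", source_file],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(run_external_tool("diagram.uml"))  # diagram.uml
```

Per the FSF response, keeping the interaction this simple is what lets the two programs “most likely be considered separate programs”.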


I was interested in using some code covered by the GPL v3 in the FHIR build process (PlantUML, to automatically draw UML diagrams). But we could not take the risk that using GPL-licensed code would create GPL obligations for the build program and its source, because that includes the Java reference implementation, and I didn’t want that to be covered under the GPL (it’s covered under a much more permissive licence that creates a different sort of freedom). However, the question ended up being moot, because the provider of PlantUML released a modified version licensed under the LGPL, for which I am deeply appreciative.


I am not a lawyer, and advice I got from a lawyer isn’t legal advice to you either.

In a legal sense, the GPL is probably to be understood as a contract between the user of some software and the provider of the software. There is very little case law around the world with regard to how this kind of contract would be understood by the courts. Note that the definition of “covered work” provided at the top would be interpreted by the court, and it would be at the court’s discretion whether to give weight to any advice such as that provided above. Add to this the breadth of variation in contract and IP law around the world, and the situation is very unclear to me. (I recommend Heather Meeker’s “The Open Source Alternative: Understanding Risks and Leveraging Opportunities” for interested parties.)

Still, the opinion here has value for me, because it’s not so much the actual legal position that matters as people’s opinions (perception is reality…). And here, the FSF’s opinion carries real weight.