
Gender Participation in the #FHIR Community

This is post #3 in my series about why to participate in the FHIR standards process.

A few weeks ago, I attended the AIIA awards night at the kind invitation of the eHealth team from CSIRO. One of the speakers was the Victorian Minister for Small Business, the Hon Philip Dalidakis. The presentation was the day after the sad passing of another Victorian minister, Fiona Richardson, and in her memory, he made an inspired plea for us all to actively consider whether there’s anything that we can or should do to improve the gender imbalance that’s typical in IT.

HL7 – and the FHIR community – has the gender imbalance that’s characteristic of IT communities, though it’s also a health community, so the divide is not as stark as in some other fields. But it’s nowhere close to 50:50, and his words made me wonder whether we – or I – are in a position to do anything more about that. After the presentation, I spoke to Minister Dalidakis about what levers we might have in an open standards community to encourage more balanced participation – they’re different to those you can or should use in a commercial setting. He graciously gave me several pieces of advice, and since then I’ve been discussing this with the FHIR team, and particularly with our female leaders in the community.

FHIR and HL7 are great communities to be involved with, and that’s particularly true if you’re a woman – that’s what our female leaders tell me.

They say this is because:

  • We have a strong governance model that is good at managing contention (we have a fair bit of it to manage!)
  • Everyone is treated equally, and mostly treated well (the issues mentioned here are gender neutral)
  • Our discussions are thoughtful and respectful
  • The healthcare vertical is inherently non-confrontational, non-violent

And FHIR is a great place to contribute. Paula Braun says:

Many of the important indicators about our health…e.g., blood pressure, abnormal lab results, etc…are invisible to us. Without access to this data, we and the professionals we entrust to take care of us, are operating in the dark. The older, outdated ways of accessing and exchanging health information have an “I know better than you” feel to them. It was the equivalent of somebody saying, “Hey there girl, don’t worry your pretty little head about how this all works. It’s much too complicated for you.” FHIR is different. FHIR is a community where motivated people self-select to un-break healthcare…at least the IT part of healthcare. I don’t consider myself a “techie” but I choose to participate in the FHIR community because of the possibilities that FHIR enables, the professionalism that is maintained, and, most importantly, because it’s fun to be part of a movement that is transforming the dominant assumptions and expectations about healthcare.

Michelle Miller (who is rightfully one of Health Data Management’s 2016 Most Powerful Women in Health Care IT) says:

I participate in the FHIR community because:

  • Even with my bright pink “First-Time Attendee” ribbon, I quickly learned that my input was valued.
  • HL7 FHIR has a focus on those who adopt and implement the standard specification, such that implementer involvement and input is very much respected and encouraged.
  • After getting energized by the fantastic collaboration that occurred during the HL7 work group meetings, I started attending weekly work group conference calls to continue the discussion
  • I feel strongly that all changes, big and small, help make the FHIR specification that much better for the next implementer or developer to adopt.  Adoption is what makes a specification a standard because without adoption, we haven’t really achieved interoperability
  • I have been so impressed with the knowledge, collaboration and overall friendliness of the HL7 FHIR community. The discussion is always thoughtful and respectful, such that I have high confidence in FHIR’s ability to maximize interoperability.

In sum, it is energizing for me to collaborate with such knowledgeable experts on a subject (healthcare) that is so meaningful and impactful (bigger than just me, my job, or even my country).  Despite the diversity in our perspectives (country, vendor, government, technical vs clinical etc.), the FHIR community is genuinely interested in reaching the best conclusion because adoption is what makes a specification a standard and achieves interoperability

Michelle has a full write up about her involvement on the Cerner Blog.

So the FHIR community is a great place for women who want to make a difference to contribute. If you’re thinking about it – we’d love to have you involved; please get in contact with me, or one of:

(though there are many other valued contributors as well).

Still, there’s plenty we can do to improve:

  • One particularly pertinent issue around gender participation is about time requirements. HL7 is both good and bad here – most participation is remote, and really flexible in terms of hours and location – that’s a big deal. But there are also face-to-face meetings that require travel – that can be a real problem, and HL7 has struggled to find a practical solution around remote participation (it’s an inherently hard problem).
  • There’s general agreement that we could do a lot better with regard to welcoming, induction, and initial training procedures – these are actually issues for both genders – so that’s something that we’re going to work on
  • We need to communicate better that the FHIR community is not just engineers and hackers (who lean male) – it’s about health, and clinicians and nurses (and business managers) are just as much implementers with valuable contributions to make. Of course, the FHIR community includes both genders across all these roles.
  • Good consultative leadership is hard to find, and we need/want more of that
  • We have good leaders – we need to recognize the ones we have.
  • We could keep an eye on statistics around assignment of administrative duties (“housework”) at HL7 – but we don’t

Note that all these are really about maximizing our human capital. So, we have plenty of potential, but we aren’t really capitalizing on it. Increasingly, we are investing in human capital as our community grows, so watch this space.

Btw, this image from the Madrid meeting shows that we can do better on balance (though, in fact, we are on the whole more balanced than this particular photo):

Contributors recognized for contributions way beyond expectations to getting FHIR R3 published – featuring both Melva and Michelle

p.s. A note about the FHIR and HL7 communities: these are tightly related communities with a good deal of overlap, but they are also different in nature, processes, and composition, so we had to consider them both.

#FHIR is 5 years old today

Unofficial FHIR project historian Rene Spronk has pointed out that it’s exactly 5 years to the day since I posted the very first draft of what became FHIR:

Five years ago, on August 18th 2011 to be precise, Grahame Grieve published the initial version of FHIR (known as RFH at the time) on his website. The date of the initial version was August 11th – which is the reason for this post today. Congratulations to all involved for helping to create a success – FHIR has gained a lot of interest over the past few years, and a normative version will be published in the near future.

Wow. 5 years! Who would have thought that we’d end up where we are? I really didn’t expect much at all when I first posted RfH back then:

What now? I’m interested in commentary on the proposal. If there’s enough interest, I’ll setup a wiki. Please read RFH, and think about whether it’s a good idea or not

Well, there was enough interest, that’s for sure.

And it’s rather a coincidence, then, that on the 5th anniversary of the first posting, I’ve just posted the ballot version for the STU 3 ballot. This version is the culmination of a lot of work. A lot of work by a lot of people. Lloyd McKenzie and I have been maintaining a list of contributors, but so many people have contributed to the specification process now that I don’t know if we’re going to be able to keep even a semblance of meaningfulness for that page. I’ll post a link to that version soon, with some more information about it.

p.s. Alert readers will note that the blog post announcing RfH was dated Aug 18th – but it was first posted August 11th.

#FHIR Implementer’s Safety Checklist

One topic that comes up fairly often when I talk to procurers and users of interoperability is ‘clinical safety’. Everyone knows why it’s important, but it’s much harder to pin down what it is, how to measure it, or how to ‘be safe’. With this in mind, the FHIR specification includes an implementer safety checklist. All developers implementing FHIR should run through the content of this safety checklist before and after the implementation process. But the lack of feedback I get about it suggests to me that not many people read it.

With this in mind, I’ll be asking participants in this weekend’s connectathon in Orlando to fill it out. I’m sure we’ll get comments from that. Here’s the safety checklist, with my comments, but first:

Almost all interoperability developments occur in a limited context with one to a few trading partners, and relatively well controlled requirements. In this context, safety consists of testing the known functionality, but all too often, ignoring all the other things that might happen. However, experience shows that over time, new participants and new requirements will creep into the ecosystem, and safety features that appeared unnecessary in a well controlled system turn out to be necessary after all. These safety checks below are mostly chores, and are easily ignored, but a warning: you ignore them at your peril (actually, it’s worse than that – you ignore them at other people’s peril).

Production exchange of patient or other sensitive data will always use some form of encryption on the wire.
This is a fairly obvious thing to say in principle, but it’s extremely common to find insecure exchange of healthcare data in practice. FHIR does not mandate that all exchange is encrypted, though many implementers have commented that it should. There are some valid use cases not to use encryption, such as terminology distribution etc. Implementers should check that their systems are secure.

For each resource that my system handles, I’ve reviewed the Modifier elements.
In resource definitions, a number of elements are marked as modifying elements. Implementers are not required to support these elements in any meaningful fashion. Instead, implementers are required to ensure that their systems do not inappropriately ignore any of the possible values of the modifier elements. This may be achieved by:

  • Ensuring that these values will never occur through proper use of the system (e.g. documenting that the system only handles human patients)
  • Throwing an exception if an unsupported value is received
  • Ignoring the element that contains the modifier element (so that the value is irrelevant anyway)

Note that applications that store and echo or forward resources are not ‘processing the resources’. Processing the resources means extracting data from them for display, conversion to some other format, or some form of automated processing.

My system checks for modifierExtension elements.

Modifier Extensions are only seen rarely, but when they exist, they mean that an implementer has extended an element with something that changes the meaning of the element, and it’s not safe to ignore the extension. For safety purposes, implementers should routinely add some kind of code instruction like this:

Assert(object.hasNoModifiers, “Object at path %p has Unknown modifier extensions”)

This should be done for each object processed. Of course, the exact manifestation of this instruction will vary depending on the language. Performing these checks is a chore, so it’s frequently not done, but it should be done for safety purposes. Note that one cheap way to achieve this is to write a statement in the documentation of the application: “Do not send this application any modifier extensions”. Like all cheap ways, this is likely to not be as effective as actually automating the check.
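By way of illustration, here’s a minimal Python sketch of such a check over a parsed JSON resource. The helper names are my own, not from any FHIR library:

```python
def find_modifier_extensions(node, path="$"):
    """Recursively collect the paths of all modifierExtension elements
    in a parsed FHIR JSON resource (a hypothetical helper)."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            child_path = f"{path}.{key}"
            if key == "modifierExtension":
                found.append(child_path)
            found.extend(find_modifier_extensions(value, child_path))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            found.extend(find_modifier_extensions(item, f"{path}[{i}]"))
    return found

def assert_no_modifiers(resource):
    """Refuse to process a resource that carries modifier extensions we
    don't understand -- the safety check suggested above."""
    paths = find_modifier_extensions(resource)
    if paths:
        raise ValueError(f"Unknown modifier extensions at: {', '.join(paths)}")
```

A real implementation would whitelist the modifier extensions it does understand before raising, but the shape of the check is the same.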

My system supports elements labelled as “must-support” in the profiles that apply to my system.

Implementation Guides are able to mark particular elements as ‘must-support’. This means that although the element is optional, an application must be able to populate or read the element correctly. What precisely it means to do this correctly varies widely, so Implementation Guides must indicate exactly what they mean when marking an element as ‘must-support’, and applications that claim to conform need to do whatever is prescribed.

For each resource that my system handles, my system handles the full life cycle (status codes, record currency issues, and erroneous entry status).
Many resources have a life cycle tied to some business process. Applications are not required to implement the full business life cycle – they should implement what is needed. But systems need to fail explicitly if the life cycle they expect does not match the content of the resources they are receiving.

A common and important area where applications fail to interoperate correctly is when records are created in error, or linked to the wrong context, and then must be retracted. For instance, when a diagnostic report is sent to an EHR linked to the wrong patient. There are a variety of ways to handle this, with different implications for the record keeping outcomes. Failure to get this right is a well-known area of clinical safety failure.

The FHIR specification makes some rules around how erroneous entry of resources is indicated. Applications should ensure that they handle these correctly.
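As a sketch of what ‘fail explicitly’ might look like, here’s a hypothetical consumer that only processes the report statuses it supports, and routes entered-in-error records to a retraction path. The supported status set is illustrative, not a statement about any particular resource:

```python
# Statuses this (hypothetical) consuming application knows how to handle
KNOWN_REPORT_STATUSES = {"partial", "preliminary", "final", "amended", "corrected"}

def triage_report(report):
    """Decide how to handle an incoming report-like dict.
    Returns 'process' or 'retract', or raises -- failing explicitly
    rather than silently ignoring a life cycle we don't support."""
    status = report.get("status")
    if status == "entered-in-error":
        # The record was created in error: retract it from local views
        return "retract"
    if status in KNOWN_REPORT_STATUSES:
        return "process"
    raise ValueError(f"Unsupported report status: {status!r}")
```

The point is the explicit failure branch: an unexpected status surfaces as an error to be handled, not as silently dropped data.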

My system can render narratives properly (where they are used).
The general theory of text vs data is discussed here and here. Resources can contain text, data or both. Systems are not obliged to be able to display the narrative; they can always choose to process the data. But in many cases, it’s a good idea to offer the user a choice to see the original narrative of the resource (or resources, in many cases), particularly for clinical resources. This might be described as ‘see original document’ in user-relevant language.

The FHIR specification makes no explicit requirements in this regard, since the correct behaviour is so variable. Implementers should judge for themselves what is appropriate in this regard.

My system has documented how distributed resource identification works in its relevant contexts of use, and where (and why) contained resources are used.
Many of the clinical safety issues that arise in practice arise from misalignment between systems around how identification and identifiers work. In the FHIR context, this risk is particularly acute given how easy it is to develop interfaces and connect systems together. Any applications that assign identifiers or create resources with an explicit identity should document their assumptions and processes around this. This is particularly important where there is the prospect of more than two trading partners.

The same applies to contained resources: a system should refrain from using contained resources as much as possible, and where it is necessary, document the usage.

My system manages lists of current resources correctly.
One important use of the List resource is for tracking ‘current’ lists (e.g. the current problem list). Current lists present a challenge for the FHIR API, because there’s no inherent property of the list that marks it as ‘current’: there may be many ‘medication lists’ in an application, but only a few (or one) of them are the ‘current’ list. What makes a list current is its context – how it is used – not an inherent property of the list. The FHIR API defines an operation that can be used to get a current list for a patient:

GET [base]/AllergyIntolerance?patient=42&_list=$current-allergies

The FHIR specification defines several list tokens for use with this operation, but there’s a long way to go before these concepts are well understood and exchangeable. If the system has current lists, it must be clear how to get the correct current list from the system, and how to tell which lists are not current.
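A client might build that query with the standard library doing the parameter escaping, for example (a sketch; the base URL is hypothetical, and the list token is one of those defined in the specification):

```python
from urllib.parse import urlencode

def current_list_url(base, resource_type, patient_id, list_token):
    """Build the search URL for a 'current list' query, letting the
    standard library handle parameter escaping rather than relying on
    string concatenation."""
    query = urlencode({"patient": patient_id, "_list": list_token})
    return f"{base}/{resource_type}?{query}"
```

Letting `urlencode` do the work means characters like `$` in the list token are escaped consistently, which matters once the values stop being hand-picked constants.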

My system makes the right Provenance statements and AuditEvent logs, and uses the right security labels where appropriate.
Provenance and AuditEvent are two important and related resources that play key roles in tracking data integrity. Provenance is a statement made by the initiator of an update to the data providing details about the data change action, and AuditEvent is a statement made by the data custodian about a change made to the data. On a RESTful API, the provenance statement is made by the client, and the AuditEvent is created by the server. In other contexts, the relationships may not be so simple.

My system checks that the right Patient consent has been granted (where applicable).
Patient consent requirements vary around the world. FHIR includes the ability to track and exchange patient consent explicitly, which is a relatively new integration capability. Various jurisdictions are still feeling out how to exchange consent to meet legislative and cultural requirements.

When other systems return HTTP errors from the RESTful API and Operations (perhaps using OperationOutcome), my system checks for them and handles them appropriately.
Ignoring errors, or not handling them properly, is a common operational problem when integrating systems. FHIR implementers should audit their systems explicitly to be sure that the HTTP status code is always checked, and errors in OperationOutcomes are handled correctly.
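One way to make that audit concrete is to funnel every response through a single checking function, along these lines (a sketch, not from any FHIR client library):

```python
import json

def check_fhir_response(status_code, body_text):
    """Check an HTTP response from a FHIR server and surface any errors,
    rather than silently ignoring them. Returns the parsed body on
    success; raises with OperationOutcome details on failure.
    (A real client would also inspect headers, content types, etc.)"""
    body = json.loads(body_text) if body_text else None
    if 200 <= status_code < 300:
        return body
    # Extract human-readable detail from an OperationOutcome, if present
    issues = []
    if body and body.get("resourceType") == "OperationOutcome":
        for issue in body.get("issue", []):
            issues.append(f"{issue.get('severity')}: "
                          f"{issue.get('diagnostics') or issue.get('code')}")
    detail = "; ".join(issues) or "no OperationOutcome provided"
    raise RuntimeError(f"HTTP {status_code}: {detail}")
```

Centralising the check makes it hard to forget the status code on any one code path, which is exactly the failure mode described above.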

My system publishes a conformance statement with StructureDefinitions, ValueSets, and OperationDefinitions, etc., so other implementers know how the system functions.
While servers have no choice but to publish a conformance statement, the degree of detail is up to the implementer. The more detail published, the easier it will be for systems to integrate. Clients should publish conformance statements too; there is much less focus on this, but the computable definition of system functionality will be just as important.

My system produces valid resources.
It is common to encounter production systems that generate invalid v2 messages or CDA documents. All sorts of invalid content can be encountered, including invalid syntax due to not escaping properly, wrong codes, and disagreement between narrative and data.

In the FHIR ecosystem, some public servers scrupulously validate all resources, while others do not. It’s common to hear implementers announce at connectathon that their implementation is complete, because it works against a non-validating server, and not worry about the fact it doesn’t work against the validating servers.

Use the validation services to check that your resources really are valid, and make sure that you use a DOM (document object model) or are very careful to escape all your strings.
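To see why string handling is dangerous, compare these two ways of producing a Patient resource in Python – the naive version emits invalid JSON as soon as the name contains a quote:

```python
import json

def naive_patient_json(name):
    # The fragile approach: string concatenation breaks as soon as the
    # value contains a quote, backslash, or control character
    return '{"resourceType": "Patient", "name": [{"text": "' + name + '"}]}'

def safe_patient_json(name):
    # Build a data structure and let the serializer handle escaping
    return json.dumps({"resourceType": "Patient", "name": [{"text": name}]})

# A name containing quotes: the naive version emits invalid JSON,
# the serializer-based version round-trips correctly
tricky = 'John "Jack" O\'Neill'
```

The same argument applies to XML: use a writer or DOM API and the escaping problem disappears entirely.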

Check for implicitRules.
All resources can carry an implicitRules pointer. While this is discouraged, there are cases where it is needed. If a resource has an implicitRules reference, you must refuse to process it unless you know the reference. Remember to check for this.
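A minimal version of this check, assuming a (hypothetical) set of rule references the application understands:

```python
# The implicitRules references this (hypothetical) application understands
KNOWN_RULES = {"http://example.org/fhir/rules/special-discharge"}

def check_implicit_rules(resource):
    """Refuse to process a resource whose implicitRules reference we
    don't know -- by definition we can't interpret it safely."""
    rules = resource.get("implicitRules")
    if rules is not None and rules not in KNOWN_RULES:
        raise ValueError(f"Cannot process resource: unknown implicitRules {rules}")
```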

#FHIR and Postel’s Robustness Principle

An important principle in interoperability is Postel’s Robustness Principle:

Be conservative in what you send, be liberal in what you accept

There’s been quite a bit of discussion recently in various implementation forums about robustness of FHIR interfaces, and I think there are a few things to say about how to develop robust FHIR implementations.

Firstly, the FHIR core team endorses Postel’s principle – the pathway to robust interoperability is to be careful to be conformant in what you send, and to be as accepting as possible in what you receive. However, in practice, it’s not necessarily easy to see how to implement like that.

There are also some circumstances where this isn’t what you should do. As an example, when I started writing my reference server, I followed Postel’s law, and accepted whatever I could accept. However this fostered non-conformant implementations, so at the behest of the community, I’ve been gradually tightening up the rigor with which my server enforces correctness on the clients. For example, my server validates all submitted resources using the formal FHIR validator. Somewhat unfortunately, the main effect that has had is that implementers use one of the other servers, since their client works against that server. This’ll get worse when I tighten up on validating content type codes in the lead-in to the Orlando Connectathon. Generally, if an implementation is used as a reference implementation, it should insist that trading partners get it right, or else all implementations will be forced to be as generous as the reference implementation.

But let’s assume you wanted to follow Postel’s law. What would that mean in practice, using a FHIR RESTful interface?

Reading/Writing Resources

If you’re creating a resource, then you can start by ensuring that your XML or JSON is well formed. It’s pretty much impossible for a receiver to process improperly formed XML or JSON (or it’s at least very expensive), but experience shows that many implementers can’t even do that – I’ve seen this a lot. So for a start, never use string handling routines to build your resources – eventually, you’ll produce something invalid. Always, always use a DOM or a writer API.

Beyond this:

  • Ensure that all mandatory elements are present
  • Ensure that the correct cardinalities apply
  • Ensure that you use the right value sets
  • Always use UTF-8
  • etc

In fact, in general, if you are writing a resource, you should ensure that it passes the validator (methods for validation), including checking against the applicable profiles (whether they are explicit – stated in Resource.meta.profile – or implicit – from the conformance statement or other contextual clues).

If you’re reading a resource, then:

  • Only check the content of the elements that you have to use
  • Accept non-UTF-8 encoding
  • Only check for modifier extensions on the elements you actually use, and don’t check for other extensions (only look for extensions you know)
  • Accept invalid codes for Coding/CodeableConcept data types (further discussion below)

However, there’s not that much you can be graceful about with the content; generally, if you have to use it, it has to be right.

Using the RESTful API

In practice, when using the API, clients should ensure that they:

  • use the correct mime types for content-type and accept, and always specify a mime type (never leave it to the server)
  • construct the URL correctly, with all the escapable characters properly escaped
  • use the correct case for the URL
  • look for ‘xml’ or ‘json’ in the return content-type, and parse correctly without insisting on the correct mime type
  • handle redirects and continue headers correctly
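The ‘look for xml or json’ advice might be coded like this (a sketch, usable on either side of the exchange):

```python
def response_format(content_type):
    """Classify a Content-Type header leniently, as suggested above:
    look for 'xml' or 'json' rather than insisting on the exact FHIR
    mime types. Returns 'json', 'xml', or None."""
    ct = (content_type or "").lower()
    if "json" in ct:
        return "json"
    if "xml" in ct:
        return "xml"
    return None
```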

Servers should:

  • accept any mime types that have ‘xml’ or ‘json’ in them
  • only check headers they have to
  • accept URLs where not all the characters are escaped correctly (in practice, ‘ ‘, =, ?, and + have to be escaped, but other characters sometimes aren’t escaped by client libraries)
  • always return the correct FHIR mime types for XML or JSON
  • always return the correct CORS headers
  • ignore case in the URL as much as possible
  • only issue redirects when they really have to

Note: the full multi-dimensional grid of request/response mime types, and the _format parameter, is long and complex, so we’ve not specified the entire thing. As a consequence, outside of the recommendations above, there are dangerous waters to be encountered.

HTTP Parameters

One area that’s proven controversial in practice is how to handle HTTP parameters. With regard to search, the FHIR specification is quite specific: a server SHALL ignore HTTP parameters that it does not understand. This is because there may be reasons that a client has to add a parameter to the request because of requirements imposed by HTTP agents that intercept the request before it hits the FHIR server (this may be clients, proxies, or filters or security agents running on the server itself). In the search API, a server specifically tells a client which parameters it processed in the search results (Bundle.link, where rel = ‘self’), but this doesn’t happen in other GET requests (read, vread, conformance).

For robustness, then, a client should:

  • Only use parameters defined in the specification or in the server’s conformance statement (if possible)
  • check search results to confirm which ones were processed (if it matters)

A server should:

  • ignore parameters it doesn’t recognise
  • return HTTP errors where parameters it does recognise are inapplicable or have invalid content, or where it cannot conform to the requested behaviour
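On the client side, checking which parameters the server actually processed might look like this sketch, reading the ‘self’ link of a search Bundle:

```python
from urllib.parse import urlparse, parse_qs

def processed_parameters(bundle):
    """Extract the search parameters the server reports it actually
    processed, from the Bundle's 'self' link -- so a client can detect
    that a parameter it sent was silently ignored."""
    for link in bundle.get("link", []):
        if link.get("relation") == "self":
            return parse_qs(urlparse(link.get("url", "")).query)
    return {}
```

If a parameter the client sent (and cares about) is missing from the result, the client knows the server ignored it and can widen its own filtering or warn the user.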

ValueSet Variance

The things above really deal with syntactical variance. Postel’s Principle is relatively easy to apply in this way. It’s much harder to apply when the underlying business processes vary. Typical examples include:

  • exchanging data between two business processes that use different fields (e.g. they care about different things)
  • exchanging data between two business processes that use text/structured data differently (e.g. structured dosage vs a single text ‘dosage instructions’ field)
  • exchanging data between systems that use different value sets

To grapple with these issues, I’m going to work with the last example; it’s the easiest to understand and apply, though the basic principles apply to the others as well. In this case, we have 2 applications exchanging data between them, and they support different sets of codes. There’s a few different possibilities:

  • A sends B a code it doesn’t know
  • A sends B a code for something which is different to the one B uses
  • Either of the above cases, but B edits the record, and returns it to A

The way the CodeableConcept data type works is intimately linked to practical resolutions to these common cases. In order to support these cases, it has a text representation, and 0 or more Codings.

In HL7’s experience, Postel’s Principle, as applied to the exchange of coded information, says that

  • The source of the information should provide text, and all the codes they know
  • The text should be a full representation of the concept for a human reader
  • It is understood that the codings may represent the concept with variable levels of completeness e.g. the Concept might be ‘severe headache’, but the coding omits ‘severe’ and just represents ‘headache’

Note: there’s a wide variety of workflows that lead to the choice of a concept, and the process for selecting the text and the multiple codings varies accordingly. Since the subtle details of the process are not represented, the most important criterion for the correct choice of text is ‘does a receiver need to know how the data was collected to understand the text?’

  • A receiver of information should retain the text, and all the provided codes
  • When displaying the information to a user, the text is always what should be shown, and the formal codings may be shown additionally (e.g. in a hint, or a secondary data widget)
  • Decision support may choose one of the codes, but the user should always have a path back to view the text when (e.g.) approving decision support recommendations
  • When sending information on, a receiver should always send the original text and codes, even if it adds additional codes of its own
  • When a user or process changes the code to another value, all the existing codes should be replaced, and the text should be updated

Note: this implies that there’s a process difference between ‘adding another code for the same concept’ and ‘changing the concept’ and this change should be reflected in the information level APIs and surfaced in the workflow explicitly. But if there’s no difference…

  • if a system receives an update to a coded element (from UI or another system) that contains a different text, and codings, but at least one of the codings is the same, then this should be interpreted as ‘update the existing concept’. The text should be replaced and the codings merged
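A simplified model of that update rule, in Python (my own sketch, keying codings by system and code; real implementations would also handle versions, userSelected, etc.):

```python
def update_codeable_concept(existing, incoming):
    """Apply the update rule sketched above: if the incoming concept
    shares at least one coding with the existing one, treat it as an
    update -- replace the text and merge the codings; otherwise treat
    it as a change of concept -- replace text and codings wholesale."""
    def key(coding):
        return (coding.get("system"), coding.get("code"))

    existing_keys = {key(c) for c in existing.get("coding", [])}
    incoming_codings = incoming.get("coding", [])
    if existing_keys & {key(c) for c in incoming_codings}:
        # Same concept: merge codings, deduplicating by (system, code)
        merged = list(existing.get("coding", []))
        seen = set(existing_keys)
        for c in incoming_codings:
            if key(c) not in seen:
                merged.append(c)
                seen.add(key(c))
        return {"text": incoming.get("text"), "coding": merged}
    # No shared coding: the concept itself has changed -- replace everything
    return {"text": incoming.get("text"), "coding": incoming_codings}
```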

Many, if not most, systems do not follow this advice, and this often has workflow consequences. Note, though, that we’re not saying that this is the only way to manage this; more specific workflows are appropriate where more specific trading partnership details can be agreed. But the rules above are a great place to start from, and to use in the general case.

Beyond this general advice, specific advice can be provided for particular contexts. Here, for instance, is a set of recommendations for RxNorm:

  1. Don’t throw away codes (as suggested above). The system originating data needs to expose RxNorm codes, but has good reason to include any “real” underlying codes it used, if different (e.g. FDB). And downstream proxies, CDRs, EWDs, interface engines, etc. shouldn’t remove codes. FHIR has a way to indicate which was the “primary” code corresponding to what a clinician saw.
  2. Senders should expose valid RxNorm codes at the level of SCD, SBD, GPCK, or BPCK prescribables, not ingredients or dose forms. Namely, these codes should appear in RxNorm and mean the thing you want to say. It’s possible they may not be in the current “prescribable” list at the time you generate a data payload (e.g. for historical records), but active prescriptions should be. Furthermore, the conservative practice is to always use an up-to-date release of RxNorm. (And by RxNorm’s design, any codes that were once valid should be present in all future versions of RxNorm, even if obsolete.) These codes might not be marked “primary” in FHIR’s sense of the word
  3. Recipients should use the most recent version of RxNorm available, and should look through all codings in a received medication to find an RxNorm SCD, SBD, GPCK, or BPCK. If you find such a code and don’t “understand” it, that’s a client-internal issue and it should be escalated/handled locally. If you *don’t* find such a code, that’s a potential data quality issue; clients should log a warning and use any code they understand, or display the text value(s) to a user.
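Recommendation 3 might be sketched like this; here `prescribable_codes` stands in for a real lookup against a current RxNorm release restricted to the SCD/SBD/GPCK/BPCK term types (the function and parameter names are mine, for illustration):

```python
RXNORM = "http://www.nlm.nih.gov/research/umls/rxnorm"

def pick_med_code(codeable_concept, prescribable_codes):
    """Look through all codings for an RxNorm code at a prescribable
    level. Returns (code, warning): code is None when no usable RxNorm
    code was found, in which case the warning flags a data quality
    issue and carries the display text as a fallback."""
    for coding in codeable_concept.get("coding", []):
        if coding.get("system") == RXNORM and coding.get("code") in prescribable_codes:
            return coding["code"], None
    # No usable RxNorm code: fall back to text, and flag the issue
    return None, (f"no prescribable RxNorm code; "
                  f"display text: {codeable_concept.get('text')}")
```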

There’s a whole art form around version management of terminologies. I’ll take that up in a later post.

Dealing with People

One last comment about Postel’s principle: Interoperability is all about the people, and the same principle applies. If you want to interoperate, you need to get on with people, and that means that you need to use Postel’s principle:

Be generous with what other people say, be disciplined with what you say

A community of people who want to interoperate with others – I’d like to be part of that. But no, I already am! The FHIR community has been very good at this over the last few years.


p.s. this post dealt with the RESTful interface, but the basic principles apply in other contexts of use as well.


Underlying Issues for the pcEHR

There’s an enquiry into the pcEHR at the moment. As one of the small cogs in the large pcEHR wheel, I’ve been trying to figure out whether I have an opinion, and if I do, whether I should express it. However, an intersection of communications with many people – in regard to the pcEHR, FHIR, and other things – has convinced me that I do have an opinion, and that it’s worth offering here.

There’s a lot of choices to be made when trying to create something like the pcEHR. In many cases, people had to pick one approach out of a set of equivocal choices, and quite often, the choice was driven by pragmatic and political considerations, and is wrong from different points of view, particularly with regard to long-term outcomes. That’s a tough call – you have to survive the short-term challenges in order to even have long term questions. On the other hand, if the short term decisions are bad enough, there’s no point existing into the long term. And the beauty of this, of course, is that you only find out how you went in the long term. The historians are the ones who decide.

So now that there’s an enquiry, we all get to second-guess all these decisions, and make new ones. They’ll be different… but better? That, we’ll have to wait and see. Better is easier because you have hindsight, and harder because you have existing structure and investment to deal with.

But it seems to me that there are two underlying issues that need to be confronted, and that if we don’t confront them, we’ll just be rearranging deck chairs on the Titanic.

Social/Legal Problems around sharing information

It always seemed to me that, in the abstract, the pcEHR makes perfect sense: share the patient’s information via the person most invested in having it shared – the patient. The patient is the sick one, and if they choose to hide information, one presumes that this is the same information they wouldn’t volunteer to their treating clinician anyway – so what difference would it make?

Well, the difference between theory and practice is bigger in practice than in theory.

And the thing I’ve heard most often in the last couple of weeks with regard to the pcEHR is “it’s neither fish nor fowl” – is it a clinical record, or a patient record? I’m sure that the enquiry will be inundated with comments about this, but there’s a deeper underlying question here: what’s the clinician’s liability with regard to sharing information (both outgoing and incoming)? If a clinician fails to discover a condition because it’s not listed in the pcEHR, and they didn’t ask the patient, would it (ever) be an acceptable defence that they could reasonably expect it to be listed there? Is that something that would come about naturally by osmosis (or something), or are active cultural and legal changes needed here?

I’m not a clinician, but I rather think it’s the second. But there’s probably a Mexican stand-off here: you can’t find out whether such a defence would be reasonable until the (x)EHR is a Big Thing, and it won’t ever have any chance of being a Big Thing until this is resolved.

So the enquiry can recommend whatever it wants, but this underlying question isn’t in its scope, so far as I can see – and so it probably won’t make much difference?

Now I raise this – in spite of the fact I’m not a clinician – because it actually frames my second issue, and that’s something I do know about. The way it frames the second issue is that I don’t know whether the pcEHR is just a path for sharing information with the patient, or whether it’s actually intended to grow into the real patient care system that everything else is an extension of (the pcEHR documentation has a dollar each way on this issue, so I’m never sure). If the answer is the first – a one-way pipe to the patient – then my second issue is irrelevant. But I’ll raise it anyway, because lots of people are behaving as if the goal is a real healthcare provision system.

Lack of Technical Agreement

The underlying pcEHR specification for clinical agreement is the CDA document specification stack, consisting of the “Core Information Components”, the “Structured Document Templates” (aka Structured Content Specifications), and the CDA Implementation Guides. Of interest here are the Core Information Components, which say:

The purpose of the [X] Information Requirements is to define the information requirements for a nationally agreed exchange of information between healthcare providers in Australia, independent of exchange or presentation formats.

Then they say:

Note that some of the data elements included in this specification are required for ALL [X], whereas others need only be completed where appropriate. That is, a conformant [X] implementation must be capable of collecting and transferring/receiving all Information Requirement elements.

What this means is that these documents define a minimum required functional set – if all parties redesign their systems to do things this way, we’ll all be able to exchange data seamlessly.

There’s no discussion in these documents of systems carrying extra pieces of data not covered by the core information components, but the underlying approach really only works if there aren’t any. The problem is that these are “core” components – and that’s very much a reflection of the process: these are the things everyone agreed to (where “everyone” turns out to mean the people who were talking to NEHTA back then, which is far short of everyone who will implement). That leaves a lot of things out of scope.

So there are problems here in two directions: what if systems don’t support the core components? And what if they have other things?

Now the pcEHR was built on top of these specifications, and some parts of the pcEHR design expected that things would follow the core components – particularly, any part that implied data aggregation, analysis, or extraction. As long as the pcEHR is documents inbound, document navigation, and document view, the conceptual gaps the core information components leave don’t matter.

But as soon as you start wanting to do composite views, summaries, etc., you need to be sure about the data – and deviations from the Core Information Components make that impossible. In practice, many of the contributing systems deviate from the core information component specifications by not supporting required fields, adding extra fields, or having slightly different value sets for coded values, etc. There was an expectation that all the systems would be “adjusted” to conform to these core information components. And some were – though sometimes in a strictly notional sense that the users will never exercise. But many systems never even tried, and it just wasn’t going to be practical to make them.

It probably sounds like I think the core information components are flawed, but I don’t really think they are – I think the issues I’ve listed show that we have found the limits of agreement within Australia in these regards. Given a much longer development time, a lot more money, and much better engagement, we could have added a few more fields, but I don’t think it would’ve made much difference. The problem is the existing systems – they are different. Maybe they could be rewritten, but that would cost a vast amount of money, and what would happen to the legacy data?

So any useful clinical leverage from the pcEHR in terms of using the data is pretty much a non-starter right now. Only in the NPDR, where the prescribe and dispense documents are locked right down, do we have useful data analysis happening (and so far, we have only a few providers set up to push data to the NPDR. I wonder how others will go – but prescribing is a fairly circumscribed area, so this might be ok).

I don’t see how the enquiry can make much difference in this area either – this is a generational problem. I guess there can be lots of rearranging the deck chairs, and blaming the symptoms. That’s how these large projects usually go…


Clinical Safety Workshop for Healthcare Application Designers

On November 12 in Sydney, I’ll be leading a “Clinical Safety for Healthcare Application Designers” workshop on behalf of the Clinical Safety committee of the Medical Software Industry Association (MSIA). This is the blurb that went out to MSIA members today:

Ensuring the clinical safety of any healthcare application is vital – but it’s hard. Not only do the economics of the industry work against it, but most of the information available is targeted at clinical users, and often isn’t relevant or useful to application designers. Yet it’s the designers who make the important choices – often in a context where they aren’t aware of all the consequences of their actions, and where feedback, if it comes at all, is unreliable and often unusable.

Attendees at the MSIA clinical safety workshop will work through practical examples (often real) to help raise awareness of clinical safety issues in application design, and provide attendees with a set of tools and plans to mitigate the issues. Topics covered include general clinical safety thinking, and identification, classification, presentation, and data access, migration and aggregation issues.

The workshop is sponsored by the MSIA Clinical Safety Committee, and will be led by Grahame Grieve, who has 20 years of experience in healthcare, application design, and information exchange, and who also served on the PCEHR clinical safety work group.

Not all MSIA members are on the distribution list, and some of the target audience track my blog, so I thought I’d announce it here. Attendance is restricted to MSIA members, and it’s free. If you’re not on Bridget’s (the MSIA CEO) email list and you’re interested in attending, send me an email.


Australian FHIR Connectathon and Tutorial

We’ll be holding a FHIR connectathon here in Australia as part of the IHIC 2013 – International HL7 Interoperability Conference in Sydney in late October 2013 (around 28-30).

This is an opportunity for Australasian implementers and vendors to get practical experience in FHIR. Here’s why you should consider attending:

  • Find out what all the excitement is about
  • Get a head start building FHIR into your products
  • Get a real sense of what FHIR is good for, and what it isn’t
  • Help ensure that FHIR meets real-world Australasian requirements
  • Be a recognised part of the FHIR community
  • Connectathons are real fun

Realistically, this is likely to be the only connectathon held here in Australia (at least, the only one prior to the publication of FHIR as a DSTU). The connectathon is focused on the actual exchange of content, not theory (there is a place for theory, but the connectathon is not part of it). So it’s really only suitable for technical staff – though we have had a few non-technical staff brush the cobwebs off their development skills just so that they could participate in the HL7 connectathons.

FHIR connectathons always start with exchanging patient information, and then we build on that depending on the interests of the actual participants. We’ll be putting out a call for expressions of interest in a few months’ time, after which we’ll start clarifying what our actual scenarios will be. However, I expect that MHD (the mobile-friendly version of XDS) is likely to be in scope.
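For a sense of what that first scenario involves technically, here’s a minimal sketch of preparing a FHIR “create” interaction for a Patient resource, using only the Python standard library. The server base URL is hypothetical, the name structure and content type follow the DSTU-era JSON form, and the actual network call is left out (a real client would also handle the returned Location header and any OperationOutcome errors):

```python
import json
from urllib import request  # stdlib HTTP client

def build_patient(family, given):
    """Construct a minimal FHIR Patient resource (DSTU-era JSON form,
    where family and given names are lists of strings)."""
    return {
        "resourceType": "Patient",
        "name": [{"family": [family], "given": [given]}],
    }

def create_patient_request(base_url, patient):
    """Prepare a POST to <base>/Patient - the basic RESTful 'create'
    interaction. Returns a urllib Request, ready to be sent."""
    return request.Request(
        base_url.rstrip("/") + "/Patient",
        data=json.dumps(patient).encode("utf-8"),
        headers={"Content-Type": "application/json+fhir"},
        method="POST",
    )
```

At a connectathon, the first test is usually no more than this: post a patient to someone else’s server, then read it back.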

In about July/August, I’ll hold a 1- or 2-day tutorial here in Melbourne looking at FHIR in depth, with a focus on implementation issues. This tutorial will start with a general review of FHIR, and then look in depth at the FHIR reference platforms, how to use HTTP tools to exercise and debug interfaces, how to convert from v2 to FHIR and vice versa, and various models for implementing a server.

If you might be interested in attending this tutorial, drop me a line at – I’m trying to get a feel for event hosting issues such as facility and cost.

HIMSS 13 – New Orleans

I am at HIMSS 13 in New Orleans for the next few days.

I’ll be at the HL7 Booth (4325) on

  • Monday 4:30 – 6:00 (FHIR Presentation at 4:30)
  • Tuesday 3:30 – 4:30
  • Wednesday 11:00 – 12:30 (FHIR Presentation at 11:00)

The rest of the time, I’ll be hanging out at Booth 4219 (Dynamic Health IT).

Feel free to look me up.