Monthly Archives: September 2012

Question: Australian PCEHR HPI-I in XCN

Question:

In a previous post (http://www.healthintersections.com.au/?p=721), you recommend this representation for an HPI-I (Australian Health Provider Identifier / Individual):

8003610537409456^[surname]^[given]^[etc]^^[title]^^^AUSHIC^^^^NPI

However the PCEHR specification for the XCN data type is:

^Dolittle^John^^^Dr^^^&1.2.36.1.2001.1003.0.800361xxxxxxxxxx&ISO

Whereas I am presently using:

800361xxxxxxxxxx^Dolittle^John^^^Dr^^^&1.2.36.1.2001.1003.0&ISO

How should an HPI-I be represented in an HL7 v2 XCN data type?

Background:

The XCN data type has the following fields related to identification in HL7 v2.4:

  • 1 – ID number (ST): This string refers to the coded ID according to a user-defined table, defined by the 9th component. If the first component is present, either the source table or the assigning authority must be valued
  • 8 – Source table: Used to delineate the first component (user-defined values)
  • 9 – Assigning authority: A unique identifier of the system (or organization or agency or department) that creates the data
  • 13 – Identifier type code (IS): A code corresponding to the type of identifier. In some cases, this code may be used as a qualifier to the &lt;assigning authority&gt; component
  • 14 – Assigning facility: The place or location identifier where the identifier was first assigned to the person. This component is not an inherent part of the identifier but rather part of the history of the identifier

On this basis, the recommendation I made for representing a patient’s IHI was:

XCN: |8003610537409456^[surname]^[given]^[etc]^^[title]^^^AUSHIC^^^^NPI|

8003610537409456 is the identifier, AUSHIC is the Assigning Authority (a code defined as part of HL7 v2 itself), and the identifier type is “National Health Identifier”.
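To make the component positions concrete, here is a rough sketch (a hypothetical helper, not production HL7 v2 parsing – it ignores escaping and sub-components) that pulls out the identification-related components by position:

```python
def parse_xcn(value: str) -> dict:
    """Pull out the identification-related XCN components (1-based positions)."""
    comps = value.split("^")

    def get(i: int) -> str:
        return comps[i - 1] if i <= len(comps) else ""

    return {
        "id": get(1),                    # XCN.1  ID number
        "family": get(2),                # XCN.2  family name
        "given": get(3),                 # XCN.3  given name
        "source_table": get(8),          # XCN.8  source table
        "assigning_authority": get(9),   # XCN.9  assigning authority
        "identifier_type": get(13),      # XCN.13 identifier type code
    }
```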

XDS uses the same type for the Patient Identifier. Concerning this (XCN) the documentation says:

This data type describes a person along with the identifier by which he is known in some domain (either the source domain or the XDS affinity domain), using the HL7 v2.5 XCN data type. This data type contains, amongst others,

  • Identifier
  • Last Name
  • First Name
  • Second and Further Given Names
  • Suffix
  • Prefix
  • Assigning Authority

All of the HL7 v2.5 fields may be specified as optional components with the following restrictions:

  • Either name or an identifier shall be present. Inclusion of other components is optional provided the slot value length restrictions imposed by ebXML3.0, 256 bytes, is not exceeded.
  • If component 1 (ID Number) is specified, component 9 (Assigning Authority) shall be present if available.
  • The XDS XCN Component 9 is subject to the same restrictions as defined for the XDS CX data type component 4. Thus: the first subcomponent shall be empty, the second subcomponent must be an ISO OID (e.g., 1.2.840.113619.6.197), and the third subcomponent shall read ‘ISO’.
  • Any empty component shall be treated by the Document Registry as not specified. This is in compliance with HL7 v2.5.
  • Trailing delimiters are recommended to be trimmed off. Document Registries shall ignore trailing delimiters. This is in compliance with HL7 v2.5.

An example of person name with ID number using this data type is as follows:

11375^Welby^Marcus^J^Jr. MD^Dr^^^&1.2.840.113619.6.197&ISO

Issue

The XCN is the same datatype, but IHE adds a specific extra restriction: the assigning authority must be identified by an OID rather than a code. The XDS ecosystem makes extensive use of OIDs, so this is a natural restriction to make. However, it runs into a problem, which is the way that the OID for the HPI-I is defined.

An HPI-I is a 16 digit number starting with 800361, such as 8003610537409456. When it is represented as an OID, it is represented as 1.2.36.1.2001.1003.0.8003610537409456. This is the recommended representation for a HPI-I in a CDA document, and is enforced throughout the pcEHR.

This means that the scoping OID for the HI Service identifiers is 1.2.36.1.2001.1003.0. Technically, this is not the same as the assigning authority – it’s not actually a number that identifies the HI Service as an assigning authority, though it comes close, since no one can use it for anything other than to refer to the space of HI Service identifiers – which kind of implies the HI Service. So from that perspective, these two representations are nearly the same:

800361xxxxxxxxxx^[surname]^[given]^[etc]^^[title]^^^AUSHIC^^^^NPI
8003610537409456^Dolittle^John^^^Dr^^^&1.2.36.1.2001.1003.0&ISO
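The conversion between the 16-digit number and the single-OID form is mechanical. A minimal sketch (the function names are hypothetical, not from any NEHTA library):

```python
# The HI Service root OID discussed above; an HPI-I's OID form is this
# root with the 16-digit number appended as a final arc.
HI_SERVICE_ROOT = "1.2.36.1.2001.1003.0"

def hpi_i_to_oid(hpi_i: str) -> str:
    """Represent a 16-digit HPI-I (starting with 800361) as a single OID."""
    if len(hpi_i) != 16 or not hpi_i.startswith("800361") or not hpi_i.isdigit():
        raise ValueError("not a valid HPI-I: " + hpi_i)
    return HI_SERVICE_ROOT + "." + hpi_i

def oid_to_hpi_i(oid: str) -> str:
    """Recover the HPI-I number from its OID form."""
    prefix = HI_SERVICE_ROOT + "."
    if not oid.startswith(prefix):
        raise ValueError("not an HI Service identifier OID: " + oid)
    return oid[len(prefix):]
```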

However, the system designers of the PCEHR had a problem: it had previously been decreed that the OID form of the HI service identifiers is 1.2.36.1.2001.1003.0.800361xxxxxxxxxx. Now this representation works in any place where you have a single identifier – and these are common – but doesn’t work in any place where an identifier is a scoped identifier – and these are common too. XCN is one of those. You can see some discussion around this issue in the comments on my original post (http://www.healthintersections.com.au/?p=721).

Given the constraint that the assigning authority is an OID, and the constraint that the HPI-I is represented as a single OID, this leads naturally to this representation:

^Dolittle^John^^^Dr^^^&1.2.36.1.2001.1003.0.800361xxxxxxxxxx&ISO

Here the HPI-I OID goes in component 9 – even though this is not really a valid use of the XCN data type under either HL7 or IHE rules, since the HPI-I is not actually the assigning authority.
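To make the contrast concrete, here is an illustrative sketch assembling the three candidate XCN renderings (purely string assembly; the `xcn` helper is hypothetical, and the values mirror the examples above):

```python
def xcn(components: dict) -> str:
    """Assemble an XCN string from a sparse map of 1-based component values."""
    n = max(components)
    return "^".join(components.get(i, "") for i in range(1, n + 1))

hpi_i = "8003610537409456"
hi_root = "1.2.36.1.2001.1003.0"
name = {2: "Dolittle", 3: "John", 6: "Dr"}

# 1. plain HL7 v2: AUSHIC code in component 9, identifier type NPI in 13
v2_form = xcn({**name, 1: hpi_i, 9: "AUSHIC", 13: "NPI"})

# 2. IHE/XDS style: component 1 plus an OID-only assigning authority in 9
xds_form = xcn({**name, 1: hpi_i, 9: f"&{hi_root}&ISO"})

# 3. PCEHR style: the whole identifier OID pushed into component 9
pcehr_form = xcn({**name, 9: f"&{hi_root}.{hpi_i}&ISO"})
```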

Resolution

I don’t know how this should be resolved. You could do it properly (per the forthcoming Australian identifiers handbook, whichever form it recommends) in the v2 message and differently in the pcEHR XDS metadata, or you could do both the same – I’m not sure what should be done. The one thing you can be sure of is that the PCEHR isn’t going to change for this now.

The comments thread on this post might be interesting (or empty – I’ve given up trying to guess what’s going to generate comments).

NEHTA Clinical Documents: Notes on the usage of templateIds

Normal NEHTA CDA Clinical Documents carry two templateId values at the root of the document.

One of the templateIds indicates which CDA implementation guide the document is based on, and the other indicates which version of the rendering specification the document conforms to.

Note that there is no order associated with the templateId entries – they can come in any order. Also note that while NEHTA only describes two templateIds for each document, other implementations are permitted to add their own templateIds – which would have to be different root/extension values, of course. At present, there is no known use for this, but implementers should be aware that it is possible and legal.

Both templateIds are used to indicate that the document conforms to a set of rules on the content of the document. Knowing that the document conforms to the rules about the content enables the receiving/displaying system to safely handle the document. The following sections describe the use of the CDA implementation guide identifier and the rendering specification identifier respectively.

CDA Implementation Guide Identifier

Each NEHTA CDA Implementation Guide specifies a particular root/extension combination for a templateId that must be found in the ClinicalDocument.templateId list. The root identifies the kind of document (eReferral, Discharge Summary, etc), and the extension identifies the version release of the CDA implementation guide. In theory, if non-backwards-compatible changes are made between successive releases of the same CDA IG, then the root value will change (this is current NEHTA versioning policy), but implementers would be wise not to assume that this will be the case.

This templateId identifies a particular document type, and therefore overlaps with the LOINC code in ClinicalDocument.code, which also identifies the document type. However, the two fields are not equivalent in meaning. To illustrate this, consider an institution producing a discharge summary – it should use the LOINC code “18842-5 (Discharge Summarization Note)” to identify the document, and this is true whether it uses the NEHTA design for a discharge summary or not.

So if you see a document with a LOINC code of 18842-5, then you know it’s a discharge summary. That tells you more or less what the document’s going to have in it, but not exactly what the section and data contents are. To know what they are, you have to know exactly what implementation guide the document conforms to – the templateId.

CDA compliant systems can process documents by simply rendering them for a human. In this case, the CDA Implementation Guide is irrelevant and can be ignored. Such systems might use the document code to categorise a document in a human display. But if a system wants to extract data from the document, it needs to note the CDA Implementation Guide and act accordingly.

Rendering Specification Identifier

The rendering specification Id has a root of “1.2.36.1.2001.1001.100.149” and an extension which is the version of the rendering specification.

A CDA document requires a stylesheet – a piece of executable code that readies it for display to a human. In an ideal world, the author could simply send this code along with the document, but this is not possible for security and engineering reasons. However there must be a way to manage the display safety – it cannot simply be a matter of tossing a document into the pcEHR and hoping for the best.

The solution that NEHTA used is the Rendering Specification – this specifies the base set of rules that must be implemented by any system that displays CDA documents. These rules have a matching set of constraints on the CDA document itself – the things it can try to use from a rendering system. So when a document declares that it conforms to a particular set of constraints from a particular rendering specification, it means that it is safe to display in any rendering system that also conforms to the matching set of rules.

NEHTA publishes a stylesheet (XSLT transform) along with the rendering specification. The stylesheet is a partial implementation of the rendering specification – it does all the things that can be done in a stylesheet, but there are other obligations that have to be handled outside the stylesheet, so it’s not a full solution. When implementers engineer the stylesheet into their system, they know which rendering specification the stylesheet is associated with. Note, though, that implementers are not required to use the NEHTA-provided stylesheet, or even any stylesheet at all – as long as the document is presented according to the rules defined in the Rendering Specification, and the version of the rendering specification is known so it can be checked.

What this means is that the system that is displaying the CDA document knows the rendering specification(s) to which it conforms.

If a document conforms to a more recent rendering specification than the system does – i.e. it uses a style that wasn’t defined and isn’t known – then it’s not safe to show the document.

So a system that wants to display the document needs to check the Rendering Specification Identifier, but can ignore the CDA Implementation Guide Identifier.
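That check can be sketched like this (not NEHTA code – a hypothetical helper; treating the extension as an exact version string to match against the locally supported set is an assumption):

```python
import xml.etree.ElementTree as ET

# Root OID of the rendering specification templateId, per the text above
RENDERING_SPEC_ROOT = "1.2.36.1.2001.1001.100.149"
CDA_NS = {"cda": "urn:hl7-org:v3"}

def safe_to_render(cda_xml: str, supported_versions: set) -> bool:
    """Safe only if the declared rendering spec version is one we implement."""
    doc = ET.fromstring(cda_xml)
    for tid in doc.findall("cda:templateId", CDA_NS):
        if tid.get("root") == RENDERING_SPEC_ROOT:
            return tid.get("extension") in supported_versions
    return False  # no rendering specification declared: don't assume safety
```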

The PCEHR Template Service

The PCEHR includes a template service that returns a package containing details about each template by the templateId. So you could retrieve a package for each of these two template Ids when a CDA document is encountered.

But it’s not clear exactly what you will get if you retrieve that package for either of the two templateIds described here – and more to the point, what would be the point? For the CDA Implementation Guide, you might get a set of schematrons that can validate the document – but the pcEHR has already done that for you. And that doesn’t help you navigate the document and process its contents.

As for the rendering specification templateId – perhaps you could get a stylesheet that’s a partial implementation of the rendering specification – NEHTA provides one – but you would still need to engineer this into your system to be sure that it’s a proper implementation that can be relied on – and you can’t do this on the fly.

So there’s not a lot of point retrieving content from the PCEHR template service for these two identifiers – they are hard-coded into the application. Note that gateway services and the PCEHR do retrieve the schematrons by CDA IG templateId from the template service for validation, but this is probably not useful functionality for clinical information systems.

Note: This is a draft of text that is proposed to be included in a forthcoming Australian standard explaining the use of the CDA templateIds. It will change somewhat in form to integrate properly into the document context, but is published here to get early comment – comments on the blog or to me personally are welcome.

NEHTA Clinical Documents: UCUM alert

The PQ data type has two important properties, value and unit:

 <value xsi:type="PQ" value="1.3" unit="mg/mL"/>

The unit must be a UCUM code. UCUM is a formal representation of a coded unit that makes the specified code able to be parsed and understood by a computer. UCUM codes are also fairly easy for a person to read – but they are not the same as the normal human representation. Mostly, this is because humans use convenient short-hand for units in a given context, and are careless about case and formality. That works for humans – we are context-aware processors who can almost always determine what is meant. Computers can’t do that.

For a variety of reasons, UCUM units are not the same as human units as used in medicine, particularly diagnostic reports:

  • mcg/mL becomes ug/mL. This is fixing the grammar to use only SI prefixes. Note that there are legal requirements in Australia to use mcg instead of ug, due to the potential to get mg and µg mixed up in hand-writing. Eventually this will get unwound now that we’re all using computers, but it’s kind of a generational change that’s required
  • U/L becomes IU/L – this distinguishes between the various uses of “U”
  • There are a few bizarre arbitrary units used in diagnostic medicine that can’t be represented using UCUM, such as a unit that has a power with decimal points
  • It’s common to find something like this in a pathology report: leucocytes/mL. This unit doesn’t have a direct equivalent in UCUM, though you could do {leucocytes}/mL
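A system that has to bridge the two vocabularies ends up with a translation table along these lines. This is an illustrative sketch only – the entries are assumptions drawn from the examples above, not an authoritative table (and note that strict UCUM spells international units as [iU]):

```python
# Illustrative human-unit -> UCUM lookup; entries are assumptions, not a
# complete or authoritative mapping.
HUMAN_TO_UCUM = {
    "mcg/mL": "ug/mL",                   # SI prefix only: mcg -> ug
    "U/L": "[iU]/L",                     # "U" here meaning international units
    "leucocytes/mL": "{leucocytes}/mL",  # arbitrary count as a UCUM annotation
}

def to_ucum(human_unit: str) -> str:
    """Return the UCUM form if known; otherwise echo the human form unchanged."""
    return HUMAN_TO_UCUM.get(human_unit, human_unit)
```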

PQ has a problem, then: the unit attribute doesn’t differentiate between a computable representation and the human representation, though these are different things. In principle, CDA has a framework for this – you would use the human readable form in the narrative, and the computable form in the structured data. However, in practice, this has a couple of problems:

  • The systems that feed CDA – the primary diagnostic applications, their reporting formats, and the clinical information stores – all store only a single unit. They are presented with a binary choice: human or computable form. For legacy reasons, their choice must be the human-readable form
  • The point of providing the structured data is so that it can be extracted and shown to the user in some presentable form – that’s why CD has originalText, for example. PQ doesn’t have an equivalent

In practice, none of the clinical systems that create CDA documents for the pcEHR are in a position to provide valid UCUM codes in the documents, either for medications or diagnostic data items. So for now, the NEHTA validation framework does not validate UCUM codes, and pcEHR connected systems should not expect UCUM codes to be valid. (We’d rather have lots of partially useable data instead of vanishingly small amounts of more useable data). Note that various parts of the community are working towards having more computable units (shout out to the RCPA PUTS project).

I’m making this post because I’ve become aware in the last couple of days that some systems are choosing not to provide data because they can’t do UCUM codes.

Question: GTS cardinality in CDA

Question:

I have an RMIM that states “effectiveTime” is a GTS[0..1], that implies a SET<TS>. (CDA R2, SubstanceAdministration.effectiveTime). Furthermore, I have a (Schematron) constraint that effectiveTime is [1..1].

The following snippet is ok by schema, but the schematron constraint fails:

 <effectiveTime xsi:type="IVL_TS">
  <low value="20110301"/>
  <high value="20120301"/>
 </effectiveTime>
 <effectiveTime institutionSpecified="true" operator="A" xsi:type="PIVL_TS">
  <period unit="h" value="6"/>
 </effectiveTime>

What does the snippet mean, and is it legal?

Answer:
The snippet identifies the intersection of “every 6 hours” and “1-Mar 2011 to 1-Mar 2012”. It’s meaningful, and it’s valid CDA. Whether it’s valid against the schematron… that’s a much harder question. As background, the cardinality stated against an item in the RIM is the UML cardinality. There’s a little bit of ambiguity here with collections – does a collection&lt;T&gt; with cardinality 1..2 mean one or two items in a collection, or one or two collections of items? UML is unclear, but most interpretations consider it to be the former, as does the HL7 v3 methodology that underlies the RIM.

So, the cardinality in the CDA document is the UML cardinality. When the CDA document is represented in XML, using the XML representations defined in the base v3 methodology, there is one XML element for each data type, and therefore there is a direct correspondence between the UML cardinality and the XML cardinality: UML cardinality 1..2 means 1..2 XML elements… in all cases. Except, that is, for GTS.

GTS is a set of items. A single set – a set with a cardinality of 1 – can be built by specifying an element, and then adding repeated elements, each of which define an operation that is to be performed on the set using the operator attribute. No matter how many XML elements there are, there is only one set. So the UML cardinality is 1, irrespective of the XML element cardinality.

Aside – I need to make several caveats to this statement:

  • All the repeating elements must have an operator. If there’s a repeating element that doesn’t have an operator, then this is an invalid instance (Act.effectiveTime has an upper cardinality of 1 in the RIM itself)
  • Strictly, there are ways to build a bag<T> using BXIT<T> and operators where the UML cardinality doesn’t match the XML element cardinality – but there’s no use for this in the context of CDA (and no rational use for this anywhere)
  • some UML attributes are represented as XML attributes, and cardinality applies differently
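Given those caveats, the set-counting rule can be sketched like this (a hypothetical helper, not part of any CDA toolkit): repeats carrying an operator extend the single set, so the UML cardinality stays at 1, and a repeat without an operator is an invalid instance.

```python
import xml.etree.ElementTree as ET

CDA_NS = "{urn:hl7-org:v3}"

def gts_set_count(act_xml: str) -> int:
    """UML cardinality of effectiveTime: repeats with an operator extend one set."""
    act = ET.fromstring(act_xml)
    times = act.findall(CDA_NS + "effectiveTime")
    for el in times[1:]:
        if el.get("operator") is None:
            # a repeat without an operator would start a second set, which
            # Act.effectiveTime (upper cardinality 1 in the RIM) does not allow
            raise ValueError("repeating effectiveTime without an operator")
    return 1 if times else 0
```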

So, given this, the question is, what does it mean to say “I have a (Schematron) constraint that effectiveTime is [1..1]” – does it intend to address the UML or the XML cardinality? That’s not clear. Consider this, from the CCDA ballot:

The cardinality indicator (0..1, 1..1, 1..*, etc.) specifies the allowable occurrences within a document instance.

Figure 2: Constraints format – only one allowed

1. SHALL contain exactly one [1..1] participant (CONF:2777).

a. This participant SHALL contain exactly one [1..1] @typeCode=”LOC” (CodeSystem: 2.16.840.1.113883.5.90 HL7ParticipationType) (CONF:2230).

The language in this extract is unclear – do they mean, the XML representation of the document instance, or a logical document at the UML level? Not clear. “shall contain one participant”… I’m inclined to think that this is different to “shall contain one participant element”, and that the cardinalities should be understood to line up with the UML cardinalities. Of course, there’s only one case where this matters, which is this one.

So I reckon that fragment at the top should be valid. But there’s a way to work around this – this fragment is equivalent in meaning:

 <effectiveTime xsi:type="SXPR_TS">
  <comp xsi:type="IVL_TS">
   <low value="20110301"/>
   <high value="20120301"/>
  </comp>
  <comp institutionSpecified="true" operator="A" xsi:type="PIVL_TS">
   <period unit="h" value="6"/>
  </comp>
 </effectiveTime>

But it has a cardinality of 1 in the XML. So I recommend using this form, and not the other. Bingo, problem solved.

Standards – we’re doing so well…

Vince MacCauley has written an article about standards development in Healthcare IT in Pulse IT. He starts with an interesting claim:

Software standards in general and eHealth software standards in particular provide a methodology and governance framework to encapsulate community agreed best practice in a readily accessible and stable specification.

It’s Vince’s tense that interested me: they “provide” these things – that is, “community agreed best practice” in “readily accessible and stable specifications”.

Err, well, that’s what we aspire to. That we actually provide it… I had no idea Vince thought standards were doing so well!

The article is worth reading (if you are Australian). Vince’s conclusion:

The prospect of a cutting edge, richer eHealth standards landscape is tantalisingly close. However, it will require the eHealth community in general, the eHealth software industry and MSIA members in particular, to provide significant support in order to build effectively on the foundations provided by NEHTA and DoHA.

Tantalising… that’s for sure.

FHIR Issue: Invariants based on dataAbsentReason

This is a ballot comment made against the FHIR specification:

The presence of a DAR is used in several cases in the datatypes as a means of loosening the rules for what datatype properties need to be present. However, this is mixing two things. DAR is relevant when there’s a use-case for knowing why an element is missing. This is a distinct use-case from choosing to allow partial or incomplete data. For example, I might want to allow a humanId that doesn’t allow unique resolution without wanting to capture “why the id isn’t fully specified”. We need to separate “Partial data allowed” from “reason for absent/incomplete data allowed”.

Background

In the v3 data types (+ISO 21090) you can label a data type as “mandatory”. If you do so, it must be present, and it must have a proper value. Specifically, this means that it must not be null – there must be no nullFlavor, either applied explicitly, or implied by simply leaving the attribute out of the XML representation altogether. Each type definition can hook into this rule and make extra rules about what other data type attributes must have values if there’s no nullFlavor. For instance, with the type II, which has a root and an extension, if there’s no nullFlavor, there must be at least a root:

invariant(II x) where x.nonNull { root.nonNull; };

By implication, the root or root/extension must also be globally unique: this must be a proper identifier. This system makes it easy to say that an instance has to have a proper identifier for something: simply label the id : II attribute as mandatory.

FHIR follows this same pattern, though the presentation is different. When you include an element in a resource, you can indicate a minimum cardinality, and say whether a dataAbsentReason (which equates to a nullFlavor) is allowed.

 <identifier d?><!-- 1..1 Identifier A system id for this resource --></identifier>

This says that the resource must have an identifier, but it can have a dataAbsentReason. So you could do something like this:

 <identifier>   
  <system>http://acme.org/patient</system> 
  <id>12345</id> 
 </identifier>

Ok, an identifier. But you could also do this:

 <identifier dataAbsentReason="notasked">  
   <system>http://gov.country/health/id</system> 
 </identifier>

This indicates that the identifier (a national healthcare one in this case) simply wasn’t asked for. So, how does a resource definition say that there must be an identifier – that you can’t get away with providing an incomplete identifier? Like this:

 &lt;identifier&gt;&lt;!-- 1..1 Identifier A system id for this resource --&gt;&lt;/identifier&gt;

Because the identifier doesn’t allow a dataAbsentReason (no “d?”), the second form is not allowed. Only, what stops this following form from being allowed?:

 <identifier>  
   <system>http://gov.country/health/id</system> 
 </identifier>

The answer is this constraint made on the Identifier type:

Unless an Identifier element has a dataAbsentReason flag, it must contain an id (xpath: exists(@dataAbsentReason) or exists(f:id))
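Enforcement of that constraint amounts to a simple check. Here is a hand-rolled sketch standing in for the spec’s XPath (not FHIR tooling; the namespace is assumed to be the one the f: prefix denotes):

```python
import xml.etree.ElementTree as ET

FHIR_NS = "{http://hl7.org/fhir}"

def identifier_ok(identifier_xml: str) -> bool:
    """True when the Identifier carries a dataAbsentReason or an id element,
    mirroring: exists(@dataAbsentReason) or exists(f:id)."""
    el = ET.fromstring(identifier_xml)
    return el.get("dataAbsentReason") is not None or el.find(FHIR_NS + "id") is not None
```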

Response to Comment

The issue that the commenter has is that two separate ideas are conflated: whether you can allow incomplete data, and whether you need to say why incomplete data is provided. These are two different things, but we always conflated them in v3. And we did that because it’s easy: if you unhook these things, then it becomes much more difficult to say that a proper value (i.e. identifier) must be provided. Instead of simply saying that you can’t provide a dataAbsentReason, you have to define – in addition, or alternatively – what the required “proper” value is, and potentially, therefore, how this relates to the expected use of dataAbsentReason. This would be much more complicated than the current system.

So there are two separate things to discuss with relation to this comment:

  • Does the use case – providing incomplete data without having to/being able to provide a reason for incomplete data – justify making the implementation experience much more complicated?
  • If it does, how would these rules be specified most effectively?

1. Is the case justified?

I don’t think it is – it’s never come up in all my v3 experience: in my own implementations, in my experience as the go-to guy for the v3 data types, or in committee. I’m pretty sure I would remember it if it had. Why not just use unknown? I guess there’s some fractional use case where the reason might not be unknown, but you can’t say what it is, and you can’t use some other dataAbsentReason. Maybe we should add a dataAbsentReason “?” for use in this case?

2. How else to specify the rules

Well, in the end, the rules are specified by XPath (exists(@dataAbsentReason) or exists(f:id)) – this is what enforcement is based on. So the most obvious thing is to take this out of the Identifier datatype and push it to the resource definition. We’re going to get XPath all over the place… I think that this is a real cost for the implementers.

An alternative approach is to define a profile for each data type that says what a “proper” value is. This offers re-use and flexibility, but would mean that a key aspect of many resources – basic pre-conditions for validity – is moved to somewhere more obscure, which will make for more complexity. (note that you can’t profile data types at the moment – the commenter made a request to be able to profile data types as well, which we had not allowed at this point because the potential complexity seemed unjustified)

Discussion

This is a FHIR ballot issue. Comments on this blog post are discouraged – unless to point out outright errors in this analysis. Discussion will be taken up on the FHIR email list, with substantial contributions added to this wiki page. A doodle poll will be held for the final vote – this will be announced on the FHIR email list.

FHIR Report from Baltimore Meeting

Well, the Baltimore HL7 Working Group Meeting has (finally) come to an end. It’s been an extremely busy meeting, and HL7 is certainly facing some new and difficult challenges in the near future.

Now that it’s over, here’s my FHIR progress report.

Ballot

Prior to the meeting, we held a draft-for-comment ballot. Combined with the issues list from the connectathon, and a few other late submissions, we had around 130 issues on the list. These range from questions about the scope of FHIR right down to typographical errors. I thank everyone who contributed to this list – it will help us greatly in improving the quality of the specification. I hope that we can get all the issues resolved to everyone’s satisfaction prior to the release of the next ballot.

Our plan is to issue a draft-for-comment ballot for the January meeting. This ballot will focus on the resources rather than the infrastructure, and I’m hoping that we’ll have 20 or so resources to review, from several different committees. A number of committees have enthusiastically embraced the challenges that FHIR presents for them, and are working on resources that will start appearing soon.

Note that the specification found at hl7.org/fhir will be updated on a regular basis prior to the ballot, and comments are welcome at any time – they may be made at any time using the community input link at the top of every page, or on the FHIR mailing list.

In the longer term, we hope to go to ballot for DSTU in the May cycle, and I’m hoping that we’ll do a couple of them – so FHIR would be a full Draft Standard for Trial Use by the end of next year.

Issues

We have a list of issues to resolve. None of them are large, but many of them will cause enthusiastic and robust discussion. Rather than try and resolve the issues on teleconferences, here’s how we’re going to do it:

  • A post will be made on this blog (by me, or a guest post by someone else) describing the issue and the possible resolutions, along with their pros and cons
  • Debate will follow on the FHIR mailing list (so sign up if you are interested, and not already a member)
  • A wiki page will be created to capture the salient parts of the email discussion (note that you can subscribe to specific wiki pages using rss/atom on the HL7 wiki)
  • A doodle poll will be held (announced on the mailing list) to confirm the resolution of the list

I’ll start posting the issues shortly, though we’ll only be doing one or two at a time.

Collaborations

One of the most exciting outcomes of this meeting for me was the new collaborations between HL7 and other SDOs that arose during this meeting. Since they are joint developments, and some of them aren’t confirmed yet, I’m not going to talk about the individual projects yet, but this really was the best part of the meeting for me. If they come to fruition, it’s going to be big news. Of course, any collaboration between SDO’s is a fragile beast, at the mercy of clashing procedures that can’t be changed – but the intent and the good will are there, and most of the parties have done joint balloting before.

Aside: Of particular note, the fact that FHIR is free for use is a real factor in these collaborations. For me, this confirms the wisdom of the board in choosing to change the HL7 rules so that the standards and implementation guides are free to use. I posted a couple of blog posts (one a guest post) that gave some people the impression I didn’t approve. I do – I do think this is the right decision, though I’m concerned about the short-term effects. This seemed to be the majority opinion amongst the HL7 membership at this meeting. And I learnt at this meeting that HL7 may have enough reserves to ride out the short-term consequences. There couldn’t have been a better time to do this anyway.

One collaboration that I can talk about is with the W3C – we are working with the life sciences group to jointly develop semantic web expressions of the resource definitions, the instances, the RIM, and the mapping from the resource definitions to the RIM. Hopefully, SNOMED CT will be included too. This looks like it will provide a powerful connection between the operational and research branches of healthcare.

Awards

The FHIR project core team (Ewout Kramer, Lloyd McKenzie, and myself) gave out the following awards on Tuesday morning:

  • Most Egregious Error Discovered in FHIR Spec: David Hay (David won this several times over for a variety of issues reported while he developed his solution)
  • Most Enthusiastic Specification Review: Rik Smithies
  • Most Fervent Evangelist: Rene Spronk (See Rene on twitter)
  • Best Contribution: Keith Boone (A ready-to-go spreadsheet generated from a HITSP specification)
  • Most Entertaining Connectathon participant: Jean Duteau

Each participant won a bottle of wine (this started as a joke between us at the last meeting, and then cascaded, as these things do).

Connectathon

We’re going to start planning the next connectathon – how could we not, after it was such fun, and we learnt so much? What a great way to develop standards…

So I’ll be putting out a call for participation shortly. I think that there’s a good chance that at least one of the collaborations will feature at the next connectathon, but we’re not sure yet. Watch this space.

Conclusion

Overall – FHIR is going ahead in leaps and bounds, and I’m really pleased and excited about that – I’m starting to believe that we’re really going to make a difference to healthcare.

 

In which I steal the Ad-Hoc Harley Award…

It’s the annual HL7 Plenary workgroup meeting. This means that there are a few little extra administrative-type events here, and one of those events is the presentation of the “Ed Hammond Volunteer of the Year” award. This is a particularly prestigious award, made to only a very few people who have gone beyond the call of duty to push the cause of HL7 – that is, the development of healthcare standards and interoperability. The recipient list is a small and select bunch.

This year, there were two recipients: Keith Boone, and myself.

I’m very honoured to have received this recognition. Actually, I used to watch the awards, and wonder about how people made that much time to volunteer for the cause… well, now I know… so it’s nice to be recognised. Thank you very much, HL7.

Keith was the other recipient. I was particularly privileged to share the stage with Keith, who has a long record of high quality and high impact contributions to healthcare standards over many years. In recent years, Keith has shown real mastery of social media, and used it very effectively to move the cause along. I started to make a list of the things Keith has contributed to, and it’s pretty impressive – CDA, The CDA Book, templates, HITSP, IHE, Meaningful use, social advocacy… and that’s just a start.

Now Keith has got a blog, and he has this practice of giving out these Harley awards to people that deserve particular recognition for their contributions to healthcare. Well, I’m going to steal that idea and award one of my own:

This certifies that
Keith Boone of GE Healthcare
Has hereby been recognized for outstanding contributions to the forwarding of Healthcare Standardization.
Keith, congratulations and thank you for all your contributions everywhere. Since you can’t give yourself a Harley award, I’m going to do it.
p.s. I now return the award to its rightful owner.

FHIR Connectathon Press Release

Health Level Seven® International

For Immediate Release

Contact: Andrea Ribick +1 (734) 677-7777 andrea@HL7.org

FIRST HL7 FHIR CONNECTATHON A SUCCESS

Ease of use of FHIR standard results in successful deployments,
leads the way to future events

Baltimore, September 12, 2012 – Health Level Seven® International (HL7®), the global authority for interoperability and standards in healthcare information technology with members in 55 countries, today announced that it has successfully completed its first connectathon supporting its Fast Healthcare Interoperability Resources (FHIR) Initiative.

FHIR is a new HL7 draft standard for data exchange in healthcare that is based on current industry principles, including the cloud, web 2.0 and RESTful principles.  It defines a set of “resources” representing granular clinical concepts that can be managed in isolation, or aggregated into complex documents. This flexibility offers coherent solutions for a range of interoperability problems. HL7’s FHIR webpage, http://HL7.org/fhir, contains a more detailed introduction to FHIR as well as links to FHIR development and implementation resources.

Sixteen HL7 members participated in the inaugural FHIR Connectathon on Saturday, September 8, in Baltimore, including representatives from Kaiser Permanente, GE Healthcare, Orion Health, Mohawk College and Thrasys. The purpose of the event was to test the infrastructural components of FHIR (principally its representational state transfer (REST) interface and profiles) using a few relatively stable resources.  Participants demonstrated three types of workflows: the creation and exchange of Profiles, of Persons, and of Lab Reports.

Participants cited the ease of interpreting the FHIR standard as the most important factor in successful server and client deployment. The FHIR Connectathon demonstrated that even quickly developed client applications were able to connect successfully with multiple servers and synchronize server data. In addition, the connectathon established that rapidly developed clients could be used for testing and validation. Finally, the standard’s use of REST for FHIR’s web services was cited as an important success factor for demonstrating connectivity.

Plans are underway to hold additional FHIR connectathons at future HL7 working group meetings. Future events may include additional features such as certification, pre-qualifications and educational offerings.

About Health Level Seven International (HL7)

Founded in 1987, Health Level Seven International is the global authority for healthcare information interoperability and standards with affiliates established in more than 30 countries. HL7 is a non-profit, ANSI accredited standards development organization dedicated to providing a comprehensive framework and related standards for the exchange, integration, sharing, and retrieval of electronic health information that supports clinical practice and the management, delivery and evaluation of health services. HL7’s more than 2,300 members represent approximately 500 corporate members, which include more than 90 percent of the information systems vendors serving healthcare. HL7 collaborates with other standards developers and provider, payer, philanthropic and government agencies at the highest levels to ensure the development of comprehensive and reliable standards and successful interoperability efforts.

For more information, please visit: www.HL7.org

FHIR Extensions

HL7 faces a fundamental challenge in formulating its standards: the business of providing clinical care is wildly variable around the world. The process flows, and even the ways people think about and describe the problems they are trying to solve, vary wildly. Further, there’s no central standards body that sets standards for healthcare processes around the world – indeed, for cultural and political reasons, different countries solve the problems of healthcare radically differently. Even within countries, standards, regulations and funding policies aimed at forcing common practices have dubious success.

That’s the context within which HL7 makes standards for exchanging healthcare information. And so, not surprisingly, HL7 can’t write really tight, easy-to-use specifications. They are full of flexibility to support variable business practices and different ways of understanding and describing the same things. The idea is that countries, projects etc will then take the specification and add their own rules about how it’s used, based on whatever agreement they can get from the eco-system in which the exchange is going to occur. This means layers, and complexity. There’s no way around this.

The question is, how do you manage this? Generally, you could try one of two different ways:

  1. Define every data element that anyone is ever going to use, and then let the particular implementations exclude things that they don’t use
  2. Define a basic set of data, and let particular implementations add extra stuff when they need it.

Either approach has its problems. The first is going to produce a huge, comprehensive specification, and it will take lots of time (and $$$) to produce and use it. Many people will prefer not to use it at all due to its unwieldy size. The second approach will mean that every implementation will just add their own extra stuff, and none of them will be able to talk to each other using the extra stuff – and it’s going to matter.

In v2, HL7 took the second approach, and implementers or projects (and even jurisdictions) are allowed to define Z-segments, which allow them to add any additional data to messages, or even to define Z-events that have entirely custom messages. Without the flexibility offered by Z-segments, v2 wouldn’t really be a workable standard (that’s not to say that every message includes a Z-segment. Not at all – just that if this mechanism wasn’t available, it would be very difficult to commit to it).

But Z-segments are a notorious problem as well – their use is very often ill-disciplined and poorly documented, so that you often work with messages where you are guessing what the content is – not a good place to be in healthcare. Also, it’s hard to get vendors to exchange content because they’ve used Z-segments differently. Note that you can produce this problem without using Z-segments – that’s easy – but it is very often associated with them.
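To make the Z-segment mechanism concrete, here is a minimal sketch in Python. The “ZPD” segment and its fields are invented for illustration – real Z-segment names and contents are defined per site, which is exactly why documentation matters so much. The receiver shown simply skips segments it doesn’t recognise:

```python
# A hypothetical HL7 v2 message containing an invented Z-segment ("ZPD").
# Segment names and field contents are illustrative only.
message = "\r".join([
    "MSH|^~\\&|SEND_APP|SEND_FAC|RECV_APP|RECV_FAC|20120912||ADT^A01|MSG00001|P|2.4",
    "PID|1||12345^^^HOSP^MR||Dolittle^John||19600101|M",
    "ZPD|1|SOME_LOCAL_FLAG|extra site-specific data",  # custom Z-segment
])

# A naive receiver splits the message into segments and ignores any
# segment type it doesn't know about - including Z-segments.
known = {"MSH", "PID"}
segments = [s.split("|") for s in message.split("\r")]
recognised = [s for s in segments if s[0] in known]
unknown = [s[0] for s in segments if s[0] not in known]

print(unknown)  # → ['ZPD']
```

The problem, of course, is that nothing in the message itself tells the receiver what “ZPD” means – that knowledge lives in out-of-band documentation, if it exists at all.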

Having had a good look at the way option #2 works, in v3, HL7 decided to go with option #1 – model everything known in the base resources. This would avoid the very well-known problems of Z-segments in v2. Only… it turned out to have the predictable problems I described above: lots of time and $$$ to produce and implement the specifications. Note that this is not the only reason for that, but it is a key contributor. And even then, there was still the option to include content in other namespaces – even worse than Z-segments in some ways. In fact, the worst aspect of this is that many schema-driven implementations simply can’t handle these alternative namespace extensions at all.

So when we started working on FHIR, this was a central question: what are we going to do about this problem? The one thing we knew for sure was that whatever we did would be controversial.

To start with, we knew that we needed to allow implementers to use extensions in the message. Further, we wanted it to be ok to use them – unlike Z-segments, which attract fierce criticism whenever they are used. What we have done is:

  • Defined a stable method for representing extensions in the instance, so that any generated code – whether from schema or something else, and including the reference implementations – is able to read and write all valid instances, including ones that include any kind of extensions
  • Identified extensions by a URI reference, which needs to resolve to a definition of the extension (so unlike v2, you can find out what an extension means)
  • Decreed, by policy, that implementers are expected to accept extensions (not that all extensions are acceptable, but that the concept of extensions is ok)
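The design above can be sketched as follows. This is a minimal illustration in Python, not the actual FHIR wire format: the extension URL is hypothetical, and the resource structure is simplified. The point is that each extension carries a URI identifying its definition, and a conforming reader accepts extensions it doesn’t understand rather than rejecting the instance:

```python
# A simplified, hypothetical resource carrying one extension.
# In FHIR, the "url" should resolve to the extension's formal definition.
patient = {
    "name": "John Dolittle",
    "extension": [
        {
            "url": "http://example.org/fhir/extensions#hair-colour",  # hypothetical
            "valueString": "brown",
        }
    ],
}

def read_resource(resource, understood_urls):
    """Read a resource, separating extensions we understand from ones we
    don't - but keeping both, per the 'extensions are ok' policy."""
    understood, not_understood = [], []
    for ext in resource.get("extension", []):
        (understood if ext["url"] in understood_urls else not_understood).append(ext)
    return understood, not_understood

understood, not_understood = read_resource(patient, understood_urls=set())
print(len(not_understood))  # → 1
```

Because the representation is stable and uniform, generated code can round-trip any extension without knowing its meaning, and a human (or tool) can follow the URI to find out what it means.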

Here, we are trying to strike a balance, a midway path between options #1 and #2. Extensions are allowed, but they must follow a set of rules that ensure that everyone can read and understand them (though of course it’s unlikely to be something you can do with a machine).

So far, so good. But this creates a governance problem: just who is allowed to define extensions? Anyone? HL7 only? Maybe HL7 affiliates? Any of those answers have different pros and cons, and no obvious answer. Even more difficult is the question about who is allowed to change the definitions later, and what kind of changes are you allowed to make?

The initial RFH proposal was that only HL7 would be allowed to define extensions, but it rapidly became clear that no matter what HL7 said, implementers wouldn’t believe that HL7 would provide timely support for doing this – and really, I don’t see how it could. Nor would it always be appropriate – why should two developers exchanging content within a strictly controlled environment need to consult HL7 to add an extension? Also, we hope that usage of FHIR scales up to a very high volume…

So in practice, we allow implementers to define their own extensions. We encourage them to register them with HL7 through their local affiliate (if outside USA) or HL7 itself, or even to ask HL7 to define them. But they don’t have to.

This allows implementers to choose how much governance they want to opt in to. This position doesn’t please everyone – and it’s not perfect, because the price of choosing not to be interoperable is only partly borne by the implementers themselves. Other implementers, system purchasers, national programs – they bear part of the price too.

So how can we handle this – to discourage misuse of the extensions facility?

  • Make it easy to register extensions (this is a technical problem – tooling and registries)
  • Make it easy to find existing extensions (this is largely an informatics problem – search in appropriate terms)
  • Create expectations around acceptable user behaviour (this is largely a social problem)

Other ideas are welcome in the comments.