Monthly Archives: February 2014

#FHIR and confusion about the 80/20 rule

One of the most common misconceptions out there about FHIR concerns the 80/20 rule we use when debating whether fields should be included in resources.

This is how I often hear it formulated:

FHIR only includes 80% of the elements that are used

But this is not correct – it’s not how the rule is formulated. This implies that the intent is to limit functionality, to ensure that it won’t actually work properly. That we prefer simplicity over actually solving the problem properly (e.g. we are externalising the complexity).

That’s not the intent at all. The intent absolutely is that FHIR supports 100% of the functionality that people use.

The 80/20 rule is about something different: it’s about keeping edge-cases and disagreement out of the specification. It’s about making the complexity live where it belongs.

Properly formulated, the 80/20 rule is that

FHIR only includes an element if 80% of systems implement it

Some initial comments about this rule:

  • It’s actually only a guideline. There are other factors, such as safety and consistency, that are also considered
  • It’s not a statistical thing, it’s a rule of thumb.
  • It is deliberately based on current implementations in order to avoid the problem of wishful thinking or futuristic designs that don’t relate to what really happens

There’s an aspect to this rule that most people don’t consider: if a domain is well-agreed, then what 80% of systems implement is 100% of the domain. That’s how we prefer things, really. And consider, what would a 100% rule say?

  • We only add a field to the specification if it’s implemented in 100% of systems
  • We add 100% of fields that are used in some system somewhere

I’m pretty sure that no specifications written this way would be useful in real life. So there have to be rules along these lines, but I’ve never heard them formally elucidated.

We made the rule explicit in FHIR in order to make it easy (well, possible) for committees to say “no” to requests for additions to the resources that are not mainstream features (committees get these all the time, often from large vendors or national projects). We don’t want edge cases – things that only one particular country does, or one particular project did – added to the specification.

Nor do we want things that people disagree about added to the specification. In the past, I have argued that FHIR is not the right place to float trial balloons around which to create such agreement, but that has proven impractical – there’s real enthusiasm to add new domains to FHIR, where there’s still change to how the domain is understood (e.g. genetics), and a DSTU is exactly the place to try things out. But we need to be careful: the FHIR rules for inter-version compatibility are very restrictive (they need to be, for a repository standard), so it’s important to keep experimentation well away from normative status in the FHIR specification.


#FHIR at #HIMSS14

I have just returned from an exhausting 3 days at HIMSS in Orlando. For general news, see HISTalk for Feb 24, 25, and 26, or Inga for Feb 24, 25, 26.

FHIR got quite an airing at HIMSS. Stand out items:

In addition to this, many people told me that FHIR was a regular point of discussion with EHR/EMR/PHR/etc vendors across the exhibition floor. And it wasn’t just standards geeks telling me that, either.

Breakfast Event

I thought the breakfast event went well. Chuck J ran a tight ship and I thought it was definitely worth attending for delegates who chose to – and registration rapidly filled up, so many people weren’t able to attend (though I did think we could have squeezed many more in).

In terms of subjects, we focused on issues around FHIR – why it exists, and where it is likely to go, rather than what it is. The other panelists – John Halamka (starring as a particularly snazzy suit), Doug Fridsma, Dave McCallie, and Wes Rishel (I was in exalted company) – spoke generously about FHIR and its future, though Wes did point out that FHIR is at the peak of the Gartner Hype Cycle. While I did wonder how Wes knows that the hype has peaked – I’m not convinced it has, yet – I also wondered how to ensure that the fall from the peak is lesser rather than greater (more on that later).

We had good questions from the floor, and some of them will generate follow-up blog posts. Also, I’ll announce here if the video of the event is publicly released.

The Most Common Question

The most common question I got from vendors was about the relationship between FHIR and CCDA. CCDA is something the vendors have to do, but it isn’t exactly a hit, and they’re searching for a simpler way to implement. This is the message I had for the US EHR vendors:

  • CCDA is mandated by Meaningful Use
  • FHIR is a new specification
  • FHIR is not a replacement for CCDA (yet)
  • This year, FHIR and Structured Documents HL7 committees have a project to migrate CCDA content to FHIR, but this hasn’t started yet
  • In the future, FHIR may gradually replace CCDA, but this will take time, and depend on whether FHIR lives up to the hype

 

#FHIR and the cost of standardization

In an article about FHIR, Eliot Muir is quoted as saying:

Not everyone is enthusiastic about FHIR. Eliot Muir, CEO of the Toronto interface engine developer Interfaceware, was initially quite positive about the FHIR standard “since at a high level the message is very positive,” but he says that when you look closely the standard is too complex and overly prescriptive. “I think manufacturers and software companies can build better RESTful web service APIs into their products, which will be simpler and more cost-effective than trying to follow the FHIR standard,” Muir says. “My own economic interests are served best in an environment with lots of APIs with useful data, which increases the usefulness of good integration technology. Clunky standards inhibit my business model.”

This is pretty much what Eliot said to me personally. And really, this is the core of the standards problem. If the vendor develops something bespoke, something that matches their internal model exactly, then of course that will be simpler and more cost-effective – that is, for the vendor, the first time.

But it’s not good for their customers – every one of them will now face an integration problem, where different vendors who produce different products with different APIs need to be integrated. They’ll have different information models, implicit business processes, business events, and transaction boundaries. Really, the vendor is simply engaging in moving complexity around – that is, externalising the cost to their customers.

It makes for a good sales price, but their customers will pay more overall. So in the end, vendors are driven to come to HL7 to collaborate and drive their customers’ costs down. Unless, that is, they can capture the market and overcome the cost a different way – but they will have an epic challenge to maintain that approach. I find it interesting to see how the old Apple and the new Apple have a different approach to standards – it really does make a difference to your bottom line, even if many people wish it were otherwise.

If the vendors externalise the cost of interoperability to their customers, then the beneficiaries are businesses that help customers with integrating data – that is, interface engine companies. Like Interfaceware. So when Eliot says “My own economic interests”, he’s speaking the truth, but he’s not talking about his customers’ economic interests at all.

Actually, though, Eliot is wrong at the end: Clunky standards empower his business model, because people can’t make them work well. On the other hand, good standards – ones that vendors can use well – that’s what threatens Eliot’s business model. And that’s what we’ve tried for with FHIR.

So why do some interface engine companies support FHIR? I guess because selling interface engines is only part of their business, and they are more aligned with their customers’ interests.


Question: v2 Encoding type field

Question:

As I understand it, field 18 of the MSH segment defines the encoding of the complete HL7 message. But when receiving an HL7 message, e.g. via TCP/IP, how can we know the encoding needed to interpret the received message, extract the MSH segment, and analyse field 18 to find which encoding is used?

Answer:

Yes, this is a tricky thing – you have to know what the encoding is in order to read the encoding. At least that’s how it seems at first glance. However, all the characters up until that point are ASCII, or safe to treat as ASCII. So this is how my parser works:

  • check to see if the stream is XML (use the XML/character detection routines described in appendix F of the XML specification)
  • Not XML? assume it’s vertical bars, and use equivalent logic to determine whether it’s 1, 2, or 4 byte encoding (first 3 characters are “MSH” or “FHS”)
  • Now that I know the characters/byte, read up to MSH-18 and read the encoding
  • Reset the stream to the start of the message, set up the right character encoding based on MSH-18, and reparse the message
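
The four steps above can be sketched in Python. This is a minimal illustration, not my actual parser – the function name, the encoding map, and the handful of MSH-18 values shown are illustrative, and the multi-byte detection assumes little-endian for simplicity:

```python
def detect_hl7v2_encoding(raw: bytes) -> str:
    """Guess the character encoding of a raw HL7 v2 message (sketch)."""
    # Step 1: BOM detection, as per appendix F of the XML specification.
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8"
    if raw.startswith((b"\xff\xfe\x00\x00", b"\x00\x00\xfe\xff")):
        return "utf-32"
    if raw.startswith((b"\xff\xfe", b"\xfe\xff")):
        return "utf-16"

    # Step 2: vertical bars - look for "MSH"/"FHS" in 1-, 2- or 4-byte
    # characters. Without a BOM, ASCII letters in UTF-16/32 are padded
    # with NUL bytes (little-endian assumed for this sketch).
    if raw[:3] in (b"MSH", b"FHS"):
        width = 1
    elif raw[1:2] == b"\x00" and raw[2:4] != b"\x00\x00":
        width = 2
    else:
        width = 4

    # Step 3: provisionally decode (everything up to MSH-18 is ASCII-safe)
    # and walk along the first segment to MSH-18.
    provisional = raw.decode(
        {1: "ascii", 2: "utf-16-le", 4: "utf-32-le"}[width], errors="replace")
    segment = provisional.split("\r")[0]
    fields = segment.split(segment[3])  # MSH-1 is the field separator itself
    msh18 = fields[17] if len(fields) > 17 else ""

    # Step 4: the caller would now reset the stream and re-read using this
    # encoding. MSH-18 values come from HL7 table 0211; a few common ones:
    table_0211 = {"UNICODE UTF-8": "utf-8", "8859/1": "latin-1",
                  "ASCII": "ascii"}
    return table_0211.get(msh18, "ascii")
```

If MSH-18 is absent, the sketch falls back to ASCII, which matches the v2 default.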

Resetting the stream and re-parsing is actually not necessary – given that you can read to MSH-18, it’s (almost) safe to keep what you’ve read and just do character conversion of any strings read subsequently. Why don’t I do that? Two reasons:

  • It’s vaguely possible to get non-ASCII characters in MSH fields 3-6, and it might somehow matter (I’ve never seen it myself, but my parser is used in all sorts of contexts I’m not aware of)
  • The standard way to do things in most languages is to use the stream -> character routines (StreamReader in java etc). It’s easier, and I’ve found it simpler overall to maintain a very simple string based processor that reads MSH-18, and then to reset based on the standard class libraries – the performance hit is minimal

I think this is only an issue for general parsers that read messages without first knowing by arrangement what that character encoding is. XML works the same, btw – you have to read the xml character stream in order to read what the encoding is.

 

#FHIR – still a long way to go yet

While working with one of my customers (NEHTA), a PowerPoint mock-up of a GP system went past. They’ve given me permission to take it out of its context and use it for something else (thanks). So here’s the GP system mock-up:

[Image: GP clinical information system (CIS) mock-up]

 

What I thought I’d do with this image is indicate which fields on here correspond to existing resources, and which don’t. That would be helpful to implementers, and also give some ideas as to where the project team needs to go yet.

So, in this image:

  • The top banner: Name, DoB, Gender, the first two panels (address, phone, work, mobile) and (identifiers, marital status, email address), indigenous status (race, for non-Australians), and the image are all from the patient resource
  • “Known Allergies and Adverse Reactions” – a set of AllergyIntolerance resources
  • Past Medical Hx – a set of Conditions and Procedures
  • Medicines – probably a mix of MedicationPrescription and MedicationStatement, but it would depend on the application (this one is fictional, so we can make up whatever answer we want)
  • Investigations – a set of DiagnosticReports and possibly some DiagnosticOrders – depends on the application, again
  • Immunisation – a set of Immunization resources
  • Correspondence – a list of Document or DocumentReferences. Possibly a reference into an XDS. Or maybe just a list of emails (for which there is no resource, nor would we plan one)
  • Observations/Examinations (on the right side) – a list of Observation Resources, some of which would be simple, and some could be pretty complex

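As an illustration, each of the panels above could be populated with a FHIR search. Here is a Python sketch; the base URL and patient id are made up, and the search parameter names (e.g. subject) are my reading of the DSTU – check them against the specification before relying on them:

```python
def panel_queries(base: str, pid: str) -> dict:
    """Map each panel of the mock-up to the FHIR request that would fill it."""
    return {
        "banner":         f"{base}/Patient/{pid}",                     # top banner
        "allergies":      f"{base}/AllergyIntolerance?subject={pid}",  # known allergies
        "past_hx":        f"{base}/Condition?subject={pid}",           # past medical hx
        "medicines":      f"{base}/MedicationPrescription?patient={pid}",
        "investigations": f"{base}/DiagnosticReport?subject={pid}",
        "immunisations":  f"{base}/Immunization?subject={pid}",
        "correspondence": f"{base}/DocumentReference?subject={pid}",
        "observations":   f"{base}/Observation?subject={pid}",
    }

queries = panel_queries("http://example.org/fhir", "123")
```
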
That leaves a number of things for which there’s no immediately obvious answer.

  • There are a number of observations – Social History, and the Smoking/Alcohol ones – which are simple observations, but they are part of a curated list of observations (not just the most recent observation of a particular type plucked out of a big bucket of observations). We have the List resource for this, but we really need to start maintaining a fixed set of special list codes so that it can be used in an organised manner
  • The financial standing – we haven’t got anything in the financial space yet, though we’re starting to work on this now
  • Follow up date and time – is this an appointment? I think that’s overkill, but what is it if it isn’t? that’s an open question
  • The actual progress notes – what are these? we don’t know yet. We haven’t got a resource for this. Do we need one? We are arguing about that now
  • And finally, the whole thing – is this an Encounter? or something else?

So the 1st DSTU was just a starting point: there’s plenty of work for us to do yet (and this is a simple case – a single screen from a fictional example).


Terminology-derived invariants in FHIR

Over on David More’s generally controversial Australian Health IT Blog, I’ve been drawn into a discussion about a number of weaknesses in the FHIR examples with Eric Browne, who says (among other things):

The FHIR development community is still driven predominantly by technical software people. It shows in the quality of the few clinical resources and samples that I’ve looked at. E.g. instance data not matching the resource profiles and incorrect LOINC codes and mismatches to units – mmol/L is a substance concentration not a mass concentration

Eric made a number of other points in the subsequent discussion – you can read it on David’s blog, and I thank Eric for his contribution (I would’ve preferred this stuff prior to closing the DSTU, but late is better than never, and these things will be fixed in the development version, once I do the work to publish that).

In this post, I want to pick up on the last point, about mismatches between LOINC codes and units on observed values.

Firstly, several general points to make:

  • The offending examples mostly came from the Standard Australian example set (they used to be posted on ahml.com.au, but I can’t find that or them right now), but they may have come from elsewhere too
  • This is a very subtle point. I very much doubt it will matter in clinical practice – the error is so common that if you really cared about the difference, you’d have to handle it anyway
  • This is a common error in practice
  • LOINC often has a code for mass concentration and not molar concentration, or vice versa
  • Practice in USA tends to be mass concentration, while practice in Australia tends to be molar concentration, so it’s particularly common to see a mass code with a molar unit in Australia, since LOINC usually follows US practice
  • I don’t see how this demonstrates the point about technical software people, btw – though scientists might be less likely to make the mistake (though I am one, and I glossed over it when I reviewed the example)

However, even given all that, we certainly should get the official examples correct. And our natural impulse in the FHIR project would be to write a validation rule for the validation engine. That way, it runs on all the published examples, and also on every other example that is ever validated. In this case, the rule would say something like:

If one of the codes in observation.name implies a unit kind, then the unit in the value should match that kind

Or, if we can assume UCUM, then the rule becomes:

If one of the codes in observation.name implies a canonical UCUM unit, then the unit in the value SHALL compare to that unit

I’m not entirely sure we can assume UCUM, though. In v3, HL7 insists on UCUM, so you can do this type of reasoning, but we backed away from that in FHIR because the real world doesn’t actually align with UCUM (much as I think it should).

Right now, in FHIR, we have no systematic way of making this kind of assertion at the element level. We do have a way of stating XPath assertions, but this rule above, which involves multiple terminological look ups, and a UCUM code comparison – is far outside the scope of what you could do with XPath. (Well, I lie – there’s an xpath implementation of UCUM, though as far as I can find, it’s not public.)
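
For comparison, the check itself is small in a general-purpose language. Here is a minimal Python sketch, where the LOINC-to-canonical-unit table and the UCUM dimension lookup are hypothetical stand-ins for the terminology services a real validator would call:

```python
# Hypothetical lookup: LOINC code -> canonical UCUM unit implied by the
# code's property (MCnc = mass concentration, SCnc = molar concentration).
LOINC_CANONICAL_UNIT = {
    "2093-3": "mg/dL",    # Cholesterol [Mass/volume] in Serum or Plasma
    "14647-2": "mmol/L",  # Cholesterol [Moles/volume] in Serum or Plasma
}

# Hypothetical stand-in for a UCUM library: reduce a unit to its dimension.
UCUM_DIMENSION = {
    "mg/dL": "mass/volume",
    "g/L": "mass/volume",
    "mmol/L": "amount/volume",
}

def unit_matches_code(loinc_code: str, unit: str) -> bool:
    """True if the unit is commensurable with the unit the LOINC code implies."""
    canonical = LOINC_CANONICAL_UNIT.get(loinc_code)
    if canonical is None:
        return True  # the code implies no unit kind: nothing to check
    return UCUM_DIMENSION.get(unit) == UCUM_DIMENSION.get(canonical)
```

Under this sketch, a molar unit reported against a mass-concentration code (the Australian-examples error described above) fails the invariant, while any commensurable mass unit passes.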

Generally, there are a few different paths we could take for this kind of thing:

  • Assume that you can get there in XPath
  • Define a sophisticated web service to support the xpath invariants, and assume people will implement it (is there a standard one? I don’t think so, though FHIR itself is getting there)
  • Choose some other language to make these type of assertions in (and assume people will develop tooling for it)

I don’t know which of those to pursue. I know I hate doing real programming in XPath. Comments are welcome…

 

Question: openEHR and FHIR

Question from Heather Leslie:

How to get more cooperation bw FHIR resource devt & clinically verified openEHR archetypes to shared data roadmap for future?

Answer (questions in response):

Well, my immediate question is, “what does clinically verified mean?” Are there any archetypes that are clinically verified, and how would we know? The openEHR eco-system has several different versions of most archetypes, each with different clinical stakeholders involved to a variable degree. Which, if any, of them are clinically verified, and by whom? And what does “verified” mean – other than that it’s being used (happily?) in practice?

I’m sure I’ll get vigorous response to these questions on the openEHR blogs – I’ll link to responses from here.

Having said that, I would love to get more involvement between openEHR and HL7 on the design of the clinical resources. The openEHR community has a higher level of clinician involvement, and does tend to have a practical focus, and these are good things that I’d love to link into the FHIR process, and if that drives further convergence between the communities, then that’s even better.

As for a process, well, it’s a good time to kick off a new process – the FHIR DSTU has been published, and we’re back to asking big questions around what changes and improvements that next version of FHIR will have. I would think that the process starts with a mapping between the FHIR resources and the relevant related openEHR archetypes. In fact, a few members of the openEHR community were going to initiate that process themselves, but nothing has eventuated yet. If we focus on the existing resources, and get the process and the concepts sorted out there, then this will automatically grow into proposals for new resources.

If anyone in the openEHR community wants to have a go at that, I and other members of the FHIR community will be happy to review the mappings and help out with the process.


FHIR: The Future of Interoperability

Attending HIMSS in Orlando? Don’t miss this

sponsored by HL7

This is your special invitation to attend the Interoperability & Standards Committee Networking Breakfast and Panel session on FHIR: The Future of Interoperability.

Where: HIMSS14
When: Wednesday, February 26, 7:30–9:30 a.m.
Location: Interoperability Showcase Theater, Exhibit Hall F
Cost: Free with HIMSS registration

Designed with a focus on simplicity and ease of implementation, Fast Healthcare Interoperability Resources (FHIR) is an exciting new addition to the HL7 standards platform. FHIR combines the best features of existing HL7 standards with the latest web technologies to make interoperable healthcare applications dramatically simpler, easier, and faster to develop.

HL7 CEO, Charles Jaffe, MD, PhD will moderate a panel of interoperability and standards experts including the following:

Keith Salzman, MD, MPH

CACI International Inc.
MASTER OF CEREMONIES

Charles Jaffe, MD, PhD

HL7 CEO
MODERATOR

Grahame Grieve

Health Intersections

John Halamka, MD, MS, CIO

Beth Israel Deaconess Medical Center

Wes Rishel

Gartner

Doug Fridsma, MD, PhD

The Office of Science and Technology in the Office of
the National Coordinator for Health Information Technology

David P. McCallie, Jr.

Medical Informatics
Cerner Medical Informatics Institute


FHIR DSTU is published

We have now published the DSTU version (draft standard for trial use – effectively a beta) of FHIR at http://hl7.org/fhir.

Note that this is now stable and suitable for production implementations, and that development moves to http://hl7.org/fhir-develop, though I think we’ll all be taking a rest before starting work again.

Please note that we are serious about the draft standard for trial use. Implementers should read this section before depending on the specification: http://hl7.org/implement/standards/fhir/dstu.html.

Getting to this point has been a huge team effort, and so many people have contributed. There’s a formal credits page here: http://hl7.org/implement/standards/fhir/credits.html, though this doesn’t cover many people who contributed in a less formal – but not less real – fashion. I also wrote a brief foreword as a separate post.