Category Archives: RFH

FHIR Report to HL7 Architecture Co-ordination Council

This is a written version of the report I gave to the HL7 Architecture Co-ordination Council (technically, the SAIF roll-out project) concerning the development of the FHIR project; it has considerable similarity to the last post, made before the HL7 meeting:

At the last meeting, in San Diego, the RFH project (as it was known then) was an idea, with considerable smoke and mirrors. This meeting, we (myself, Lloyd McKenzie, and Ewout Kramer) have rounded it out, and it’s now a complete methodology (where “complete” means no missing holes – not finished and ready to use). FHIR has been fairly thoroughly reviewed in various working groups within HL7, and while there are many identified specific issues with the specification, the overall shape of the specification has generally gained enthusiastic support.

The FHIR development team now has the following 4 priorities for ongoing development in the near future:

  • Collaborating with several domain content work groups in HL7 to either review or develop resources. In particular, we have a formal collaboration with the Pharmacy workgroup, and informal ones with Patient Administration and Orders and Observations. In addition to building actual resources, these collaborations are as much about building processes, knowledge and culture around doing good resource design
  • Further developing the underlying tool chain, the RIM mapping framework, the way terminology is handled, and the integration with existing HL7 content and publishing frameworks
  • Doing implementations to test the basic framework ideas (there is already a server publicly available, but more on that later)
  • Writing a SAIF mapping between the canonical SAIF (HL7’s internal master architecture framework) and the FHIR specification (this will in effect be the formal FHIR methodology definition)

We have provisional agreement around transferring the FHIR ownership and development resources to HL7 – this should be resolved soon, and then the FHIR specification will move from its current place to either http://www.fhir.org or somewhere on the HL7 website.

We’ve also taken this opportunity to strengthen the nascent governance council, and I’m very pleased to welcome Bo Dagnall to the team – FHIR already owes some of its overall design to Bo, and he’ll provide some real insight and discipline to the council and the overall architecture of FHIR.

FHIR Report for the San Antonio Meeting

The prototype HL7 standard I am working on with a few others used to be called “RFH” (Resources for Health) but the marketing committee has now branded it “FHIR” (pronounced “Fire”, as in HL7 Fire). It can be found here.

At the last HL7 meeting (San Diego), RFH was just an idea, with a lot of smoke and mirrors behind it, and a whole lot of unresolved questions about how the specification would be developed and implemented. Since the last meeting, I’ve worked with several friends (principally Lloyd McKenzie and Ewout Kramer) to resolve all these outstanding issues, and polish up the rough parts. Now FHIR is ready for wider review.

Major changes:

  • There’s now a methodology for producing resources, and we’re ready to start working with a few key committees to start producing resources
  • There are messaging, document and SOA frameworks now, and the RESTful framework is much better defined
  • Policy around extensibility – which is a central piece of managing the complexity of standards – has been extensively discussed and described
  • A whole bunch of implementation collateral has been added – schemas, examples, reference implementations, UML definitions, resource dictionaries
  • The terminology binding/definition part has been simplified and finished
  • The data types were overhauled and further simplified, with substantial input from CIMI members

What comes next?

  • Defining the key clinical resources – synopsis, medications, problem lists, vitals and other diagnostic reports (Lab already exists) and validating the existing resources with the relevant domain committees
  • Finishing out the reference implementations
  • Validation and conformance tooling
  • Testing the specification in the real world
  • …starting the long march towards ballot

When will FHIR be discussed at the San Antonio meeting?

These are the (possible) times I know about:

  • Mon Q2, RIMBAA: RIM with DDD principles, includes discussion about FHIR(?) (George De La Torre, and no, I don’t know anything more about this)
  • Mon Q3, RIMBAA/Tooling: implementation tools, standardized resources to be incorporated into software as APIs, as being discussed in MnM, from Grahame’s FHIR (again, I’m not sure when this will be discussed)
  • Tues Q1, MnM: Ongoing FHIR development (MnM sponsors the FHIR project)
  • Tues Q6, Tooling BOF (after the Corepoint party, thanks to Dave): introduction to FHIR (aimed at content developers and implementers)
  • Wed Q1, ITS: ITS consideration of FHIR (technology focus)
  • Wed Q6, RIMBAA BOF: FHIR topics of interest to RIMBAA (Ewout/Lloyd)

In addition, we are trying to find a time to discuss FHIR with the pharmacy workgroup, but I haven’t yet got a time for that.

Who is FHIR for?

Finally, a note about the target audience for FHIR: The HL7 v2, CDA, and v3 specifications are in the latter part of the development cycle. Adopters have invested substantial amounts in them, and aren’t lightly going to simply adopt a new specification. Yet most implementers and HL7 members believe that there must be a better way to define standards, and that HL7 needs to find it.

I’m regularly approached by projects or companies who are doing new types of integration, building new architectures around web and mobile technologies, looking to exchange data cheaply. They ask me what specification to use – they’re looking for an option that fits their architecture and technology and offers a good balance between usability and reusability. I don’t have anything good to recommend to them – v2 is old tech: venerable, but not flexible and adaptable. CDA is a document spec, and while bits are useful, it’s too heavyweight to be a good option. And v3 messaging… no. Looking wider afield – the SOA specifications, the IHE specs – nothing really hits the sweet spot… yet there’s a lot of this work going on.

For now, we are aiming to meet these requirements with FHIR – greenfield sites. Also, we’ll be trying to position FHIR as a logical option for augmenting existing v2 implementations. If FHIR works – if it’s a really good way to produce an implementable specification – then it will start to get a good rap, and then we’ll start looking at the wider questions around HL7 product life. We’d like to get there as soon as possible, because there are large questions in that regard.

Data quality requirements in v3 data types are both necessary and spurious

There are three design features in the v3 data types that help make v3 very hard to implement. And they’re so low level that they undercut all the attempts at simplification by greenCDA etc. – none of that stuff makes much difference.

These are the three features I have in mind:

  • CD(etc).codeSystem must be an OID or a UUID
  • II.root must be an OID or a UUID
  • PQ.unit must be a UCUM unit

Along with these features come the requirements around the interaction between the codeSystem/root values and the HL7 OID Registry.

They really make it hard to implement v3, particularly if you are in a secondary use situation – you’re getting codes, units, or identifiers from somewhere else, and you don’t really know authoritatively what their scope and/or meaning is, or, in the case of units, you can’t change them, and they’re not UCUM units. You can’t register OIDs on someone else’s behalf – or if you do, the HL7 OID registry is so far out of control that no one will notice or know (on that subject, 200+ OIDs are registered daily, and any curation is on volunteer time, i.e. it doesn’t happen).
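To make these constraints concrete, here’s a minimal sketch of the syntactic side of the checks (the regexes and function name are my own simplification; real conformance also requires that the OIDs be properly registered and curated):

```python
import re

# Assumed, simplified patterns: real OID rules also constrain the early
# arcs, and registration in the HL7 OID registry is a separate question.
OID_RE = re.compile(r"^[0-2](\.(0|[1-9]\d*))+$")
UUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$")

def is_valid_root(value: str) -> bool:
    """CD.codeSystem and II.root must be an OID or a UUID."""
    return bool(OID_RE.match(value) or UUID_RE.match(value))

assert is_valid_root("2.16.840.1.113883.6.1")   # the LOINC OID
assert not is_valid_root("LN")                  # a v2-style mnemonic won't do
```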

I’ve spent an inordinate amount of time this year working on the problems caused by these three features – they just consume so much time when generating proper CDA content. And when I look at the CDA documents that I get sent for review, these requirements are beyond the average implementer who knows v2 well.

And often, we just have to fold on the units, because this is not resolvable until the primary sources can adopt UCUM – and they have their own standards that work to prohibit UCUM adoption. For example, the Australian prescribing recommendations – which are followed directly by many people – prohibit using ug for micrograms, since it is easily confused with mg; mcg is required instead. That’s a handwriting-based recommendation, but the recommendation doesn’t make that differentiation. I think that this is resolvable, but it’s going to take years of work with the various communities before they’ll go to UCUM.
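In the meantime, what you end up maintaining is a local-unit-to-UCUM translation table along these lines (the entries and function here are my own sketch, not any agreed mapping):

```python
# All mappings assumed for illustration; a real table is a local agreement.
LOCAL_TO_UCUM = {
    "mcg":   "ug",       # Australian prescribing says mcg; UCUM says ug
    "mcg/L": "ug/L",
    "IU":    "[iU]",     # international unit
    "mmHg":  "mm[Hg]",
}

def to_ucum(local_unit: str) -> str:
    """Translate a locally agreed unit string into UCUM, where we can."""
    if local_unit not in LOCAL_TO_UCUM:
        raise ValueError(f"no agreed UCUM translation for {local_unit!r}")
    return LOCAL_TO_UCUM[local_unit]
```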

Necessary Requirements

The problem is that the requirements are thoroughly grounded in what is necessary to establish an interoperable healthcare record. If you don’t consistently identify codes and identifiers, then you can’t collate all the health information into a single big logical repository (no matter how distributed it is architecturally). If you don’t use UCUM, then units are not computable or reliable – and this is important. So these are necessary requirements. Here in Australia, we are using CDA to build the distributed (pc)EHR. That’s been controversial – there are still people claiming that we should have used v2. Well, if we had used v2, then we’d still have to solve the data quality requirements somehow – in fact, several other posts I’ve made are about that, because fixing the data quality in v2 messages is worthwhile anyway.

So these requirements for base data quality are necessary – but they sure add hugely to the project cost. And the costs aren’t particularly visible. And there’s a huge amount of legacy data out there for which it is difficult to bring the base data up to the required level.

Spurious Requirements

The problem is that the requirements are also spurious in a point-to-point messaging context. In this context, it’s easier to resolve the data quality issues retrospectively, by local agreement, instead of having to sort these things out unambiguously in advance. But v3 imposes these costs anyway, even when the requirements are spurious. I wonder how much this data quality issue – which I haven’t really heard a lot about – contributes to the resistance to migrating from v2 messaging to v3 messaging, since the benefits aren’t there in the short term.

In particular, these data quality requirements are part of ISO 21090, and when that gets used for internal exchange within a very focused community (my Peter and George example), these data quality requirements are just tax.

RFH

In the RFH data types, I’m going to back off the pressure – it will be possible to represent data with less quality than the v3 data types allow (though they will also allow the same high-quality data as well).


Speaking in RIM grammar

On Thursday and Friday this week, I held a two-day course teaching how to map from Australian v2 messages to Australian CDA documents. My course went a lot further than Keith’s excellent chapter on the subject in his CDA book, because the mappings were made in the presence of both v2 and CDA implementation guides (Australian Standards and NEHTA specifications). As part of the course, I tried to teach the attendees how to speak in RIM grammar – you need to, in order to map miscellaneous v2 concepts (there’s a lot of them) into the clinical statement pattern. Obviously there’s a lot of prior art about – the NEHTA specifications and, more widely, the combined IHE/HL7 implementation guides and most of all the consolidated Health Story implementation guide – but it’s common to encounter data concepts that simply haven’t been mapped to any clinical statement pattern, let alone the simple one in CDA.

So here’s a quick and simple way to learn to speak things in the RIM grammar.

Step #1: Basic Grammar

The basic grammar of the RIM is Act-Participation-Role-Entity. Learning how to speak this language is as easy as filling in the following sentences:

  • Act: “[W] is something that happens or could happen”
  • Participation: “[Y] is the [X] in [W]”
  • Role: “[Z] is capable of being a [Y]”
  • Entity: “[Z] is something that actually exists at least for a while” (i.e. doesn’t exist as a record)

where:

  • W is a display name from the ActClass hierarchy
  • X is a display name from the ParticipationType hierarchy
  • Y is a display name from the RoleClass hierarchy
  • Z is a display name from the EntityClass hierarchy

It’s simply a case of looking through those code systems and picking a code. Here’s an example (sketched as code after the list):

  • E: “[person] is something that exists at least for a while”
  • R: “[person] is capable of being a [patient]”
  • A: “[encounter] is something that happens or could happen”
  • P: “[patient] is the [subject] in [encounter]”
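Here’s the same example as a minimal code sketch (the class names, field names, and string codes are mine, purely for illustration – the real codes come from the code systems named above):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:            # "[Z] is something that actually exists"
    class_code: str      # a display name from EntityClass, e.g. "person"

@dataclass
class Role:              # "[Z] is capable of being a [Y]"
    class_code: str      # a display name from RoleClass, e.g. "patient"
    player: Entity

@dataclass
class Participation:     # "[Y] is the [X] in [W]"
    type_code: str       # a display name from ParticipationType
    role: Role

@dataclass
class Act:               # "[W] is something that happens or could happen"
    class_code: str      # a display name from ActClass, e.g. "encounter"
    participations: list[Participation] = field(default_factory=list)

# The worked example from the list above:
person = Entity("person")
patient = Role("patient", player=person)
encounter = Act("encounter")
encounter.participations.append(Participation("subject", patient))
```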

Except that this isn’t simple. There are a few complicating factors:

  • put like this, it’s not clear how the act code, role code, and entity code interact with the class codes – this makes it hard to know quite what kind of term you’re looking for, particularly in the act hierarchy
  • also, verbs and nouns have nothing to do with whether something is an act or not, and some things that look like entities (Documents, Accounts) are actually acts
  • it’s hard to pick the right role code – the role code hierarchy is a mess
  • RIM classCodes and RIM classes are interlaced (the classCode hierarchy doesn’t match the actual RIM class hierarchy)

The only way to work your way through these is practice, with a v3 specialist to help.

Step #2: Mood Codes

The second step is to pick the right mood code. This is easy:

  • Intent: “[W] is something that we intend to make happen”
  • Appointment: “[W] is something that is expected to happen at a set time”
  • Appointment Request: “Please give us a time for [W] to happen”
  • Promise: “[W] is promised to happen” (or I promise to do [W])
  • Proposal: “It would be good if [W] happens”
  • Request: “I would like it if [W] happens” (or “Please do [W]”)
  • Definition: “This is what [W] looks like if/when it happens”
  • Event: “[W] is something that happened”
  • Criterion: “If [W] happens”
  • Goal: “Getting [W] to happen is our goal”

Step #3: Act Relationships

This sentence is easy:

  • “[source act sentence ]” [V] “[target act sentence]”
where V is a displayName taken from the ActRelationshipType code system.
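As a toy illustration of steps #2 and #3 together, here’s a sketch that renders those sentences (the templates are copied from the lists above; the example codes – “dispense”, “prescription”, “fulfills” – are just ones I’ve picked for illustration):

```python
# A subset of the mood code sentence templates from step #2.
MOOD_SENTENCES = {
    "event":      "[{w}] is something that happened",
    "intent":     "[{w}] is something that we intend to make happen",
    "request":    "I would like it if [{w}] happens",
    "definition": "This is what [{w}] looks like if/when it happens",
    "goal":       "Getting [{w}] to happen is our goal",
}

def act_sentence(w: str, mood: str) -> str:
    """Render an act (W) in a given mood as a sentence."""
    return MOOD_SENTENCES[mood].format(w=w)

def relationship_sentence(source: str, v: str, target: str) -> str:
    """Step #3: "[source act sentence]" [V] "[target act sentence]"."""
    return f'"{source}" {v} "{target}"'

print(relationship_sentence(
    act_sentence("dispense", "event"),         # the act that happened
    "fulfills",                                # V, from ActRelationshipType
    act_sentence("prescription", "request")))  # the act that was asked for
# -> "[dispense] is something that happened" fulfills
#    "I would like it if [prescription] happens"
```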

Easy…?

That’s enough – you could go on and define RoleLink, and playing and scoping entity, but by the time people have mastered these, they’ve got the principles. It’s useful to understand the methodology, but you always have to consult prior art:
  • Domain Models
  • CDA Implementation Guides
  • National Program specifications (if you can get them)
It also helps to have a v3/RIM specialist on hand. Even then, you can still get confusion and contention about the “right way” to represent a notion using RIM grammar – because, like all good grammars, it’s ambiguous. There’s a confusion here – the idea of the RIM isn’t to create a meaningful ontology that you can reason with, nor to offer a useful leverageable logical grammar for implementers – it’s a tool to ensure consistent model definitions (and it addresses only some of those issues). That’s why, in RFH, we’re keeping this whole RIM grammar away from the resource models, in the background, where it’s exercised by RIM specialists to ensure the models are robust.

RFH: Outcomes from San Diego meeting

Previously, on this blog, I proposed an idea for consideration by the fresh look task force: “Resources For Healthcare“. The proposal was widely discussed here at the San Diego meeting, and many people have asked me for a report on how the discussions went. This is the report.

Firstly, RFH was very well received. Very well received. Most of the RFH sessions were packed, and one session had people crowded out the door. There’s a real appetite in HL7 for change, and while there were many questions and open concerns – see below – very few people were unhappy with the intent or general shape of RFH. In fact, given the RFH was just an idea, with heaps of parts not done, I kind of feel as though I proposed an aeroplane, laid out the seats, marked the cockpit and wings with some rope, and then, when I looked back, the seats were full of people wanting to know when the plane takes off.

In fact, several projects are already considering using the data types portion of RFH. Comments on the data types are welcome here.

The fresh look taskforce itself decided that there was no need for it to be involved in RFH: it’s migrated straight into committee business. Modeling and Methodology will create and sponsor an official project to develop RFH towards adopting it as official methodology. Of course, that can only happen if lots of open issues can be resolved.

In addition to thoroughly considering the differences between RFH and existing HL7 methodology, the following issues were discussed at length:

  • In its current form, RFH appears to mandate that REST is to be used. It needs to be rewritten to make it obvious that REST (http) is not required, but optional – and often not even relevant
  • There was discussion about how RFH relates to hData. RFH offers contents for hData (or hData is a record aggregation profile for RFH resources).
  • hData is also REST based – I will work with Gerald (the hData author) to see if both hData and RFH can use the same transport (i.e. http) spec
  • RFH proposes a new model for extensibility; this offers real advantages but raises a new issue about how this is best governed in order to balance between the requirements of various implementation concerns and scopes
  • There was much discussion about how RFH will resolve the tension within HL7 around complexity, and allowing people to do what they want without making it too complex for everyone
  • RFH needs to align better with classic RESTful documentation styles
  • There was contention around how RFH resources are aggregated (very technical), and the importance of maintaining resource boundaries. I’ll make a separate post on this in depth later
  • We discussed how to manage the definitional layer (data dictionary / ontology) at length. There’s a real risk of this complexity slipping out of control, and that will be a serious focus of the next round of development

Where to now? MnM is going to create a project proposal to give RFH a home at HL7. On the technical side, I’m planning to work with Ewout Kramer, Lloyd McKenzie, and perhaps one other to develop RFH into a serious proposal for consideration at the next HL7 meeting in San Antonio in January. For now, much of our work will be mediated through the wiki on this site. You can track changes to the wiki here.


Question: What is IHMSDO and what does Resources for Health have to do with it?

I attended your RFH session earlier today, which was both rather crowded and informative. In the video available at http://www.youtube.com/user/IHMSDO, at 6 minutes in, it talks about “RFH R2”. In effect the entire video, set in the future, is a kind of Fresh Look proposal. I’m relatively new to HL7 – what do you make of it? I’d appreciate a review.

It’s a parody video – set in the future, looking backwards in time to today. It’s quite funny, though not everyone will be amused. The fact that it mentions RFH R2 shows that it’s pretty recent indeed. Whoever made it thinks RFH might be part of the future – I’m honoured. But they don’t say who they are. Still, given the accent and some aspects of the video, I’ll be chatting to my Dutch friends to see if any of them will own up to knowing who made it.

And yes, today’s RFH session was packed. I’ll make a post about RFH tomorrow.

A comparison of v3 and RFH

Klaus Veil asked me for a comparison of v3 and my alternative proposal (RFH), to help people understand the differences. This is a list of some of the differences.

  • the v3 standard is primarily focused on explaining the standard – the process. And with reason – you have to understand the process, from the RIM down, to be able to implement it well. RFH turns that on its head: the focus is on explaining what you have to do to conform to the standard. It’s not that there’s actually less to understand – it’s just a different focus on the order in which you encounter things, and why. (My favorite example of the problem the v3 approach causes is the A_SpatialCoordinate CMET. It contains a coherent description of how the parts map to the RIM, but the parts themselves are not coherent – and it has passed ballot)
  • the v3 process starts from the RIM – the definitions are in the RIM, and you constrain the RIM down by removing optionality and capability to focus on a particular use case, and the instances that are exchanged are RIM instances. In this way, the RIM stands between the business analysis and the implementation model. RFH turns this on its head: the implementation model is the business analysis model – if there’s a difference between the way business analysts think about the model and the implementation, this difference is explained directly in the implementation model. The implementation model is also secondarily mapped to the RIM – this is required to ensure correctness, but not for interoperability.
  • v3 is exploring various forms of simplified xml. The mantra has become, “interoperability is a transform away from the RIM”. RFH embraces this – interoperability is the focus, and the RIM is a transform away. And compromise is fought out in the implementer space, not the definitional space
  • in v3, the RIM provides the formalism by defining the base types. Typing is innately an exclusive process, so it doesn’t make sense to also define the concepts against different models/perspectives at the same time. The consequence of this is interminable debates about the ontology of the RIM itself, and the perspectives it enshrines. RFH pushes the RIM into the ontology space, where the definitions are open, not closed. This enables additional definitional perspectives to be introduced where appropriate. It would also allow different parts/patterns/concerns of the RIM to be teased apart (where now they are bound together by being contained in a “class”, with all that entails)
  • in v3, we were forced to model the universe in order to avoid subsequent breaking changes. RFH doesn’t know how to solve this problem, but can leverage the fact that so much of it has already been done. Without this preexisting work, RFH would have a problem….
  • in v3, the definitions are “abstract” so that they can be technology independent. This is wonderful in theory, but requires custom tooling to map between the models and the reality (especially in conformance) – and hardly anyone actually understands this in the data types, let alone elsewhere. How painful this has been… RFH is focused on XML first – the concrete model that is exchanged. Other definitional forms are secondary. And the exact form is based closely on a recognized best-of-breed model (the Highrise API)
  • in v3, there are multiple overlapping static models, some of which are sort of incompatible from some perspectives. These overlapping models represent a failure of governance – the cat slipped out of the bag before we understood what was at stake, and we never got on top of it. It’s a major issue for CDA R3, for instance. RFH can’t solve this – instead, it elevates this problem to a central concern by insisting that resources are modular and separate, and we have to govern it from the start.
  • in v3, we have still not figured out a workable behavioral model. RFH starts with a RESTful infrastructure – this meets many requirements out of the box (see the sketch after this list). Where it doesn’t, additional service models can be built on top by either HL7 or other adopters as desired, by reusing the resources. RFH defines an aggregation framework for resources to enable this, and this allows documents such as CDA to be natural outcomes of the methodology.
  • V3 requires custom tooling – both to produce the specification, and to adopt the specification. RFH still requires tooling to manage the definitional space, and ensure it is coherent, though more of this tooling arises logically from existing ontology toolkits. RFH doesn’t require custom tooling to adopt the specification, though the definitions are there to be leveraged, and that would require tooling.
  • v3 treats extensions as entirely foreign content. RFH, on the other hand, makes extensions part of the base content and insists that they be related to the existing definitional content. Since most extensions are based on content properly defined elsewhere, this is more natural and palatable to adopters
  • by pinning implementer concerns down, and being open definitionally (rather than the converse, as in v3), RFH should give many disconnected potential members of the community a way to re-engage.
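To make the RESTful point concrete, here’s a hypothetical sketch of the kind of exchange RFH envisages, where each resource instance lives at its own address and is read and updated with plain http (the server address and path shape below are made up for illustration):

```python
import urllib.request

BASE = "http://example.org/rfh"   # hypothetical RFH server address

def read_resource(resource_type: str, rid: str) -> bytes:
    """GET the XML form of a single resource instance."""
    url = f"{BASE}/{resource_type}/@{rid}"   # path shape assumed
    with urllib.request.urlopen(url) as response:
        return response.read()

def update_resource(resource_type: str, rid: str, xml: bytes) -> None:
    """PUT a changed resource back to the same address."""
    req = urllib.request.Request(
        f"{BASE}/{resource_type}/@{rid}", data=xml, method="PUT",
        headers={"Content-Type": "application/xml"})
    urllib.request.urlopen(req)

# e.g. xml = read_resource("patient", "1")
```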

These are the differences I had in my head as I set out to write RFH.

Not all of these differences will be to everyone’s liking. In particular, there’s no necessity for all the parts of the wire format to map to the RIM – and the resources I’ve already put in RFH don’t – they explicitly represent identification and management concerns that the RIM has never engaged with. So it may be that RFH can’t interoperate with v3 across a gateway.

Also, not all the differences I had in mind are properly represented in RFH – or it may be that they are not even properly achieved. RFH isn’t offered up as a final specification, ready to ballot. It’s an idea, an example of what could be achieved if we approached the standards process from this different direction – and maybe it’s also a usable basis for further work.

Resources For Health: A Fresh Look Proposal

The remit of the HL7 Fresh Look Taskforce is:

If HL7 started again from scratch with a new specification, what would a good specification look like?

Even if you’re going to be critical of the v3 Process, you still need to recognise that there’s much value there too. But as I said, what are we going to do?

At Orlando, I spoke with several people about a new way of looking at v3 – taking all the existing ideas, but moving them all around. A big part of that proposal was to move all the complexity of the RIM derivation away from the wire format (i.e. the implementers) into the definitions, and to leverage ontology tools (i.e. OWL) instead of custom tools and class designs. But I could see that it was too hard to explain what I had in mind. So when I got back from Orlando, I cast around for ideas. If you started writing a standard now, it would certainly be focused entirely on the web. Many people pointed me at the Highrise specification as state of the art – simple and easy to implement and manage. So I started there. I took that and wrote “Resources For Health”. It’s not complete, of course (including all the broken links and todos) – but it’s enough to actually exchange lab reports with.

It’s enough to show what a different kind of standard HL7 could produce. It’s no less sophisticated than the HL7 version 3 specification, but it seems really simple – life doesn’t need to be more complicated than it already is. This specification is altogether different from other HL7 specifications, yet in many ways, it has real continuity with much of where we are. While the patterns have been deeply re-organised, most of the ideas here are already present:

  •   This specification uses the vocabulary model as described by the core principles (though we’d love to simplify it even more for implementers)
  •   The modeling is still based on the RIM – all the implementable models are mapped to the RIM (though it’s a little further away than before)
  •   The resources are based on the CMETs and RMIMs defined in v3, with input from v2 and openEHR models
  •   The conformance model is based on the HL7 one, but it’s been reorganised to focus on XML, not abstract attributes
  •   HL7 has a strong interest in simplifying the XML – this specification takes that to the next step
  •   The data types are very closely aligned to v3 data types, but simplified (sometimes by moving things out of the datatypes)
  •   CDA has introduced the notion of text as a fallback for human interpretation. This specification generalises that

Anyway, enough of that. Here are the links:

Main Introduction

Also, make sure to read the letter to readers:

RFH Covering Letter

A key part of this specification is that there needs to be a constrained number of resources that are useful to everyone. I don’t know whether appropriate governance can be put in place to build a specification which has just the right number of resources, at the right point between usable and reusable, and whether we can get meaningful compromise around them – but if we can’t, what value are we bringing to the process? (Note, though, that an important part of this specification is that the agreement and compromise happens on the implementer side of the transform to the RIM, not the RIM side, which is the key difference between this and greenCDA as a methodology.)

This specification was written by hand – all of it. That’s not a process that scales. I haven’t put a lot of thought into the question of how to produce this specification, or what tools would be needed. The point was to produce an exemplar of what the output should be – and then to worry about the input. If this specification achieves its goal, end implementers – from programmers to CIOs – can look at it for a few minutes, and then say, “right, I understand this. Let’s go build something.”

Maybe this proposal would be v4. Maybe it’s just v3A. Certainly several people who’ve looked at it said it was just a better way to implement v3 (or the only possible way). But there’s some real change too. Perhaps the biggest is that in the existing v3 process, the RIM is a jealous idol – it suffers no competition. In this world, where the design is done in an open space (ontology) rather than a closed space (classes), there’s space for other inputs, other perspectives.

What now? I’m interested in commentary on the proposal. If there’s enough interest, I’ll setup a wiki. Please read RFH, and think about whether it’s a good idea or not.

Update: Rene, who’s been working with me on this, has contributed this introductory video:

Is GreenCDA the answer?

As expected, comments to my previous post are running hot – both public and private. I’ll try and deal with some of them. Firstly, I’m going to deal with the idea that greenCDA is *the* answer. Robert Worden posted a comment in response to my previous post:

One-size-fits-all semantics is bound to be clunky and will fail the market test. We need clean simple models for sub-domains – DSLs for clinicians – which are precisely linked, probably via the clunky universal model. Green CDA technology now makes this possible

Is this true? Because I don’t think it is. One-size-fits-all applies to HL7 v2, and there’s still an active camp who think v2 is *the* answer (yet another post coming up, I guess). One size fits all is a challenge – but the alternative is worse. Quoting from a previous post about compromise:

you can do what HL7 increasingly does: build a complicated framework that allows both solutions to be described within the single paradigm, as if there isn’t actually contention that needs to be resolved, or that this will somehow resolve it. This is expensive – but not valuable; it’s just substituting real progress with the appearance thereof

And this is my core problem with green CDA.

By the way, I like greenCDA. It’s a good idea, a good way to simplify the creation of CDA documents. The problem, as explained in another previous post of mine about greenCDA, is that you end up trading usability against reusability:

The more different kinds of CDA you produce (i.e. the more use cases you support), the less useful greenCDA will be, as it will start fractionating your code.

And this is true of a single implementation, or of any community: the more CDA is used, the more greenCDA will start fractionating the community.

I think greenCDA is a good methodology for writing CDA documents. But I don’t think it’s a methodology that offers HL7 a useful path forward for solving our general case.


HL7 needs a fresh look because V3 has failed

A few months ago, I posted a call for input into the HL7 Fresh Look Taskforce. HL7 wouldn’t need to have a fresh look task force if it hadn’t lost its way, and it wouldn’t have lost its way if v3 hadn’t failed. But it has:

HL7 v3 has failed.

Now a few readers are going to stop reading at this point and rush off and go quoting me saying “v3 is broken” or “you shouldn’t use v3” or “national program XXX shouldn’t have used v3”, but I haven’t said any of those things – v3 can be made to work if you provide enough skills and resources, and for some things, it’s the best solution (CDA, for instance). But overall, v3 has failed to achieve the goals that HL7 has (being the general best solution to everything), and is not a vehicle that can take the organization forward from here.

That’s a sad and painful admission for an organization that has invested *so* much work into an idea that seems to have so much promise. And HL7 really has worked hard on v3. But where we’ve ended up hasn’t been where we expected to end up.

I see it as if, back in the mid-nineties, we stood on top of the mountain called v2, and said, “from here, we can see a great mountain over there, let’s climb that one.” And v3 was a monumental climb, much bigger than we expected (and I have a huge amount of respect for the people who pursued the climb with an awesome vigor and determination, most especially Woody Beeler). But we’re at the top now, standing there looking around. And to be honest, the view isn’t what we thought it was going to be.

Why?

  • the V3 design process imposes consistency on our standards (its primary goal), but the inconsistency is mostly not of HL7’s making (example)
  • the V3 design process is technology and platform agnostic – but that just makes implementation more costly
  • the V3 design process depends on design by constraint – see my previous comments on this
  • the v3 process really required all the possible modeling to be done up front – the price of change is too scary – and therefore produced models that weren’t implementable directly (too much! too big!) without support from a large program and highly skilled insiders
  • the V3 models (RIM) created semantic interoperability but not clinical interoperability, and it was only once people tried v3 that we discovered what the difference was
  • we tried to satisfy too many different implementer interests, and ended up satisfying none of them (see my comments on context of interoperability, the HL7 community, the xml consensus, and lack of real compromise)
  • we just never got anything compelling in the space of specifying the behavioural aspects of interoperability

Now I’m not saying that you can’t make v3 work – in fact you can, quite well indeed, if you buy into the “sandbox”. If you’re prepared to adopt the entire specification stack and invest in making it work, then v3 works quite well. But the walls to the sandbox are quite high, and v3 is not an opt-in project where that makes sense (unlike openEHR). It’s a general interoperability standard, to be implemented in lots of ways by lots of different people, mostly by other people’s choice.

I’ve worked hard, with others, to try and lower the walls over the last few years. But we’ve by and large failed. Changing a specification is very difficult. The people inside the sandbox don’t like us diluting the purity of the original ideas, and for most users, the outcome hasn’t been much different – it was too little, too late.

It’s not that there haven’t been good outcomes from HL7 v3. CDA is obviously a good outcome. But we have to be realistic – CDA isn’t magically successful because of the parts it gets from the v3 process, but almost in spite of them (I can’t tell you how many people have told me that they’d rather have v2 segments embedded inside an HTML-like document).

So it’s time to look again (that’s what the fresh look taskforce is all about) – what are we doing? Where are we going now? I, for one, don’t think we can stand happily where we are. This is IT, and the only constant is change. So what are we going to do now?

P.S. I have my own ideas – check back tomorrow…

Update: please be sure to read the counterpoint argument

Update 2: See my own ideas, and I posted a summary of comments