HL7 standards V2 vs V3: a letter to “S”

So, through a long process I don’t care to think too much about, I got dragged into a discussion with an anonymous HL7 member (“S”) over on Barry Smith’s HL7 Watch blog. It’s hard to engage with someone you only know anonymously, but then the discussion suddenly (and unexpectedly) broke out into useful content. It was getting unwieldy responding through Barry’s blog, though, so I chose to make a longer response here.

So, “S” asks me a series of questions. I’m going to quote them here and respond:

I can understand, and see, that Grahame is a true V3 loyalist (I respect that), but unfortunately to the degree that he is perhaps unable (or unwilling?) to look at its fundamental flaws.

Well, I might be co-chair of MnM, but I’m not really a true believer. In almost everything people do (with the possible exception of banking 😉) there’s some merit. But since everything is done by people, everything also has at least some flaws, and I’m not unwilling to look at v3’s flaws. And you ask a series of good questions below, ones that I wish we spent more time looking at.

Thank you Grahame for agreeing to listen to my critique: Here are a few issues that I have with the fundamentals of V3 messaging (CDA not included) vis-a-vis the V2.6 standard.

Uh? CDA not included? It’s not fair to exclude the single most useful part of v3 from the discussion of whether v3 is useful or not. But ok, we’ll confine the discussion to V3 Messaging only, as long as everyone is aware that this is less than all of v3. (And you don’t even grapple with the biggest issue with v3 messaging, the reason why I don’t think it will get any more adoption as it is, which is that the behavioral aspects are deeply broken)

(1) One of the strong points of V3 is conformance. Why does V3 bring about ‘Conformance’ in an extremely opaque, convoluted and complex process (using multiple tools which are proprietary (some open source, some commercial)), when V2.6 can do the same with just use of the standards document and an ASCII editor?

Everything about V3 is complex, layered, and opaque. Let’s agree to that straight away. It’s a direct and inevitable outcome of “Design by Constraint”. Design by constraint is a wonderful process for ensuring semantic design consistency and engineering implementation confusion. I have a PowerPoint presentation on that subject that I have given to many HL7, ISO, and OMG standards designers. I should probably turn it into a blog post here too.

And let’s also deal straight away with the complexity argument. So many people simply say, “V2 is less complex than v3, so it’s better”. It’s not so simple. If it were that simple, then we should expect to just use text – it’s even simpler, and we can do everything with an ASCII editor (or, perhaps, a Unicode editor ;-)).

The question we should be asking about where something lies on a simplicity/complexity gradient is whether the outcome meets the purpose. In v2.6, you can develop conformance specifications with an ASCII editor, sure, but turning those into test tools capable of assessing whether a message meets its conformance statement is the business of expensive testing services such as AHML, whereas in V3, you can get an open source tool to do it for you (I wrote it myself, with support from the UK NHS and Canada Health Infoway).

So it’s not true that v2.6 has the same conformance outcome. In fact, to start getting the same outcome, you need to start with Messaging Workbench, and a much more structured approach, which comes with greater complexity. And then the approaches start to converge – but v2.6 will never offer semantic consistency, because of its history and shallow design structure.

(2) As a clinician I require my clinical message to be transferred accurately to the recipient. How does V3 do a better job as compared to V2.6? (Please do not bring in points like conformance, security, permissions, confidentiality and encryption, since all of this can easily be configured in V2.6)

Err, easy? I suppose so, but there’s no standard way to do it. But agreed, that’s not the core subject. And now I have a problem. I’ve never implemented a clinical message using v2.6. Nor have I implemented one using v3. I’ve used v2.4 and I’ve used CDA – so I could compare those with confidence, but that’s not the same question.

I think that in principle, v2 suffers terribly from its own history. There’s no layering of design, just a single level (in spite of my comments above about layering, absence of layered design is also a terrible way to do design). And v2 has been permanently backwards compatible, which means it’s a hostage to past mistakes. The two message types I have the most practical experience in are ADT messages and observation messages. The core segments (PID, PV1, ORC, OBR, and OBX) are filled with a profusion of fields that are only sometimes well defined. Many are ignored in most implementations, and most are misused in most messages. In fact, I suspect that the simplicity of presentation of v2 makes it worse, because implementors dive in and just do stuff, without having to learn what the community consensus about how to use them is. (You might say that its greatest strength is also its greatest weakness.)
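
To make that “profusion of fields” concrete: a v2 segment is just delimited text, and pulling it apart is mechanical. Here’s a minimal sketch – the PID content below is invented, and a real parser must do more (see the comments):

```python
# A minimal sketch of pulling apart a v2 segment with the default
# delimiters: fields on '|', repetitions on '~', components on '^'.
# The PID content is invented; a real parser must also honour the
# sub-component separator '&', escape sequences, and any delimiter
# overrides declared in MSH-1/MSH-2.

def parse_segment(segment: str) -> list[list[list[str]]]:
    """Split one vertical-bar segment into fields -> repetitions -> components."""
    return [[rep.split("^") for rep in field.split("~")]
            for field in segment.split("|")]

pid = "PID|1||12345^^^HOSP^MR||Smith^John^^^Mr||19700101|M"
parsed = parse_segment(pid)

# PID-5 (patient name): index 5, because index 0 is the segment ID.
# Its first component is the family name.
print(parsed[5][0][0])  # -> Smith
```

The splitting is trivial; the hard part – which of those dozens of fields a given sender actually populates, and with what – is exactly what the text above is complaining about.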

This was graphically illustrated to me in a recent round of work undertaken by the Australian MSIA to resolve why we have such troubles with clinical messaging here in Australia. We have a very solid ecosystem that delivers diagnostic service reports to clinicians, and to a lesser degree, discharge summaries (& referrals) from clinician to clinician. There are several vendors who spend inordinate amounts of time fiddling with the message content to make a random sender communicate with a random receiver. I hope to make a post later on the MSIA conformance profile that arises out of this work, since there is much to learn from it – but it’s not complete yet.

So asking whether V3 is better than v2 is actually setting the bar rather low. Does v3 clear the bar? I don’t know. CDA certainly clears it easily – but it does have a narrower scope. v3 generally – it has an ocean of semantic consistency features built in, but carries this heavy load when it comes to engineering.

(3) V3 XML vis-a-vis V2.6 XML is massive and convoluted. How is this better?

XML is good because it is self-describing. So more tags = more self-description. Better!

No, seriously, the massive and convoluted XML is a direct consequence of design by constraint. And I don’t know anyone who’s actually using v2.xml, though I’m sure some people are. So why compare to that instead of the vertical bar encoding we know and love?
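
To put a rough number on the verbosity, here’s a sketch comparing a single invented patient-name field in vertical bar encoding with an approximation of its v2.xml rendering. The XML follows the general shape of the v2.xml ITS (elements named after the field and data type components), but treat the exact element names as illustrative, not normative:

```python
# The same invented patient-name field in vertical-bar encoding and in
# an approximation of its v2.xml rendering. The XML follows the general
# shape of the v2.xml ITS (elements named after the field and data type
# components), but the exact element names here are illustrative only.

er7 = "Smith^John"

xml = (
    "<PID.5>"
    "<XPN.1><FN.1>Smith</FN.1></XPN.1>"
    "<XPN.2>John</XPN.2>"
    "</PID.5>"
)

print(len(er7), len(xml))  # -> 10 67
```

Same information, several times the bytes – and that’s the relatively compact v2.xml; full v3 XML carries RIM structure on top of this.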

a) The human readable component is so deeply buried in the code that it requires super human effort to extract it manually (even for a small message) so is it really human readable?

Same for HL7 v2. In spades. CDA forever 😉

b) Coming to the next part, machine readable information – the effort required to human-engineer an application to ergonomically extract information from clinical data (as required by V3) puts a huge load on the end user (clinician).

Um? Why is this required by v3 and not v2? I think that this is because particular messages are designed to a different philosophy, and I don’t see why particular v2 message implementations should be different.

There is certainly a desire for more structure and more coding. And where else is this going to come from but the clinician? But this impost on the clinician is not for the sake of the IT Systems, but for the sake of the downstream outcomes humans can derive from leveraging the structure and codes. I’m not sure whether the benefit will truly prove to exist, but let’s not blame v3 for this. v3 has more complexity than v2, but it also has a lot more consistency and this makes a real difference.

At the end of the day there are other methods by which data can be extracted from free-flowing text for BI, CDSS and EBM (evidence-based medicine). So why complicate?

Yes, I wonder that myself. V3 does not require all this structure and coding – but it was certainly designed to make it possible (unlike v2). There’s a solid community who want that. So v3 is better for their intent. In as much as you disagree with that intent, you’ll dislike that “feature” of v3.

(4) V2.6 can easily transport any kind of media with lower overheads as compared to V3. So why V3?

What lower overheads? I don’t know what they are. Do you mean transport other media in the content? Or do you mean transport content across media?

(5) V2.6 can, from the end user point of view, transfer the same clinical content as robustly as V3. So why V3?

My experience shows that this is only true for a particular end user point of view. For the sum of end user points of view, v3 is much better (more rich, more robust, can meet more use cases) than v2. The increased robustness of v3 comes from the tighter specification. For some use cases (see previous section), this increased robustness is neither here nor there.

Which is one problem with this debate – it’s context-free. V3 is not meant to solve the same problems as v2. And partly, your comments are simply a restatement of that, no? “v2 meets my use cases, why use v3?”

(6) V2.6 messages can be coded by nothing more than Notepad and the standards document.

Yes. So can v3 messages. Not fun, though!

Why this complex process in V3? Since in the end after all this goes through the grinder (Funnel the RIM to a DMIM (actors, story boards, acts, moods….) to a RMIM, CMETs yada, yada, yada, run it through automated tools) we are finally handed over ready message templates for each domain – as is present in V2x.

Yes, as is present in v2. Except these things are not the same. What v3 produces has many layers of quality built into it that v2 doesn’t.

This message (from the end user point of view) is nothing more than the equivalent of a ready-made message from one of the chapters of the V2.6 standard (it is as simple as selecting the appropriate V2.6 message from the standard and going ahead). So why would one waste time, effort and scarce manpower to do the same thing in a more complex way?

Because it’s a better outcome. At this point I should come clean and say, I’m not sure it’s better enough. But it’s certainly better in important ways to do with consistency, and you can’t just ignore that. (Really, at this point, I should spend another hour writing up all this consistency with actual examples, but I’m out of time.)

(7) Has a study been done on the cost overheads of V3 (in $ terms) vis-a-vis V2.6 (including conversion of data, maintaining two standards, training, new apps, etc.)? If so, what is it, and are the costs commensurate?

I don’t know of one. And it would be hard because you’d be comparing apples and some type of bread.

(10) I am sure you agree that the more the parts, the more the chances of a breakdown (or error, or worse still, a SILENT error). Complexity, if it improves outcomes for the end user, would be worth the risk. But how does V3, realistically, improve outcomes for the end user (as compared to V2.6)?

Really, I should’ve answered this point first, because now I’m out of time. Another post, maybe. Or maybe I can get Lloyd to answer this one.

(11) Do you think it would be a good idea to wrap up all the RIM based complexity into a black box (only to be handled by the WGs) and simply present ready message templates to the end user in the form of a ‘V3Simplified’ Standards Document, something similar to a V2x document? (This is, in any case, gradually happening.)

Yes, I do. And I’m involved in a number of the parts of it gradually happening. Design by constraint has its strong points. But we need to internalize its weaknesses as well as its strong points, instead of externalizing the costs. That would certainly help greatly with the problems of V3 you’ve asked about – which are very real. I’ve argued against your points, but that doesn’t mean they aren’t real and don’t have at least some merit. Most of all, v3 has real advantages, but it’s too hard to implement.

One last point: “S”, did the V2/V3/CDA taskforce speak to you?

Grahame

p.s. A final note about anonymity: I decry the fact that “S” feels the need to be anonymous. HL7 needs to be an open organization that runs on merit alone, where all the contributors are free to express their beliefs. There are some jurisdictions around the world where, having chosen a particular HL7 standard, the authorities become intolerant of further criticism, and people who speak out suddenly lose their jobs. (And also the reverse: when some jurisdiction reluctantly and finally commits to one of the choices, a disaffected party or two start criticizing the decision publicly but anonymously in all this wonderful new free media.) I sure hope that no member of HL7 itself supports either of these processes, since they will eat away at HL7’s essential qualities.

4 Comments

  1. Lloyd McKenzie says:

    Well, seeing as I was explicitly invited to respond, I guess I shall 🙂

    First, some comments on the general topic before the targeted question:

    The overall objective of HL7 v3 is, and always has been, improved semantic consistency. If I say “Patient” to someone in the v2 world, they’ll think PID segment and will have a sense of where different pieces of information may get stuffed. If they’ve played in the v2 world for a while, they’ll also know that a given data element might be stuffed into a number of different places and will therefore check several for common (and not so common) variations. But 95% of them will have leapt to the conclusion that “patient” refers to a human. In HL7 v3, as a designer you’re forced to consider these questions more. And as an implementer, the semantics are made more obvious. In v3 I’ll need to clearly identify whether my patient is a Person or a “Non-Person Living Subject” (gotta love that name) or perhaps a choice of one or the other. You don’t get that clarity in v2.

    Now, in a typical v2 installation within a hospital, that question is moot. Everyone working on the interface knows that the hospital doesn’t care for humans, so there’s no need for clarity in the spec. And if you aren’t sure, you can go check with Gary on the second floor and he’ll tell you.

    V3 is designed from the ground up on the assumption that there is no Gary on the second floor, and you may be communicating with applications not just on a different floor, but in a different city or even a different province or state. And your interface is going to be exposed to 100s or 1000s of application instances, each running different versions of different software. And in those situations, enhanced clarity and rigor in the design is essential. Because doing point-to-point customization and transformation would kill you.

    I will agree with Grahame that v3 hasn’t gotten it totally right. It’s still quite possible to create unclear or even outright misleading v3 designs. Even more concerning, it’s possible to get those designs passed as normative standards. HL7 needs to get its processes and governance in place to manage this a lot better. I know many of us are hoping to see that as part of the SAIF roll-out process, though we may see some of it sooner than that too. (Fingers crossed)

    As well, while v3 breeds improved consistency, there’s need for improvement there as well. We’ve got pretty decent consistency within domains, but we still have more inconsistency across domains than is necessary or desirable.

    On the question of “RIM stuff in a black box”, fully in favour. Canada’s built a tool that takes a v3 message specification that contains appropriate “business name” labels and produces an API that totally hides the “RIM-speak”. Point-of-service vendors can take the API, suck in a specification and write their application against a business-friendly object API, totally ignoring all the classCodes, moodCodes and the 7 classes it takes under the covers to represent “patient”. Work on things like Green CDA and some of the alternate ITSs under development aim to do the same sort of thing, but for over-the-wire exchange. All this comes out of a recognition that, while the RIM aspect is essential for quality and consistency and semantic clarity on the design side, it’s only occasionally useful on the implementation side.

    Getting to the specific question I was asked to respond to:
    (10) I am sure you agree that the more the parts, the more the chances of a breakdown (or error, or worse still, a SILENT error). Complexity, if it improves outcomes for the end user, would be worth the risk. But how does V3, realistically, improve outcomes for the end user (as compared to V2.6)?

    I would say it depends. V2 actually has a lot of parts too. Trigger events, messages, segments, fields, components, sub-components (and sub-sub-components where the designers screwed up :>), HL7 and user-defined vocabularies, transport protocols, vertical bar vs. XML encoding. And all of those pieces are stitched together (sort of) by macros in an MS Word document. Maintenance is done independently by different committees who “stick stuff on the end” of whatever already exists. Almost everything is optional, and some of the few things that aren’t optional will get populated with fake data in those circumstances where the implementer determines they don’t make sense.

    One specification is expected to handle “prescribing” and is expected to work for community pharmacy, hospital pharmacy, long-term care pharmacy, oncology, psychiatric medicine, etc., in every country in the world. And so long as you’re generous with your application of Z-segments, you can probably make your data fit. Just don’t expect to interoperate anywhere close to out-of-the-box.

    V3 does have more pieces. However, those pieces are managed by (relatively) robust tools. And they come with a methodology that acknowledges that, while true interoperability can only come within a fairly tight-knit community, you can save a great deal of time by working through successive approximations of taking “International Pharmacy” to “International Prescription Management” to “International Prescription Creation” to “Canadian Prescription Creation” to “Canadian Community Prescription Creation”. A Canadian system will not interoperate with a system that implements “United Kingdom Community Script Initiation”, but it’s very easy to see exactly how they’ve diverged, and if you’re a vendor who needs to operate on both sides of the pond, you’ve got a data model that will allow you to accommodate both sets of needs.

    Sorry, got side-tracked.

    Do more pieces = more risk? Yes, if those pieces aren’t accompanied by robust quality assurance steps and the necessary training and tooling to use them properly. And HL7 has more work to do in this space. However, with appropriate tooling and enforced methodology, more pieces can actually lead to improved safety if those pieces enable improved consistency, clarity and fitness for purpose.

    The choice of v2 vs. v3 needs to be based around what it is you need to accomplish. If you’re working with a domain whose needs are well supported by existing v2 specs (e.g. ADT) and you’re working in a closed environment where it’s possible to talk with your various communication participants and negotiate exactly where you’re going to put a given piece of data, then v3 is severe overkill and will not be cost-effective or worth the learning curve of implementation.

    On the other hand, if you’re looking at implementing a domain that’s not well supported in v2 (clinical trials, genomics, dental claims) and/or are looking at open interoperability in a space where the only communication between the producers and consumers of data will be via the published message, document or services specification, you’re going to be a lot further ahead with HL7 v3. (The second part can be met in some cases with HL7 v2 and a *really* good conformance profile constructed in a tool like MWB, but if you’re going that far, you’re bringing in much of the complexity of v3 anyhow.)

    Agree with Grahame that free and open communication about faults and flaws is always desirable. So long as the conversation focuses on “how can we make things better”, I’m happy to participate.

  2. Thank you, Grahame, for responding to my queries. The responses in your blog have clarified what I have always been stating.
    1. There is no doubt that V3 started out with noble intentions but somewhere along the way became so overly complex that putting it into practice became almost impossible. As you aptly put it, “Everything about V3 is complex, layered, and opaque”.
    Your quote “Design by constraint is a wonderful process for ensuring semantic design consistency” is correct, since it ensures consistency for interoperability, something that the early versions of HL7 V2x (say HL7 V2.3.1) did not have. In fact the core goal of having a “plug and play” interoperability paradigm is what, in my opinion, gave birth to HL7.
    However if we study history we can clearly see how things went wrong. Let me bullet it for simplicity.
    1. HL7 V2.x (say 2.3.1) has too much optionality, so each interface (for interoperability) required an understanding between both the vendors/apps, and therefore “plug and play” was not possible.
    2. V3 was introduced to solve this problem. In theory it was good, since it so tightly constrained each element that “what-was-quoted-was-what-you-got”.
    3. Unfortunately, due to the vastness of the RIM, the combinations and permutations were so many that the effort required to finally create an interface became massive. In fact, so massive that the WGs tried to simplify by trying to do all the work for you and create ready-made templates aka messages (because not many vendors had the guts, time or knowledge to do it by themselves).
    4. This was OK for generic interfaces but did not suffice for every implementation, and vendors still had to talk to each other to interface. Bottom line: still no universal “plug and play”.
    5. Meanwhile the V2.6 guys got clever… they decided to simply copy the good learnings from V3. They put together a conformance chapter in which we simply create a conformance profile by constraining every element in a message – simple and brilliant!
    So the present status is as follows:
    V2.6: You put in a little effort, vendors talk to each other, create conformance profiles, adjust their interfaces and connect up! (And this trick can work between any V2.x versions.) Sadly, no “plug and play”.

    V3: Put in a LOT of effort to create a conformant message, or take a ready message template, work on it to tweak it, then the vendors talk to each other, adjust their interfaces and connect up. Sadly, still no “plug and play”.
    That, I think, sums up the present status.

    In conclusion, I am in agreement with Lloyd and yourself that
    “v3 hasn’t gotten it totally right”…. however
    ” HL7 needs to get its processes and governance in place to manage this a lot better. I know many of us are hoping to see that as part of the SAIF roll-out process”
    ““RIM stuff in a black box”, fully in favour”
    “If you’re working with a domain whose needs are well supported by existing v2 specs (e.g. ADT) and you’re working in a closed environment where it’s possible to talk with your various communication participants and negotiate exactly where you’re going to put a given piece of data, then v3 is severe overkill and will not be cost-effective or worth the learning curve of implementation.”

    So, for the good of all, I do hope, nay, pray, that V3 “black-boxizes” (remember, I coined this word!! 🙂 ) its complexities, becomes robust and simplifies implementation.

    Thank you, gentlemen, for your time and willingness to discuss this important issue. Long live HL7, and may it prosper!
    …and let me get back to my mundane world of strategy, implementation and training.

    P.S.
    Dear Grahame, as an answer to some of your earlier questions: I *am* an HL7 member, I have never been approached or influenced by any group or committee, all of these concerns are mine, we have never met so far (either in Sydney or Brisbane, where I’ve been), and I am presently an executive committee member at HL7 xxx. Last but not least, thank you for the stimulating discussion.

  3. Grahame says:

    Hi S

    I don’t think that v2 conformance statements are that significant. They’re certainly not getting a lot of use (though I’d like to see more). And from the point of view of interoperability, it’s rather like putting lipstick on a pig.

    I’ll illustrate by using ADT as an example. I’ve received many many ADT feeds. A08 messages etc. I’ve got a generic set of code – laced with special cases and arbitrary piecemeal logic – that can suck patient demographics out of pretty much every message I’ve seen. When I get a new interface to do, I start collecting messages, then I eyeball them, and the specification, if I get one, and check whether I think that they’ll process ok. Mostly, things work. If I got a conformance profile – I’d eyeball that too. But I never have. And a couple of times I’ve set out to make a conformance profile of my own – but I always give up due to lack of energy and enthusiasm. A good conformance profile is a lot of work, and no one is asking me for it.

    But ADT works without it. For patient demographics. But not if I need to know *where the patient is at a given moment*. For that – I get to sit down and have long discussions with the patient administration system administrator and vendor about what messages map to what real-world events, and how the clerical staff use the system. And how the master files are maintained, and how clerical errors are fixed. This is a *real mess*, because v2 is all over the place, with hard-to-differentiate events and many different ways to encode information that nearly means the same thing. And a conformance profile would make little difference to me. In fact, I suspect it would be worse, because I’d get more information I wouldn’t care about (which fields are populated) and less about what I really care about – business processes, event mappings, and identifier usage patterns.

    I’d much rather be using v3 for this, simply because I’d get good state machines, semantics etc – and much more consistency. In fact, the harder implementation would make people think it through more. Though it would still not be enough – I’d still have to find out how the clerical staff handle patient leave, errors, unauthorised discharges, etc. So, in the end, plug and play remains elusive, because I haven’t yet seen consistent end user requirements. Everyone wants something different.

    Finally, why not come to a meeting in the USA? Seeing the elephant up close might give you a better perspective on why things are being done.

  4. Lloyd McKenzie says:

    Agree that there is rarely Universal plug-and-play. However, that’s due more to the complexities of “Universal” and “Healthcare” than it is to a deficiency in v3. Coming up with a single specification that has consensus and covers all the vagaries of lab or pharmacy or even ADT that works in all countries in all healthcare environments (hospitals, clinics, vision, dental, psychiatry, veterinary, etc.) is just not achievable. At least not with the limited resources who engage in standards development at the HL7 International level.

    Even CDA, by far the most successful of the v3 specifications, is not “implementable” without further constraint. (Creating a UI that understands all of Clinical Statement at a generic level is just not feasible.) That doesn’t make the process useless. With appropriate constraint, either at the realm level (e.g. the work done by Canada Health Infoway on messaging) or in a narrowly scoped international area (CCD, Structured Product Labeling, etc.) it’s possible to create highly implementable specifications.

    The RIM itself doesn’t give plug and play or anything close to it – and it’s not meant to. The only thing the RIM does is provide underlying semantic consistency, or at least the possibility of it. The RIM and accompanying vocabulary provide the grammar and dictionary from which “constrained sentences” can be constructed.
