v3 has failed? Summary of comments

My post “HL7 needs a fresh look because V3 has failed” generated a huge number of page hits (already more than any other page), and lots of comments, both private and public.

In this post, I’m going to try to summarize the comments. But before I do that – I made two posts in a point/counterpoint style. The first post (v3 has failed) generated 5 times as many page views as the second (v3 has succeeded), and 10x as many comments. And the second post is the only one I actually sent a link out for (to several HL7 mailing lists). Such is life, I guess.

I think that, generally, people in the comments were defensive of HL7 because they felt one or more of the following:

  1. v3 hadn’t failed
  2. v3 was trying to do something impossibly hard anyway
  3. HL7 should be congratulated for having a fresh look task force

With regard to #3 – yes, and that’s a tough ask. With regard to #2 – yes, I thought that too, until I wrote RFH. Then I looked afresh at the v3 specifications, and realized that they make something hard even harder.

As far as v3 failing goes, I said that v3 was a failure because, as it stands, it isn’t a suitable vehicle to take the organization forward. No one really disagreed with that – but there were a number of passionate defenses of the achievements of v3, including

  • CDA
  • SPL
  • several national programs
  • RIM-based EHRs (there are several) or CDRs (clinical data repositories)

Apologies if I missed anything. Of course, a number of people pointed out that the definition of failure is very much in the eye of the beholder. Indeed it is. You can do good things with v3 as it is (as I said myself).

A number of people suggested that HL7 should invest in tooling in order to make the standard successful. I’m very skeptical of that theory. Imagine if OMG started writing software and giving it away to try to make CORBA a success – how could that make any difference? I think that investing in tooling in order to present a better face for an interoperability standard is doubling down. (Please note, this is not a criticism of national health programs for their efforts in this direction. Tooling is not a bad thing, but thinking that tooling solves the underlying problem is. It’s HL7 that needs to beware of doubling down like this.)

Also, people questioned whether repositioning v3 in terms of technology adoption would help. JSON? HTML 5? IDL? Google Protocol Buffers? I don’t know whether that would help or not. It’s an area fraught with difficulty.

Some people think that XML simplification methods (such as greenCDA) will make a big difference – I’ve already talked about that.

There was one particular thread in the comments, from Andrew McIntyre and Gunther Schadow, that I’ll pick up in a later post: the relationship between v2, v3 and archetypes.

3 Comments

  1. Gunther Schadow says:

    I will have to propose this more clearly. Here is what I think HL7 needs. We need to huddle together and create “HL7 lite”: an offering of the best of our stuff in the easiest-to-implement fashion, focusing on what really matters in practice by applying the 80/20 rule, and not making any sweeping changes to things that are already out there and in use or under adoption.

    I said in my last comment under “A comparison of v3 and RFH” that the number one mistake of v3 was not to care about continuity and compatibility – first with HL7 v2, and later not even with itself. The other mistake was not to apply the 80/20 rule. So the remedy has to be to provide stability and continuity, increase cohesion, and strictly focus on the 80% most important cases, ignoring pathological edge use cases.

    In practice this would mean we leave the following in place:

    * RIM,
    * data types (in the single R1/R2 both-ways-compatible schema),
    * CDA r2 / SPL Structured Documents.

    We also keep a limited set of R-MIMs and CMETs where they have been proven useful and where they can be plugged into structured documents (see how SPL R5 and CPM form a modular model with many plug-in points).

    This is not an exhaustive list. In the interest of continuity, we do not pull the rug from under any particular project that is being actively implemented. But we open up easier paths to current implementations.

    There should be no more messing with the XML wire format.

    Instead we show a clear migration path from v2 generic messages (mostly ORU, ADT and ORM) to v3 RIM objects, possibly by publishing transforms. This is to address cohesion.
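
    Just to sketch what such a published transform might produce (purely illustrative – the actual codes, OIDs and element details would come from the transform spec, and namespaces and context are omitted here), a v2 result segment like

        OBX|1|NM|2345-7^Glucose^LN||182|mg/dL|70-105|H|||F

    might land in a RIM observation shaped roughly like

        <observation classCode="OBS" moodCode="EVN">
          <code code="2345-7" codeSystem="2.16.840.1.113883.6.1" displayName="Glucose"/>
          <statusCode code="completed"/>
          <value xsi:type="PQ" value="182" unit="mg/dL"/>
          <interpretationCode code="H" codeSystem="2.16.840.1.113883.5.83"/>
        </observation>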

    The only new thing would be floating a RESTful interaction approach that allows breaking the humongous R-MIMs into smaller pieces and addressing more fine-grained use cases that can be composed in multiple steps.
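
    For illustration only (the paths and resource names here are hypothetical, just to show the shape of the idea), such an approach might expose individual RIM objects like

        GET  /rim/observation/1234          retrieve one observation as RIM XML
        POST /rim/observation               create a new observation
        GET  /rim/patient/99/observations   a fine-grained query, composable with further steps

    instead of forcing every exchange through one giant message R-MIM.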

    What’s the actual change here? On the surface there shouldn’t be any change that would pull the rug from under adopters who are sending and receiving HL7 content.

    But I think we should radically change the way the standard is complexified by its presentation.

    Standards are published in free-form documents (e.g. using ISO document style) from the start. These documents would contain just what’s needed and would not have to follow a mandated content template. Diagrams and descriptions are added to make points clear, but care is taken to describe the essence succinctly in text.
    Each project can choose how to write its documents (this is already happening in HL7; it’s just the v3 stuff that is straitjacketed into this onerous document nightmare). Specifications are written with one clear focus, and multiple documents are written for multiple concerns.

    We can keep the MIF and v3Generator to the extent that they are useful as a generator from models to schemas, a process that generally works (with some necessary fixes for backward-compatibility/continuity, which I have ready and have used for CPM). But we do not make much of these formal descriptions in public; they are not the focus. They may be used by those who care about and understand them, and those who don’t can ignore them.

    We won’t continue with this idea that the balloted stuff is the “HMD”, not the schema. Instead, what’s balloted is what matters, i.e., the wire format XML.

    Overall we should rid HL7 of most of these acronyms and abbreviations. HMD, MT, R-MIM, CMET, domain – all of that stuff should be optionally ignorable; the specifications should tell people how to build some wire format and how to interpret it, and that’s it.

    Vocabulary should be presented in simple lists – none of that unnecessary complexity and overhead. Standardized code sets are shown as lists right in the standards documents; external code systems continue to be referenced by OID, and these OIDs can be shown right in the specification documents (standards and implementation guides) that use them.
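
    For illustration only (the layout is the point here, not the particular codes), a specification page might simply show

        Administrative gender (CNE, code system 2.16.840.1.113883.5.1):
          F    Female
          M    Male
          UN   Undifferentiated

        Problem codes: use SNOMED CT (code system 2.16.840.1.113883.6.96); no list reproduced.

    and nothing more.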

    Not much metadata is necessary, and no big effort should be spent on metadata modeling and tooling. Template ids, realm codes, flavor ids, value-set ids, etc. may be grandfathered in, but people won’t be bothered with having to understand this stuff, and most exchanged instances will work without them (except where grandfathered in).

    The result will be a small starter set of specifications in use today, e.g., CDA r2, SPL r5, e-stability (this is not an exclusive list), described (in their next update) as stand-alone documents that can be read and understood by people with minimal need to read anything else, so that they can dabble in the field with some initial success (the 80/20 rule applies to implementers as well).

    In time, most specifications could be written as implementation guides for CDA, SPL, other Structured Documents (in the CDA r2/SPL style, not an all-new STD r3), any other existing specification that passes initial muster, and/or a generic RESTful RIM object service (using a generic RIM XML schema). In time there would be tons of implementation guides addressed at specific implementers that actually do things, and only a few core specifications. R-MIM based modeling would not be prohibited, but extensions to free RIM XML would be allowed everywhere and controlled by these implementation guides.

    This is about what “HL7 lite” would be.

  2. Lloyd McKenzie says:

    Gunther,

    Agree with some of your ideas, disagree with others. Mapping v2 stuff to v3 is probably useful. However it’s a bit of a challenge when one v2 message maps to so many v3 places. (It’s an even bigger challenge when it took 7+ years to get one of the key targets through ballot approval.)

    I disagree with you that keeping the wire format the same is wise or desirable. The wire format places RIM front and center, forcing implementers to confront it. That was a big mistake with v3 and one that we need to correct. That said, we *MUST* provide a transition path from existing XML syntax to anything that replaces it. Not providing interoperability or migration paths would be a tremendous error.

    Vocabulary as simple lists is great in theory, but totally falls down in practice. Good luck maintaining a single list of the dispensable drug codes in Canada, let alone the orderable drug codes. Many vocabularies are just too big to list. Not to mention the fact that the lists in the UK, Canada, US and Japan would all be different. Furthermore, even if you did list them, the list would need to change weekly to stay current and useful. Lists are absolutely horrible for vocabulary maintenance. However, I agree they’re definitely what implementers want. So maintaining vocabularies using the current complex structured approach, but providing regularly published value-set expansions (aka ‘lists’), would I think be a useful compromise. These could even be delivered via a CTS2 service for applications willing to get them that way, but even FTP download or copy-and-paste from a web page would be better than nothing.
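
    As a sketch of that compromise (layout purely illustrative; no real codes or dates shown), a published expansion might look like

        Value set: orderable-drug-codes, realm CA
        Expansion generated: <date>, replaces expansion of <previous date>
          <code 1>   <display name 1>
          <code 2>   <display name 2>
          ...

    i.e. the expansion is a dated, regenerated artifact that implementers consume, while the structured vocabulary machinery stays behind the scenes.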

    CDA, as is, won’t meet many needs. There’s an essential requirement for transactional behaviors. (“Please create this prescription”. “No, you haven’t managed contraindication X”. “Please create this prescription with managed contraindication X”. “Ok. Created”. “Please assign prescription 123 to pharmacy A”. etc.) The RIM-based structures must continue to be applied to far more than documents to meet the needs of the healthcare industry.

  3. Gunther Schadow says:

    Vocabulary … I never said “a single list”; I meant those small tables of fixed code sets (in HL7 jargon, “CNE” and “structural codes”). Of course drug codes would not be managed in any list like this – we use SPL files to manage the list of available drugs in the US – and in the same way orderable services would be managed through HL7 content. For SNOMED or LOINC you just say “use SNOMED or LOINC here”; no list. An implementation guide may say: find the set of recognized dosage forms here [http://www.accessdata.fda.gov/spl/terminology/dosage_form.xml] – that’s it. Simple.

    CDA in theory may not meet those needs for ordering. But you could do ordering with CDA; all it needs is an implementation guide that establishes the workflow. For instance, FDA uses SPL to transact a “Labeler Code Request” and “Labeler Code Assignment” workflow. It can be done and it works. Since HL7 has choked for so long on this “dynamic model” stuff, this is one area in which HL7 should just quit making general statements and approaches and let individual standard user communities come up with their own simple approaches using the stuff that exists already. There may be 5 or 6 different working approaches developed in practice, and after 5 years one may look at them and find out whether there is any common lesson or any remaining real need.

    And yes, the XML wire format is good enough, and you also need to realize that a significant number of people find a single, simple RIM XML an attractive thing. Instead of stirring up trouble, just show people how easy it is to use the XML format as it is.
