Complexity of Standards

I’m in Singapore this week, speaking at the 2011 Healthcare IT Standards Conference. It’s a real pleasure to be in Singapore meeting many people I’ve corresponded with over the years but never met, and also exploring such a great city. In addition, I’ve had many deep and interesting discussions about how to progress Singapore’s healthcare system needs, international standards, or both. Today I spoke about the lessons I’ve learned over many years in my many and varied roles in HL7 and other standards contexts.

Much of the content was material I have already covered in this blog, such as the 3 laws of interoperability, drive-by interoperability, and requirements for interoperability, but several things were new, and I’m going to post them here.

I’ll start with the diagram that I showed. It’s a rough plot of the internal complexity of the standard (y axis, log scale) against the complexity of the content that the technique/standard describes (x axis).

Some notes to help explain this:

  • Text (bottom left) is my (0, 0) point
  • You can solve more complex problems with any of these techniques than their position on the x axis suggests – but you have to invent more protocol on top of it. (That’s what XML is for!) So the position on the x axis is the innate complexity
  • The actual positions of the particular items will be a comment magnet. But I think they’re generally correct in an order-of-magnitude kind of way
  • What the graph doesn’t show is all sorts of other quality measures, such as breadth, tooling, integrity – there are heaps of other criteria. This is just about complexity.
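
Since the diagram itself isn’t reproduced here, the sketch below redraws the idea of it. The coordinates are purely illustrative guesses drawn from the hints in this post and the comments (Text at the origin, Snomed CT furthest up and to the right) – they are not the values from the original figure.

```python
# Rough, illustrative reconstruction of the complexity diagram.
# The coordinates are guesses, not measurements: Text anchors the origin,
# and Snomed CT sits furthest up and to the right.

import matplotlib.pyplot as plt

standards = {  # name: (content complexity, internal complexity of standard)
    "Text":      (0.0, 1),
    "XML":       (1.5, 10),
    "CDA":       (4.5, 1000),
    "openEHR":   (5.0, 3000),
    "HL7 v3":    (5.5, 10000),
    "Snomed CT": (8.0, 100000),
}

fig, ax = plt.subplots()
for name, (x, y) in standards.items():
    ax.scatter(x, y, color="tab:blue")
    ax.annotate(name, (x, y), xytext=(5, 5), textcoords="offset points")
ax.set_yscale("log")  # the y axis is log, as noted above
ax.set_xlabel("complexity of content the standard describes")
ax.set_ylabel("internal complexity of the standard (log)")
plt.show()
```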

Complexity is an issue of growing importance – we know that we need to have it – the problems we are trying to solve in healthcare are complex, and we can’t make that go away. But this means that we can’t afford to choose approaches that are any more complex than they have to be – which most of the existing approaches are. I’m spending a lot of time thinking about the question of how to move to the lower right of this diagram – RFH is an answer to that, but is there more we can do?

Btw, where does Snomed CT appear on this diagram? Way off to the top and right… I can’t think of anything that would plot further up than Snomed.


17 Comments

  1. Victor Chai says:

    Firstly, I don’t think it is appropriate to include openEHR in the diagram; it overstates its usefulness. In my view openEHR is at most a more structured way to capture clinical requirements; whether and how it can be used as a computable clinical knowledge database is still a question mark – at least I have not seen or become aware of any real implementation yet. The openEHR archetype is attractive; it is openEHR’s greatest strength yet also its greatest weakness, since openEHR currently does not specify detailed rules or guidance for designing archetypes. As the basic archetype design method and its underlying reference model do not provide ‘built-in’ quality assurance or semantic consistency enforcement, the quality of an archetype relies solely on the knowledge of the individual designer.

    In terms of its role in data exchange, it has yet to prove its worth; it is at most a way to logically organize the content extracted from an EHR system, and so far no implementation-level XML schema for data exchange has been publicly published. Of course people will naturally ask why they should go for ISO 13606/openEHR if most of it is already catered for in HL7 CDA.

    If I am not wrong, this is how openEHR is used in Australia: it is just a model to define user requirements, whereas the actual implementation artifact is still HL7 CDA.

    Secondly, I think the gap between HL7 CDA and HL7 v3 is not that wide; it is a prerequisite to understand HL7 v3 in order to implement HL7 CDA, unless the implementers are only interested in level 1 and level 2 CDA conformance.

  2. Hugh Leslie says:

    Hi Victor

    There are now many real implementations of openEHR around the world – in Europe, South America and Asia Pacific, both at national level and in real deployed vendor systems. Queensland Health in Australia has two deployed openEHR systems – a discharge summary clinical repository and an infection control system.

    There are now a number of national programs that are using openEHR for the EHR part of their national programs – Brazil is the latest country to announce this. They have announced openEHR for the EHR and CDA for communication.

    There is a new international organisation called CIMI (Clinical Information Modelling Initiative) that is considering how to select a single modelling formalism for detailed clinical models. The membership of this group includes participants from all the major National programs including Singapore. There are a number of approaches under consideration including openEHR and ADL.

    In answer to the ‘rules governing design of archetypes’ point, I disagree with you. The openEHR reference model makes sure that the models you build are consistent at a technical level for computable semantics. This of course doesn’t stop you designing bad models (any more than the V3 RIM stops you designing bad models).

    In terms of XML schema for data exchange, we use an approach that starts with the logical models at the top, creates a template for an implementation use case and then generates artefacts such as XML schema, PDF, CDA instances, HL7 V2 instances or any other required output. The important thing to understand is that there is a single logical model – the archetype – that drives this process.
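
    (As a minimal sketch of the principle of one logical model driving several generated outputs – the model format and the two tiny “generators” below are invented for illustration; real openEHR template tooling is far richer than this:)

    ```python
    # One logical model, several generated artefacts: a toy illustration of
    # the template-driven pipeline described above. The model dict and both
    # "generators" are invented stand-ins, not real openEHR tooling.

    model = {  # a tiny stand-in for a logical model
        "name": "AdverseReaction",
        "elements": [("agent", "string"), ("reaction", "string")],
    }

    def to_xsd(m: dict) -> str:
        """Emit a (grossly simplified) XML Schema fragment from the model."""
        fields = "".join(
            f'<xs:element name="{n}" type="xs:{t}"/>' for n, t in m["elements"]
        )
        return (f'<xs:complexType name="{m["name"]}">'
                f'<xs:sequence>{fields}</xs:sequence></xs:complexType>')

    def to_form(m: dict) -> str:
        """Emit a plain-text data-entry form from the same model."""
        return "\n".join(f"{n}: ____" for n, _ in m["elements"])

    print(to_xsd(model))   # one artefact...
    print(to_form(model))  # ...and another, from the same single model
    ```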

    The reason that Australia is using this approach is that NEHTA clearly sees that while CDA is the current exchange format, tomorrow it may be something different. Using a logical-model approach makes sure that we are not locked in to one approach that would require extensive re-engineering at a later date. This is similar to the MOHH approach in Singapore, where they are using a 13606-based model for the logical model.

  3. Koray Atalag says:

    Hi Victor, I suggest that you look at the following links. Having worked with openEHR for more than a decade now, I can confidently say that it is definitely implementable (well, at least I succeeded in my own research – see http://gastros.codeplex.com)

    http://www.openehr.org/shared-resources/usage/government.html
    http://www.openehr.org/openehr/shared-resources/usage/commercial.html
    http://www.openehr.org/openehr/shared-resources/usage/academic.html

    Cheers,

    -koray

  4. Victor Chai says:

    #Koray

    Thanks for pointing me to the list. Interestingly, I found the Singapore Ministry of Health on the government list for “Adoption of openEHR 2009 for clinical modelling, terminology”. I have been working with that particular contact person for the past two years, yet I was completely unaware of that, and it was never mentioned by him. I think we should make no further comment on that part, just in case there might be some misunderstanding about the actual engagement.

    #Hugh

    I use the analogy of UML class diagrams and object diagrams to illustrate the point here. An openEHR archetype essentially ‘instantiates’ the underlying Reference Model data structure classes such as ITEM_LIST, ITEM_SINGLE, ITEM_TABLE, and ITEM_TREE, and then constrains the instances with more use-case-specific names, cardinality, data types, value ranges, terminology bindings etc. For example, take one simple openEHR archetype, “Adverse reaction”: it simply lists the data elements such as “agent”, “specific instance”, “reaction category” etc, and the modeler can use any name for the data elements as he wishes.
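
    (To make that concrete, here is a toy sketch – these few lines are not the openEHR reference model or its API, just a stand-in for the idea of constraining generic containers:)

    ```python
    # Toy stand-in for openEHR-style modelling: an archetype constrains
    # generic reference-model containers (here, stand-ins for ITEM_TREE and
    # ELEMENT) with use-case-specific names and cardinalities. Nothing in the
    # "reference model" itself dictates what the element names mean.

    from dataclasses import dataclass, field

    @dataclass
    class Element:
        """Stand-in for an RM ELEMENT: a generic named slot with a type."""
        name: str                     # freely chosen by the modeller
        rm_type: str                  # e.g. "DV_TEXT", "DV_CODED_TEXT"
        occurrences: tuple = (0, 1)   # cardinality constraint

    @dataclass
    class ItemTree:
        """Stand-in for the RM ITEM_TREE container the archetype constrains."""
        elements: list = field(default_factory=list)

    # An "Adverse reaction"-style archetype: a second modeller could
    # legitimately pick different names or a different structure for the
    # same clinical content.
    adverse_reaction = ItemTree(elements=[
        Element("agent", "DV_TEXT"),
        Element("specific instance", "DV_TEXT"),
        Element("reaction category", "DV_CODED_TEXT", (1, 1)),
    ])
    ```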

    Whereas in HL7 v3 modeling, though it follows a similar “design by constraint” approach, there is a subtle difference. The modeler similarly needs to ‘instantiate’ classes from the RIM, and the relationships between these classes are established through an ActRelationship with a typeCode between different instances of Act, or a Participation/Role between an Act and an Entity. Though there are also similar use-case-specific clone names in HL7, the actual meaning and relationships are bound by the underlying vocabulary as defined by HL7, especially for the structural attributes such as the classCode of the Act class and the typeCode of the ActRelationship class. So for example we can use the typeCode “MFST (Is Manifestation Of)” to say that the source is a manifestation of the target (for instance, source “hives” is a manifestation of target “penicillin allergy”). So in HL7 v3 modeling, the clone names may be different, but the underlying meaning and structural relationships are enforced by the RIM. Though the same information can be modeled differently in the RIM, due to the structural constraints the RIM places, the differences are not that great, whereas in openEHR the differences between different modelers can be huge.

    The semantic relationships between pieces of information are important for efficient and consistent querying; e.g. in HL7 v3, even though the clone names might be different, implementers can safely query using the structural attributes, such as using the typeCode in ActRelationship to find all the allergens that will cause a specific reaction.
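
    (A minimal runnable sketch of that kind of structural query – the XML below is a simplified, hypothetical v3-like fragment for illustration only, not a conformant HL7 v3 instance:)

    ```python
    # Querying by structural attribute: whatever the clone elements are
    # named, the MFST typeCode identifies the manifestation relationship.
    # The XML is a simplified, hypothetical v3-like fragment.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <record>
      <allergyObservation classCode="OBS">
        <value displayName="penicillin allergy"/>
        <sourceOf typeCode="MFST">
          <reactionObservation classCode="OBS">
            <value displayName="hives"/>
          </reactionObservation>
        </sourceOf>
      </allergyObservation>
    </record>
    """)

    # Walk every element; select relationships by their structural typeCode,
    # ignoring the (freely chosen) clone names entirely.
    for el in doc.iter():
        if el.get("typeCode") == "MFST":
            manifestation = el.find("./*/value")
            print("manifestation:", manifestation.get("displayName"))
    # -> manifestation: hives
    ```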

    In openEHR, though the modeler can choose to assign formal semantic meaning via terminology to each data element, the semantic relationships between the elements within an archetype are not representable in openEHR. (Note: the LINK class in openEHR is only used to establish relationships between archetypes, primarily at ENTRY or COMPOSITION level, so it is not applicable here.)

    That’s what I mean by saying that in openEHR “the basic archetype design method and its underlying reference model do not provide ‘built-in’ quality assurance or semantic consistency enforcement, so the quality of an archetype relies solely on the knowledge of the individual designer.”

    • Hugh Leslie says:

      Hi Victor

      I agree that it’s overstating the case to say that Singapore has been using openEHR modelling. They certainly looked at it early on and still have licenses for openEHR tooling; however, currently they are using a 13606 modelling approach for the LIM models that they are building.

      We can certainly have a discussion about the relative merits of the HL7 RIM over the openEHR reference model. I would disagree with you completely that the V3 RIM constrains how you model something – if you look out there, there are many examples where the same thing has been modelled in many different ways. I would also challenge you on the idea of safely querying based on attributes alone – one of the issues with the RIM is that there are so many possible combinations of attributes – literally billions – most of which make no sense. This would be a nightmare for querying in real systems, where it’s the computers that need to make the decisions. I also think that your argument is academic – there are very few V3 systems in existence, and none that I know of would query in this way. All the systems that I know of decompose CDA into some known environment and then use standard query technology.

      openEHR has a query language as part of the specification, and you can query on relationships between models quite easily. You can model relationships between elements if you need to, using FOPL, but we haven’t had a serious need in any implementation that I have seen yet. Links can be used at ANY level to link related data together.
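
      (As an illustration, here is the well-known blood-pressure example of an AQL query, shown as a plain string – how the string is submitted to a server is implementation-specific and deliberately left out:)

      ```python
      # Sketch of an openEHR AQL query (the well-known blood-pressure
      # example). Paths are bound to archetype node ids (at0004 = systolic),
      # so the query is portable across systems that share the archetype.
      # The client machinery for executing it is omitted here.

      AQL = """
      SELECT obs/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude
      FROM EHR e
      CONTAINS COMPOSITION c
      CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.blood_pressure.v1]
      WHERE obs/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude > 140
      """
      ```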

      HL7 V3 is in a bit of trouble as a standard as it has been so poorly adopted. This analysis by Wes Rishel in the current Gartner Hype Cycle report says it all:

      “The V3 Messages (V3M) standard was conceived as being the direct analog of the Version 2 Messages standard — that is, XML documents designed to support the information necessary to be transferred when a specific event occurs, such as a patient being admitted or a lab result being approved for delivery to the provider. … we describe the messaging part of the V3 suite as “obsolete before the plateau.”… uptake has been so small and narrow that we cannot justify leaving V3M on the Hype Cycle.
      Business Impact: V3M will not have impact except, perhaps, in a few locales.
      Market Penetration: 1% to 5% of target audience”

      This of course does not apply to CDA, which is the only successful part of the v3 standard at the moment, though it is itself undergoing some changes with the greenCDA movement etc.

      The point here is that it’s not correct to say that all is rosy with HL7 and we don’t need anything else. openEHR is continuing to be used in many parts of the world and is steadily gaining ground.

      • Lloyd McKenzie says:

        Neither HL7 v3 nor OpenEHR has the sort of market penetration its proponents might desire. I suspect that if Gartner were to estimate a penetration for OpenEHR, it would be even lower than HL7’s, due to Gartner’s focus on U.S. implementation (where it’s XDS and CDA or bust!). We can quibble over exactly what the “target audience” for v3 is (or OpenEHR for that matter), but I’m not sure there’s much point.

        The key point of Grahame’s diagram – that HL7 v3 has slightly more semantic depth and a decent jump in difficulty over OpenEHR – is reasonably accurate. (I might argue for a few more millimeters of semantic depth for HL7 due to its vocabularies, but again, not something to argue over.)

  5. Victor Chai says:

    Hi Hugh

    I am not going to comment on whether it is overstating the situation of openEHR adoption in Singapore, since I am not the contact person.

    As for your comment that my argument about RIM-based query is academic, I can confidently say that this again stems from a lack of basic understanding of the HL7 v3 RIM. I have personally designed and implemented an HL7 v3 RIM application in Singapore that went into production more than four years ago, and there are at least three HL7 v3 RIM based applications running in Singapore. I can even personally demonstrate a RIM based query to you if the situation permits.

    In fact I have never tried to paint an all-rosy picture of HL7 v3; there are areas to improve to make it simpler to adopt, as you may know from my opinions within the HL7 community as part of the HL7 Fresh Look task force.

    Each has its own strengths and weaknesses, and these should be clearly stated instead of painting an all-rosy picture that confuses would-be adopters; at least HL7 v3 is open to all public scrutiny. In fact my assessment of HL7 v3 and openEHR is based entirely on my own investigation of all the available specifications and open source code, drawing on more than 15 years’ experience as a software architect and hard-core Java developer; I walked through the whole cycle from modeling and implementation all the way to saving data to and querying data from the database. I do not come to judgements based on what other people say.

  6. Peter Jordan says:

    If I may return to the original topic of comparing complexity… 🙂

    Perhaps it might be more worthwhile to restrict this to directly-related technologies. Stating that CDA is more complex than one of its constituent elements, XML, and less so than the group it forms part of, HL7 v3, is (perhaps) a little too obvious?

    To this end, I think that JSON v XML, REST v WS/SOAP, XDS.b v RLUS, HL7 v3 v openEHR and SNOMED v Other Coding Systems may be more illuminating discussions.

    For example, it would be fascinating to see a detailed comparison between HL7 v3, openEHR and (of course) RFH/FHIR, examining the layers supported (data types, application models, reporting tools, serialisation/payload formats) and the core technologies incorporated in each one.

    Separation of concerns being, of course, one way of reducing (or handling?) complexity.

  7. Grahame Grieve says:

    #Peter, I’m sure you’re right. But it’s not something I’ll be doing right away 🙂

  8. Hello Grahame

    This was an interesting read. Can I just ask, exactly what do you mean by “Complexity” and “Difficulty”? (i.e. what are their definitions?)

    In general, of course, they make sense, but we have to make sure that an objective judgement is made across those descriptions (on your first figure). A more precise definition also becomes very important when you actually get to quantifying these two qualities.

    Incidentally, some time ago I posted this to the openEHR mailing list (http://www.openehr.org/mailarchives/openehr-technical/msg05900.html) and have since made some progress, at least on the quantification part (which, as it seems, is the easiest problem to tackle).

    The most difficult thing (at least for me 🙂 ) has been the fundamental dissimilarities in the internal structure across the models.

    Of course, this is “desirable” because it is where each model draws its performance gain from. What I mean exactly is that it is extremely difficult to derive one formula (or algorithm, or metric) for the calculation of some features that can be applied in exactly the same way across a textual description and a much more “complex” XML description. There are plenty of little tricks and alternatives, but at the moment they seem like tiptoeing around the problem rather than a real solution. So it is better to keep looking rather than fudge something that looks like a solution but would only confuse things more. Nevertheless, this is a very interesting subject.

    • Grahame Grieve says:

      I don’t want to say exactly what I mean. To go further than I did – laying out the principle with illustrative diagrams – and be quantitative – that would take actual real work and research.

  9. Thomas Beale says:

    #Victor
    I can tell from your comments that they stem from misunderstandings of people in the NHS CfH group and/or certain openEHR detractors in HL7 who for some reason like to try and paint the openEHR modelling stack as something to do with UI requirements (one of the things it is hardly used for at all). You said:

    “In openEHR, though the modeler can choose to assign formal semantic meaning via terminology to each data element, the semantic relationships between the elements within an archetype are not representable in openEHR.”

    This is completely untrue. The meaning of every element is known, including across elements within containers (e.g. CLUSTERs). The meaning of every connection is codable.

    Not only that, but querying is based on the codes, and the AQL query language is actually the only extant approach I know of that allows semantically reliable and portable queries to be built against domain content models.

    #Peter Jordan
    In addition to this comment, I would say that it doesn’t really make sense to plot terminologies and structural systems like openEHR and HL7 v3 on the same graph; the meaning of ‘semantic depth’ is quite different in the two cases, and the general idea is that structural and terminological systems are used together to achieve the total needed semantics (obviously Grahame knows this; he was probably just trying to keep things simple for illustration).

    #Grahame
    I do wonder if Snomed (and terminology/ontology in general) should not be even further up the y-axis, even with the log scale 😉

  10. Wang Jinsong says:

    The fact that openEHR/13606 is used for user requirements shows that its scope can be well-matched to clinical thinking in a specific area. I would definitely rate HL7v3 in the far top right quadrant, and put CDA in the middle.

    I look forward to seeing more choices in the lower right quadrant, whether from new products from HL7 such as RFH/FHIR or SNOMEDlite, or from the semantic web (HTML/XML space).

  11. Tim Benson says:

    I want to come back to Grahame’s views on SNOMED. SNOMED relies on description logic, which in itself is not really very complex. OK, it has complexities, but simple things are simple and complex things are more complex.

    SNOMED also has the Concept Model, which is largely applied common sense. In comparison, I think you could argue that the data types used in HL7 V2 or V3 are at least as complex as SNOMED.

    Of course if you add SNOMED to the complexity of existing data types you end up with a combinatorial explosion, but maybe the way forward is to introduce one or two new simple datatypes which are specific to use of SNOMED expressions.
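
    (As a sketch of what such a SNOMED-specific datatype might look like – this is hypothetical, not from any HL7 specification; the codes are the standard “fracture of bone” example from the SNOMED CT compositional grammar:)

    ```python
    # Hypothetical sketch of a simple, SNOMED-specific datatype: a
    # post-coordinated expression as a focus concept plus attribute
    # refinements, rendered in SNOMED CT compositional grammar.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Concept:
        code: str
        term: str

        def __str__(self) -> str:
            return f"{self.code} |{self.term}|"

    @dataclass
    class SnomedExpression:
        focus: Concept
        refinements: dict = field(default_factory=dict)  # attribute -> value

        def __str__(self) -> str:
            if not self.refinements:
                return str(self.focus)
            refined = ", ".join(f"{a} = {v}" for a, v in self.refinements.items())
            return f"{self.focus} : {refined}"

    # "fracture of bone" refined with a finding site of "femur":
    expr = SnomedExpression(
        Concept("125605004", "fracture of bone"),
        {Concept("363698007", "finding site"):
         Concept("71341001", "bone structure of femur")},
    )
    print(expr)
    # 125605004 |fracture of bone| : 363698007 |finding site| = 71341001 |bone structure of femur|
    ```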

    • Grahame Grieve says:

      hi Tim

      I like description logic. It’s pretty straightforward to implement. But you have to understand much more than description logic to make Snomed useful – heaps more. That’s where the complexity comes from – the subtlety that Snomed produces. And I’m not sure how a simpler CD could make any difference to complexity – particularly if you read the next post.

  12. Callum Bir says:

    Hi Grahame

    I am assuming that when you place HL7 v3 on the chart up and to the right, you are referring specifically to v3 messages?

    I am also keen to get your further insights into the incremental complexity from CDA to v3. By CDA, I am assuming you mean CDA level 3.

    I suspect the effort is directional, going from CDA to v3. My experience is different, as I had been implementing v3 messages for years before starting on CDA, initially with CCD. I did not necessarily find CDA considerably easier – certainly not by the logarithmic scale shown on your chart.

    PS. I liked your recent blog on “Just what is HL7 v3?”

    Callum

    • Grahame Grieve says:

      Yes, I was referring to v3 messages (like how I made a post later about being specific about that…)

      v3 is more complicated than CDA for a number of reasons – much more comprehensive models, multiple schemas, no graceful fallback to the narrative, much more dependency on engaging with design by constraint… much more complex than CDA
