Monthly Archives: February 2012

Good Exchange Specifications: Interoperability vs Intraoperability

There are two very different approaches to getting systems to exchange data.

Interoperability

This way, you get the different system designers to sit around a table, on the basis that they will agree on a common syntax and data model that allows all of them to exchange what needs to be exchanged, without requiring any design changes to the way their systems work – whatever is done can be done on the periphery. What can be done is therefore constrained by the fact that none of the participants are willing to make changes to the way their products work. Initially, the outcome is the lowest common denominator of the way that the systems function – all systems are constrained to the dumbest system. Of course, that doesn’t deliver the things that are needed, so the smarter systems need to come up with “extensions” to the basic model so they can do smarter things.

The deficiencies of this approach are obvious. But there’s another way…..

Intraoperability

In this other approach, everybody sits down around the table and agrees how the systems should work, and then everybody goes off and makes their systems work that way – they rework the core of their systems to function in the agreed way. Because all the systems work the same way, exchange between them is easy and straightforward. I call this “intraoperability”, because it’s about operations within a sandbox. There’s no lowest common denominator, and there’s no need for extensions or custom negotiation.

The second way has fewer deficiencies, but they are much bigger: it’s much harder to get agreement – definitely harder to agree on what the one model everyone must follow is, and orders of magnitude harder to get the system designers to sign on to the approach at all, because everyone wants to do things their own way. Typically, at this point, the system designers (usually vendors) get the blame. But I don’t think it’s as simple as that – vendors do whatever sells, which is whatever the purchaser wants to buy. The business model for intraoperability rarely works out, because it’s standards writ large – a big price for a big payoff, but who’s going to be paying the price? (As usual, not the ones who benefit.)

Application

So both approaches have strengths and weaknesses. In practice, of course, HL7 pursues interoperability, while openEHR is about what I have called intraoperability. Both HL7 and openEHR are pragmatic though, and they compromise for practicality. All the time I’ve been working in HL7, there’s been this mantra: we do not dictate how applications function, only how they exchange data as perceived externally. This often comes up in committee when we are debating design choices. But on the other hand, vendors know that the requirements that lead to HL7 designs are their requirements too, and that HL7 designs shape how requirements are understood and represented over the years – so in effect, HL7 standards do dictate some aspects of design to systems. In a similar vein, while openEHR chases intraoperability, it has made pragmatic decisions about extensibility and design to cater for variance between systems.

This philosophy is tied to the respective business models: HL7 was formed as a vendor consortium (later joined by providers and governments). All of them have massive investments in existing systems. It generally takes a vendor 10 years or so to effectively redesign production healthcare systems, and a government might take 20 years to effect a change across its installed base. So HL7 is driven to pursue interoperability, with a very hands-off approach to existing systems. On the other hand, openEHR was about intraoperability from the start, so it had to be an opt-in open source consortium, with a gestation period of at least a decade.

And now – openEHR has very nice stuff (content models, tools, ecosystem, community), but still very limited buy-in. I am watching with interest to see whether the intraoperability approach can scale into being a standard for real world exchange (and since Tom & Co read my blog: that’s not about architecture, technology or models, but about governance and buy-in. A few countries are trying it out; I’m waiting to see how it goes). As for HL7, I think that v3 intended to deliver the benefits of intraoperability without actually paying the piper up front and getting global agreement in the systems. And I don’t think we have to wait any longer to determine how that went.

So what’s the right standard? Should HL7 be more dictatorial to system designers in order to get better interoperability? Now of all times, is this the worst time or the best time?

Comments on the senate enquiry on the PCEHR

Going through the recent transcript of the Australian Senate enquiry into e-health, I found this from Vince Macaulay, speaking as a representative of the MSIA (I am also a member).

Even in the NEHTA specification as it is there is no intention to audit or manage that medication list, so until the terminology is in place it is going to mean that that medication list is going to become a major mish-mash of information from different people and different places and description of medication using different names. It is going to be extremely confusing. By going to a representation of the actual report from a prescription or the pharmacy dispensed record—by putting those into the PCHR—you are actually presenting the information in a readable format that the clinicians can understand, and it gets around those problems with terminology that we can move to at a later time. But in that initial phase, if we have that simplified format, we have safety risks that might not be involved otherwise.

Now I have a great deal of respect for Vince – he’s a clinician, a scientist, a standards producer, a software developer, and has owned his own medical software business. That’s a powerful combination that gives Vince great insight into the health interoperability space – but I’m going to disagree with him anyway.

There are several problems here:

  1. Vince is correct that there’s no intention to audit or manage a medication list as part of the PCEHR. A managed medication list is the holy grail of EHRs, of course, but it’s one of the highest mountains to climb, and so the PCEHR isn’t going there for the foreseeable future. So when he says “that medication list is going to become a major mish-mash of information from different people and different places and description of medication using different names”, what does that mean? There won’t be a medication list to be a mish-mash. There will be a series of documents that each make a set of statements about a patient’s medications from a different perspective. And that will be confusing – but less confusing than it is now, when those documents aren’t even available. (Actually, Vince implies that this is about pharmacy dispense records, but as I read the ConOps, these aren’t even in the PCEHR)
  2. The second problem is that there’s again a misunderstanding of CDA and the notion of a single document that contains both presentation and structured data. The implication above is that the PCEHR is using structured data, and that the problems this will cause could be resolved by not sending structured data, and using a human readable form for clinicians instead. But this is a false dichotomy – CDA includes the human readable form. That’s one reason why the PCEHR is built using CDA documents. Structured data is added to foster the gradual growth of secondary reuse and clinical decision support. Few people have any illusion that this reuse will happen from day #1 – to get good use, you need good data, and clinicians will need to do better record keeping to enable it. While there will be many documents with properly populated structured data (and some with SNOMED/AMT too) from day #1, the willingness to use the data, and to put effort into improving the other documents, will only come when clinicians see useful outcomes. CDA documents allow us to have human readable documents, and then to gradually grow into populating and using the data without changing the technical base.
  3. It would be great if the health industry as a whole would take up AMT quickly. That it hasn’t says a little about AMT and a great deal about the health industry (per my previous point, and also see my previous post about the value of standards for part of the reason, and this one too). But even if everyone was using AMT (or any other medications coding system), that still wouldn’t mean we could build a coherent medication list. Even if we got every dispense entirely and correctly coded, and every administration, we still won’t have a useful medication list unless we can get right up close to the patient and somehow automatically know what they actually take. In the meantime, a useful medication list has to be built the hard way, by a human. Properly coding this with AMT etc would be useful and helpful for building it, but it isn’t the biggest problem.
  4. The thing I really wanted to comment on was the underlying assumption that I commonly run into when developing interoperability specifications. The thrust of the argument here is that if the NEHTA specifications provide structured data, this will lead to unsafe medication lists. That’s because implementers who aren’t at the table are obviously too dumb to figure these problems out for themselves. Now it’s certainly possible – implementers can go out and do amazingly stupid things (btw, in my experience this is often because some clinician asked for it). But this argument, which I see fairly often in lots of contexts, is that because it’s possible for a feature to be implemented badly, it shouldn’t be present at all, whatever functional outcomes might be possible from it. At heart, this is a governance problem. There’s a natural desire to control things – that’s what drives people to play in the standards space. And people take their duty to clinical safety seriously too. But in most contexts, it’s wrong to keep useful stuff out because it might be abused. In this case, the limitations of the structured data for medications are well understood, but there are still legitimate uses for it.

Vince is a friend, and as I said, knows most of this stuff very well indeed. I can only assume that the pressure of being at the senate enquiry and making statements, rather than having a useful discussion, led to Vince not saying quite what he meant. I still thought it was worth commenting on though. And I certainly disagreed with what was reported as the official MSIA position at the senate enquiry – that the PCEHR should be built on PDF documents. Though having read the transcript, I’m not sure that’s quite what the MSIA said to the enquiry – perhaps that was just what the media reported.

p.s. I’m looking forward to an active comments thread on this one, but note that with regard to the PCEHR, many of the most knowledgeable people (all NEHTA employees) are not allowed to comment here. There’s a lot I’m not allowed to comment on either – else I’d have taken up much meatier issues from all the comments at the Senate enquiry.

Good Exchange Specifications: Microsoft vs Apple

One of the early choices you have to make in building a specification is around how to leverage your domain analysis. It’s a question of how you use your story boards. There’s the Apple way, and there’s the Microsoft way.

The Apple Way

The Apple way is simple: you document your story boards, and then you develop a solution to the story boards that you agreed to. You’re going to produce a tightly crafted, simple workflow/product that solves those story boards very effectively. Inasmuch as you’ve covered their workflow, the users will love the outcome, and it’ll just work for them. If you haven’t covered their workflow, well, they just ain’t going to be a fan-boy.

The Microsoft Way

This isn’t so simple: once you’ve documented your story boards, then you generalise the things you see, and solve the general case. You’re going to produce a flexible robust product that can be tailored to all sorts of usages you never imagined. Users will never love your products, but they’ll keep using and buying them, because they can do what they need.

Btw, I learnt about these two “Ways” by personal revelation in a vision from both Steve and Bill, so you can take them as gospel. Of course not – this is just how I feel they work, based on using their products extensively. And, of course, this is a stereotype. Anyone who’s tried to teach their grandma how to shoot rogue applications on their iPhone knows that Apple can produce some spectacularly bad UI as well. And I’m sure I’ve seen really tight, focused and easy-to-use UI from Microsoft somewhere – but by and large, these stereotypes have held true for a long time.

The same two ways can apply to standards – you can do your business analysis, and solve just that problem in the standard. It’ll be easy to use, fit for purpose. It’ll just work. That is, it’ll just work when the story boards (etc.) fit the implementer’s problem. If they don’t… well, there’s always another standard. If you generalise the requirements, then your standard will always be more complicated than any single user wants – but at least it’ll work (all right, at least there’s a high chance it’ll sort of work!).

Straw Poll in the comments: which works better?

What’s a Good Exchange Specification?

There’s a lot of different standards for exchanging healthcare information out there. Here’s one reason from the ever funny XKCD:

But there are actually some very good reasons why there are so many competing standards. Well, maybe there are lots of solid reasons, even if they aren’t good reasons – and they won’t go away just by wishing they weren’t there.

The problem is that there are lots of different perspectives on what makes a good exchange standard. I’m going to make a series of posts describing some of the issues. I’m going to make them as fun as possible, but they are all real issues.

 

An API is just another exchange specification?

Because the specifications for data exchange are so hard, a number of government projects around the world are focusing on releasing libraries that wrap the complexities of the standard behind a local implementation that is easier to use. At least, that’s the theory. (Or, alternatively, vendors provide such libraries – sometimes commercially, with a government subsidy.)

Here are a couple of examples from Australia:

  • the NEHTA reference platform
  • the Medicare Australia libraries

There’s plenty of other examples, but that’ll do. At HL7, we’re doing a lot of serious thinking about how to develop a standard – about what makes a good standard. (It’s going to be a bit of a theme on my blog for a little while.) At the recent San Antonio meeting, someone proposed to me that the focus shouldn’t be on making the standard easier to understand or implement, but on providing wrapping libraries that do that for you.

The basic notion here is that you can wrap the complexity of the standard behind a simple API that dramatically reduces the time to implement the specification. Me, I’m not convinced. Obviously, there are some pragmatic issues, such as which platforms you’re going to provide support for (Win32? OSX? iOS? Unix (which)? Java? DotNet? LAMP?), whether you can provide an architecture that is usable in a variety of application architectures, and how you support it. But putting those practical issues aside, why does it make implementation easier/quicker?

As far as I can see, here are the reasons to think that an API is easier to work with than an exchange specification:

  • An API is a fine grained interface, while messages are coarse grained. Fine grained interfaces are easier to work with for a programmer (are they?)
  • A library can do local validation as the content is built – but I don’t know why it can do more validation than otherwise could occur on the message content
  • A library can integrate the bits of the data exchange format with platform dependencies (e.g. network and particularly digital signatures, yuck!)
  • The API can hide needless documentation and syntactical complexity in the standard
  • The API covers a narrower set of use cases than the exchange standard

The last two points are the important ones – if the standard is needlessly complicated, then you can hide that busy work behind an interface – but the library can’t hide the real issues. If a particular data element has to be coded with the correct code, or if you have to implement a particular workflow, the API can’t do that for you. That’s where the real challenge is – or should be.
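To make that concrete, here’s a minimal sketch of the kind of wrapper such a library might offer. Everything in it is hypothetical – the names, the toy output format (it isn’t CDA), and the placeholder code – but it shows the split: the wrapper can take the syntactic busy work off your hands, while the correct codes and the workflow decisions remain the caller’s problem:

```python
# A sketch only - entirely hypothetical names, not any real NEHTA or Medicare API.
from dataclasses import dataclass

@dataclass
class Medication:
    code: str      # the caller still has to pick the correct code (AMT, SNOMED CT, ...)
    display: str   # and a display name that actually matches it
    dose: str

def build_discharge_summary(patient_id: str, meds: list[Medication]) -> str:
    """Hide the syntactic busy work: envelopes, namespaces, mandatory boilerplate."""
    entries = "\n".join(
        f'  <medication code="{m.code}" display="{m.display}" dose="{m.dose}"/>'
        for m in meds
    )
    # The wrapper can emit well-formed output every time...
    return f'<dischargeSummary patient="{patient_id}">\n{entries}\n</dischargeSummary>'

# ...but it can't tell you that the code is wrong for the drug, or that the list
# hasn't been reviewed by the clinician - the semantic and workflow problems stay
# exactly where they were.
doc = build_discharge_summary(
    "12345",
    [Medication(code="example-code-001", display="paracetamol 500 mg tablet", dose="1-2 every 4 hours")],
)
print(doc)
```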

But the most important issue is that very often the API covers a narrower set of use cases. This arises because the designers of the API can say “oh, that’s an advanced use case – if they want to do that, they’ll have to use the exchange format directly”. I think that this is the biggest reason why an API in front of the data exchange format can be easier to use. (Actually, greenCDA is just another case of this.) Whenever this happens, the question should always be asked of the exchange format – does it cater for unneeded use cases? (The answer might be either yes or no.)

But for me, in practice, an API is just another exchange format that I have to deal with – this data item goes here, that one comes from over here, etc. It’s only easier if the API is well designed. It sometimes turns out to be harder, because the simplifications don’t work for me, and because bugs in the API are really hard to diagnose and resolve. Any vendor who’s used the Medicare Australia libraries will know what I mean there. I wonder – which do vendors really prefer? I myself prefer not to use an API, but I note that most vendors working with NEHTA are choosing to use the NEHTA reference platform.

Response to Critical Safety Issue for the PCEHR

While I was on leave at Tamboon Inlet (and completely off the grid), Eric Browne made a post strongly critical of CDA on his blog:

I contend that it is nigh on impossible with the current HL7 CDA design, to build sufficient checks into the e-health system to ensure these sorts of errors won’t occur with real data, or to detect mismatch errors between the two parts of the documents once they have been sent to other providers or lodged in PCEHR repositories.

Eric’s key issue is that

One major problem with HL7 CDA, as currently specified for the PCEHR, is that data can be supplied simultaneously in two distinct, yet disconnected forms – one which is “human-readable”, narrative text displayable to a patient or clinician in a browser  panel;  the other comprising highly structured  and coded clinical “entries” destined for later computer processing.

It’s odd to hear the central design tenet of CDA described as a “major problem with CDA”. I think this betrays a fundamental misunderstanding of what CDA is, and why it exists. These misunderstandings were echoed in a number of the comments. CDA is built around the notion of the twin forms – a human presentation, and a computer-processable version. Given this, how the two relate to each other is an obvious issue, and I spend at least an hour discussing it every time I do a CDA tutorial.
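For anyone who hasn’t sat through one of those tutorials, here’s a small sketch of what the twin forms look like from a programmer’s point of view. It uses only the Python standard library; the file name is an assumption for illustration, but the structure it walks – each section carrying a narrative text block alongside zero or more coded entry elements – is the CDA design being discussed:

```python
# Sketch: list the narrative and the coded entries side by side for each
# section of a CDA document. Assumes a local file "discharge_summary.xml".
import xml.etree.ElementTree as ET

NS = {"cda": "urn:hl7-org:v3"}

tree = ET.parse("discharge_summary.xml")

for section in tree.iter("{urn:hl7-org:v3}section"):
    title = section.findtext("cda:title", default="(untitled)", namespaces=NS)
    # The human-readable form: what the clinician sees and attests to.
    text = section.find("cda:text", NS)
    narrative = "".join(text.itertext()).strip() if text is not None else ""
    # The machine-readable form: coded entries destined for later processing.
    entries = section.findall("cda:entry", NS)
    print(f"{title}: {len(narrative)} chars of narrative, {len(entries)} coded entries")
```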

Eric complains that clinicians have no way to inspect how the data and the narrative relate, nor is there an algorithm to test this:

However, the critical part of the document containing the structured, computer-processable data upon which decision support  is to be based is totally opaque to clinicians, and cannot be readily viewed or checked in any meaningful way

This is true – and it would be a whole lot more concerning, except that it is true of all the forms of exchange we currently have – it’s just that they don’t have any human-readable fallback to act as a fail-safe check. Of course, in a perfect world, this wouldn’t be necessary. The data elements would be clearly and unambiguously defined, everyone would agree with them, no one would use anything extra, and all the implementations would be perfect. This is not the world we live in – it’s a pure governance fantasy, but one that some of Eric’s commenters share:

I can’t imagine going into any true IT organisation and proposing storing the same information constructed in two different ways in the same document, and with no computable requirement to have them synchronised (Andrew Patterson)

My initial response was, no, of course not. But in actual fact, this is ubiquitous in health care, and CDA is designed for the reality that we have. Note that CDA is designed for exchange, for the cracks between systems that cannot or might not agree on data fields. People might not like that, but the PCEHR is very much living between the cracks of the existing clinical systems, unless we replace all of them now.

CDA itself doesn’t have much to say about the relationship between the data and the text. It implies that there must be one, but because CDA covers such a wide variety of use cases, CDA itself doesn’t make the rules; instead, it delegates them to CDA implementation guides. And a number of the NEHTA implementation guides do exactly that, in response to the same concerns Eric expresses.

Back to Eric’s concerns:

Each clinician is expected to attest the validity of any document prior to sharing it with other healthcare providers, consumers or systems, and she can do so by viewing the HTML rendition of the “human-readable” part of the document… However, the critical part of the document containing the structured, computer-processable data upon which decision support  is to be based is totally opaque to clinicians, and cannot be readily viewed or checked in any meaningful way.

Whereas now, with HL7 v2, they can’t see it, and can’t attest to its validity at all. Instead, they must trust their systems, and there is no human-to-human fallback position at all. With CDA, they still must trust their systems, because they still can’t see the data portion – that is no different. But they also have a level of human-to-human communication that doesn’t exist with v2. CDA solves this problem, but it does not solve the fact that we still have to trust the systems to exchange the data correctly to get computable outcomes (aside: I’m far from convinced that the clinical community wants more than a modicum of computable outcomes at the moment).

Of course, this still leaves the question of whether the data and the narrative agree with each other or not. The important thing to consider in this regard is how you build a document. How would you actually build a document that contains narrative and data that disagree with each other? Once you start pursuing this question, it becomes clear that a system or clinician that produces CDA documents where the narrative and data disagree has a serious underlying problem. Note the emphasis on clinician there. In a genuine clinical system producing genuine clinical documents, the system can’t prevent clinicians from producing incoherent documents – it’s up to the clinician. It seems to me, from watching the debate, that whether you think that is good depends on whether you’re a clinician or not.

I’ll illustrate this by describing two scenarios.

  1. A (e.g. pathology) system produces CDA entirely from structured data. The document is produced in the background with no human interaction. In this case, how can the narrative and the data disagree? Well, if a programmer or a user misunderstood the definitions or intended/actual usage of the data items.
  2. A (e.g. GP) system generates a CDA document from user-selected data in the patient record using a clinician-defined template, loads the section narratives with their underlying data into an editor, and lets the clinician edit the narrative (usually in order to add additional details or clarifications not found in the structured data). In this case, the narrative can disagree with the data if the user edits the narrative so that it disagrees with their own data.

In either case, there is an underlying problem that would not be detectable at the end-point were only the data provided. CDA can’t solve these problems – but the fact that CDA contains both narrative and data doesn’t create them either.
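To make scenario 1 concrete, here’s a minimal sketch (the field names and wording are made up for illustration) of a narrative generated mechanically from structured data. When a document is built this way, the narrative and the data can only disagree if the generating code – or the person who wrote it – misunderstands what the data items mean:

```python
# Sketch of scenario 1: the section narrative is derived from the structured
# data, so the only way the two can disagree is if this code (or the person
# who wrote it) misunderstands what the data items mean.
medications = [
    {"name": "frusemide 40 mg tablet", "dose": "1 tablet", "frequency": "in the morning"},
    {"name": "paracetamol 500 mg tablet", "dose": "2 tablets", "frequency": "every 6 hours as needed"},
]

def generate_narrative(meds: list[dict]) -> str:
    """Render the structured medication entries as a human-readable narrative."""
    if not meds:
        return "No medications recorded."
    lines = [f"- {m['name']}: {m['dose']} {m['frequency']}" for m in meds]
    return "Medications on discharge:\n" + "\n".join(lines)

print(generate_narrative(medications))
```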

Finally, the actual usefulness of containing both narrative and data is unexpectedly illustrated by Eric’s own post, in an example where he appears to think that he’s criticising the presence of both narrative and data.

As an illustration of the sort of problems  we might see arising, I proffer the following. I looked at 6 sample discharge summary CDA documents  provided by the National E-health Transition Authority recently. Each discharge summary looked fine when the human-readable part was displayed in a browser, yet unbeknownst to any clinician that might do the same, buried in the computer-processable part, I found that each patient was dead at the time of discharge. One patient had been flagged as having died on the day they had been born – 25 years prior to the date that they were purportedly discharged from hospital! Fortunately this was just test, not “live” data.

Firstly, Eric shouldn’t have used technical examples, provided to illustrate syntax to application developers, as if they were also semantically meaningful (most NEHTA examples aren’t, due to time constraints, though I’ve done one or two – it’s a lot slower than you think to produce really meaningful examples). But the date of death is actually buried in the portion of CDA that is data only, not narrative. And because Eric chose the wrong stylesheet (not the NEHTA one), his system didn’t know about the date of death, and ignored it. Had CDA actually contained a narrative portion for the header too, this would not have been a problem. Which brings me back to my earlier point: in the world we live in, not everyone shares the same set of data and processes it without errors.

CDA isn’t a perfect specification – nothing is (that’s the subject of a series of posts to come) – and it does have its own complexity. But the problems aren’t due to containing both narrative and data. Eric says:

I know of no software anywhere in the world that can compare the two distinct parts of these electronic documents to reassure the clinician that what is being sent in the highly structured and coded part matches the simple, narrative part of the document to which they attest. This is due almost entirely to the excessive complexity and design of the current HL7 CDA standard.

This I completely disagree with. The inability to automatically determine whether what is being sent in the highly structured part matches the narrative is not “entirely due” to the complexity of CDA, but is almost entirely due to the problem itself.

Finally, Eric says that

NEHTA should provide an application, or an algorithm,  that allows users to decode and view all the hidden, coded clinical contents of any of the PCEHR electronic document types, so that those contents can be compared with the human-readable part of the document.

Actually, this is a pretty good idea, though I don’t know whether NEHTA can or not (on practical grounds). I guess that we should be able to, since the Implementation Guides will increasingly make rules about what the narrative must do, and we already provide examples based on rendering the structured data in the narrative. But my own experience trying to interpret AS 4700.1 HL7 v2 messages (Australian Diagnostic Reports) suggests that a canonical rendering application is even more necessary for that – but who could define such a beast?
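As a footnote to that: the shallow end of what Eric is asking for is easy enough to sketch. The following (all names and example data are made up) checks whether each coded medication is at least mentioned in the attested narrative. It catches the crudest mismatches, and nothing more – deciding whether the two parts really say the same thing is the hard problem discussed above:

```python
# Sketch: a naive narrative-vs-data check. It only flags the crudest mismatch
# (a coded medication that is never mentioned in the narrative); judging whether
# the doses, negations and timings actually agree is where the real work is.
def check_section(narrative: str, coded_display_names: list[str]) -> list[str]:
    """Return the coded items that don't appear anywhere in the narrative text."""
    text = narrative.lower()
    return [name for name in coded_display_names if name.lower() not in text]

narrative = "Medications on discharge: frusemide 40 mg tablet, one in the morning."
coded = ["frusemide 40 mg tablet", "warfarin 5 mg tablet"]  # made-up example data

for name in check_section(narrative, coded):
    print(f"WARNING: coded entry '{name}' is not mentioned in the narrative")
```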