Because the specifications for data exchange are so hard to implement, a number of government projects around the world are focusing on releasing libraries that wrap the complexities of the standard behind a local implementation that is easier to use. At least, that’s the theory. (Or, alternatively, vendors provide such libraries, sometimes commercially with a government subsidy.)
Here are some examples:
- Medicare Australia, with several secure data exchange standards for billing (example)
- Quicksilver’s Spinal Tap
- The Everest Framework in Canada
- NEHTA’s reference platform implementation (password protected link – sorry)
There are plenty of other examples, but that’ll do. At HL7, we’re doing a lot of serious thinking about how to develop a standard – what makes a good standard. (It’s going to be a bit of a theme on my blog for a little while.) At the recent San Antonio meeting, someone proposed to me that the focus shouldn’t be on making the standard easier to understand or implement, but on providing wrapping libraries that do that for you.
The basic notion here is that you can wrap the complexity of the standard behind a simple API that dramatically reduces the time to implement the specification. Me, I’m not convinced. Obviously, there are some pragmatic issues, such as which platforms you’re going to support (Win32? OSX? iOS? Unix (which)? Java? DotNet? LAMP?), whether you can provide an architecture that is usable in a variety of application architectures, and how you support it. But putting those practical issues aside, why does it make implementation easier or quicker?
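To make the notion concrete, here’s a minimal sketch of what such a wrapping library looks like. All the names here are invented – this is not Medicare Australia’s or NEHTA’s actual API, and the wire format is a toy – but it shows the basic move: the caller makes fine-grained calls against simple fields, and the library alone knows the coarse-grained message syntax.

```python
# Hypothetical facade over a (toy) message format. Nothing here is a real
# standard's API; it only illustrates the wrapping-library pattern.
import xml.etree.ElementTree as ET

class ClaimBuilder:
    """The caller sets simple fields; the library owns the message syntax."""

    def __init__(self, provider_id):
        self.provider_id = provider_id
        self.items = []

    def add_item(self, code, fee):
        # Fine-grained call: one field at a time, no knowledge of the wire format.
        self.items.append((code, fee))

    def to_message(self):
        # Only the library knows how the exchange message is actually laid out.
        root = ET.Element("claim", {"provider": self.provider_id})
        for code, fee in self.items:
            ET.SubElement(root, "item", {"code": code, "fee": str(fee)})
        return ET.tostring(root, encoding="unicode")

claim = ClaimBuilder("1234567A")
claim.add_item("23", 38.75)
print(claim.to_message())
```

The promise is that the implementer never touches `to_message()`’s output directly – which is exactly where my doubts below come in.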
As far as I can see, here’s the reasons to think that an API is easier to work with than an exchange specification:
- An API is a fine grained interface, while messages are coarse grained. Fine grained interfaces are easier to work with for a programmer (are they?)
- A library can do local validation as the content is built – but I don’t see why it can do more validation than could otherwise be done on the message content
- A library can integrate the bits of the data exchange format with platform dependencies (e.g. network and particularly digital signatures, yuck!)
- The API can hide needless documentation and syntactical complexity in the standard
- the API covers a narrower set of use cases than the exchange standard
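The local-validation point can be sketched in a few lines. The code table and class names here are invented for illustration – the point is only that the library can fail fast at the line where the mistake was made, rather than leaving the error to be reported by the remote endpoint (though, as noted above, the same check could equally be run over the finished message).

```python
# Hypothetical sketch of build-time validation in a wrapping library.
# The code table is invented; real standards ship much larger ones.
VALID_ITEM_CODES = {"23", "36", "44"}  # assumed local copy of the code table

class ValidatingClaim:
    def __init__(self):
        self.items = []

    def add_item(self, code):
        if code not in VALID_ITEM_CODES:
            # The implementer sees the error immediately, with a local stack
            # trace, instead of a rejection message from the other end.
            raise ValueError(f"unknown item code: {code!r}")
        self.items.append(code)

claim = ValidatingClaim()
claim.add_item("23")       # accepted
try:
    claim.add_item("99")   # caught at the point of the mistake
except ValueError as e:
    print(e)
```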
The last points are the important ones – if the standard is needlessly complicated, then you can hide that busy work behind an interface – but the library can’t hide the real issues. If a particular data element has to be coded with the correct code, or if you have to implement a particular workflow, the API can’t do that for you. That’s where the real challenge is – or should be.
But the most important issue is that very often the API covers a narrower set of use cases. This arises because the designers of the API can say “oh, that’s an advanced use case – if they want to do that, they’ll have to use the exchange format directly”. I think that this is the biggest reason why an API in front of the data exchange format can be easier to use. (Actually, greenCDA is just another case of this.) Whenever this happens, the question should always be asked of the exchange format – does it cater for unneeded use cases? (The answer might be either yes or no.)
But for me, in practice, an API is just another exchange format that I have to deal with – this data item goes here, that one comes from over here, etc. It’s only easier if the API is well designed. It sometimes turns out to be harder, because the simplifications don’t work for me, and because bugs in the API are really hard to diagnose and resolve. Any vendor who’s used the Medicare Australia libraries will know what I mean there. I wonder – which do vendors really prefer? I myself prefer not to use an API, but I note that most vendors working with NEHTA are choosing to use the NEHTA reference platform.