An API is just another exchange specification?

Because specifications for data exchange are so hard to implement, a number of government projects around the world are focusing on releasing libraries that wrap the complexities of the standard behind a local implementation that is easier to use. At least, that’s the theory. (Alternatively, vendors provide such libraries; sometimes they provide them commercially with a government subsidy.)

Here are some examples:

There are plenty of other examples, but that’ll do. At HL7, we’re doing a lot of serious thinking about how to develop a standard – what makes a good standard. (It’s going to be a bit of a theme on my blog for a little while.) At the recent San Antonio meeting, someone proposed to me that the focus shouldn’t be on making the standard easier to understand or implement, but on providing wrapping libraries that do that for you.

The basic notion here is that you can wrap the complexity of the standard behind a simple API that dramatically reduces the time to implement the specification. Me, I’m not convinced. Obviously, there are some pragmatic issues, such as which platforms you’re going to support (Win32? OSX? iOS? Unix (which flavour)? Java? DotNet? LAMP?), whether you can provide an architecture that is usable in a variety of application architectures, and how you support it. But putting those practical issues aside, why would an API make implementation easier or quicker?

As far as I can see, here’s the reasons to think that an API is easier to work with than an exchange specification:

  • An API is a fine grained interface, while messages are coarse grained. Fine grained interfaces are easier to work with for a programmer (are they? see the sketch after this list)
  • A library can do local validation as the content is built – but I don’t see why it can do any more validation than could otherwise be done on the message content
  • A library can integrate the bits of the data exchange format with platform dependencies (e.g. network and particularly digital signatures, yuck!)
  • The API can hide needless documentation and syntactical complexity in the standard
  • The API covers a narrower set of use cases than the exchange standard
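
To make the first point concrete, here’s a minimal sketch of the contrast. All the class and method names are hypothetical – this is not any real library – but it shows how a fine grained wrapper lets a programmer set one field at a time and get validation as they go, while the raw exchange format is assembled and checked as one coarse grained document:

```java
import java.time.LocalDate;

// Hypothetical fine-grained wrapper around a coarse-grained exchange format.
public class FineVsCoarse {

    static class DischargeSummaryBuilder {
        private String patientId;
        private LocalDate dischargeDate;

        DischargeSummaryBuilder patientId(String id) {
            // Local validation can fire as each field is set...
            if (id == null || id.isEmpty())
                throw new IllegalArgumentException("patient id required");
            this.patientId = id;
            return this;
        }

        DischargeSummaryBuilder dischargeDate(LocalDate d) {
            this.dischargeDate = d;
            return this;
        }

        // ...and serialization to the wire format is hidden inside the wrapper.
        String build() {
            return "<ClinicalDocument><patient id=\"" + patientId
                 + "\"/><discharge date=\"" + dischargeDate + "\"/></ClinicalDocument>";
        }
    }

    public static void main(String[] args) {
        // Fine grained: one field at a time, errors surface immediately.
        String viaApi = new DischargeSummaryBuilder()
                .patientId("12345")
                .dischargeDate(LocalDate.of(2012, 1, 20))
                .build();

        // Coarse grained: the implementer assembles the whole payload directly,
        // and mistakes only surface when the receiver rejects the message.
        String viaRawXml = "<ClinicalDocument><patient id=\"12345\"/>"
                + "<discharge date=\"2012-01-20\"/></ClinicalDocument>";

        System.out.println(viaApi.equals(viaRawXml)); // same wire content either way
    }
}
```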

The last two points are the important ones – if the standard is needlessly complicated, an interface can hide that busy work – but the library can’t hide the real issues. If a particular data element has to be populated with the correct code, or if you have to implement a particular workflow, the API can’t do that for you. That’s where the real challenge is – or should be.
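
Here’s a minimal sketch of that limit (hypothetical names again): the wrapper can check that a code is present and well formed, but it cannot know whether it’s the clinically correct code for this document – that judgement stays with the implementer.

```java
// Hypothetical wrapper class: it can enforce that *a* well-formed code is
// supplied, but only the caller knows whether it is the *right* code.
public class DiagnosisEntry {
    private String snomedCode;

    public void setDiagnosis(String snomedCode) {
        // Syntactic validation: SNOMED CT concept ids are 6-18 digits.
        if (snomedCode == null || !snomedCode.matches("\\d{6,18}"))
            throw new IllegalArgumentException("a SNOMED CT concept id is required");
        // The API cannot tell whether 38341003 (hypertensive disorder) or
        // 73211009 (diabetes mellitus) is the correct diagnosis here.
        this.snomedCode = snomedCode;
    }

    public static void main(String[] args) {
        DiagnosisEntry entry = new DiagnosisEntry();
        entry.setDiagnosis("38341003"); // accepted: well-formed, but only the
                                        // caller knows if it is clinically right
    }
}
```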

But the most important issue is that very often the API covers a narrower set of use cases. This arises because the designers of the API can say “oh, that’s an advanced use case – if they want to do that, they’ll have to use the exchange format directly”. I think that this is the biggest reason why an API in front of the data exchange format can be easier to use. (Actually, greenCDA is just another case of this.) Whenever this happens, the question should always be asked of the exchange format – does it cater for unneeded use cases? (The answer might be either yes or no.)

But for me, in practice, an API is just another exchange format that I have to deal with – this data item goes here, that one comes from over here, etc. It’s only easier if the API is well designed. It sometimes turns out to be harder, because the simplifications don’t work for me, and because bugs in the API are really hard to diagnose and resolve. Any vendor who’s used the Medicare Australia libraries will know what I mean there. I wonder – which do vendors really prefer? I myself prefer not to use an API, but I note that most vendors working with NEHTA are choosing to use the NEHTA reference platform.

6 Comments

  1. Lloyd McKenzie says:

    An API can do a few things. It can hide complexity that’s hard-wired into the standard that provides no business value (there’s a lot that can be hidden in v3 without taking away functionality if the base specification is properly constrained). It eliminates the learning curve for a particular serialization syntax (XML is still scary to some, though less so all the time). But the big one is coding the stuff that consumes immense time and is not a business differentiator. Digital signatures are one, though I’d extend that to cover security protocols in general.

    Managing secure HTTP (or whatever) is hard for those who haven’t done it before, and there are often specific rules to talk to a given communication partner that aren’t immediately handled by whatever library you get off the shelf. And until you’ve got that up and working, you’ve got no ability to validate anything else you’re doing.

    Basic handshaking and error-handling is also a non-value add that everyone needs. Terminology support (import vocabulary, do subsumption testing, generate a tree structure for lookups, etc.) is complicated stuff, there are few if any commercial APIs and everyone needs it.
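
    A toy sketch of the subsumption testing mentioned here – hypothetical, and far simpler than a real terminology server – walks a child-to-parents map to answer “is concept A a kind of concept B?”:

```java
import java.util.*;

// Toy terminology helper: import "is-a" relationships, then test subsumption.
public class Subsumption {
    private final Map<String, Set<String>> parents = new HashMap<>();

    void addIsA(String child, String parent) {
        parents.computeIfAbsent(child, k -> new HashSet<>()).add(parent);
    }

    // Does 'ancestor' subsume 'concept' (directly or transitively)?
    boolean subsumes(String ancestor, String concept) {
        if (concept.equals(ancestor)) return true;
        for (String p : parents.getOrDefault(concept, Set.of()))
            if (subsumes(ancestor, p)) return true;
        return false;
    }

    public static void main(String[] args) {
        Subsumption t = new Subsumption();
        t.addIsA("bacterial pneumonia", "pneumonia");
        t.addIsA("pneumonia", "lung disease");
        System.out.println(t.subsumes("lung disease", "bacterial pneumonia")); // true
    }
}
```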

    I agree with your qualifiers though. The API has to be really well designed. It shouldn’t conceal functionality, it should function with a variety of architectural approaches, and it should be tested like mad. Ideally it should be open source so if it’s broken, you can go in and fix it, or at least see what’s going on.

    I think APIs, if properly done, can accelerate adoption and reduce overall costs. It’s a lot cheaper for one team of crack developers with access to the necessary experts and a decent chunk of time to solve the common problems once in a generic way than it is to have 100+ different vendors attempt to hack their way through the same set of problems individually (and then pay to support them, conformance test them, etc.)

    It’s not a panacea, but it’s a strategy worth looking at if you can invest the time and resources to actually get it right.

    • Gustavo says:

      This sounds like an interesting topic. APIs have a lot of potential for leveraging shared data sources, for defining a shared base of content and services that scholars can then repurpose and extend (possibly with some technical help, possibly by learning to do it on their own) to present their own analysis and interpretation without having to create the entire project from scratch. Perhaps we could spend some time talking about the design principles for creating RESTful APIs, as well as some of the benefits/drawbacks to these approaches (e.g., what are the sustainability implications if I create a project that depends on someone else’s API). For the hack-a-thon, it might be useful to have one or two project domains in mind. I’d be interested in walking through a general purpose API for representing digital facsimiles. I think that might be something that is broad enough that we can get both technical and non-technical people working on, and yet (hopefully) have something cobbled together in an hour or so. I’m sure that there are other interesting projects out there as well. Are there other types of resources that people would be interested in hacking up?

  2. Peter Jordan says:

    So far, in NZ, we’ve taken the Toolkit/API approach to CDA implementations, and most vendors (particularly the smaller ones) appear to favour this. With all due respect to Grahame, he is not the average e-Health developer, and his perspective is probably akin to an F1 racing driver’s view of automatic transmissions!

    We have adopted many of Lloyd’s suggestions, notably extensive testing and source code distribution to all users. Design of the Application Object Model is also critical if the API is to effectively combine numerous IGs and the CDA specification into a single exchange model.

    Other benefits to consider are the ability to generate both the section text and atomic CDA entries from a common Application Layer Model, and comprehensive CDA validation. The latter is particularly significant; given the limitations of all the relevant XML-related technologies (XSD, Schematron, etc.), an API is the ideal place for the supplementary custom coding required to support most CDA IGs.
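
    A toy sketch of that first benefit (the names are hypothetical, not Peter’s actual toolkit): one application-layer object produces both the human-readable section text and the coded entry, so the two can never disagree:

```java
// Hypothetical application-layer model item that renders itself both as
// narrative (section text) and as a coded CDA-style entry.
public class MedicationItem {
    private final String drugName;
    private final String snomedCode;

    public MedicationItem(String drugName, String snomedCode) {
        this.drugName = drugName;
        this.snomedCode = snomedCode;
    }

    // Human-readable narrative for the section text.
    public String toNarrative() {
        return "<td>" + drugName + "</td>";
    }

    // Machine-readable coded entry (2.16.840.1.113883.6.96 is SNOMED CT).
    public String toEntry() {
        return "<code code=\"" + snomedCode
             + "\" codeSystem=\"2.16.840.1.113883.6.96\" displayName=\""
             + drugName + "\"/>";
    }

    public static void main(String[] args) {
        MedicationItem item = new MedicationItem("Paracetamol", "387517004");
        System.out.println(item.toNarrative()); // section text
        System.out.println(item.toEntry());     // atomic entry, same source data
    }
}
```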

  3. The XML and HTML DOMs are just such an API, and they are indeed fine grained, but also ubiquitous. It is worth pursuing.

  4. Grahame Grieve says:

    So: APIs are useful, but no substitute for fixing a poor standard – which is exactly what “complexity that’s hard-wired into the standard that provides no business value” is.

  5. An API can be fine-grained, but the critical concept of an API is that it is a public, published interface. It’s not just a standard that says “Put temperature in position 6 of message 35” and is then followed (or not) in 17 different ways. It says, for example: call my get-temperature function with the MRN of the patient you are interested in, and I will pass you a floating point number which represents the temperature in Centigrade.

    Now I don’t care if the temp is passed in Centigrade, Fahrenheit, or Kelvin, as long as the units are publicly published.

    The API can be fine-grained or coarse, complex or straightforward, but the critical issue is that it does not leave implementation up to the whim of the individual programmer. THAT is what makes it such a great tool for interoperability, and why it has been standard practice in software engineering since the 80s.
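
    A minimal sketch of the kind of published interface this comment describes (hypothetical names): the contract, including the units, lives in the interface itself rather than in each implementer’s reading of a spec:

```java
// Hypothetical published interface: the signature and its documented units
// are the contract, not an implementer's interpretation of a message layout.
public interface VitalsService {
    /**
     * @param mrn the patient's medical record number
     * @return the most recent body temperature, in degrees Centigrade
     */
    double getTemperature(String mrn);
}
```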
