Category Archives: IHE

The 3 Legs of the Health Informatics Standards Process

Lately, I’ve been describing the standards process that we’re engaged in – making a difference to people’s health through defining healthcare IT standards – as a process that has 3 legs. Kind of like this:

Finding out whether you’re really friends

Well, not really. It’s actually 3 legs of a journey. The 3 legs are:

  1. Developing the base standards
  2. Profiling the standard for particular communities 
  3. Driving Solutions into production in the market

#1 Developing the Base Standards

The first leg is developing the base standards that define the capabilities for describing and exchanging healthcare data. These base standards enable all sorts of communities to form around exchanging healthcare data. 

But given the breadth of healthcare, and the variety of processes, jurisdictions and business rules – along with the fact that there’s no single authority over these things (inasmuch as there’s any authority at all) – these are platform standards: they create platforms that are used in all sorts of different ways. They define how things can work.

Example Standards:

They’re also limited in that they can only make hard rules that everyone in the world agrees to – which is a lot less than the particular set of rules that a single solution needs. So they don’t define particular solutions, except in the rare case that everyone in the world agrees to the solution (we actually might have a couple of those things: consumer healthcare repository, and terminology service).

#2 Profiling the Standard for Particular Communities 

Once the international platform specification(s) exist, particular communities find common use cases for which they need to determine and record their business rules, and converge on a common approach for using the platform standards. These are called various things – profiles, templates, value sets, reference sets, implementation guides. But in all cases the basic steps are the same:

  • identify the need for a particular solution
  • form a smaller community around it 
  • support the community to converge and agree on a particular approach 
  • publish the approach and help the community test its validity 
  • iterate the process

The principle is simple: smaller communities can get more agreement amongst themselves because they have more in common (and are sometimes empowered by hard rules/regulations/laws – when they are useful).

There are lots of organizations working in this space. Some relevant in the FHIR eco-system:

Aside: Platform vs Usage

Note that the demarcation line between leg #1 (platform) and leg #2 (usage) is grey. I think of XDS as a platform standard, even though it’s technically profiling other standards. Thing is, that’s what FHIR does too, and it’s very definitely a platform standard. The question is far more about community and process than technology, and XDS has played out more like a platform standard (though it’s not that important which it is).

At first encounter with the standards process, many people challenge the value of the platform/usage split – why not just get it right in the first place? We’d love to – but the trade-off between community size and level of agreement is a hard external fact we can’t wish away. Given that, if you don’t create the platform/usage split, the result will be a shattered standards system with myriads of standards, one for each variation of each use case, and all incompatible. Which is (unfortunately) too much like what we have now. 😒 It’s really nice the first time you solve a problem, but the next time you solve a related problem… then you start to feel the pain. After the Nth time… you really want a platform standard.

#3 Driving Solutions into production in the market

Once you’ve formed a community, got your agreements, published them, and tested that it’s possible… you’re still not done. You created capability, and expectation. But at that point, nothing is actually working (except for limited demonstrations at hotels and exhibitions, typically). 

Driving the solutions home – the last leg of the journey – that’s actually the hardest part of the journey. IT solution providers (vendors, usually though not exclusively) are often happy to collaborate in a community process, and demonstrate cool stuff to sell, but bringing stuff to market involves messy processes like certification, sales, deployment, contracts, and many people have to sign off on the process and someone has to fund the deployment/testing/maintenance. 

This is the primary source of the Gartner Trough of Disillusionment, and we’re certainly hearing that this is the case with FHIR:

  • The platform standard is doing well – 4th release, normative, lots of energy, and adoption spreading like wildfire (sorry)
  • There’s plenty of projects around profiling / usage agreements from IHE and others. In particular, the Argonaut project has created a huge potential in USA with access to EHR data
  • But – case in point – actual Argonaut end-points are not widely available, and getting access to them, even when in production, can be very difficult

HL7 has pretty much no leverage over the 3rd leg of the process. IHE has more, but not as much as everyone would like. Argonaut, in their smaller community, has more again – but still not enough. Classically, the agents that have influence/leverage over the 3rd leg are the regulators (hello CMS/ONC and the NPRM), but professional societies such as HIMSS and HISA also have influence, since their membership is much more provider-based.

HL7 and the FHIR community are starting to pay much more attention to all 3 legs, and looking to work much more pro-actively with critical partners like IHE, HIMSS etc. to see what we can do to make as much as possible real.

Question: Exchanging CDA

Question:

I am a CDA newbie; my task is to include a report (in PDF format) in a CDA document. I have managed to find the right implementation guide (for unstructured documents); so what I have managed to do so far is successful construction of a CDA document. My question is: what are some of the best practices for distributing the CDA document? I am especially interested in sending it via HL7 V2 messaging scheme. Please shed some light on CDA distribution

Answer:

Well, if I was asked to rank exchange methods for CDA in order of importance/prevalence, the order would look like this:

  • XDS – This is the major mechanism for distributing CDA documents
  • MDM messages (HL7 v2.x chapter 9) – Messages for managing documents (we use these in Australia for exchanging CDA documents), and they would be a logical choice where you have v2 exchange already set up
  • If you’re in the USA: Direct – using encrypted SMTP. This had a big push from ONC in the recent past, but I’m hearing that it’s falling out of focus (I say that as a non-US observer)

There’s a bunch of other ways – sneakernet, or FTP, for instance.
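
As a sketch of the MDM route (the segment layout, field positions, and codes here are illustrative assumptions, not taken from any site’s conformance profile), an MDM^T02 message carrying a base64-encoded CDA document in OBX-5 might be assembled like this:

```python
import base64

def build_mdm_t02(cda_xml: bytes, patient_id: str, doc_id: str) -> str:
    """Assemble a minimal HL7 v2 MDM^T02 message that carries a CDA
    document base64-encoded in OBX-5. Segment and field layout are
    illustrative only; real deployments follow a site-specific profile."""
    payload = base64.b64encode(cda_xml).decode("ascii")
    segments = [
        "MSH|^~\\&|SRC_APP|SRC_FAC|DST_APP|DST_FAC|20240101120000||MDM^T02|MSG0001|P|2.6",
        f"PID|1||{patient_id}^^^HOSP^MR",
        f"TXA|1|CD^Clinical Document||20240101120000|||||||{doc_id}",
        # OBX-2 is "ED" (encapsulated data); the payload goes in the last component
        f"OBX|1|ED|CDA^Clinical Document||^text^xml^Base64^{payload}",
    ]
    return "\r".join(segments)
```

The receiver reverses the process: locate the OBX segment, split out the last component, and base64-decode it to recover the CDA document.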

There’s a new way coming too: With FHIR there’s a way to store a CDA document, and make an entry about it to a registry so it can easily be found. David Hay has been explaining how to do this:

FHIR is not done and ready – yet. But it will be soon.

Implementing XDS on a FHIR Server

This is a follow up to an earlier post, Implementing ATNA on a FHIR server, and a preparative post for the XDS-focused FHIR Connectathon to be held this weekend.

For the connectathon, I am providing an XDS.b interface to a FHIR Resource Repository. The server accepts an XDS.b “ProvideAndRegisterDocumentSet” operation, converts it to a set of FHIR resources, and then submits them to the FHIR Repository. It would be possible to support other XDS.b operations, but I don’t have time for this prior to the connectathon.

There are some conceptual differences between an XDS.b approach and the FHIR RESTful approach to XDS:

  • The XDS.b interface is very transactional – a focus on “submission sets”, things that are submitted together. On the other hand, the FHIR definitions for XDS are very focused on stable resources – documents, folders, authors, patients. The notion of a submission set is absent in FHIR (though you can submit groups of these resources as a “batch” to a FHIR Repository)
  • Security: the FHIR definitions that support XDS functionality do not get involved with security – it is assumed that this comes from elsewhere, using standard functionality such as OAuth.
  • The classic XDS architecture has a document source submitting to a document repository, which notifies a patient registry of the existence of the document, so that document consumers can find it. FHIR handles this differently: registries, which are aggregators of content, subscribe to multiple repositories by using their atom feeds.

To explain the third point further, an XDS eco-system consists of a series of clients and servers that use the following resource types:

  • Patient/Person – patient demographics
  • Agent/Organization – author & provider information etc
  • XdsEntry – the actual index entry for a document
  • XdsFolder – when the document is allocated to a folder
  • SecurityEvent – equivalent of an ATNA record (see Implementing ATNA on a FHIR server)
  • Binary – the resource that actually stores the document itself

In FHIR, the difference between a registry and a repository is that the repository supports the Binary resource, and references from the XdsEntry to the Binary are local references on the server, whereas a registry has XdsEntry resources that refer to content found elsewhere – either Binary resources on another server, or some other kind of server that serves the documents. A registry subscribes to XdsEntry updates, and converts (if necessary) relative references to Binary resources into absolute references. It would also subscribe to the other resource types (Patient, Person, Agent, Organization, XdsFolder).
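
The reference-rewriting step a registry performs can be sketched like this (a hypothetical helper, assuming references are plain strings inside a dict-shaped resource – names here are illustrative, not the draft wire format):

```python
def absolutize(xds_entry: dict, repository_base: str) -> dict:
    """Rewrite a relative Binary reference in an XdsEntry so that a
    registry's copy of the entry points back at the originating
    repository. Leaves already-absolute references untouched."""
    ref = xds_entry["content"]["reference"]
    if not ref.startswith("http"):
        # prepend the repository's base URL to the relative reference
        xds_entry = {**xds_entry,
                     "content": {"reference": f"{repository_base}/{ref.lstrip('/')}"}}
    return xds_entry
```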

This means that a server that provides “ProvideAndRegisterDocumentSet” has to take the submission, convert it to a set of resources, and store those resources in a FHIR repository. Documents submitted through the XDS.b interface become visible by watching the XdsEntry atom feed, or by searching the XdsEntry resources on the server. Done carefully, a ProvideAndRegisterDocumentSet can be converted to a FHIR batch in a single operation, which can then be submitted to a FHIR repository as a separate operation – this is called an “XDS 2 FHIR Bridge”.
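
The bridge step can be sketched as follows (a simplified stand-in: the real batch format of the time was an Atom bundle, and the field names on the ExtrinsicObject metadata and the XdsEntry are assumptions for illustration, not the actual bridge code):

```python
import base64
import uuid

def provide_and_register_to_batch(extrinsic_object: dict, document: bytes) -> dict:
    """Convert one XDS.b submission (metadata + document bytes) into a
    FHIR batch: a Binary for the content plus an XdsEntry that points
    at it via a local (cid:-style) reference, which the repository
    resolves to a real id when the batch is committed."""
    binary_id = f"cid:{uuid.uuid4()}"
    binary = {
        "resourceType": "Binary",
        "contentType": extrinsic_object.get("mimeType", "text/xml"),
        "content": base64.b64encode(document).decode("ascii"),
    }
    entry = {
        "resourceType": "XdsEntry",
        "title": extrinsic_object.get("title", ""),
        "patientId": extrinsic_object.get("patientId"),
        # local reference - a registry would later rewrite this as absolute
        "content": {"reference": binary_id},
    }
    return {
        "entries": [
            {"id": binary_id, "resource": binary},
            {"id": f"cid:{uuid.uuid4()}", "resource": entry},
        ]
    }
```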

A test implementation of an XDS 2 FHIR bridge can be found at http://hl7connect.healthintersections.com.au:3098/svc/xds. This accepts XDS ProvideAndRegisterDocumentSet operations. Submitted documents can be seen at http://hl7connect.healthintersections.com.au/svc/fhir/xdsentry. This is based on an IHE certified XDR implementation, though this version hasn’t been IHE certified. (No security, btw, since the outputs are public anyway, so don’t submit real clinical documents!) (And note that due to a technical limitation in the XDS side of the implementation, this interface only accepts CDA documents, not any other kind of document).

Note that this test implementation also builds an XdsEntry2 as well as an XdsEntry – the FHIR project team is evaluating two different granularity approaches to XDS, to see which provides the best balance of usability and re-usability.

Note also that this implementation makes extensive use of inline resources, as discussed previously, as a way of evaluating their use.

Implementation Details

The implementation of the bridge is done in JavaScript. The script is commented extensively to explain what is happening. As an overview, the script has to build the following kinds of resources:

  • XdsEntry (no XdsFolder support – I haven’t seen real world usage of that yet)
  • Patient/Person
  • XdsEntry2 / Agent

In addition to building their raw contents by extracting information from the XDS ExtrinsicObject, the script also has to figure out how the resources are identified, and link them up correctly in the batch so that the server will identify them properly. Much of the hard work is involved in this step (see previous post on this subject).

The script is called and provided with a handle to 3 parameters of interest: the ExtrinsicObject element in the submission set, the associated binary content as base64, and an empty batch to be filled out from the information provided in the ExtrinsicObject. The script makes use of some fairly obvious utility functions, and a FHIR factory object that is used to construct the objects that build out the batch.

I keep the current version of the script at http://www.healthintersections.com.au/xds2fhir.js – read the script for further information.

Implementing ATNA on a FHIR server

This is the first of a series of posts that I’ll be making about the trial implementation of the IHE XDS profile using FHIR resources. Note that the status of the FHIR resources for XDS is a strawman – both HL7 and IHE are looking at these and evaluating their suitability for use for providing a mobile friendly gateway to an XDS system, but neither organization has committed to this approach yet – we have to see how well it works.

The first step in the implementation is to provide a working security logging framework, since this is a fundamental requirement for all XDS actors. This post describes the way that ATNA (RFC 3881) is implemented using FHIR resources.

Firstly, a quick primer on RFC 3881: a security log message is sent from an application to a specialized security logging server that provides logging and analysis services. The security log message is sent using either UDP or secure-TCP, with a specified XML payload – the XML part is what is specified in RFC 3881. The XML specifies the kind of event that took place, information about the user and system context in which it takes place, and information about what system objects were affected by the operation – and whether the operation succeeded or not. The ATNA actors describe a simple flow – event records are created on one system, and sent to the other system for storage. No other interactions are allowed.

In order to reproduce this functionality, the FHIR specification defines the “SecurityEvent” resource. This resource includes a few fields from the security log message itself, along with the content specified by RFC 3881. In a RESTful context, the source system creates the resource – using either XML or JSON – and posts it to the security logging system using an HTTP POST (a FHIR “create” event). The HTTP POST can be secured using SSL to reproduce the same functionality as the secure-TCP syslog equivalent.
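
As a sketch, the “create” interaction is just an HTTP POST of the resource. The resource skeleton below is illustrative – a real SecurityEvent carries the full RFC 3881 field set (event, participants, audit source, objects) – and the endpoint path follows the /securityevent pattern used by the test server:

```python
import json
import urllib.request

def security_event_request(base_url: str, event_type: str,
                           outcome: int, user_id: str) -> urllib.request.Request:
    """Build the HTTP request for a FHIR 'create' of a SecurityEvent.
    The resource skeleton is illustrative only; real events carry the
    full RFC 3881 field set."""
    resource = {
        "resourceType": "SecurityEvent",
        "event": {"type": {"text": event_type}, "outcome": outcome},
        "participant": [{"userId": user_id, "requestor": True}],
    }
    return urllib.request.Request(
        url=f"{base_url}/securityevent",
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually log the event (the server assigns the id and returns the location):
# with urllib.request.urlopen(security_event_request(
#         "http://hl7connect.healthintersections.com.au/svc/fhir",
#         "rest-create", 0, "alice")) as resp:
#     print(resp.status)
```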

A RESTful approach based on FHIR is better because:

  • syslog is alien to many developers these days, while HTTP/SSL libraries are ubiquitous
  • There are many HTTP/network configuration, debugging and manipulation frameworks and services
  • JSON is defined out of the box, and is equivalent to and interconvertible with the XML format
  • The exchange is easy to extend and leverage

The last point requires some clarification. To start with, the “create” function is only one of many defined by FHIR – there is also update, and delete, though these would generally be disabled in a security logging system, or at least highly restricted. There is also the ability to read existing security logs, search them, and use pub/sub to replicate some or all of the events on other systems. These operations are not the basis for a full security event analysis service, but they do open up the system for more reuse.

In addition, FHIR defines many other functions that can be used, such as profiles and conformance statements, and also offers the ability to bundle the security information about the operation along with the operation itself – that resolves a conundrum with the existing ATNA spec: do you log before you find out whether the operation works, or afterwards, when you might not be able to log it? Other FHIR framework advantages also apply, such as RDF-based mappings into an ontological reasoning system. It would be reasonable to expect that at some point in the near future this would be a useful addition to security analysis services.

Trying it out

There’s a public FHIR server offering ATNA functionality. You can get a list of security events on the system as HTML, XML, or JSON, or you can create new events on the system by posting the appropriate XML or JSON to http://hl7connect.healthintersections.com.au/svc/fhir/securityevent (or https:, though the server uses a self-signed certificate). The system doesn’t allow updates or deletes to security events (as defined in the conformance profile here).

To assist with testing, this server offers an IHE-certified ATNA server on port 3099 of hl7connect.healthintersections.com.au (also known as ec2-107-20-116-177.compute-1.amazonaws.com). Both UDP and secure TCP are offered (secure TCP uses the trusted certificate issued to Health Intersections from Gazelle). Any ATNA audit records sent to this server will be added to the FHIR repository as SecurityEvent resources, and will show up in the feeds.

Anyone is welcome to use this test server to work up a FHIR Security Event logging or handling system. I’d especially appreciate feedback with regard to the performance of the ATNA-FHIR bridge – I’m not sure how solid the conversion of codes is.

FHIR Connectathon

The next FHIR connectathon is focused on XDS. Because we are focusing on the interfaces, not the system functionality, production of security events is not required. But I encourage everyone to implement these anyway – this is pretty simple stuff once you have done anything with FHIR.


Australian IHE Connectathon

I have been at the IHE Australian connectathon all week. I came to test the HL7Connect XDR implementation – send and receive CDA documents/packages as part of an XDS infrastructure.

There’s been a little bit of confusion about XDR – quite what it is. XDR is a document push using the same interface as XDS, but without the XDS roles and so forth to back it up. So it’s just a way to transfer content. There’s no expectation about the particular content, nor about what happens after it’s transferred. From my point of view, this is interesting because it establishes the capability for placing an interface engine between the document source and the document repository in order to do the kinds of things that interface engines do – patch the data to deal with differences between the source and destination context.

Of course, people are working hard to ensure that this isn’t actually needed, but for now I’m confident that these kinds of requirements aren’t going away anytime soon. Of course, digital signature requirements stand in the way of this, and it’s going to be interesting as the goal for integrity and assurance in the documents runs into the very real world obstacles that stand in the way of integrity and assurance (and these aren’t the technical ones that the people who push the security line are thinking of).

Anyhow, the connectathon has been an interesting process, and I’ve met a bunch of new and interesting people, which has been great.

Unfortunately, I haven’t passed the tests, and haven’t gained certification. It’s not that my ATNA/XDR implementations aren’t up to scratch – they are, and we’ve been passing content around insecurely just fine. The problem is that you have to do this securely, and my SSL / certificate hacking skills haven’t been up to the task of making the IHE test certificates work. It’s not at all helped by the obscurity and arcane behavior of the toolkits I have available – one that won’t load the private keys with the error “wrong format or password”, and one that reports “-1” whenever anything goes wrong. Even when I can step through them, I don’t learn anything.

If I’m going to pass a connectathon, my knowledge and tools need to improve by an order of magnitude. It troubles me somewhat that the skills I need to pass the connectathon testing are greater than I’d need in the real world (get a proper cert from a proper CA, and use it). The connectathon requires multiple certificates from multiple custom CAs… it’s been too much for me. Of course, the problems I have might be in my SSL/TLS tooling (particularly around the network binding) – I don’t know enough to get CA verification working with it, and I don’t know why it’s not working.

I don’t know whether this IHE checking is good or not – evidently some secure communication is needed, but spending half the connectathon fighting with the libraries to get them to use the IHE certificates (I’m not the only one)… I don’t think this is a good use of connectathon time. But I don’t know what else IHE can do.

Anyway, my XDR and ATNA works, both ends. I’ll retreat and lick my wounds and work on my security skills. And for those people who think we should add security requirements for FHIR Connectathons… I don’t think that’s a good idea.