Monthly Archives: October 2017

#FHIR and R (Stats / graphing language)

I’ve spent the last 2 days at the 2017 Australian R Unconference working on an R client for FHIR.

For my FHIR readers: R is a language and environment for statistical computing and graphics (and I've spent the last couple of days explaining what FHIR is to the R people, so the introductions go both ways). My goal for the 2 days was to implement a FHIR client in R, so that anyone wishing to perform statistical analysis of FHIR data could quickly and easily do so in R. I was invited to the R Unconference by Prof Rob Hyndman (a family friend), as it would be the best environment to work on that.

My work was made a lot easier when Sander Laverman from Furore released an R package earlier this week that does just what I had intended to do. We started by learning the package and testing it. Here's a graph generated by R using data from test.fhir.org:

Once we had the R package (and the server) working, I added a few additional methods to it (the other read methods). For sample code and additional graphs, see my rOpenSci write-up.
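
To give a flavour of what this looks like without depending on the package's API (which is documented in the write-up), here's a minimal sketch using plain httr + jsonlite to fetch a page of Patient resources from the test server; the endpoint path is an assumption:

```r
# Minimal sketch (not the Furore package's API): fetch a page of Patient
# resources from a public FHIR test server and look at the returned Bundle.
library(httr)
library(jsonlite)

base <- "http://test.fhir.org/r3"   # assumed STU3 endpoint
resp <- GET(paste0(base, "/Patient?_count=20"),
            accept("application/fhir+json"))
stop_for_status(resp)

bundle <- fromJSON(content(resp, as = "text", encoding = "UTF-8"),
                   simplifyVector = FALSE)
cat("Total matches:", bundle$total,
    "- entries in this page:", length(bundle$entry), "\n")
```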

I think it's important to make FHIR data easily available in R, because it opens up a connection between two different communities – that's good for both. Many of the participants at the Unconference were involved in health, and aware of how hard it is to make data available for analysis.

Restructuring the data

FHIR data is nested, hierarchical, and focused on operational use. I've written about the difference between operational and analytical use before. Once we had the data being imported from a FHIR server into a set of R data frames, Rob and I looked at it and agreed that the most important area to focus on was tools to help reshape the data into a usable form. The catch is that this isn't a single problem – the 'usable form' depends entirely on the question being asked of the data.
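
As an illustration of the kind of reshaping involved, here's a hedged sketch (the field paths assume Observations with a simple coded value and valueQuantity; real data needs more defensive access) that reduces a Bundle of Observation resources to the flat one-row-per-observation frame most analyses actually want:

```r
# Sketch: flatten a Bundle of Observation resources (parsed as nested lists,
# as in the fetch sketch above) into one row per observation.
pick <- function(x) if (is.null(x)) NA else x   # NULL-safe field access

flatten_observations <- function(bundle) {
  rows <- lapply(bundle$entry, function(e) {
    r <- e$resource
    data.frame(
      id    = pick(r$id),
      code  = pick(r$code$coding[[1]]$code),   # assumes a coded Observation
      value = pick(r$valueQuantity$value),     # assumes a quantity value
      unit  = pick(r$valueQuantity$unit),
      time  = pick(r$effectiveDateTime),
      stringsAsFactors = FALSE
    )
  })
  do.call(rbind, rows)
}

# obs <- flatten_observations(bundle)   # then plot/model as usual
```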

So I spent most of my time at the Unconference extending my GraphQL implementation to allow reshaping of the data (in addition to its existing function of assembling and filtering the data). I defined 4 new directives: @flatten, @first, @singleton, and @slice.

I've documented the details of this over on the rOpenSci write-up, along with examples of how they work. They don't solve all data restructuring problems, not by a long shot, but they do represent a very efficient and reusable way to shift the restructuring work to the server.
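
To give a flavour here as well, here's a hedged sketch of two of the directives in action (the authoritative examples are in the write-up): @flatten promotes a field's content into its parent, and @first keeps only the first repetition, while @singleton and @slice similarly collapse lists into scalars or distinctly named fields:

```graphql
{
  Patient(id: "example") {
    birthDate
    name @first @flatten {   # take the first name, merge its fields up a level
      family
      given @first           # just the first given name
    }
  }
}
```

The result is one flat record per patient instead of nested arrays, which is exactly the shape a data frame wants.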

There was some interest at the Unconference in taking my GraphQL code and building it into an R package to allow GraphQL queries over graphs of data frames in R, to assemble, filter, and restructure them – it's an idea that's useful any time you want to do analysis of a graph of data. But we were all too busy for that – perhaps another time.

Where to now?

I think we should add R support to the AMIA FHIR datathon series, and maybe it’s time to encourage the research track at the main FHIR connectathons to try using R – I think it’s a very powerful tool to add to our FHIR toolkits.

Thanks to Adam Gruer from Melbourne's Royal Children's Hospital for helping – those graphs are his. Thanks also to the organisers – particularly Nick Tierney (gitmeister extraordinaire). I picked up some ideas from the Unconference that will improve the FHIR connectathons.

Question: HL7 v2 referrals in Australia

Question:

It seems there are lots of different ways to format a letter-type message in Australian health IT:

  1. Pretend it's a path report and use ORU with an appropriate LOINC code. This seems the most common.
  2. Do it “properly”: use REF.
  3. Use PIT, which I understand is still around.
  4. MyEHR CDA type.

And within each of those:

  1. use the FT plain-text format
  2. embed as an RTF document
  3. embed as a PDF

So that's 12 permutations at least, and I'm sure I'm missing some. Which options have the best chance of getting parsed/rendered correctly by the current crop of GP software (MDW, Best Practice, Zedmed, etc.)?

Answer:

Well, I think that right now there is no great answer to this question. I consulted Peter Young from Telstra Health, who is leading a vendor consortium working with the Australian Digital Health Agency (ADHA) on this particular issue. Peter says:

There is work underway, in collaboration between HL7 Australia, the ADHA and a number of industry players, to create a standard mechanism for sharing messages between clinical applications, capable of being transmitted via SMD, based on an HL7 REF I12. It leverages the recently published pathology messaging profile and is part of a broader program led by the ADHA to improve interoperability between applications. Initially there is a baseline specification that provides essential structured data with a clinical document that can be in any of RTF, PDF, HTML or text. The idea is that receiving applications will support any document format; if not natively, at least via a viewer. The work plan also includes extensions to add more structured data. Most of the major clinical and messaging vendors are involved in the work program.

The link to the simplified profile is https://confluence.hl7australia.com/display/OO/Appendix+8+Simplified+REF+profile

I think that the best bet right now is the REF I12 message, with PDF for the letter. But this is an area of active work.
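
For concreteness, here's a heavily simplified sketch of the shape of such a message (segment content is illustrative only, not conformant to any published profile; the letter travels Base64-encoded in an ED-typed OBX, and the elided fields are marked with ...):

```
MSH|^~\&|SendingApp|SendingOrg|ReceivingApp|ReceivingOrg|20171031120000+1000||REF^I12^REF_I12|00001|P|2.4
RF1|...referral status, priority and type...
PRD|RP|Referring^Provider^^^Dr|...
PRD|RT|ReferredTo^Specialist^^^Dr|...
PID|1||...patient identifiers and demographics...
OBR|1|||11488-4^Consult note^LN
OBX|1|ED|PDF^Display format in PDF^AUSPDI||^application^pdf^Base64^JVBERi0x...(truncated)||||||F
```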

Argonaut in Australia, and the MyHR

Project Argonaut is coming to Australia. That is, at least one major US EHR vendor is planning to make its SMART-on-FHIR EHR extension interface available in Australia during 2018 (there was discussion about this at the Cerner Health Conference, where I was last week). HL7 Australia will work with them (and any other vendor) to describe what the Argonaut interface looks like in Australia (short answer: not much different – some different patient extensions, a few terminology changes (not RxNorm), and maybe a couple of extensions on prescriptions for Reg24 & CTG). HL7 Australia is also planning to engage with Australian customers of the US EHR vendors, to help build a community that can leverage the capabilities of the SMART on FHIR interface.

This is a big deal for the Australian EHR vendors that compete with the US vendors – they had better start offering the same capabilities based on the same standards, or one of their key market advantages will be consigned to the dust of history. They'll also find themselves competing with established SMART on FHIR application vendors. So I look forward to good engagement with Australian companies as we flesh this out (I know at least one will be deeply involved).

This also offers us an opportunity to consider what to do with the MyHR. The government has spent a great deal of money on it, and the results have been disappointing. (Yes, the government publishes regular usage stats which show continuous increases, but these are not the important usage metrics, and they're not the kind of stats that were hoped for back when we were designing the system.) And it's hardly popular amongst its typical candidate users (see, for example, the AMA's comments, or for more colour, Jeremy Knibbs' comments, or even David More's blog).

But I'm not surprised at this. Back when it was the pcEHR, the intentions were solid, and though the original timeline was impossibly tight, it came together amazingly quickly (kudos to the implementers). But as it came together, I knew it was doomed. This is inevitable given its architecture:

Salient points about this architecture:

  • Providers push CDA documents to the central document repository
  • Patients can view documents about them
  • Patients can write their own documents, but providers won't see them
  • Patients can exert control only by 'hiding' documents – that is, they can only break the flow of information (reminder: the internet age treats censorship as damage, and routes around it)
  • Clinicians can find and read documents
  • Each document is its own little snapshot. There's no continuity between them, and no way to reconcile information between them
  • There are no notifications associated with the system

You can't build any process on that system. You can't even build any reliable analysis on it (stakeholders worried about the government using it for secondary data analysis shouldn't, in general, worry: it's too hard to get good data out of most of the CDA documents). These limitations are baked into the design. That's why I went and developed FHIR – so that when the full reality of the system became evident, we'd have a better option than a document repository.

Well, 10 years later, we're still trying to load ever more use onto the same broken design, and the government sponsors are still wondering why it's not 'working'. (At least we stopped calling it the 'personally controlled' EHR, since it's the government-controlled EHR.) And as long as it exists and is the focus of all government efforts to make digital health happen, it will continue to hold up innovation in this country – a fact which is terribly evident as I travel and see what's happening elsewhere.

But it doesn’t have to be like this.

The MyHR is built on a bunch of useful infrastructure. There are good ideas in it, and it can do good things. It's just that everything is locked up in a single broken solution. But we can break it open and reuse the infrastructure. The easiest way I can see to do this is to flip the push around. That is, instead of the source information providers pushing CDA documents to a single repository, we should get them to put up an Argonaut interface that provides a read/write API to the patient's data. Then you change the MyHR so that it takes that information and generates CDA documents to go into the MyHR – so no change is needed to the core MyHR.
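
A hedged sketch of the 'flipped' flow, from the MyHR side (the endpoint, patient id, and token are all hypothetical; in R, for continuity with the earlier post):

```r
# Sketch: instead of receiving pushed CDA documents, the MyHR side pulls
# from a provider's Argonaut-style FHIR endpoint (as authorised), then
# generates CDA documents to file into the existing repository.
library(httr)
library(jsonlite)

provider <- "https://fhir.example-practice.com.au/fhir"   # hypothetical
resp <- GET(paste0(provider, "/DocumentReference?patient=123&_count=10"),
            accept("application/fhir+json"),
            add_headers(Authorization = "Bearer <token-from-OAuth>"))
stop_for_status(resp)

docs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"),
                 simplifyVector = FALSE)
cat("Pulled", length(docs$entry), "document references;",
    "the next step would be CDA generation for the MyHR\n")
```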

What this does is open the system up to all sorts of innovation, the most important of which is that the patient can authorise their care providers to exchange information directly, and build working, clinically functional systems (e.g. GP/local hospital, or coordinated care planning), all without government bureaucrats having to decide in advance that they can't be liable for anything like that. That is, an actually personally controlled health record system, not a government-controlled one. And there's still a MyHR for all the purposes it does exist for.

This alternative looks like this:

The salient features of this architecture:

  • All healthcare providers make healthcare information services available using the common Argonaut-based interface (including write services)
  • Patients can control the flow at the source – or authorise flows globally through myGov (this needs new work on myGov)
  • Systems can read and write data between themselves without central control
  • The MyHR can pull data (as authorised) from the sources and populate the MyHR as it does now
  • Vendors and providers can leverage the same infrastructure to provide additional services (notifications, say)

The patient can exert control (either directly at the provider level, or through myGov as an OAuth provider) over the flow of information at the source – they can opt in or out of the MyHR as appropriate, but they can also share their information directly with other providers of healthcare services. Say, their phone's very personal health store. Or research projects (e.g. All of Us). Or, most importantly and usefully, their actual healthcare providers, who can, as authorised by the patient, set up bi-directional flows of information on top of which they can build better coordinated care processes.
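
Here's what that authorisation step might look like in SMART on FHIR terms, if myGov acted as the OAuth provider – a sketch that just assembles the standard authorisation request (every URL and identifier here is hypothetical; the scopes are standard SMART patient scopes):

```r
# Sketch: build a SMART on FHIR authorisation request, with myGov as a
# hypothetical OAuth provider. All URLs and ids are made up for illustration.
authorize <- "https://mygov.example/oauth2/authorize"           # hypothetical
params <- c(
  response_type = "code",
  client_id     = "my-care-app",                                # hypothetical
  redirect_uri  = "https://app.example.com/callback",           # hypothetical
  scope         = "openid profile launch/patient patient/*.read offline_access",
  state         = "af0ifjsldkj",
  aud           = "https://fhir.example-practice.com.au/fhir"   # the provider
)
url <- paste0(authorize, "?",
              paste(names(params),
                    vapply(params, URLencode, character(1), reserved = TRUE),
                    sep = "=", collapse = "&"))
cat(url, "\n")
```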

These features lead to some important and different outcomes:

  • Healthcare providers and/or system vendors can innovate to build distributed care models that provide a good balance between risk and reward for different populations (instead of the one-size-fits-all model that suits bureaucrats, which is what we have now)
  • Patients can control the system by enabling the care flows that they want
  • Clinicians can engage in better care processes and improve their processes and outcomes (though the US experience shows clearly that things get worse before they get better, and you have to plan for that)

This isn't a small change – but it's the smallest change I know of that preserves the MyHR and the associated investment, and gives us a healthcare system that can innovate and build better care models. But I don't know how we'll go about getting there, given that we're still focused on "make everyone use the MyHR".

Note: Adapted from my presentation yesterday at the HL7 Australia Meeting


Cultural Factors affecting Interoperability

One of the under-appreciated factors that affects how successful you'll be at 'interoperability' (for all the various things that might mean) is your underlying culture of working with other people – your and their expectations about whether and when you'll compromise with others in order to pursue a shared goal.

Culture varies from organization to organization, and from person to person. Even more, it varies from country to country. As I work with different countries, it's clear that in some of them it's harder to get people to sacrifice their short-term personal interests for shared long-term communal interests. There's plenty of research about this – mostly phrased in terms of micro-economics. And it very often comes down to trust (or the lack of it). Trust is a critical factor in success at compromise and collaboration. And it's pretty widely observed that the level of trust that people have in institutions of various kinds is falling at the moment (e.g. 1 2 3).

Plenty has been written about this subject, and I'm not going to add to it. Instead, I'm going to make a couple of related observations that I think are under-appreciated when it comes to interoperability:

The first is that smaller countries with a bigger, dominant country next door that could easily overpower them (my go-to examples: Estonia, Denmark, New Zealand) have populations that are much more motivated to collaborate and compromise with each other than countries that are economically (and/or politically) without peer in their geographic area.

And so you might think that these countries are better at interoperability than others…? Well, sort of:

Countries that have a cultural disadvantage with regard to interoperability are the ones that produce the great interoperability technologies and methodologies (they have to!), while countries that have a cultural advantage are much better at taking those technologies and methodologies and driving them home, so that the task actually gets completed.

If my theory is right, then when you look at what countries are doing, you should look for different lessons from them, depending upon their cultural situation with regard to interoperability.

p.s. If my theory is right, one really has to wonder how bad the cultural headwinds against interoperability are here in Australia…

p.p.s. I found very little about this on the web. References in comments would be great.