Monthly Archives: May 2014

The PCEHR Review, and the “Minimum Composite of Records” #2

This post is a follow up to a previous post about the PCEHR review, where I promised to talk about medications coding. The PCEHR review quotes Deloittes on this:

The existing Australian Medications Terminologies (AMT) should be expanded to include a set of over the counter (OTC) medicines and the Systematised Nomenclature of Medicine for Australia (SNOMED-CT-AU) should become universal to promote the use of a nationally consistent language when recording and exchanging health information.

Then it says:

Currently there are multiple sources of medication lists available to the PCEHR with varying levels of clinical utility and functionality. From some sources there is an image of the current medication list, from some sources the current medication list is available as text, from some sources the information is coded and if the functionality existed would allow for import and export into and out of clinical systems as well as transmission by secure messaging from health care provider to health care provider.

Note: I’m really not sure what “there is an image of the current medication list” means. As a separate choice to “list is available as text”, I guess that implies it’s actually made available as a bitmap (or equivalent). I’ve never seen anything like that, so I have no idea what they mean.

And further:

The NPDR should be expanded to include a set of over the counter (OTC) medicines to improve its utility.

Over the counter medication is essential to detect such issues as poor compliance with Asthma treatment, to show up significant potential side effects with prescription only medicines and to allow for monitoring and support for drug dependent persons. The two main data sources of data are complementary and neither can do the job of the other. The curated current medications list together with adverse events, could be sourced from the GP, Specialist, Hospital or Aged Care Facility clinical information system, the discharge summary public or private is immediately clinically useful and will save time for the clinician on the receiving end.

It is imperative that further work be done on software systems to make the process of import and export and medication curation as seamless as possible to fit in to and streamline current workflow.

My summary:

  • Extend AMT and NPDR to include over-the-counter medicines
  • Work with all the providers to get consistently encoded medicines and medication lists so the medications can be aggregated, either in the PCEHR or in individual systems

Over-the-counter medicines

Well, I guess this means pharmacist-only medicines (such as Ventolin, which they mention) and doesn’t include supermarket-type products like aspirin or hay-fever medications. I don’t know how realistic this is from a pharmacist workflow perspective (e.g. getting consent, signing the NPDR submission), but let’s assume, for the sake of argument, that it is. That will mean that each OTC product they sell will need to be coded in AMT (once AMT is extended to cover them). I don’t know how realistic this is either – can it be built into the supply chain? Let’s assume that it can be, so that pharmacists can make this work, and that we’ll then be able to add this data to the NPDR.

However, there’s a problem – this recommendation appears to assume that the NPDR is already based on AMT. I’ve got a feeling that it’s not. Unfortunately, good public information is not really available. By repute, AMT adoption isn’t going well.

Is that true? What would really be helpful in resolving this question would be to fill this table out:

Coding System   | SHS/ES | eRef/SL | DS | NPDR
----------------|--------|---------|----|-----
AMT v2          |        |         |    |
ICPC 2+         |        |         |    |
MIMS            |        |         |    |
PBS             |        |         |    |
Vendor Specific |        |         |    |
Snomed-CT       |        |         |    |
First Data Bank |        |         |    |
Text only       |        |         |    |

Where each cell contains 3 numbers:

  • Number of systems certified for PCEHR connection
  • Number of documents uploaded that contain codes as specified
  • Number of medications in PCEHR coded accordingly
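Each cell is, in principle, just a tally. As a toy sketch of the counting (assuming a hypothetical flat list of document records – which is exactly what the PCEHR doesn’t hand you, and why this is real work):

```python
from collections import defaultdict

# Hypothetical document records; extracting these from the PCEHR is the hard part.
docs = [
    {"type": "DS", "coding": "AMT v2", "system_id": "sysA", "med_count": 4},
    {"type": "DS", "coding": "Text only", "system_id": "sysB", "med_count": 2},
    {"type": "SHS/ES", "coding": "AMT v2", "system_id": "sysA", "med_count": 6},
]

def tally(docs):
    """Accumulate the three numbers per (coding system, document type) cell."""
    cells = defaultdict(lambda: {"systems": set(), "documents": 0, "medications": 0})
    for d in docs:
        cell = cells[(d["coding"], d["type"])]
        cell["systems"].add(d["system_id"])   # proxy for "systems connected"
        cell["documents"] += 1                # documents containing such codes
        cell["medications"] += d["med_count"] # medications coded accordingly
    return cells

cells = tally(docs)
```

The field names and the “systems observed” proxy for “systems certified” are my inventions for illustration.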

Some notes about this table:

  • Even just gathering data from the PCEHR to fill this table out would be a significant amount of work – it’s not just a matter of running a few SQL queries. And even then, it wouldn’t cover non-PCEHR exchange – if it did, it would be even more interesting
  • Although a system may encode medications in one of these coding systems, some medications might not be coded at all since they don’t appear on the list of codes. And some systems encode immunizations differently from medications and some do it the same (ICPC 2+ is used for immunizations by a few systems)
  • Although most medications use codes in production systems, for a variety of reasons many medications in the PCEHR are not coded – they’re just plain text (in fact, other than the NPDR, I think plain text would be the biggest number). I know of several reasons for this:
    • There was no financial penalty for not coding medications at all
    • The PCEHR system returns warnings and/or errors if the medications are not coded the way it supports
    • The PCEHR system is extremely limited in what it does support
    • There’s no systematic way to find out what it does support
    • Trouble shooting failed documents is extremely hard for a variety of reasons
    • There’s a lack of clarity around safety and where liability falls when aggregating
    • note: I don’t claim that this list is complete nor have I assigned priorities here

If we saw the counts for this table, we’d have a pretty good feel for where medication coding is in regard to the PCEHR. And I’m pretty sure that it would show that we have a long way to go before we can get consistent encoding.

Consistently Encoding Medications

Getting a consistently encoded medications list is far more difficult than merely getting alignment around which technical coding system to use. Beyond that, different systems take different approaches to tracking medications as they change. For instance, the standard Discharge Summary specification says that there are two medication lists:

  • Current Medications On discharge: Medications that the subject of care will continue or commence on discharge
  • Ceased Medications: Medications that the subject of care was taking at the start of the healthcare encounter (e.g. on admission), that have been stopped during the encounter or on discharge, and that are not expected to be recommenced

There’s a hole between those two definitions: medications that the patient took during the admission that have residual effect. But many discharge summary systems can’t produce these two lists. Some have 3 lists:

  • Medications at admission
  • Medications during inpatient stay
  • Medications at discharge
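The mapping from the three-list model back onto the two lists the specification wants can be sketched as set arithmetic (a deliberate simplification: it assumes medications are comparable by code, which is itself part of the problem, and “not expected to be recommenced” really needs clinical judgment, not subtraction):

```python
def to_discharge_summary(admission, inpatient, discharge):
    """Derive the two spec-defined lists from a three-list system.

    Each argument is a set of medication codes (an assumption - many
    systems only have text).
    """
    current_on_discharge = discharge
    # Taken at the start of the encounter, stopped, and (naively) assumed
    # not to be recommenced:
    ceased = admission - discharge
    # Inpatient-only medications fall into the gap between the two definitions:
    unreported = inpatient - admission - discharge
    return current_on_discharge, ceased, unreported

current, ceased, unreported = to_discharge_summary(
    admission={"ramipril", "aspirin"},
    inpatient={"ramipril", "aspirin", "heparin"},
    discharge={"ramipril", "warfarin"},
)
# "heparin" (inpatient only) appears in neither spec-defined list
```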

Others only track medications prescribed by the hospital, and not those already taken by the patient – or, if they do track those, they do so as text. Some systems don’t know which inpatient medications will continue after the admission. Even if the system can track these medications in fully coded detail, there’s no guarantee that the clinical staff will actually code them up if they didn’t prescribe them. Some institutions have medication management systems that track every administration, while others don’t.

Finally, there’s the question of the degree to which GPs and specialists record medications that are not relevant to the problem(s) they are interested in (particularly specialists). Humans (or clinicians – I’m not convinced they’re the same things 😉 ) are good at dealing with degeneracy in these lists, and it saves them a lot of time (mentioning a medication generically, with no relevant details). Computers are not good at dealing with degeneracy, so in the future, clinical staff will all have to be unerringly precise in their records.

Note that I’ve moved into pure clinical practice at the end there; in order to meet the PCEHR goal, we need to align:

  • Medications coding
  • Medication tracking system functionality
  • Clinical practice

And while we’re at it, we need to jettison the existing data, which will become legacy and no longer useful to all the existing systems.

I’m not holding my breath. The PCEHR review says:

The PCEHR Value Model suggests that of the total gross annual theoretical benefit potential of eHealth in Australia, medication management is the greatest individual driver of benefits ($3.2 billion or 39% of gross benefits).

I’m not convinced that will be a net profit 🙁

p.s. does anyone know where the PCEHR Value Model is published?

 

South American FHIR Connectathon

There will be a FHIR Connectathon for South American participants on Sept 3rd in Buenos Aires. The connectathon will be held in association with CAIS 2014 (Argentinian congress on Health Informatics).

The connectathon will be a fairly introductory level one (partly tutorial):
  • Level 0: explore patients and get familiar with what it’s about, using Fiddler or a similar tool
  • Level 1: ask for the conformance statement, then explore patients and send lab results
  • Level 2: the same, plus documents (get/save)
This is the event brochure and registration page (in Spanish): http://www.cais.org.ar/?q=node/20

The event is being organized by Diego Kaminker, who makes a tremendous contribution to the global HL7 community through organising a wide variety of educational activities.

Double Layer OAuth

At the last connectathon, we had two servers offering OAuth based logins. They took two totally different approaches:

  • My server used OAuth to delegate authentication to public/commercial identity manager services such as Google or Facebook (and HL7 too). Once authorised, users had open access to the FHIR API
  • Josh Mandel’s server used OAuth to let the user choose what kind of operations the user could do on the FHIR API, but users were managed by the server

The way my server works is to ask the OAuth provider (Google or Facebook) for access to the user’s identity. In order to give me that permission, the provider has to identify the user (and I have to trust them to do that correctly). The problem with implementing it like this is that while I identify the user, I’m not offering the user any choice as to what access their user agent (e.g. web site or mobile application) will have to the resources my server protects – which is their medical record (well, their prototype medical record anyway).

Josh’s server does that – it asks you what parts of your medical record you want to make visible to your user-agent, and whether you want to let it change any parts. So that’s much better (aside: there are a number of reasons why you might want to limit access, but there are also some real risks to doing so – these are well-known problems that I’m not taking up here). The problem with the approach Josh’s server takes is that you cannot delegate identity management, and managing user identity is hard and expensive.

So, can you have both? Well, it turns out that you can, but it’s pretty hard to visualise how that works. Josh explains how the process works here:

Josh made that video specifically to explain to me how the process works (thanks). So I’ve gone ahead and implemented this based on Josh’s explanation. You can check out how it works here: https://fhir.healthintersections.com.au/closed. In this example, the web server acts as a client to its own underlying RESTful API. Initially, you have to login:

oauth_login

Once you choose to login using OAuth, you get this page:

oauth_id_page

 

You can identify yourself using 3 different external providers, or you can login using a username/password (contact me directly to get an account). Or, if you know me via Skype you can authenticate to me out-of-band using the token. Once you’ve identified yourself, then you get asked what permissions to give your user-agent:

oauth-choice

For now, this is just a placeholder while we decide (argue) about what kind of permissions would be appropriate. The only right that is currently meaningful is whether to share your user details (internal id – which might be a Facebook or Google internal id – user name, and email address). If you share them, the user agent can use them; if you don’t, the user agent won’t be able to recognise you as the same user again.
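Conceptually, the result of the two layers is a single grant that carries both the delegated identity and the locally approved rights. A toy sketch of that end state (all names are invented for illustration – this is not the server’s actual API):

```python
# Illustrative sketch of double-layer OAuth state, not a real implementation.

def complete_login(upstream_identity, approved_scopes):
    """Combine the identity from the upstream OAuth leg (Google/Facebook/etc.)
    with the permissions the user approved on the local consent page."""
    return {
        "user": upstream_identity["email"],       # from the identity provider leg
        "issuer": upstream_identity["provider"],
        "scopes": set(approved_scopes),           # from the local consent leg
    }

def allowed(grant, operation):
    """Check a requested operation against the locally granted scopes."""
    return operation in grant["scopes"]

grant = complete_login({"email": "a@example.org", "provider": "google"},
                       ["read", "share-user-details"])
```

The point of the sketch is just that identity and authorisation arrive from two different places and meet in one token.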

I wrote this to demonstrate the process, since it’s not easy to visualise. Developers who want to use this can consult my API documentation (it’s as close to Google as I can get; there’s no JWT token, but I will be implementing IHE IUA later). For developers who want to try this out locally, there’s a predefined client you can use:

client_id = LocalTest
client_secret = SecretForLocalTest
name = Local Test Client
redirect = http://localhost:8000/app/index.html

Contact me if the redirect doesn’t suit, and I can add it to the list, or set up a different client for you.

Btw, in case anyone wants to see the source – this is all open source.


#FHIR – looking for translators

One of the parts of the FHIR specification is a translations file. This is an XML file that includes a whole series of user-level messages that implementers may find useful, along with translations to other languages. A typical entry looks like this:

 <item id="MSG_UNHANDLED_NODE_TYPE">
   <translation lang="en">Unhandled xml node type "%s"</translation>
   <translation lang="it">Tipo di nodo Xml non gestito "%s"</translation>
   <translation lang="nl">Kan xml nodetype "%s" niet verwerken</translation>
 </item>

The idea is that any implementation that wants to report this kind of error looks up the message by its identifier – “MSG_UNHANDLED_NODE_TYPE” – and picks the most appropriate message for the provided language code (typically, for a server, this would be taken from the browser’s preferred language code).
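That lookup is simple to sketch with a standard XML parser. In this sketch the `<translations>` wrapper element name and the fall-back-to-English rule are my assumptions, not something the file itself mandates:

```python
import xml.etree.ElementTree as ET

# An inline fragment in the shape of the translations file (wrapper name assumed).
TRANSLATIONS = """<translations>
 <item id="MSG_UNHANDLED_NODE_TYPE">
   <translation lang="en">Unhandled xml node type "%s"</translation>
   <translation lang="nl">Kan xml nodetype "%s" niet verwerken</translation>
 </item>
</translations>"""

def message(root, msg_id, lang):
    """Pick the translation for msg_id in lang, falling back to English."""
    item = root.find(f".//item[@id='{msg_id}']")
    by_lang = {t.get("lang"): t.text for t in item.findall("translation")}
    return by_lang.get(lang, by_lang["en"])

root = ET.fromstring(TRANSLATIONS)
print(message(root, "MSG_UNHANDLED_NODE_TYPE", "nl") % "cdata")
# prints: Kan xml nodetype "cdata" niet verwerken
```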

As you can see in the example above, we have English, Dutch and Italian translations for the entries, though the Italian ones are a little out of date. We’d love to get more translations – we’re looking for volunteers. If you’re interested in volunteering, the process is simple – download the latest source file, go through it adding the translations to the language of your choice, and send it to me.

Notes:

  • If you’re interested in taking on the task of maintaining a language on an ongoing basis, we can set you up with svn so that you can maintain it directly
  • If you’re an implementer and you want additional messages added to the file, we can do that too
  • Thanks to Alexander Henket for tidying up the translations file, updating the Dutch translations, and prompting me to do this post

Question: v2.x delimiter escape sequences

Question:

Is there a good, public algorithm for HL7 v2.x delimiter escape sequences?

Answer:

Well, yes, and no. The various open source libraries for v2 all include escape sequence handling – I wrote some of them myself. The syntactical escape sequences are easy – simply replace the delimiter characters in the data element content with the corresponding escape sequence – so that’s trivial:

StringBuilder b = new StringBuilder();
for char in content {
  switch (char) {
    case FieldDel : b.append(EscapeChar + 'F' + EscapeChar); break; // |
    case ComponentDel : b.append(EscapeChar + 'S' + EscapeChar); break; // ^
    case SubComponentDel : b.append(EscapeChar + 'T' + EscapeChar); break; // &
    case RepetitionDel : b.append(EscapeChar + 'R' + EscapeChar); break; // ~
    case EscapeChar : b.append(EscapeChar + 'E' + EscapeChar); break; // \
    case TruncationChar : b.append(EscapeChar + 'P' + EscapeChar); break; // # (v2.7+)
    default: if (char in [#10, #13, #9] || char.toInteger >= 128)
        b.append(EscapeChar + 'X' + char.toInteger.toHex + EscapeChar);
      else
        b.append(char);
  }
}
return b.toString();

Note: that pseudo code is a mish-mash of Java, C#, and Pascal. Note that the values of the delimiter characters are configurable, so they’re constants, not hard-coded, though many implementation guides fix them to the default values shown.
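For something actually runnable, here is the same algorithm in Python (a sketch, with the delimiters defaulted to the usual values, and the \Xdd\ hex escape padded to an even number of digits):

```python
def hl7_escape(content, field='|', component='^', subcomponent='&',
               repetition='~', escape='\\', truncation='#'):
    """Apply HL7 v2 syntactical escapes to a field/component value."""
    seq = {field: 'F', component: 'S', subcomponent: 'T',
           repetition: 'R', escape: 'E', truncation: 'P'}  # truncation: v2.7+
    out = []
    for ch in content:
        if ch in seq:
            out.append(escape + seq[ch] + escape)
        elif ch in '\r\n\t' or ord(ch) >= 128:
            hx = format(ord(ch), 'X')
            # pad to an even number of hex digits
            out.append(escape + 'X' + ('0' + hx if len(hx) % 2 else hx) + escape)
        else:
            out.append(ch)
    return ''.join(out)

print(hl7_escape('A|B^C'))  # prints: A\F\B\S\C
```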

However, there’s a problem: there are different kinds of escape sequences:

  • syntactical
  • highlighting
  • character set escapes
  • binary escapes

These have different implications depending on your architecture, so it’s not really possible to have “an algorithm for escape sequences”. Instead, you have to decide what you need to do about each of these.

The PCEHR Review, and the “Minimum Composite of Records”

So the PCEHR review has finally been released, and I’ve been reading with considerable interest. I’m going to stick to analysing the technical recommendations that they make, starting with a group of recommendations they call the “Minimum Composite of Records”:

19. Expand the existing Australian Medications Terminologies (AMT) data set to include a set of over the counter (OTC) medicines.
20. Widen the existing National Prescribing and Dispensing Repository (NPDR) to include the expanded set of over the counter (OTC) medicines.
21. Implement a minimum composite of records to allow transition to an opt-out model by a target date of 1st January 2015 inline with recommendation 13. This will dramatically improve the value proposition for clinicians to regularly turn to the MyHR, which must initially include:

  • Demographics
  • Current Medications and Adverse Events
  • Discharge summaries
  • Clinical Measurements

The section that explains these starts with the following paragraph:

A common theme in the consultation process was the need for a minimum data set to make up a viable clinical record. Many of the submissions also pointed out that it was imperative for the data
standards to be widely and universally adopted to allow the MyHR to function. The more clinically relevant material that was present within the MyHR the faster the rate of adoption and therefore the faster the return on investment will be

I’m really pleased to see the question of wide and universal adoption of standards mentioned – that’s what I would have said to the panel if I’d made my own submission. From these general comments in the introduction, the review seems to get rather distracted by medications coding issues, before suddenly coming back to the question of “minimum composite of records”. So, what does that mean?

  • Demographics – I cannot imagine what this means beyond what is already in place. The documents include demographics – my consistent feedback from clinicians is that they contain too much demographic data – and I couldn’t figure out from the text what the review thought this meant.
  • Current Medications and Adverse Events – well, that’s consistently been a focus of what we’ve already done, but the section indicates that this is about the medications coding. So more on that in the next post
  • Discharge summaries – again, this is something that has already been prioritised, but the section points out that this doesn’t apply to private hospitals. And, in fact, private hospitals aren’t really a good fit for the current discharge summary because of the way their business works, so the basic discharge specification may need to be reviewed to make it a better fit for that use
  • Clinical Measurements – the section says “capture vital signs to prevent avoidable hospitalisation and demonstrate meaningful use of PCEHR.” – uh? How will capturing vital signs – data of relevance today during an admission – “prevent avoidable hospitalisation”? That was a submission from the Aged Care industry, so perhaps they’re saying: if the PCEHR contained a record of the patient’s baseline vital signs, then we can know whether they’re actually significantly impaired if they have an emergency admission outside their normal facility? It seems like a super narrow use to me

They say: “All other functionality … should be deprioritised while these data sets are configured” – but what other functionality is that? It’s not obvious to me.

So, the Minimum composite of records actually means:

  • Improve medications coding
  • Cater for discharge summaries from private hospitals
  • Add support for vital signs
  • Continue to focus on implementation activities in support of these things

Have I read that right? Comments welcome…

I’ll take up the question of medications coding (recommendations #19 and #20) in the next post.

Question: use of HL7 v2 for specialist Letter

Question:

When sending a Specialist Letter HL7 V2 REF_I12 back to a GP, should the Referring Provider in the PRD segment point to the GP (the originator of the Referral) or the Specialist (the originator of the Specialist Letter), please?

Would the Originating Referral Identifier RF1-6 for the Specialist Letter be the identifier of the original referral (I believe it would be this) or that of the specialist letter, please? In the specialist letter HL7, would OBR-16 (ordering provider) refer to the GP?

Answer:

Well, I’m going to take this as an Australian question, because that’s the only context in which I’ve heard of a “Specialist Letter”.

The roles in the PRD segment are carried in a repeating field. In the Australian context, we have the role IR (Intended Recipient), which should be used to indicate the intended recipient of the message, and there should only be one of these in the message. This removes ambiguity about where the message is destined. The other roles reflect the roles in the context of the scenario and are not related to the messaging, so the “Referring Provider in the PRD segment points to the GP (the originator of the Referral)”.

The definition for RF1-6:

The first component is a string of up to 15 characters that identifies an individual referral. It is assigned by the originating application, and it identifies a referral, and the subsequent referral transactions, uniquely among all such referrals from a particular processing application.

And for RF1-11:

The first component is a string of up to 15 characters that identifies an individual referral. It is typically assigned by the referred-to provider application responding to a referral originating from a referring provider application, and it identifies a referral, and the subsequent referral transactions, uniquely among all such referrals for a particular referred-to provider processing application. For example, when a primary care provider (referring provider) sends a referral to a specialist (referred-to provider), the specialist’s application system may accept the referral and assign it a new referral identifier which uniquely identifies that particular referral within the specialist’s application system. This new referral identifier would be placed in the external referral identifier field when the specialist responds to the primary care physician.

So the purpose of RF1 is to identify the referral (not the message or the documents within it, i.e. OBR-3).

GP -> Specialist referral:

RF1||||||123^GP Practice^1CA696CA-C91D-466E-BC11-B5C9B7B99ACA^GUID

Specialist letter -> GP:

RF1||||||123^GP Practice^1CA696CA-C91D-466E-BC11-B5C9B7B99ACA^GUID|||||AB354^Specialist Practice^1BC63E55-3FEC-4B2E-8A61-0DE1796C3410^GUID

RF1-6 is a required field, so a problem may arise when the specialist sends a reply to the GP and the originating referral identifier is not known – say, if the original referral was received not via REF^I12 but rather by paper/fax. In this case, all you could use would be a unique dummy value, such as a GUID, in RF1-6. Leaving it blank may not be acceptable to some receivers.
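Building the field might look like this (a sketch only: the component layout follows the examples above, the function name is mine, and the GUID fallback is just one way to satisfy receivers that reject an empty RF1-6):

```python
import uuid

def rf1_6(referral_id=None, org_name=None, org_guid=None):
    """Build an Originating Referral Identifier value.

    When the original referral arrived on paper/fax and no identifier is
    known, generate a dummy-but-unique GUID rather than leave it blank.
    """
    if referral_id is None:
        return f"{uuid.uuid4()}^^^GUID"  # dummy but unique
    return f"{referral_id}^{org_name}^{org_guid}^GUID"

print(rf1_6("123", "GP Practice", "1CA696CA-C91D-466E-BC11-B5C9B7B99ACA"))
# prints: 123^GP Practice^1CA696CA-C91D-466E-BC11-B5C9B7B99ACA^GUID
```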

And so, yes, in the specialist letter HL7, OBR-16 (ordering provider), would refer to the GP.

Note: Thanks to Jared Davison from Medical Objects for this answer.

Sharing Healthcare Data Between Primary and Secondary Usage

One of the difficult problems associated with healthcare information is sharing data between primary and secondary users. In fact, it’s come up in quite a few places recently, and seems to be generating more noise than light. The problem is that these two user bases have such radically different views of how the data should be understood.

Secondary Use

The secondary users of data fundamentally live in a statistics-orientated world view. In fact, to be clear, that’s how I define what a secondary user is – someone who wishes to do analysis (usually statistical) on the data.

Their fundamental desire is to get data in spreadsheets or databases, in the form of a square – that is, rows and columns, because that’s the form that’s most amenable to statistical analysis. Typically, secondary users seek to control the reality that they interact with, to simplify the data by making a set of assumptions true. This is particularly true in research, for example, where the objective is to keep everything well controlled except the variables you are interested in. This is what I used to do:

  1. Design an experiment (or a trial protocol, just an experiment on a grand scale)
  2. Determine what data items capture the outcomes of the experiment
  3. Compare the possible result of analysis on these to your goal
  4. Repeat until they align

Secondary users who use the data for financial efficiency, quality and safety reporting don’t have quite the same ability to control their reality, but they do still make choices about the degree to which they capture it.

The other feature of secondary use is that quality is a feature of the aggregate data, not individual points of data. As a user of secondary data, you worry about systematic error more than you worry about random error.

Primary Use

Primary users don’t have these kind of choices – their record keeping system has to be robust against the cases they have to deal with, and – particularly in healthcare – they just have to deal with what they encounter. No one has designed perfect record keeping systems for healthcare (and all attempts have ended up looking horrifyingly complicated), so primary users have to tolerate ambiguity and looseness in their record keeping.

Because of this, primary users are obsessed with context and traceability in their records, so that they can judge for themselves the reliability of a particular piece of data. For a primary user, that’s the determiner of quality, and it is judged at the individual data point level. Errors in aggregated data – such as systematic bias – simply don’t matter in the same way. As a consequence, operational systems are characterised by hierarchical data structures.

These two groups cannot – and should not – share the same set of data elements. Recently I’ve been party to some discussions where parties from each of these communities claim that the others are wrong to use their own data element definitions.

But I think that’s wrong: the different perspectives on data are valid and necessary. That doesn’t mean that data shouldn’t migrate from one community to another – just that it’s going to need transforming. And the transform is not just a tax – it’s an investment in the strengths of each approach.

Example

Let’s illustrate this with an example, using blood pressure. Classically, a blood pressure measurement includes 2 values, systolic and diastolic. In a clinical record, they’ll be written/recorded as something like 130/80. Clinical users can easily acquire and compare these values.

The first thing you do when you put these values into a spreadsheet is split them into 2 columns – Systolic and Diastolic – so that the statistics package can handle them as numbers. It’s simply assumed that the 2 numbers come from the same measurement, but it’s rarely stated anywhere (or, if it is, it’s generally stated in narrative prose, not some formal definition). Of course, in this simple example, that’s pretty safe because people will generally know that this is what is implied. But that’s not always true.
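The split itself is trivial; the point is that the pairing assumption lives only in the transform, not in the two output columns (a sketch):

```python
def split_bp(reading):
    """Turn a recorded '130/80' into (systolic, diastolic) columns.

    That both numbers come from the same measurement is an assumption
    encoded here - the two resulting columns no longer carry it.
    """
    systolic, diastolic = reading.split("/")
    return int(systolic), int(diastolic)

rows = [split_bp(r) for r in ["130/80", "142/95"]]
# rows == [(130, 80), (142, 95)]
```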

In clinical practice, however, a blood pressure is not just a systolic/diastolic pair – you also need information about how the blood pressure was taken; in particular, what was the patient like? Maybe they were lying down, or extremely agitated? For little kids, you might have to do it on the leg, or even get a bad measurement, and just have to go with what you can get.

This presents a recording challenge for operational systems – there’s a myriad of ways to capture this kind of uncertainty, and it’s not really clear how much of that quality information needs to be computable (one of the more comprehensive models is openEHR’s Blood Pressure Archetype, which has 18 elements not counting the stuff in the reference model).

Secondary users mostly aren’t interested in this stuff, particularly if they do research, where they can simply write a study protocol that eliminates all these things from their scope.

Migrating blood pressure measurements from a primary to a secondary use isn’t simply a matter of mapping from one model to another, but of adapting information from one set of perspectives and intellectual requirements to another.

Value the Transform

So, the transform is important. In tool form, it’s an ETL (Extract, Transform, Load), and it’s an explicit representation of the assumptions behind the secondary data.

Outside healthcare, this is hardly controversial; it’s the first step of OLAP: consolidation of the data.

Tom Beale reviewed this post for me (thanks), and pointed out that there’s an unfinished task here:

My suspicion is that the transform is going to be the interesting question in more computational clinical data situations – at the moment, everyone writes ETLs ad hoc. But we shouldn’t…

..and I agree, but you have to walk before you can run, and we can’t even crawl yet.

 

p.s. I’ll be making a follow up post to this, describing some things we propose to do in FHIR regarding this.

 

Summary of #FHIR progress at the HL7 WGM in Phoenix

Well, another HL7 Working Group Meeting (WGM) has come to an end. It was a big week for FHIR, so here’s a summary of the more significant outcomes:

  • The “FHIR” track was the single biggest track (at least, that’s what was reported). Though I did wonder whether that simply reflects that we’re better at keeping track – next meeting the way that works will change slightly. But certainly this meeting had the broadest participation in the development of FHIR yet
  • This reflects the solid interest throughout HL7 in leveraging the FHIR specification
  • The connectathon held at the start of the meeting was the biggest yet. I was thrilled to see multiple applications effectively sharing the same questionnaire resources. We got a set of really useful feedback
  • We published an update to the FHIR DSTU – the only real effective change was to add an extra note about security issues when displaying HTML
  • We started planning for the next DSTU version of FHIR. This is planned to be released in March 2015. Here’s a list of new features that it is planned to have:
    • Managing Appointment scheduling
    • Referrals
    • DICOM Key Image Annotations
    • Device Alarm management
    • Push-based Subscriptions
    • Support for exchange between primary and secondary users of data (more about this in the next blog post)
    • A user/privileges subsystem for systems that want to use it
    • More security guidance around deployment architectures
    • Better support for services
    • Support for publishing implementation guides
    • Much more clinical accessibility
  • In addition, there will be many fixes, and the tooling support will be rounded out (that’s an ongoing process)
  • We’re going to hold a couple of significant outreach activities in the next few months:
    • Joint review of some clinical resources with the openEHR community
    • An open source virtual connectathon
  • In addition, we’re going to hold a “Clinical Connectathon” at the next HL7 meeting (we’re still working out exactly what that means, and this will be a prototype to find out whether the idea works)
  • Finally, at the next HL7 WGM in Chicago, we’ll be holding another connectathon. There’ll be 3 tracks:
    • Patient (for new participants)
    • Conformance (Conformance, Profile, and Valueset)
    • Experimental (based on the dev version of the spec, not the existing DSTU)
  • We’re still considering options for the theme of the experimental track