Category Archives: Interoperability

Sharing Information with a Patient Case Manager

This post (which is co-authored with Josh Mandel – thanks) describes a general pattern for a clinician to initiate sharing of a clinical record between an EHR (or, more generally, a clinical records system – it can be more than just an EHR) and some other clinical information service. That service might be one of the following:

  • Patient assistance organizations like Medic Alert
  • Patient networks such as PatientsLikeMe
  • Voluntary/professional Clinical case registries (e.g. Vermont Oxford network)
  • Social support services
  • Government and Private Preventative/Management Programs(?)

During the clinical workflow, the clinician, with consent from the patient, agrees to ‘refer’ the patient’s case to some kind of case manager, and to continue to share information about the patient with the case manager. At some stage, either the clinician or the patient may wish to stop further sharing.

Notes:

  • This is not the same as a formal clinical referral, though there are some similarities – that will be a separate post.
  • There’s a spread of balance between clinical and patient interest here – some of these are ‘what the clinician wants, with consent of the patient’, while others are ‘helpful suggestions from the clinician that the patient might want to run with’. The balance of these may influence which parts of the interaction pattern are more applicable.
  • There are other kinds of case manager registries where the patient doesn’t get consulted. Typically, this is for mandatory public health reporting; that is a different problem for which this particular pattern is not applicable, though parts of it would no doubt be useful.

Interaction pattern

Up till now, this kind of sharing has been very difficult to organize in practice, because it has needed vendor support one way or another, and vendors just don’t have time for the never-ending stream of requests for this – which means that each individual project cannot afford the development. But FHIR, SMART on FHIR, and CDS Hooks change that.

This post describes a general pattern for using the Argonaut profile and current EHR development to organize this sharing without needing any special arrangements. I’m publishing it because the overall pattern isn’t evident to many participants.

This pattern describes:

  1. How the case manager service can suggest to the clinician that they should initiate the referral
  2. How the case manager service can provide an application that initiates the sharing
  3. How the sharing can be maintained in an ongoing fashion

Pre-requisites

In order to support this pattern, the clinician must be using an EHR that supports the following:

  • Required: Registering and Running SMART on FHIR applications
  • Recommended: An internet facing provider portal (preferred) or patient portal
  • Optional: Registering and using CDS Hooks service for patient-view hooks

Setting up the pattern includes registering the case manager service with the EHR, and vice versa, and probably exchanging public keys. This step is not standardized by the FHIR/SMART on FHIR/CDS Hooks specifications, but it needs to be done.

Interaction

#1 Prompting the clinician to register the patient

The EHR notifies the case manager service that a patient is being viewed. The case manager service uses the local patient id, and patient information – if provided – to determine whether the patient is already registered, or could or should be registered, and, if it wants, can prompt the user to run the registration app.

This part of the pattern is optional: the clinician can always choose to run the registration app (step #2) without being prompted – this just makes the process easier.

Details:

  • EHR calls CDS Hooks Service. Information provided includes something like:
    • hook: “patient-view”
    • hookInstance: random UUID
    • patient: {local patient id}
    • context: none
    • prefetch: patient resource, list of patient’s medications, allergies etc (all optional, and case manager service specific)
  • CDS Hooks Service returns a card that says:
    • the patient is, is not, or might be registered with the case manager service (based on recognising the local id, or on matching the patient details, if provided)
    • if prefetch data was passed, the patient is not registered with the case manager service, and the patient meets the service’s criteria, a link to a SMART App to register the patient with the case manager service
    • the card includes a link to the case manager SMART app (see #2 below, and the sketch that follows this list)
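To make the shape of this exchange concrete, here’s a minimal sketch in Python. The endpoint URL, prefetch keys, and app link are illustrative assumptions, and the exact field layout depends on which CDS Hooks draft the EHR implements.

```python
# Minimal sketch (Python + requests) of the patient-view exchange described above.
# The service URL, prefetch content, and app link are illustrative placeholders.
import uuid
import requests

hook_request = {
    "hook": "patient-view",
    "hookInstance": str(uuid.uuid4()),              # random UUID per invocation
    "context": {"patientId": "example-local-id"},   # the EHR's local patient id
    "prefetch": {
        # optional, case-manager-service-specific prefetch data
        "patient": {"resourceType": "Patient", "id": "example-local-id"},
    },
}

# The EHR posts this to the case manager's CDS Hooks endpoint (illustrative URL)
response = requests.post(
    "https://case-manager.example.org/cds-services/patient-view",
    json=hook_request,
)

# One possible card in the response, suggesting the registration app (see #2)
example_cards = {
    "cards": [{
        "summary": "Patient does not appear to be registered with the case manager service",
        "indicator": "info",
        "source": {"label": "Example Case Manager Service"},
        "links": [{
            "label": "Register patient",
            "url": "https://case-manager.example.org/smart/register-app",
            "type": "smart",
        }],
    }]
}
```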

#2 Registering a patient with the case manager service

The user runs the SMART on FHIR application for the case manager, which gathers information from the EHR (as authorized by the user/institution) and then prompts the user for any additional information needed by the case manager service, and then submits all the information to the case manager service.

Note that each case manager service provides their own application that does what they want, though there is likely to be a (or several) general open source frameworks that do most of what is entailed here.

Details:

  1. User runs the registration app. Either as prompted by CDS Hooks Service above, or by a manual app launch
  2. SMART app loads patient details
  3. SMART app loads medications, problems, allergies, subset of lab results as appropriate for case service
  4. SMART app connects to the case manager service, and prompts the user for additional information as required (typically, this would include patient consent information and reason for referral at least)
  5. User fills out the additional information and submits (as an optional best practice, the information can be submitted as a FHIR Bundle, including a ServiceRequest, Practitioner resource, Provenance resource, and the gathered data, as appropriate)
  6. Case Management Service stores all required information within its own database.

Note that FHIR provides the Questionnaire/QuestionnaireResponse resources. In this case, the app would retrieve a blank Questionnaire from the case manager service, prompt the user to fill out any information that cannot be automatically determined, and then save a QuestionnaireResponse as part of its registration Bundle (see the SDC implementation guide for details on this)
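For concreteness, here’s a minimal sketch of what the submission in step 5 might look like. The resource contents, the Bundle type, and the case manager endpoint are illustrative assumptions; a real app would populate these from the EHR’s FHIR API and from the user’s answers (including a QuestionnaireResponse, if the Questionnaire approach above is used).

```python
# A minimal sketch of step 5: packaging the gathered data as a FHIR Bundle and
# submitting it to the case manager service. Contents and endpoint are illustrative.
import requests

registration_bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": {"resourceType": "ServiceRequest",          # the 'referral' itself
                      "status": "active", "intent": "order",
                      "subject": {"reference": "Patient/example"}}},
        {"resource": {"resourceType": "Patient", "id": "example"}},
        {"resource": {"resourceType": "Practitioner", "id": "example-clinician"}},
        {"resource": {"resourceType": "Provenance",
                      "target": [{"reference": "Patient/example"}]}},
        # ... gathered medications, problems, allergies, relevant lab results ...
        # ... plus a QuestionnaireResponse capturing consent / reason for referral ...
    ],
}

requests.post(
    "https://case-manager.example.org/fhir/Bundle",   # illustrative endpoint
    json=registration_bundle,
    headers={"Content-Type": "application/fhir+json"},
)
```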

#3 Ongoing sharing of the clinical record

This part of the pattern is optional – there might not be any utility in ongoing sharing, or it might not be appropriate to ask (either clinician or patient), or it might not be consented to. Note that there’s a marked difference between consenting to share ‘this set of information’ and ‘whatever arises in the future’.

How ongoing sharing works depends on whether the provider API end-point (used by the application in #2) is publicly available, and on whether the ongoing sharing should be patient or provider managed. If the provider end-point is not public, the patient API end-point can be used; if the sharing should be managed by the patient, the patient API should be used anyway. If there’s no patient or provider API publicly available, then there’s no way to orchestrate ongoing sharing; records can only be shared as a one-time process in step #2 when the clinician chooses to (usually during a referral). (Note: whether the provider portal is publicly accessible, and the URL of the patient portal, need to be shared with the application at configuration time.)

Ongoing sharing uses the SMART on FHIR protocol as profiled in Sync for Science (S4S) until access is revoked. Access to the S4S interface requires a bearer token  – this pattern relates to acquiring the token.

Provider Portal:

If the provider portal is publicly available, then the application (during step #2) shares its bearer token with the case manager service. How this works (sketched below):

  • The application asks for an access token with offline scope
  • It shares the token with the case manager service
  • The case manager service uses the token to update the patient’s record periodically
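Here’s a hedged sketch of that flow; the token endpoint, client id, scopes, and the case manager’s own API are all illustrative assumptions, not part of the specifications mentioned above.

```python
# A hedged sketch of the provider-portal flow: the SMART app obtains a token with
# offline access, hands it to the case manager service, and the service then polls
# the EHR's FHIR API periodically (in the style of Sync for Science).
import requests

# 1. The SMART app exchanges its authorization code for tokens; offline access is
#    requested (at authorization time) so a refresh token is issued as well.
token_response = requests.post(
    "https://ehr.example.org/oauth/token",            # illustrative EHR token endpoint
    data={
        "grant_type": "authorization_code",
        "code": "authorization-code-from-launch",
        "redirect_uri": "https://case-manager.example.org/smart/redirect",
        "client_id": "case-manager-registration-app",
        # scopes such as "patient/*.read offline_access" are requested at authorization time
    },
).json()

# 2. The app shares the token(s) with the case manager service via its own API.
requests.post(
    "https://case-manager.example.org/api/tokens",
    json={"patient": "example", "tokens": token_response},
)

# 3. The case manager service periodically pulls updates until access is revoked.
def fetch_updates(access_token: str, patient_id: str) -> dict:
    return requests.get(
        f"https://ehr.example.org/fhir/Observation?patient={patient_id}",
        headers={"Authorization": f"Bearer {access_token}"},
    ).json()
```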

Note that it might be useful for the case manager service to have a different token from the case manager application that runs locally and interactively. The application could use the draft IETF token exchange spec for that, though this is not part of the ecosystem yet.

Patient Portal:

If there is no provider portal, or if ongoing sharing should be managed by the patient, the patient portal can be used instead. The way this generally works is (sketched below):

  • The case manager service emails the patient a link (email gathered during registration)
  • patient follows the link and logs in or registers etc
  • patient gets redirected to OAuth login on patient’s portal
  • Patient chooses to authorise the case manager service to access patient data. Scopes could include a broad permission like “patient/*.read”, or granular permissions like “patient/AllergyIntolerance.read”, “patient/MedicationStatement.read”, “patient/MedicationOrder.read”, “patient/Medication.read”, “patient/Condition.read”, “patient/Observation.read”, etc.
  • Case manager service uses the bearer token as described above
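A minimal sketch of the patient-mediated flow follows. The portal URLs, client id, and redirect URI are illustrative assumptions; the point is simply that the patient’s authorisation ends with the case manager service holding a token it can use exactly as in the provider-portal case.

```python
# Sketch: building the authorization request the patient is sent to after following
# the emailed link. All endpoints and identifiers here are invented for illustration.
from urllib.parse import urlencode

authorize_url = "https://patient-portal.example.org/oauth/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "case-manager-service",
    "redirect_uri": "https://case-manager.example.org/oauth/callback",
    # broad or granular patient-level scopes, as discussed above
    "scope": "patient/*.read offline_access",
    "state": "opaque-anti-csrf-value",
    "aud": "https://patient-portal.example.org/fhir",
})

# The service emails the patient a link that leads here; after the patient logs in and
# approves, the portal redirects back with a code that the service exchanges for an
# access token (plus refresh token), then uses it as in the provider-portal sketch.
print(authorize_url)
```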

If there is no Internet-accessible FHIR API endpoint for the EHR’s data, then there’s another solution: to run a service within the hospital’s internal network that hosts SMART on FHIR apps that can have access to the provider portal. That’s, well, possible, but there’s a series of challenges around administration.

Note:

This pattern is for managing this task using the FHIR/SMART on FHIR/CDS Hooks framework alone. There are other patterns – using the Consent resource – that come into play if the EHRs implement a patient managed consent framework (directly or using UMA), or if the EHR vendor is able/willing to do integration work.

#FHIR and R (Stats / graphing language)

I’ve spent the last 2 days at the 2017 Australian R Unconference working on an R client for FHIR.

For my FHIR readers, R is a language and environment for statistical computing and graphics. (Having spent the last couple of days explaining what FHIR is to R people).  My goal for the 2 days was to implement a FHIR client in R so that anyone wishing to perform statistical analysis of information available in R could quickly and easily do so. I was invited to the R Unconference by Prof Rob Hyndman (a family friend) as it would be the best environment to work on that.

My work was made a lot easier when Sander Laverman from Furore released an R package earlier this week that does just what I intended to do. We started by learning the package, and testing it. Here’s a graph generated by R using data from test.fhir.org:

Once we had the R Package (and the server) all working, I added a few additional methods to it (the other read methods). For sample code and additional graphs, see my rOpenSci write up.

I think it’s important to make FHIR data easily available to R because it opens up a connection between two different communities – that’s good for both. Many of the participants at the Unconference were involved in health, and aware of how hard it is to make data available for analysis.

Restructuring the data

FHIR data is nested, hierarchical, and focused on operational use. I’ve written about the difference between operational and analytical use before. Once we had the data being imported from a FHIR server into a set of R data frames, Rob and I looked at the data and agreed that the most important area to focus on was tools to help reshape the data down to a usable form. The thing about this is that it’s not a single problem – the ‘usable form‘ depends entirely on the question being asked of the data.

So I spent most of the time at the Unconference extending my GraphQL implementation to allow reshaping of the data (in addition to its existing function of assembling and filtering the data). I defined four new directives.

I’ve documented the details of this over on the rOpenSci write up, along with examples of how they work. They don’t solve all data restructuring problems by a very long shot, but they do represent a very efficient and reusable way to shift the restructuring work to the server.

There was some interest at the unconference in taking my graphQL code and building it into an R package to allow graphQL query of graphs of data frames in R, to assemble, filter, and restructure them – it’s an idea that’s useful anytime you want to do analysis of a graph of data. But we were all too busy for that – perhaps another time.

Where to now?

I think we should add R support to the AMIA FHIR datathon series, and maybe it’s time to encourage the research track at the main FHIR connectathons to try using R – I think it’s a very powerful tool to add to our FHIR toolkits.

Thanks to Adam Gruer from Melbourne’s Royal Children’s Hospital for helping – those graphs are his. Thanks also to the organisers – particularly Nick Tierney (gitmeister extraordinaire). I picked up some ideas from the Unconference that will improve the FHIR connectathons.

Cultural Factors affecting Interoperability

One of the under-appreciated factors that affects how successful you’ll be at ‘interoperability’ (for all the various things that it might mean) is your underlying culture of working with other people – your and their underlying expectations about whether and when you’ll compromise with other people in order to pursue a shared goal.

Culture varies from organization to organization, and from person to person. And even more, it varies from country to country. As I work with different countries, it’s clear that in some countries, it’s harder to get people to sacrifice their short term personal interests for shared long term communal interests. There’s plenty of research about this – mostly phrased in terms of micro-economics. And it very often comes down to trust (or lack thereof). Trust is a critical factor for success at compromise and collaboration. And it’s pretty widely observed that the level of trust that people have in institutions of various kinds is declining at the moment (e.g. 1 2 3).

Plenty has been written about this subject, and I’m not going to add to it. Instead, I’m going to make a couple of related observations that I think are under-appreciated when it comes to interoperability:

The first is that smaller countries that sit next door to a bigger, dominant country that could easily overpower them (my go-to examples: Estonia, Denmark, New Zealand) have populations that are much more motivated to collaborate and compromise with each other than countries that are economically (and/or politically) without peer in their geographic area.

And so you might think that these countries are better at interoperability than others…? well, sort of:

Countries that have a cultural disadvantage with regard to interoperability are the countries that produce the great interoperability technologies and methodologies (they have to!), but countries that have a cultural advantage for interoperability are much better at taking those technologies and methodologies and driving them home so the task is complete.

If my theory is right, then when you look at what countries are doing, you should look for different lessons from them, depending upon their cultural situation with regard to interoperability.

p.s. If my theory is right, one really has to wonder how bad the cultural headwinds against interoperability are here in Australia…

p.p.s. I found very little about this on the web. References in comments would be great.

 

 

The Vendor Engagement Matrix

This is post 2 in my series on why to participate in the standards process: a reason why Vendors should engage with standards

Professional Societies

One strong feature of the healthcare eco-system is professional societies. Everywhere you look, there’s another one. See here, and here, and on wikipedia. Or for Australia, here. (And none of the ones I’m involved with – HISA, ACHI, AACB – are even listed). For professionals in healthcare, the reasons to be involved in these are obvious (see, e.g. “Top 10 reasons“). In fact, in most places I’ve worked – whether clinical labs, research labs, government agencies, or vendors, membership of and engagement with professional societies is expected (or even required) if you have a leadership role.

And most employers enthusiastically support their employees’ involvement and leadership in professional societies (witness the employer affiliations for board members of ACHI, AIIA, AMIA). It’s obvious why employees participate in professional societies, but why do employers support this, even to their apparent cost (funding time, travel, sharing implicit IP, etc)? For example:

Some employers claim that there is no time for such efforts, or that it could prove too expensive. Many worry that all they would be doing is enhancing employees’ skills for a future employer. They ask themselves what would happen if they encourage professional development and some of their employees leave.

Those are all valid concerns. But:

A better question would be to ask what would happen if they ignore professional development and their employees stay

I’m quoting from the Society for Human Resource management there. Do read the whole thing. I’m just cherry picking a choice quote:

One critical reason given for seeking employment elsewhere was that although the employees valued mentoring, training and coaching very highly, their employers were falling far below expectations in those areas.

Finally, I’ll note that it’s when an organization must trust the choices that its employees make on its behalf that professional societies are most compelling – in fact, that’s when they become necessary: employees are inspired and pressured to apply professional excellence in ways that an employer cannot easily reproduce. And that’s why they’re a big deal in healthcare (and IT).

Most vendors I know encourage their staff – and particularly their leaders – to be involved in professional societies. It’s a tangible demonstration of their commitment to excellence. And the vendors that don’t encourage their staff to get involved?… they’re signalling to the market where they sit on professional engagement: it doesn’t matter. And, more importantly, they’re signalling to their staff about that as well, and I’ve seen that companies that don’t do this suffer from a steady erosion in their culture.

Choosing a Vendor

When it comes to choosing your enterprise information system (EIS) providers, the quality and culture of the EIS vendor really matters. It all comes down to what you are buying. If you’re buying a widget, where the delivery of quality goods is feasible to measure, and the goods are a commodity, then it makes absolute sense to choose the lowest price you can get in the market. But once you start buying things where the quality is not easily measured, or where you face a high transaction cost to change supplier, you have to think differently about your choice. And EISs really are the epitome of these problems (e.g. changing EIS frequently costs more than the cost of the system itself, and the vendor pretty much does nothing but make decisions on your behalf – ones you can’t really review).

Because of this, thoroughly understanding the culture of the vendor of the EIS – who you are effectively marrying for the duration that you’ll depend on them to assist you manage your organization – is critical. As technical lead at Kestral, I always believed that our potential customers, when choosing a vendor, focused their scoring too highly on the current features of the widget they were buying (the EIS), and not on the relationship they were entering into – because the system they were buying was not static, and a key feature of their future success was how well their EIS vendor could support them to grow their business by delivering on their future interoperability requirements. (And, if I’m not mistaken, lack of delivery on interoperability features a little in the news at the moment….)

But the problem is, how do you choose a vendor that consistently delivers based on a culture of professional excellence? Well, one way is to count what percentage of the vendor’s key employees have leadership roles in professional societies. Asking the vendor to report that should be a standard feature of all EIS RFPs, and it should be scored enough to weigh it against the long list of features the EIS is being scored on.

But, you say… that’s easy for a vendor – it’s not really very costly in the overall picture to support an employee’s involvement in a professional society, and there’s no need for that to make a meaningful difference to their culture. And that’s actually partly right. Which is why the real question is not about the professional society memberships of the vendor’s employees, but about the vendor’s own membership of professional societies. That’s where the rubber really hits the road.

And professional societies for healthcare EIS systems are standards organizations (or their derivative, informal open source collaborations).

Involvement in Standards Organization

This means that a vendor’s involvement in standards is a key indicator of its aspiration to enduring professional competence. And it’s deep involvement over time that matters, along with consistent delivery of the standards being worked on. It’s not a magic bullet, but it’s tangible evidence that the vendor has a deep commitment to its own professional standards, to deliver a quality product in regards to more than just the feature list that appears on sales materials.

And this is something that institutions should ask about in their RFPs: what tangible evidence can you, as a vendor, show that you have an organizational commitment to quality? Now obviously there are more answers to this than just involvement in the standards process, but involvement in standards organizations is not only a good and measurable marker, it’s a particularly relevant marker given an organization’s inevitable future dependency on the vendor to deal with ongoing interoperability requirements.

All this suggests that we could define a Standards Engagement Matrix which can be used to score how involved a vendor is in the standards process. It might look something like this:

  • SDO: one row for each relevant SDO
  • Membership: 5 points for being a paid member
  • Gold Membership: 20 points for paying high-level fees
  • Organization Leaders: 3 points for each board member, or chair of an organizational committee
  • Technical Leaders: 4 points for each technical committee chair; 5 points for each editor of a standard
  • Contributions: 5 points for major contributions of tools or drafted documents; 10 points for donating patents to the SDO
  • Delivery: 10 points for each standard implemented during the trial phase in the last 4 years; 3 points for proving conformance to the standard by a recognized testing authority in the last 4 years

Then divide the point total by the log of the company revenue, and you have the “Standards Engagement Matrix” score.
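To make the arithmetic concrete, here’s a toy sketch of how such a score might be computed; the point values mirror the straw-man table above, and the function and field names are invented for illustration.

```python
# A toy sketch of the straw-man scoring above; the revenue normalisation follows
# the "divide by the log of company revenue" idea. Names are made up for illustration.
import math

POINTS = {
    "member": 5, "gold_member": 20, "org_leader": 3, "tech_chair": 4,
    "standard_editor": 5, "major_contribution": 5, "donated_patent": 10,
    "trial_implementation": 10, "proven_conformance": 3,
}

def engagement_score(per_sdo_counts: list, annual_revenue: float) -> float:
    """per_sdo_counts: one dict per relevant SDO, e.g. {"member": 1, "tech_chair": 2}."""
    total = sum(
        POINTS[key] * count
        for sdo in per_sdo_counts
        for key, count in sdo.items()
    )
    return total / math.log10(annual_revenue)

# e.g. a vendor that is a gold member of one SDO, chairs two technical committees,
# and implemented one trial standard, with $50M revenue:
print(engagement_score([{"gold_member": 1, "tech_chair": 2, "trial_implementation": 1}], 5e7))
```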

I need to be clear here: this is just a straw man to make people think about how you would score something like this. I’m sure it needs more adjustment around vendor size/income. And you’d absolutely need some kind of adaptive score for the meaningfulness of the SDO itself. And note, of course, that as vendors grow, their organization and business interests broaden, and different parts of the organization could behave differently, so again, there’s no magic bullet here.

But if we had a working consensus on something like this, then providers buying a healthcare EIS could ask, as part of their RFP, ‘what’s your standards engagement matrix score?’, and know that the score is a very good indication of their organizational commitment to excellence. Which is actually more relevant than even vendor financial viability to the long term happiness of the purchaser, in my opinion.

 

p.s. The exact same logic applies to whether providers and institutions are willing to share patient data – it’s a key indicator to their own staff of their commitment to professional excellence, not just short term profit-making. And we should be educating patients about that.

Interconversion between #FHIR and HL7 v2

Question

What advice do you give for introducing FHIR in new software, while continuing to maintain HL7v2 interoperability with client applications that do not speak FHIR?

For example, are FHIR resources the way to go as an internal representation of an application’s health care data?

If yes, is it practical to convert HL7 messages into FHIR resources (e.g. Patient, Practitioner, ProcedureRequest, ReferralRequest, Appointment…)? What open source software do you recommend for converting HL7 messages into FHIR resources (and vice-versa)?

Or is it better to use FHIR for external information exchange only (with outside FHIR clients)?

Answer

I’ve worked with several projects rebuilding their products around FHIR resources. Like all development methodologies, this has pros and cons. What you get out of the box is a highly interoperable system, and you get a lot of stuff for free. But when your requirements go beyond what’s in the specification, it starts to get hard – FHIR is an interoperability standard that focuses on the lowest common denominator: what everyone agrees with. Whether that’s a net benefit depends on how far beyond common agreement you’re going to go. (This, btw, is a demonstration of my 3rd law of interoperability.)

It is practical to convert HL7 messages to FHIR resources and vice versa, yes. We’ve seen plenty of that going on. But there’s no canned solution, because to do the conversion, you have to do two things:

  • Figure out all the arcane business logic and information variants and code this into the conversion
  • Figure out how to integrate your conversion logic into your application framework

The upshot of this is that you have a programming problem, and most people solve it by taking open source libraries for v2 and FHIR in the language of their choice (most languages have one of each) and writing the business logic and application integration in their development language of choice. Hence, there’s no particular open source library to do the job other than the parsers etc. There are some commercial middleware engines that include FHIR as one of the formats they support.
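As a tiny, hedged illustration of the kind of business logic involved (not a recommendation of any particular library), here’s a sketch that maps a few PID fields from a v2 message into a FHIR Patient. A real conversion has to handle repetitions, escapes, Z-segments, and local conventions; this only shows the shape of the problem.

```python
# Illustrative only: mapping a couple of PID fields from an HL7 v2 ADT message
# into a FHIR Patient resource, using plain string handling.
V2_MESSAGE = (
    "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|20170101120000||ADT^A01|123|P|2.4\r"
    "PID|1||12345^^^HOSP^MR||Smith^John^A||19700101|M\r"
)

def pid_to_patient(message: str) -> dict:
    pid = next(seg for seg in message.split("\r") if seg.startswith("PID"))
    fields = pid.split("|")
    family, given, *_ = fields[5].split("^") + [""]
    return {
        "resourceType": "Patient",
        "identifier": [{"value": fields[3].split("^")[0]}],                  # PID-3: MRN
        "name": [{"family": family, "given": [given]}],                      # PID-5
        "birthDate": f"{fields[7][0:4]}-{fields[7][4:6]}-{fields[7][6:8]}",  # PID-7
        "gender": {"M": "male", "F": "female"}.get(fields[8], "unknown"),    # PID-8
    }

print(pid_to_patient(V2_MESSAGE))
```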

In the FHIR spec, we’ve defined a mapping language that tries to abstract this – so you can separate the business logic from the application integration, and express the business logic in a platform-independent way, with libraries for whatever platform you use. That’s an idea that is gradually starting to gather some interest, but it is still a long way from maturity.

With regard to using FHIR for external exchange only… what I usually say about this is that it makes sense to implement FHIR for new things first, and then to replace old things only when they become a problem. And most new stuff is on the periphery, where the architectural advantages of FHIR are really big. But internally, v2 will increasingly become a major service limitation over time, and will have to be replaced. The open question is how long that timeline is. We don’t know yet.

 

Lloyd McKenzie on Woody Beeler

Guest post: My close friend Lloyd wanted to share his thoughts on hearing the news about Woody.

My recollections of Woody are similar to Grahame’s.

I started my HL7 international journey in 2000.  In my case, it was in an attempt to understand how I could design my v2 profiles so they would be well aligned with v3.  I quickly learned the foolishness of that notion, but became enamored of the v3 effort.

HL7 was an extremely welcoming organization and Woody played a big part in that welcome.  I was a wet-behind the ears techy from Canada and he was an eminent physician, former organization chair and respected elder of the organization.  Yet he always treated me as an equal.  Over the years, we collaborated on tooling, data models, methodology, processes and the challenges of getting things done in an organization with many diverse viewpoints.  In addition to his sense of humour and willingness to get his hands dirty, I remember Woody for his passion.  He really cared about making interoperability work.  He was willing to listen to anyone who was trying to “solve the problem”, but he had little patience for those who he didn’t sense had similar motivations.

His openness to new ideas is perhaps best exemplified by his reaction to the advent of FHIR.  Woody was one of the founding fathers of v3 and certainly one of its most passionate advocates.  Over his time with HL7, he invested years of his life advocating, developing tools, providing support, educating, guiding the development of methodology and doing whatever else needed to be done.  Given his incredible investment in the v3 standard, it would not be surprising for him to be reluctant to embrace the new up-and-comer that was threatening to upset the applecart.  But he responded to the new development in typical Woody fashion.  He asked probing questions, he evaluated the intended outcomes and considered whether the proposed path was a feasible and efficient way to satisfy those outcomes.  Once he had satisfied himself with the answers, he embraced the new platform.  Woody took an active role in forming the FHIR governance structures and served as one of the first co-chairs of the FHIR governance board.  To Woody, it was the outcome that mattered, not his ego.

Woody embraced life.  He loved traveling with his wife Selby (and his kids or grandkids when he could).  He loved new challenges.  He loved his work, but he wasn’t afraid to play either.  He was an active participant in after-hours WGM poker games.

It was with reluctance that Woody stepped back from his HL7 activities after his diagnosis with cancer, but as he expressed it at the time, he had discovered that he only had time for two of three important things – fighting his illness, spending time with his family and doing the work he loved with HL7.  He chose the right two priorities.

While version 3 might not have had the success we thought it would when we were developing it, the community that evolved under HL7 v3 and the knowledge we gleaned in that effort has formed the essential foundation and platform that enabled the building of FHIR.  I am grateful to have had Woody in my life – as a mentor, a co-worker and a friend.  I am grateful too for everything he helped build.  Woody’s priority was to focus on really making a difference.  In that he has set the bar very high for the rest of us.

Thank you for everything you’ve done Woody.  We miss you.

Woody Beeler has passed away

Today, Woody Beeler passed away after battling cancer for a few years. Woody was a friend, an inspiration, and my mentor in health care standards, and I’m going to miss him.

I first met Woody in 2001 at my first HL7 meeting. It was Woody who drew me into the HL7 community, and who educated me about the impact that standards could have. Many people at HL7 have told me the same thing – it was Woody that inspired them to become part of the community.

When I remember Woody, I think of his humour, his passion for developing the best standards possible, and his commitment to building the community out of which standards arise. And I remember the way Woody was prepared to roll up his sleeves and get his hands dirty to get the job done. To the point of learning significant new technical skills long after retirement age had come and gone. Truly, a Jedi master at healthcare standards.

For many years, Woody was the v3 project lead for HL7. Woody wasn’t blind to the issues with v3, but it was the best option available to the community at the time – so he gave everything he had to bring v3 to completion.  And it was Woody who mentored me through the early stages of establishing the FHIR community.

It’s my goal that in the FHIR project we’ll keep Woody’s commitment to community and healthcare outcomes – and doing whatever is needed – alive.

Vale Woody

(pic h/t Rene Spronk, who maintains a history of HL7 v3 – see http://ringholm.com/docs/04500_en_History_of_the_HL7_RIM.htm)

New #FHIR Vital Signs Profile

Over on the official FHIR product blog, I just announced a new release. I wanted to expand on one of the features in the new version here:

A new profile to describe vital signs (note: this is proposed as mandatory to enable better data sharing)

One of the emerging themes in many countries is sharing data with patients. And one of the broad set of things called ‘health data’ that is easiest to share is what is loosely called ‘vital signs’. It’s simple data, it’s easy to share with the patients, and we’re starting to see monitoring devices available in mainstream consumer technology. But it’s more than just patients that care – vital signs data is widely shared through the healthcare provision system, and there’s lots of interesting decision support and surveillance that can usefully be done with them.

But if every different part of the healthcare system, or different jurisdictions, represent basic vital signs differently, there’ll be no way to easily share decision support and surveillance systems, nor will patients be able to share their healthcare data with common data management tools – which are cross-jurisdictional (e.g. things like HealthKit/CareKit).  With this in mind, we’ve defined a vital signs profile in the new draft of FHIR, and said that all vital signs must conform to it. It doesn’t say much:

  • The common vital signs must be marked with a common generic LOINC code, in addition to whatever other codes you want to give them
  • There must be a value or a data absent reason
  • There must be a subject for the observations
  • Systolic/Diastolic must be represented using components
  • The units must be a particular UCUM unit

This is as minimal a floor as we can get: defining a common way to recognize a vital sign measurement, and a common unit for them. None of this restricts what else can be done, so this is really very minimal.
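For illustration, here’s roughly what a conforming heart rate observation might look like, shown as a Python dict. The identifiers, timestamp, and category system URL are illustrative for the current draft, not a normative example from the profile.

```python
# A hedged example of a vital-sign Observation following the rules listed above:
# a generic LOINC code, a value, a subject, and a UCUM unit. Values are illustrative.
heart_rate_observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://hl7.org/fhir/observation-category",   # vital signs category
        "code": "vital-signs"}]}],
    "code": {"coding": [
        {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"},
        # ... plus whatever other codes you want to give it ...
    ]},
    "subject": {"reference": "Patient/example"},                 # a subject is required
    "effectiveDateTime": "2016-05-18T09:30:00+10:00",
    "valueQuantity": {                                           # value in a UCUM unit
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}
```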

For FHIR, this is a very gentle step towards being prescriptive about how healthcare data is represented. But even this looks likely to generate fierce debate within the implementer community, some of whom don’t see the data sharing need as particularly important or near in the future. I’m writing this post to draw everyone’s attention to this, to ensure we get a good wide debate about the idea.

Note: it’s a proposal, in a candidate standard. It has to get through ballot before it’s actually mandatory.

 

Patient Matching on #FHIR Event at HIMSS

In a couple of weeks I’m off to HIMSS in Las Vegas. I’m certainly going to be busy while I’m there (if you’re hoping to talk to me, it would be best to email me to set up a time). Before HIMSS, there are several satellite events:

  • Saturday: Health Informatics on FHIR: Opportunities in the New Age of Interoperability (IEEE)
  • Sunday: Patient Matching on FHIR event (HIMSS)
  • Monday: First joint meeting between HEART/UMA & FHIR/SMART teams – if you want to attend this meeting, let me know by email (there’s a couple of places still open)

About the Sunday meeting, quoting from the announcement:

Previous work included a Patient Matching Testing Event on this idea, developed at an event in Cleveland, OH on August 14th, 2015 at the HIMSS Innovation Center.  The event covered a tutorial on FHIR along with sessions on patient matching.  A key takeaway from the event was that the healthcare community can advance interoperability by working on a standard Application Programming Interface (API) for master patient index software, commonly used to facilitate patient data matching.

In fulfillment of this vision, we are hosting this second Patient Matching on FHIR Workshop in conjunction with the HIMSS 16 Annual Conference in Las Vegas.  We invite:

  • Algorithm vendors
  • EMR vendors
  • Developers and standards experts
  • All interested parties

So, I’m passing on the invitation – see you there!

ps. I’ll pass on information about the IEEE event when I get a link.

Language Localization in #FHIR

Several people have asked about the degree of language localization in FHIR, and what language customizations are available.

Specification

The specification itself is published and balloted in English (US). All of our focus is on getting that one language correct.

There are a couple of projects to translate it to other languages: Russian and Japanese, but neither of these has gone very far, and to do it properly would be a massive task. We would consider tooling in the core build to make this easier (well, possible), but it’s not a focus for us.

One consequence of the way the specification works is that the element names (in either JSON or XML) are in English. We think that’s ok because they are only references into the specification; they’re never meant to be meaningful to anyone other than a user of the specification itself.

Codes & Elements

What is interesting to us is publishing alternate language phrases to describe the codes defined in FHIR code systems, and the elements defined in FHIR resources. These are text phrases that do appear to the end-user, so it’s meaningful to provide these.

A Code System defined in a value set has a set of designations for each code, and these designations have a language on them:

[image: code system designations]

Not only have we defined this structure, we’ve managed to provide Dutch and German translations for the HL7 v2 tables. However, at present, these are all we have. We have the tooling to add more, it’s just a question of the HL7 Affiliates deciding to contribute the content.
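As a hedged illustration, a concept in such a code system might carry designations like this (the structure is abbreviated, and the code and translations are examples only, loosely following the v2 administrative gender table):

```python
# Illustrative fragment: a code with language-tagged designations, as described above.
concept_with_designations = {
    "code": "F",
    "display": "Female",
    "designation": [
        {"language": "nl", "value": "Vrouw"},     # Dutch translation
        {"language": "de", "value": "Weiblich"},  # German translation
    ],
}
```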

For the elements (fields) defined as part of the resources, we’d happily add translations of these too, though there’s no native support for this in the StructureDefinition, so it would need an extension. However no one has contributed content like this yet.

It’s not necessary to do this as part of the specification (though doing it there provides the most visibility and distribution); however, we haven’t yet defined any formal structure for a language pack. If there’s interest, this is something we could do in the future.

Useful Phrases

There’s also a source file that has a set of common phrases – particularly error phrases (which do get displayed to the user) – in multiple languages. This is available as part of the specification. Some of the public test servers use these messages when responding to requests.

Note that we’ve been asked to allow this to be authored through a web site that does translations – we’d be happy to support this, if we found one that had an acceptable license, could be used for free, and had an API so we could download the latest translations as part of the build (I haven’t seen any service that has those features).

Multi-language support in resources

There’s no explicit support for multi-language representation in resources other than for value sets. Our experience is that while mixed language content occurs occasionally, it’s usually done informally, in the narrative, and rarely formally tracked. So far, we’ve been happy to leave that as an extension, though there is ongoing discussion about that.

The one place that we know of language being commonly tracked as part of a resource is in document references, with a set of form letters (e.g. procedure preparation notes) in multiple languages. For this reason, the DocumentReference resource has a language for the document it refers to (content.language).