Category Archives: Interoperability

Cultural Factors affecting Interoperability

One of the under-appreciated factors that affects how successful you’ll be at ‘interoperability’ (for all the various things that it might mean) is your underlying culture of working with other people – yours and their underlying expectations about whether and when you’ll compromise with other people in order to pursue a shared goal.

Culture varies from organization to organization, and from person to person. And even more, it varies from country to country. As I work with different countries, it’s clear that in some countries it’s harder to get people to sacrifice their short term personal interests for shared long term communal interests. There’s plenty of research about this – mostly phrased in terms of micro-economics. And it very often comes down to trust (or the lack thereof). Trust is a critical factor in success at compromise and collaboration. And it’s pretty widely observed that the level of trust that people have in institutions of various kinds is declining at the moment (e.g. 1 2 3).

Plenty has been written about this subject, and I’m not going to add to it. Instead, I’m going to make a couple of related observations that I think are under-appreciated when it comes to interoperability:

The first is that smaller countries that live next door to a bigger, dominant country that could easily overpower them (my go-to examples: Estonia, Denmark, New Zealand) have populations that are much more motivated to collaborate and compromise with each other than countries that are economically (and/or politically) without peer in their geographic area.

And so you might think that these countries are better at interoperability than others…? Well, sort of:

Countries that have a cultural disadvantage with regard to interoperability are the ones that produce the great interoperability technologies and methodologies (they have to!), but countries that have a cultural advantage for interoperability are much better at taking those technologies and methodologies and driving them home so that the task is complete.

If my theory is right, then when you look at what countries are doing, you should look for different lessons from them, depending upon their cultural situation with regard to interoperability.

p.s. If my theory is right, one really has to wonder how strong the cultural headwinds against interoperability are here in Australia…

p.p.s. I found very little about this on the web. References in comments would be great.
The Vendor Engagement Matrix

This is post 2 in my series on why to participate in the standards process: a reason why vendors should engage with standards.

Professional Societies

One strong feature of the healthcare ecosystem is professional societies. Everywhere you look, there’s another one. See here, and here, and on wikipedia. Or for Australia, here. (And none of the ones I’m involved with – HISA, ACHI, AACB – are even listed.) For professionals in healthcare, the reasons to be involved in these are obvious (see, e.g., “Top 10 reasons“). In fact, in most places I’ve worked – whether clinical labs, research labs, government agencies, or vendors – membership of and engagement with professional societies is expected (or even required) if you have a leadership role.

And most employers enthusiastically support their employees’ involvement and leadership in professional societies (witness the employer affiliations for board members of ACHI, AIIA, AMIA). It’s obvious why employees participate in professional societies, but why do employers support this, even at their apparent cost (funding time, travel, sharing implicit IP, etc.)? For example:

Some employers claim that there is no time for such efforts, or that it could prove too expensive. Many worry that all they would be doing is enhancing employees’ skills for a future employer. They ask themselves what would happen if they encourage professional development and some of their employees leave.

Those are all valid concerns. But:

A better question would be to ask what would happen if they ignore professional development and their employees stay

I’m quoting from the Society for Human Resource Management there. Do read the whole thing; I’m just cherry-picking a choice quote:

One critical reason given for seeking employment elsewhere was that although the employees valued mentoring, training and coaching very highly, their employers were falling far below expectations in those areas.

Finally, I’ll note that it’s when an organization must trust the choices that its employees make on its behalf that professional societies are most compelling – in fact, that’s when they become necessary: employees are inspired and pressured to apply professional excellence in ways that an employer cannot easily reproduce. And that’s why they’re a big deal in healthcare (and IT).

Most vendors I know encourage their staff – and particularly their leaders – to be involved in professional societies. It’s a tangible demonstration of their commitment to excellence. And the vendors that don’t encourage their staff to get involved? They’re signalling to the market where they sit on professional engagement: it doesn’t matter. More importantly, they’re signalling the same thing to their staff, and I’ve seen companies that don’t do this suffer a steady erosion of their culture.

Choosing a Vendor

When it comes to choosing your enterprise information system (EIS) providers, the quality and culture of the EIS vendor really matter. It all comes down to what you are buying. If you’re buying a widget, where the delivery of quality goods is feasible to measure, and the goods are a commodity, then it makes absolute sense to choose the lowest price you can get in the market. But once you start buying things where the quality is not easily measured, or where you face a high transaction cost to change supplier, you have to think differently about your choice. And EISs really are the epitome of these problems (e.g. changing EIS frequently costs more than the system itself, and the vendor does little other than make decisions on your behalf – ones you can’t really review).

Because of this, thoroughly understanding the culture of the EIS vendor – who you are effectively marrying for as long as you’ll depend on them to help you manage your organization – is critical. As technical lead at Kestral, I always believed that our potential customers, when choosing a vendor, weighted their scoring too heavily towards the current features of the widget they were buying (the EIS), and not towards the relationship they were entering into – because the system they were buying was not static, and a key feature of their future success was how well their EIS vendor could support them to grow their business by delivering on their future interoperability requirements. (And, if I’m not mistaken, lack of delivery on interoperability features a little in the news at the moment….)

But the problem is: how do you choose a vendor that consistently delivers based on a culture of professional excellence? Well, one way is to count what percentage of the vendor’s key employees have leadership roles in professional societies. Asking the vendor to report that should be a standard feature of all EIS RFPs, and it should be scored heavily enough to weigh against the long list of features the EIS is being scored on.

But, you say… that’s easy for a vendor – it’s not really very costly in the overall picture to support an employee’s involvement in a professional society, and there’s no need for that to make a meaningful difference to the vendor’s culture. And that’s partly right. Which is why the real question is not about the professional societies of the vendor’s employees, but about membership of professional societies for the vendor itself. That’s where the rubber really hits the road.

And professional societies for healthcare EIS systems are standards organizations (or their derivative, informal open source collaborations).

Involvement in Standards Organizations

This means that a vendor’s involvement in standards is a key indicator of its aspiration to enduring professional competence. And it’s deep involvement over time that matters, along with consistent delivery of the standards being worked on. It’s not a magic bullet, but it’s tangible evidence that the vendor has a deep commitment to its own professional standards – to delivering a quality product in regard to more than just the feature list that appears on sales materials.

And this is something that institutions should ask about in their RFPs: what tangible evidence can you, as a vendor, show that you have an organizational commitment to quality? Now, obviously, there are more answers to this than just involvement in the standards process, but involvement in standards organizations is not only a good and measurable marker, it’s a particularly relevant one given an organization’s inevitable future dependency on the vendor to deal with ongoing interoperability requirements.

All this suggests that we could define a Standards Engagement Matrix which can be used to score how involved a vendor is in the standards process. It might look something like this:

For each relevant SDO, score:

  • SDO Membership: 5 points for being a paid member
  • Gold Membership: 20 points for paying high-level fees
  • Organizational Leaders: 3 points for each board member, or chair of an organizational committee
  • Technical Leaders: 4 points for each technical committee chair, and 5 points for each editor of a standard
  • Contributions: 5 points for major contributions of tools or drafted documents, and 10 points for donating patents to the SDO
  • Delivery: 10 points for each standard implemented during the trial phase in the last 4 years, and 3 points for proving conformance to the standard through a recognized testing authority in the last 4 years

Then divide the point total by the log of the company’s revenue, and you have the “Standards Engagement Matrix score”.
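To make the arithmetic concrete, here’s a minimal sketch of how such a score might be computed. The point values are the straw-man numbers from the list above; everything else (names, structure, the example data) is invented for illustration:

```python
import math

# Straw-man point values from the list above
POINTS = {
    "member": 5,                 # paid SDO membership
    "gold_member": 20,           # paying high-level fees
    "org_leader": 3,             # per board member / organizational committee chair
    "tech_chair": 4,             # per technical committee chair
    "editor": 5,                 # per editor of a standard
    "major_contribution": 5,     # tools, drafted documents
    "patent_donation": 10,       # per patent donated to the SDO
    "trial_implementation": 10,  # per standard implemented in trial phase (last 4 years)
    "conformance_proof": 3,      # per conformance proof by a testing authority (last 4 years)
}

def sdo_score(engagement: dict) -> int:
    """Score a vendor's engagement with one SDO (counts, or True/False for memberships)."""
    return sum(POINTS[key] * int(value) for key, value in engagement.items())

def standards_engagement_score(engagements: list, annual_revenue: float) -> float:
    """Sum the per-SDO scores, then normalize by the log of company revenue."""
    return sum(sdo_score(e) for e in engagements) / math.log10(annual_revenue)

# Example: a vendor active in two SDOs, with $50M annual revenue
vendor = [
    {"member": True, "gold_member": True, "tech_chair": 2, "editor": 1,
     "trial_implementation": 3, "conformance_proof": 2},
    {"member": True, "org_leader": 1},
]
print(standards_engagement_score(vendor, 50_000_000))  # ≈ 10.7
```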

I need to be clear here: this is just a straw man to make people think about how you would score something like this. I’m sure it needs more adjustment around vendor size/income. And you’d absolutely need some kind of adaptive score for the meaningfulness of the SDO itself. And note, of course, that as vendors grow, their organization and business interests broaden, and different parts of the organization could behave differently, so again, there’s no magic bullet here.

But if we had a working consensus on something like this, then providers buying a healthcare EIS could ask, as part of their RFP, ‘what’s your standards engagement matrix score?’, and know that the score is a good indication of the vendor’s organizational commitment to excellence – which, in my opinion, is actually more relevant to the long term happiness of the purchaser than even vendor financial viability.

p.s. The exact same logic applies to whether providers and institutions are willing to share patient data – it’s a key indicator to their own staff of their commitment to professional excellence, not just short term profit-making. And we should be educating patients about that.

Interconversion between #FHIR and HL7 v2

Question

What advice do you give for introducing FHIR in new software, while continuing to maintain HL7v2 interoperability with client applications that do not speak FHIR?

For example, are FHIR resources the way to go as an internal representation of an application’s health care data?

If yes, is it practical to convert HL7 messages into FHIR resources (e.g. Patient, Practitioner, ProcedureRequest, ReferralRequest, Appointment…)? What open source software do you recommend for converting HL7 messages into FHIR resources (and vice-versa)?

Or is it better to use FHIR for external information exchange only (with outside FHIR clients)?

Answer

I’ve worked with several projects rebuilding their products around FHIR resources. Like all development methodologies, this has pros and cons. What you get out of the box is a highly interoperable system, and you get a lot of stuff for free. But when your requirements go beyond what’s in the specification, it starts to get hard – FHIR is an interoperability standard that focuses on the lowest common denominator: what everyone agrees with. Whether that’s a net benefit depends on how far beyond common agreement you’re going to go. (This, btw, is a demonstration of my 3rd law of interoperability.)

It is practical to convert HL7 messages to FHIR resources and vice versa, yes. We’ve seen plenty of that going on. But there’s no canned solution, because to do the conversion, you have to do two things:

  • Figure out all the arcane business logic and information variants and code this into the conversion
  • Figure out how to integrate your conversion logic into your application framework

The upshot of this is that you have a programming problem, and most people solve it by taking open source libraries for v2 and FHIR in the language of their choice (most languages have them) and writing the business logic and application integration themselves. Hence, there’s no particular open source library to do the job other than the parsers etc. There are some commercial middleware engines that include FHIR as one of the formats they support.
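As a concrete illustration, here’s a minimal sketch of such a conversion in plain Python. The parsing is hand-rolled rather than using a real v2 library, and only a few fields are mapped – a real conversion would handle escaping, repetitions, and all the arcane business logic mentioned above:

```python
def pid_to_patient(pid_segment: str) -> dict:
    """Map a (simplified) HL7 v2 PID segment to a FHIR Patient resource.

    Toy parsing only: ignores escape sequences, repetitions, and most fields.
    """
    fields = pid_segment.split("|")           # e.g. PID|1||12345^^^MRN||Smith^John||19700101|M
    identifier = fields[3].split("^")[0]      # PID-3: patient identifier list
    family, given = fields[5].split("^")[:2]  # PID-5: patient name
    dob = fields[7]                           # PID-7: date of birth (YYYYMMDD)
    gender = {"M": "male", "F": "female"}.get(fields[8], "unknown")  # PID-8

    return {
        "resourceType": "Patient",
        "identifier": [{"value": identifier}],
        "name": [{"family": family, "given": [given]}],
        "birthDate": f"{dob[0:4]}-{dob[4:6]}-{dob[6:8]}",
        "gender": gender,
    }

print(pid_to_patient("PID|1||12345^^^MRN||Smith^John||19700101|M"))
```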

In the FHIR spec, we’ve defined a mapping language that tries to abstract this – so you can separate the business logic from the application integration, and express the business logic in a platform-independent way, with libraries for whatever platform. That’s an idea that is gradually starting to gather some interest, but it’s still a long way from maturity.

With regard to using FHIR for external exchange only… what I usually say about this is that it makes sense to implement FHIR for new things first, and then to replace old things only when they become a problem. And most new stuff is on the periphery, where the architectural advantages of FHIR are really big. But internally, v2 will increasingly become a major service limitation in time, and will have to be replaced. The open question is how long that timeline is. We don’t know yet.

Lloyd McKenzie on Woody Beeler

Guest post: My close friend Lloyd wanted to share his thoughts on hearing the news about Woody.

My recollections of Woody are similar to Grahame’s.

I started my HL7 international journey in 2000.  In my case, it was in an attempt to understand how I could design my v2 profiles so they would be well aligned with v3.  I quickly learned the foolishness of that notion, but became enamored of the v3 effort.

HL7 was an extremely welcoming organization and Woody played a big part in that welcome.  I was a wet-behind-the-ears techie from Canada and he was an eminent physician, former organization chair and respected elder of the organization.  Yet he always treated me as an equal.  Over the years, we collaborated on tooling, data models, methodology, processes and the challenges of getting things done in an organization with many diverse viewpoints.  In addition to his sense of humour and willingness to get his hands dirty, I remember Woody for his passion.  He really cared about making interoperability work.  He was willing to listen to anyone who was trying to “solve the problem”, but he had little patience for those who he didn’t sense had similar motivations.

His openness to new ideas is perhaps best exemplified by his reaction to the advent of FHIR.  Woody was one of the founding fathers of v3 and certainly one of its most passionate advocates.  Over his time with HL7, he invested years of his life advocating, developing tools, providing support, educating, guiding the development of methodology and doing whatever else needed to be done.  Given his incredible investment in the v3 standard, it would not have been surprising for him to be reluctant to embrace the new up-and-comer that was threatening to upset the applecart.  But he responded to the new development in typical Woody fashion.  He asked probing questions, he evaluated the intended outcomes, and he considered whether the proposed path was a feasible and efficient way to satisfy those outcomes.  Once he had satisfied himself with the answers, he embraced the new platform.  Woody took an active role in forming the FHIR governance structures and served as one of the first co-chairs of the FHIR governance board.  To Woody, it was the outcome that mattered, not his ego.

Woody embraced life.  He loved traveling with his wife Selby (and his kids or grandkids when he could).  He loved new challenges.  He loved his work, but he wasn’t afraid to play either.  He was an active participant in after-hours WGM poker games.

It was with reluctance that Woody stepped back from his HL7 activities after his diagnosis with cancer, but as he expressed it at the time, he had discovered that he only had time for two of three important things – fighting his illness, spending time with his family and doing the work he loved with HL7.  He chose the right two priorities.

While version 3 might not have had the success we thought it would when we were developing it, the community that evolved under HL7 v3 and the knowledge we gleaned in that effort have formed the essential foundation and platform that enabled the building of FHIR.  I am grateful to have had Woody in my life – as a mentor, a co-worker and a friend.  I am grateful too for everything he helped build.  Woody’s priority was to focus on really making a difference.  In that, he has set the bar very high for the rest of us.

Thank you for everything you’ve done Woody.  We miss you.

Woody Beeler has passed away

Today, Woody Beeler passed away after battling cancer for a few years. Woody was a friend, an inspiration, and my mentor in health care standards, and I’m going to miss him.

I first met Woody in 2001 at my first HL7 meeting. It was Woody who drew me into the HL7 community, and who educated me about the impact that standards could have. Many people at HL7 have told me the same thing – it was Woody that inspired them to become part of the community.

When I remember Woody, I think of his humour, his passion for developing the best standards possible, and his commitment to building the community out of which standards arise. And I remember the way Woody was prepared to roll up his sleeves and get his hands dirty to get the job done. To the point of learning significant new technical skills long after retirement age had come and gone. Truly, a Jedi master at healthcare standards.

For many years, Woody was the v3 project lead for HL7. Woody wasn’t blind to the issues with v3, but it was the best option available to the community at the time – so he gave everything he had to bring v3 to completion.  And it was Woody who mentored me through the early stages of establishing the FHIR community.

It’s my goal that in the FHIR project we’ll keep Woody’s commitment to community and healthcare outcomes – and doing whatever is needed – alive.

Vale Woody

(pic h/t Rene Spronk, who maintains a history of HL7 v3 – see http://ringholm.com/docs/04500_en_History_of_the_HL7_RIM.htm)

New #FHIR Vital Signs Profile

Over on the official FHIR product blog, I just announced a new release. I wanted to expand on one of the features in the new version here:

A new profile to describe vital signs (note: this is proposed as mandatory to enable better data sharing)

One of the emerging themes in many countries is sharing data with patients. And one of the broad set of things called ‘health data’ that is easiest to share is what is loosely called ‘vital signs’. It’s simple data, it’s easy to share with the patients, and we’re starting to see monitoring devices available in mainstream consumer technology. But it’s more than just patients that care – vital signs data is widely shared through the healthcare provision system, and there’s lots of interesting decision support and surveillance that can usefully be done with them.

But if every different part of the healthcare system, or different jurisdictions, represent basic vital signs differently, there’ll be no way to easily share decision support and surveillance systems, nor will patients be able to share their healthcare data with common data management tools – which are cross-jurisdictional (e.g. things like HealthKit/CareKit).  With this in mind, we’ve defined a vital signs profile in the new draft of FHIR, and said that all vital signs must conform to it. It doesn’t say much:

  • The common vital signs must be marked with a common generic LOINC code, in addition to whatever other codes you want to give them
  • There must be a value or a data absent reason
  • There must be a subject for the observations
  • Systolic/Diastolic must be represented using components
  • The units must be a particular UCUM unit

This is as minimal a floor as we can get: defining a common way to recognize a vital sign measurement, and a common unit for them. None of this restricts what else can be done, so this is really very minimal.
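To illustrate, here’s a minimal heart-rate observation that would satisfy those rules, shown as a Python dict (illustrative only – the exact bindings vary between FHIR versions, so treat the details as a sketch):

```python
# A minimal vital-signs Observation satisfying the profile rules above
# (illustrative; exact bindings vary between FHIR versions)
heart_rate = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [
            # the common generic LOINC code required by the profile...
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"},
            # ...plus whatever other codes you want to give it
        ]
    },
    "subject": {"reference": "Patient/example"},  # a subject is required
    "effectiveDateTime": "2016-04-02T10:30:00+10:00",
    "valueQuantity": {                            # a value, or else a dataAbsentReason
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",    # units must be UCUM
        "code": "/min",
    },
}
```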

For FHIR, this is a very gentle step towards being prescriptive about how healthcare data is represented. But even this looks likely to generate fierce debate within the implementer community, some of whom don’t see the data sharing need as particularly important or imminent. I’m writing this post to draw everyone’s attention to this, to ensure we get a good, wide debate about the idea.

Note: it’s a proposal, in a candidate standard. It has to get through ballot before it’s actually mandatory.

Patient Matching on #FHIR Event at HIMSS

In a couple of weeks I’m off to HIMSS in Las Vegas. I’m certainly going to be busy while I’m there (if you’re hoping to talk to me, it would be best to email me to set up a time). Before HIMSS, there are several satellite events:

  • Saturday: Health Informatics on FHIR: Opportunities in the New Age of Interoperability (IEEE)
  • Sunday: Patient Matching on FHIR event (HIMSS)
  • Monday: First joint meeting between HEART/UMA & FHIR/SMART teams – if you want to attend this meeting, let me know by email (there’s a couple of places still open)

About the Sunday meeting, quoting from the announcement:

Previous work on this idea was done at an event in Cleveland, OH on August 14th, 2015 at the HIMSS Innovation Center.  The event covered a tutorial on FHIR along with sessions on patient matching.  A key takeaway from the event was that the healthcare community can advance interoperability by working on a standard Application Programming Interface (API) for master patient index software, commonly used to facilitate patient data matching.

In fulfillment of this vision, we are hosting this second Patient Matching on FHIR Workshop in conjunction with the HIMSS 16 Annual Conference in Las Vegas.  We invite:

─         Algorithm vendors
─         EMR vendors
─         Developers and standards experts
─         All interested parties

So, here’s passing on the invitation – see you there!

p.s. I’ll pass on information about the IEEE event when I get a link.

Language Localization in #FHIR

Several people have asked about the degree of language localization in FHIR, and what language customizations are available.

Specification

The specification itself is published and balloted in English (US). All of our focus is on getting that one language correct.

There are a couple of projects to translate it to other languages: Russian and Japanese, but neither has gone very far, and to do it properly would be a massive task. We would consider tooling in the core build to make this easier (well, possible), but it’s not a focus for us.

One consequence of the way the specification works is that the element names (in either JSON or XML) are in English. We think that’s OK because they are only references into the specification; they’re never meant to be meaningful to anyone other than a user of the specification itself.

Codes & Elements

What is interesting to us is publishing alternate language phrases to describe the codes defined in FHIR code systems, and the elements defined in FHIR resources. These are text phrases that do appear to the end-user, so it’s meaningful to provide these.

A Code System defined in a value set has a set of designations for each code, and these designations have a language on them:

[figure: code system designation structure]

Not only have we defined this structure, we’ve managed to provide Dutch and German translations for the HL7 v2 tables. However, at present, these are all we have. We have the tooling to add more, it’s just a question of the HL7 Affiliates deciding to contribute the content.
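As a sketch of what those designations look like in an instance, shown as a Python dict for consistency with the other examples here (the structure follows the designation model above; the translation text is illustrative):

```python
# One concept from a code system, with designations in other languages
# (structure per the figure above; translation text is illustrative)
concept = {
    "code": "M",
    "display": "Male",
    "designation": [
        {"language": "nl", "value": "Man"},       # Dutch
        {"language": "de", "value": "Männlich"},  # German
    ],
}
```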

For the elements (fields) defined as part of the resources, we’d happily add translations of these too, though there’s no native support for this in the StructureDefinition, so it would need an extension. However no one has contributed content like this yet.

It’s not necessary to do this as part of the specification (though doing it there provides the most visibility and distribution), but we haven’t defined any formal structure for a ‘language pack’ outside it. If there’s interest, this is something we could do in the future.

Useful Phrases

There’s also a source file that has a set of common phrases – particularly error phrases (which do get displayed to the user) – in multiple languages. This is available as part of the specification. Some of the public test servers use these messages when responding to requests.

Note that we’ve been asked to allow this to be authored through a web site that does translations – we’d be happy to support this if we found one that had an acceptable license, could be used for free, and had an API so we could download the latest translations as part of the build. (I haven’t seen any service that has all those features.)

Multi-language support in resources

There’s no explicit support for multi-language representation in resources other than for value sets. Our experience is that while mixed language content occurs occasionally, it’s usually done informally, in the narrative, and rarely formally tracked. So far, we’ve been happy to leave that as an extension, though there is ongoing discussion about it.

The one place that we know of language being commonly tracked as part of a resource is in document references, with a set of form letters (e.g. procedure preparation notes) in multiple languages. For this reason, the DocumentReference resource has a language for the document it refers to (content.language).
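For example, a minimal sketch shown as a Python dict (the exact element path varies between FHIR versions – in some builds the language sits on the attachment within content – so treat the details as illustrative, and the URL is hypothetical):

```python
# A DocumentReference pointing at a French-language form letter
# (sketch only; URL is hypothetical, and the element path varies by FHIR version)
doc_ref = {
    "resourceType": "DocumentReference",
    "status": "current",
    "content": [{
        "attachment": {
            "contentType": "application/pdf",
            "url": "http://example.org/letters/prep-notes-fr.pdf",
            "language": "fr",  # the language of the referenced document
        }
    }],
}
```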

Question: #FHIR and patient generated data

Question:

With the increase in device usage and general consumer-centric health sites (e.g. myfitnesspal, Healthvault, Sharecare), coupled with the adoption of FHIR, it seems like it is becoming more and more common for a consumer to be able to share their data with a health system. The question I have lies in the intersection of self-reported data and the clinical record.

How are health systems and vendors handling the exchange (really, ingest) of self-reported data?

An easy example is something along the lines of: I report my height as 5’10” and my weight as 175 in MyFitnessPal, and I now want to share all my diet and bio data with my provider.  What happens to the height and weight?  Does it get stored as a note?  As some other data point?  Obviously, with FHIR, the standard for transferring becomes easier; however, I’m curious what it looks like on the receiving end. A more complicated example might be codifying an intake form.  How would I take a data value like “do you smoke” and incorporate that into the EHR?  Does it get stored in the actual clinical record or, again, as a note?  If not in the clinical system, how do I report (a la MU) on this data point?

Answer

Well, FHIR enables this kind of exchange, but what’s actually happening in this regard? As you say, it’s more a policy/procedure question, so I have no idea (though the draft MU Stage 3 rule would give credit for this as data from so-called “non-clinical” sources). But what I can do is ask the experts – leads on both the vendor and provider side. So that’s what I did, and here are some of their comments.

From a major vendor integration lead:

For us, at least, the simplest answer is the always-satisfying “it’s complicated.”

Data generally falls into one of the following buckets:

  a. Data that requires no validation: data that is subjective. PHQ-2/9.
  b. Data that requires no validation (2): data that comes directly from devices/Healthkit/Google Fit.
  c. Data that requires minimal validation: data that is mostly subjective but that a clinician might want to validate that the patient understood the scope of the question – ADLs, pain score, family history, HPI, etc.
  d. Data that requires validation: typically, allergies/problems/meds/immunizations; that is, data that contributes to decision support and/or the physician-authored medical record.
  e. Data that is purely informational and that is not stored discretely.

Depending on what we are capturing, there are different confirmation paths.

Something like height and weight would likely file into (e). Height and weight are (a) already captured as a part of typical OP flow and (b) crucially important to patient safety (weight-based dosing), so it’s unlikely that a physician would favor a patient-reported height/weight over a clinic-recorded value.

That said, a patient with CHF who reports a weight gain > 2lb overnight will likely trigger an alert, and the trend remains important. But the patient-reported value is unlikely to replace a clinic-recorded value.
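To make that bucketing concrete, here’s a toy sketch of such triage in code. The bucket assignments mirror the vendor’s description above, but the type names and the default rule are invented for illustration:

```python
# Toy triage of incoming patient-generated data into the validation buckets above
# (type names and assignments are invented for illustration)
BUCKETS = {
    "phq9_response": "a",    # subjective questionnaire: no validation needed
    "device_reading": "b",   # straight from a device / HealthKit / Google Fit
    "pain_score": "c",       # minimal validation by a clinician
    "medication": "d",       # full validation: feeds decision support
    "height": "e",           # informational only; not stored discretely
    "weight": "e",
}

def route(item_type: str) -> str:
    """Return the validation bucket for an incoming item (default: full validation)."""
    return BUCKETS.get(item_type, "d")

assert route("weight") == "e"       # patient-reported weight stays informational
assert route("medication") == "d"   # meds must be validated before filing
```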

John Halamka contributed this BIDMC Patient Data Recommendation, along with a presentation explaining it. Here’s a very brief extract:

Purpose: To define and provide a process to incorporate Patient Generated Health Data into clinical practice

Clinicians may use PGHD to guide treatment decisions, similarly to how they would use data collected and recorded in traditional clinical settings. Judgment should be exercised when electronic data from consumer health technologies are discordant with other data

Thanks John – this is exactly the kind of information that is good to share widely.

Question: Solutions for synchronization between multiple HL7-repositories?

Question:

In the area of using HL7 for patient record storage, there are use cases that involve various sources of patient information in the care of one patient. For these, we need to be able to offer synchronization between multiple HL7 repositories. Are there any implementations of a synchronization engine between HL7 repositories?

Answer:

There is no single product that provides a solution like this. Typically, a working solution involves a great deal of custom business logic, and is built using a mixture of interface engines, scripting, and bespoke code and services developed in some programming language of choice. See Why use an interface engine?

This is a common problem that has been solved more than once in a variety of ways with a myriad of products.

Here’s an overview of the challenge:

If by synchronization we mean just “replication” from A to B, then A needs to be able to send, and B needs to be able to receive, messages or service calls. If by synchronization we mean two-way “symmetric” synchronization, then you have to add logic to prevent “rattling” (where the same event gets triggered back and forth between the systems). An integration engine can provide the transformations between DB records and messages, but in general the concept codes and identifiers must still be reconciled between the systems.
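One common way to prevent rattling is to tag each update with the system it originated from, and drop updates that echo back. A minimal sketch (field names invented for illustration):

```python
# Minimal echo suppression for two-way synchronization
# (field names are invented for illustration)
LOCAL_SYSTEM = "system-A"

def should_apply(update: dict) -> bool:
    """Apply an incoming update only if this system did not originate it."""
    return update.get("origin") != LOCAL_SYSTEM

echoed = {"patient_id": "12345", "weight_kg": 80, "origin": "system-A"}
assert not should_apply(echoed)  # our own update coming back: drop it, don't re-trigger
```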

For codes, an “interlingua” like SNOMED, LOINC, etc. is helpful if one or both of the systems uses local codes. The participants may implement translations (lookups) to map to the other participant, or to the interlingua (which acts as the mediating correlator); the interface engine can call services, or perform the needed lookups. “Semantic” mapping incorporates extra logic for mapping concepts that are divided into their aspects (like LOINC: body system, substance, property, units, etc.). Naturally, if all participants actually support the interlingua natively, the problem goes away.

For identifiers, a correlating EMPI at each end can find-or-register patients based on matching rules. If a simplistic matching rule is sufficient and the receiving repository is just a database, then the integration engine alone could map the incoming demographic profile to a query against the patients table and look up the target patient – and add one if it’s new.

But if the target repository has numerous patients, with probabilistic matching rules (to maximize the rate of unattended matches, i.e. not bringing a human registrar into the loop to do merges), then the receiving system should implement a service of some kind (using the HL7/OMG IXS standard, OMG PIDS (ref?), or FHIR), and the integration engine can translate the incoming demographics into a find-or-register call to that service. Such a project will of course require some analysis and configuration, but with most interface engines there will be no need for conventional programming. Rather, you have (or make) trees that describe the message segments, tables, or service calls, and then you map (drag/drop) the corresponding elements from sources to targets.
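Here’s a minimal sketch of the find-or-register pattern described above, using a simplistic deterministic matching rule (all names and the matching rule are invented for illustration; a real EMPI would match probabilistically):

```python
# Find-or-register against a simple patient table
# (names and matching rule invented; real EMPIs use probabilistic matching)
patients = []

def find_or_register(demographics: dict) -> dict:
    """Return the matching patient record, registering a new one if no match is found."""
    key = ("family", "given", "birth_date")
    for patient in patients:
        if all(patient[k] == demographics[k] for k in key):
            return patient  # unattended match
    record = dict(demographics, id=f"pat-{len(patients) + 1}")
    patients.append(record)
    return record

a = find_or_register({"family": "Smith", "given": "John", "birth_date": "1970-01-01"})
b = find_or_register({"family": "Smith", "given": "John", "birth_date": "1970-01-01"})
assert a["id"] == b["id"]  # the second call found the already-registered patient
```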

An MDM or EMPI product worth its salt will implement a probabilistic matching engine and a web-callable interface (SOAP or REST) as described. If the participants are organizationally inside the same larger entity (a provider health system), then the larger organization may implement a mediating correlator, just like the interlingua for terminology. The “correlating” EMPI assigns master identifiers in response to incoming feeds (carrying local ids) from source systems; that EMPI can then service “get corresponding ids” requests to support the scenario you describe. An even tighter integration results if one or both participants actually use that “master” id domain as their patient identifiers.

Here are some example projects along these lines:

  • dbMotion created a solution that allows a clinical workstation to access information about a common patient from multiple independent EMRs. It accomplished this by placing an adapter on top of each EHR that exposed its data content in a common format (based upon the RIM) that their workstation application was able to query, merging the patient data from all the EMRs into a single desktop view. The actual data in the source EHRs were never modified in any way. This was implemented in Israel and then replicated in the US one RHIO at a time. (Note: dbMotion has since been acquired by Allscripts.)
  • California State Immunization created a solution that facilitated synchronization of patient immunization history across the nine different immunization registries operating within the state. The solution was based upon a family of HL7 v2 messages that enabled each registry to request patient detail from another and use the query result to update its own record. This solution was eventually replaced by converting all the registries to a common technical platform and then creating a central instance of the system that served all of the regional registries in common (so synchronization was no longer an issue once there was a single database of record, which is much simpler to maintain).
  • LA County IDR is an architecture put in place in Los Angeles County to integrate data from the 19+ public health information systems, both as a means of creating a master database that could be used for synchronization and as a single source to feed data analytics. The Integrated Data Repository was built using a design that was first envisioned as part of the CDC PHIN project. The IDR is a component of the CDC’s National Electronic Disease Surveillance System (NEDSS), implemented in at least 16 state health departments.

The following people helped with this answer: Dave Shaver, Abdul Malik Shakir, Jon Farmer