FHIR Notepad++ Plug-in: Tools for #FHIR developers

I’m pleased to announce that the FHIR Plug-in for Notepad++ that was distributed and tested at the DevDays in Amsterdam last week is now ready for general release.

Notepad++ is a powerful text editor that’s pretty popular with developers – it seems most of us who use Windows use it. And it has a flexible and powerful plug-in framework with an active community around it. So it was the logical choice for a set of FHIR tools. The FHIR tools themselves offer useful functionality for FHIR developers (programmers, analysts), based on the kinds of things that we need to do at connectathons or when authoring content for the specification.



The FHIR Toolbox makes the following functionality available:

  • XML / JSON interconversion (see the example after this list)
  • Interactive and background resource validation
  • FHIR Path execution and debugging (more on that in a subsequent post)
  • Create a new populated resource from its definition (or a profile on it)
  • Connect to a server (including Smart-on-FHIR servers)
  • Open a resource from a server
  • PUT/POST a resource to a server
  • Narrative generation
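
As a taster, the XML / JSON interconversion has to be schema-aware, because the two FHIR serializations differ in non-obvious ways. Here’s the same minimal Patient in both forms (content illustrative, following the DSTU2 rules):

    <Patient xmlns="http://hl7.org/fhir">
      <name>
        <family value="Chalmers"/>
        <given value="Peter"/>
      </name>
      <birthDate value="1974-12-25"/>
    </Patient>

    {
      "resourceType": "Patient",
      "name": [{ "family": ["Chalmers"], "given": ["Peter"] }],
      "birthDate": "1974-12-25"
    }

Repeating elements become JSON arrays, primitive values move out of XML attributes, and the resource name moves into the resourceType property – which is why a generic XML-to-JSON converter won’t do the job.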



In addition, there’s a visualizer provided. This shows the following kinds of information:

  • Narrative for the resource
  • Validation and Path execution results
  • Information about the current element (e.g. viewer for attachments, codes, references)

Note that the FHIR Plug-in doesn’t provide syntax highlighting for XML or JSON – there are other plug-ins for that.

Additional Notes:

  • The FHIR tools are based on DSTU2
  • There is ongoing active development of the plug-in, so expect glitches and bugs (report bugs using my FHIRServer GitHub project)
  • To get notified of updates, subscribe to the RSS link on the downloads page, or the plug-in itself will notify you when a new version is released
  • Yes, Notepad++ is only available for Windows. For OS X developers: well, you can get a Windows VM. It’s unlikely that it will be feasible to port the tools to OS X
  • For further documentation, see the Wiki page


Language Localization in #FHIR

Several people have asked about the degree of language localization in FHIR, and what language customizations are available.


The specification itself is published and balloted in English (US). All of our focus is on getting that one language correct.

There are a couple of projects to translate the specification to other languages: Russian and Japanese, but neither of these has gone very far, and to do it properly would be a massive task. We would consider tooling in the core build to make this easier (well, possible), but it’s not a focus for us.

One consequence of the way the specification works is that the element names (in either JSON or XML) are in English. We think that’s OK because they are only references into the specification; they’re never meant to be meaningful to anyone other than a user of the specification itself.

Codes & Elements

What is interesting to us is publishing alternate language phrases to describe the codes defined in FHIR code systems, and the elements defined in FHIR resources. These are text phrases that do appear to the end-user, so it’s meaningful to provide these.

A Code System defined in a value set has a set of designations for each code, and these designations have a language on them:

[image: the code system designation structure]
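
In DSTU2 JSON, a minimal sketch of that structure looks like this (the code and translations here are illustrative, drawn from HL7 v2 table 0001):

    {
      "resourceType": "ValueSet",
      "codeSystem": {
        "system": "http://hl7.org/fhir/v2/0001",
        "concept": [{
          "code": "F",
          "display": "Female",
          "designation": [
            { "language": "nl", "value": "Vrouw" },
            { "language": "de", "value": "Weiblich" }
          ]
        }]
      }
    }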

Not only have we defined this structure, we’ve managed to provide Dutch and German translations for the HL7 v2 tables. However, at present, these are all we have. We have the tooling to add more, it’s just a question of the HL7 Affiliates deciding to contribute the content.

For the elements (fields) defined as part of the resources, we’d happily add translations of these too, though there’s no native support for this in the StructureDefinition, so it would need an extension. However no one has contributed content like this yet.
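
To make that concrete, such an extension might look something like the sketch below, attached to an element definition inside a StructureDefinition – but note that the extension URL and structure here are entirely hypothetical; nothing like this is defined today:

    {
      "path": "Patient.birthDate",
      "short": "The date of birth for the individual",
      "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/translated-short",
        "extension": [
          { "url": "language", "valueCode": "de" },
          { "url": "content", "valueString": "Das Geburtsdatum der Person" }
        ]
      }]
    }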

It’s not necessary to do this as part of the specification (though doing it there provides the most visibility and distribution); however, we haven’t defined any formal structure for a language pack outside the specification. If there’s interest, this is something we could do in the future.

Useful Phrases

There’s also a source file that has a set of common phrases – particularly error phrases (which do get displayed to the user) – in multiple languages. This is available as part of the specification. Some of the public test servers use these messages when responding to requests.

Note that we’ve been asked to allow this to be authored through a web site that does translations – we’d be happy to support this if we found one that had an acceptable license, could be used for free, and had an API so we could download the latest translations as part of the build (I haven’t seen any service that has all those features).

Multi-language support in resources

There’s no explicit support for multi-language representation in resources other than for value sets. Our experience is that while mixed-language content occurs occasionally, it’s usually done informally, in the narrative, and rarely formally tracked. So far, we’ve been happy to leave that as an extension, though there is ongoing discussion about it.

The one place that we know of language being commonly tracked as part of a resource is in document references, with a set of form letters (e.g. procedure preparation notes) in multiple languages. For this reason, the DocumentReference resource has a language for the document it refers to (content.language).

#FHIR DevDays Amsterdam 2015

In two weeks’ time – Nov 18 2015 – I will be joining Ewout Kramer from Furore in Amsterdam – along with other core team members James Agnew, Josh Mandel, and Lloyd McKenzie – for the Furore #FHIR DevDays 2015. I’m really looking forward to this – it’s the peak European FHIR event, and we’ve got a great program lined up:

  • Patient Track: Create, update and search patients with FHIR.
  • Terminology Services Track: See if you can work with FHIR’s terminology operations: expand value sets, validate your codes, and get human-readable labels for your codes (see the sketch after this list).
  • Profile & Validation Track: Create a profile and an instance, and ask a server to validate the instance according to your profile.
  • SMART on FHIR track: Extend your server or build a client to add OAuth2 to the FHIR REST interface, and see whether they can authenticate and talk.
  • Imaging Track: Imaging, DICOM and FHIR.
  • API Beginners Track: Get your very first FHIR client application up and running.
  • Community Track: Presentations of real life experiences with FHIR.
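
To give a feel for the Terminology Services track, here’s a minimal sketch of the kinds of calls involved (Python; the server base URL is hypothetical, and the operations are as defined in DSTU2):

    import requests

    base = "http://tx.example.org/fhir"  # hypothetical DSTU2 terminology server
    headers = {"Accept": "application/json+fhir"}  # DSTU2 JSON media type

    # expand a value set into the flat list of codes it contains
    r = requests.get(base + "/ValueSet/administrative-gender/$expand", headers=headers)
    print([c["code"] for c in r.json()["expansion"]["contains"]])

    # validate a code against a value set
    r = requests.get(base + "/ValueSet/administrative-gender/$validate-code",
                     params={"system": "http://hl7.org/fhir/administrative-gender",
                             "code": "female"},
                     headers=headers)
    print(r.json())  # a Parameters resource with a boolean "result"

    # get a human-readable display for a code ($lookup is on ValueSet in DSTU2)
    r = requests.get(base + "/ValueSet/$lookup",
                     params={"system": "http://loinc.org", "code": "39480-9"},
                     headers=headers)
    print(r.json())  # a Parameters resource including a "display" part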

The full schedule is here: http://goo.gl/gHGmCB, along with the presenters (Keith Boone, Brad Genereaux, Michel Rutten, Dunmail Hodkinson, Mirjam Baltus, Kevin Mayfield, Marten Smits, René Spronk, Simone Heckmann – what a stellar line-up!)

In addition to this, the FHIR Core Team will be announcing a couple of new and exciting implementation features for the community at DevDays – hopefully I’ll see you there!



Clinicians on FHIR

From very early in the FHIR project, we’ve been running Connectathons, where a group of people – mainly developers – gather to test one of a variety of exchange scenarios. The connectathons perform several key functions:

  • Build a community with practical experience using the specification
  • Accelerate the progress of specific functionality towards production
  • Provide detailed QA of the specification

However, while these connectathons perform a thorough QA of parts of the specification, there are other parts that they don’t check at all. Principally, this is whether the specification offers support for a broad – and realistic – set of clinical use cases. That’s because the developers involved in the technical connectathon pick simple data for any content that isn’t central to their exchange concerns.

To balance this, we’ve been running a series of what we initially called ‘clinical connectathons’. Over time, the clinicians who run these – the co-chairs of the Patient Care committee (Stephen Chu, Laura Heermann Langford, and Emma Jones, plus Russ Leftwich and Viet Nguyen) – have found that the ‘connectathon’ approach is not the best way to address the issues, and the event has been renamed ‘Clinicians on FHIR’.

The attendees at ‘Clinicians on FHIR’ are divided up into groups. Each group is given a series of clinical scenarios, and their task is to create a set of resources that represent the kind of details from the scenario that would be entered in a clinical system such as an EHR, using a mock-EHR tool maintained at http://clinfhir.com. This tool is maintained by David Hay from Orion Health in NZ to support the Clinicians on FHIR event, but shows real promise as a general teaching/testing tool for FHIR.

The process of creating resources is a great way to help the 3 different types of participants at the event – clinicians, domain experts who control the resources, and technical support who understand the FHIR infrastructure – to understand the process of converting from a set of clinical ideas to a set of resources. It’s also a great way to understand the limits of the existing resource contents – the QA that’s missing from the connectathons.

This meeting, it was my pleasure to provide support to a set of experts working on Medication and Allergy/Intolerance scenarios. In addition to Dr Russ Leftwich, the group included some key contributors to FHIR: the co-chairs of the pharmacy working group (Melva Peters, John Hatem, Marla Albitz, and Scott Robinson).

An aside about this: I often tell people that the single most important part of the FHIR specification is the definitions of the resources, based on a comprehensive understanding of the relevant domain requirements from around the world, and a thorough consensus-building approach to their management. This creates the basis for broad buy-in to the use of FHIR, and it’s a gradual, slow process that can only be stewarded by experts in the domain who also have the capacity to lead the group to consensus. This typically means the co-chairs of the relevant committees. They pour their heart and soul into this, and it’s easy for FHIR stakeholders to under-appreciate their work. (See, for example, the release announcement for DSTU 2, where I mentioned the editors who actually committed content to the specification, but omitted to list the committee co-chairs – who often make a greater contribution.)

There’s no simple summary for the outcomes from our group – we worked on several clinical scenarios, found several deficiencies in the resources and in the ClinFhir tool, and created a number of tasks to clarify the specification in the future.

For me, the overall outcome is that the Patient Care co-chairs feel that we’ve found a formula that works for Clinicians on FHIR, and we’re looking at expanding it – both in terms of volume of participants, and in running the event on a more continuous (virtual) basis, which will accelerate our improvement of the specification.

FHIR DSTU2 is published

The FHIR team is pleased to announce that FHIR DSTU 2 is now published at http://hl7.org/fhir. The 2nd DSTU is an extensive rewrite of all parts of the specification. Among the highlights, this version:

  • Simplifies the RESTful API
  • Extends search and versioning significantly
  • Increases the power and reach of the conformance resources and tools
  • Defines a terminology service
  • Broadens functionality to cover new clinical, administrative and financial areas
  • Incorporates thousands of changes in existing areas in response to trial use

As part of publishing this version, we have invested heavily in the quality of the process and the specification, and the overall consistency is much improved. A full list of changes to the FHIR standard can be found at http://hl7.org/fhir/history.html#history.

In addition, DSTU2 is published along with several US-realm specific implementations developed in association with the ONC: DAF, SDC, and QICore.

This release has involved an astounding amount of work from the editorial team, which, in addition to me, includes:

  • Lloyd McKenzie
  • Brian Postlethwaite
  • Eric Haas
  • Jason Matthews
  • Mark Kramer
  • Paul Knapp
  • Brett Marquard
  • Ewout Kramer
  • Richard Ettema
  • Claude Nanjo
  • James Agnew
  • Josh Mandel
  • John Moehrke
  • Nagesh (Dragon) Bashyam
  • Alexander Henket
  • Chris Moesel
  • Marc Hadley
  • Rob Hausam
  • Bryn Rhodes
  • Nathan Davis
  • Jason Walonoski
  • Rik Smithies
  • Molly Ullman-Cullere
  • Chris Nickerson
  • Jean Duteau
  • Chi Tran
  • David Hay
  • Tom Lukasik
  • Hugh Glover
  • Chris Millet
  • Fabien Laurent
  • Marla Albitz
  • Richard Kavanagh
  • Brad Arndt
  • Brett Esler
  • Chris White
  • Jay Lyle
  • Eric Larson
  • Lorraine Constable
  • Ken Rubin

In addition to this, the HL7 leadership, the wider HL7 community, and the wider FHIR adoption community have all made significant contributions. Additional contributors are recognised in the specification.

Note: there is still much to be done; this is the first full DSTU release, and it will get a lot of use. I’ll make a series of follow-up posts describing some of the significant aspects of this release, and our overall plans going forward, over the next couple of weeks.


What is the state of CDA R3?


We have seen references to new extension methodologies being proposed in CDA R3; however, I can’t seem to find what the current state of CDA R3 is. Most searches return old results. The most recent document relates to CDA R3 using FHIR instead of the RIM. What is the current state of CDA R3, and where can I find more information? HL7’s pages seem to be pretty old.


The Structured Documents work group at HL7 (the community that maintains the CDA standard), is currently focused on publishing a backwards compatible update to CDA R2 called CDA R2.1. CDA R3 work has been deferred to the future, both in order to allow the community to focus on R 2.1, and to allow FHIR time to mature.

There is general informal agreement that CDA R3 will be FHIR-based, but I wouldn’t regard that as formal or final; FHIR has to demonstrate value in some areas where it hasn’t yet before a final decision can be made. I expect that we’ll have more discussions about this at the HL7 working meeting in Atlanta in October.

Question: #FHIR and patient generated data


With the increase in device usage and general consumer-centric health sites (e.g. MyFitnessPal, HealthVault, Sharecare) coupled with the adoption of FHIR, it seems like it is becoming more and more common for a consumer to be able to share their data with a health system. The question I have lies in the intersection of self-reported data and the clinical record.

How are health systems and vendors handling the exchange (really, ingest) of self-reported data?

An easy example is something along the lines of: I report my height as 5’10” and my weight as 175 in MyFitnessPal, and I now want to share all my diet and bio data with my provider. What happens to the height and weight? Does it get stored as a note? As some other data point? Obviously, with FHIR, the standard for transferring becomes easier; however, I’m curious what it looks like on the receiving end. A more complicated example might be codifying an intake form. How would I take a data value like “do you smoke” and incorporate that into the EHR? Does it get stored in the actual clinical record or, again, as a note? If not in the clinical system, how do I report (a la MU) on this data point?


Well, FHIR enables this kind of exchange, but what’s actually happening in this regard? As you say, it’s more a policy / procedure question, so I have no idea (though the draft MU Stage 3 rule would give credit for this as data from so-called “non-clinical” sources). But what I can do is ask the experts – leads on both the vendor and provider side. So that’s what I did, and here are some of their comments.

From a major vendor integration lead:

For us, at least, the simplest answer is the always-satisfying “it’s complicated.”

Data generally falls into one of the following buckets:

  a. Data that requires no validation: data that is subjective, e.g. PHQ-2/9 scores.
  b. Data that requires no validation (2): data that comes directly from devices/HealthKit/Google Fit.
  c. Data that requires minimal validation: data that is mostly subjective, but where a clinician might want to validate that the patient understood the scope of the question – ADLs, pain score, family history, HPI, etc.
  d. Data that requires validation: typically allergies/problems/meds/immunizations; that is, data that contributes to decision support and/or the physician-authored medical record.
  e. Data that is purely informational and that is not stored discretely.

Depending on what we are capturing, there are different confirmation paths.

Something like height and weight would likely file into (e). Height and weight are (a) already captured as a part of typical OP flow and (b) crucially important to patient safety (weight-based dosing), so it’s unlikely that a physician would favor a patient-reported height/weight over a clinic-recorded value.

That said, a patient with CHF who reports a weight gain > 2lb overnight will likely trigger an alert, and the trend remains important. But the patient-reported value is unlikely to replace a clinic-recorded value.

John Halamka contributed this BIDMC Patient Data Recommendation, along with a presentation explaining it. Here’s a very brief extract:

Purpose: To define and provide a process to incorporate Patient Generated Health Data into clinical practice

Clinicians may use PGHD to guide treatment decisions, similarly to how they would use data collected and recorded in traditional clinical settings. Judgment should be exercised when electronic data from consumer health technologies are discordant with other data

Thanks John – this is exactly the kind of information that is good to share widely.
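
Coming back to the FHIR side of the original question: one natural way to carry self-reported data such as the height and weight example is an ordinary Observation whose performer is the patient, which leaves the receiving system free to apply whatever validation policy it chooses (one of the buckets above). A minimal DSTU2 sketch, with illustrative references and values:

    {
      "resourceType": "Observation",
      "status": "final",
      "code": {
        "coding": [{ "system": "http://loinc.org", "code": "29463-7", "display": "Body weight" }]
      },
      "subject": { "reference": "Patient/example" },
      "performer": [{ "reference": "Patient/example" }],
      "valueQuantity": { "value": 79.4, "unit": "kg",
                         "system": "http://unitsofmeasure.org", "code": "kg" }
    }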

Question: Solutions for synchronization between multiple HL7-repositories?


In the area of using HL7 for patient record storage, there are use cases involving the various sources of patient information from the different parties involved in the care of one patient. To support these, we need to be able to offer synchronization between multiple HL7 repositories. Are there any implementations of a synchronization engine between HL7 repositories?


There is no single product that provides a solution like this. Typically, a working solution involves a great deal of custom business logic, and such problems are usually solved using a mixture of interface engines, scripting, and bespoke code and services developed in some programming language of choice. See Why use an interface engine?

This is a common problem that has been solved more than once in a variety of ways with a myriad of products.

Here’s an overview of the challenge:

If by synchronization we mean just “replication” from A to B, then A needs to be able to send and B needs to be able to receive messages or service calls. If by synchronization we mean two-way “symmetric” synchronization, then you have to add logic to prevent “rattling” (where the same event gets triggered back and forth). An integration engine can provide the transformations between DB records and messages, but in general the concept codes and identifiers must still be reconciled between the systems.
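
A common way to prevent rattling is for the synchronization logic to remember the versions it has written itself, and swallow the change events that merely echo them. A minimal sketch in Python, assuming both repositories report a version id with each change event (push_to stands in for the actual write, and is assumed to return the new version id):

    # remember (system, resource_id, version_id) for writes the sync engine made itself
    written_by_sync = set()

    def on_change_event(source, resource_id, version_id, resource, push_to):
        key = (source, resource_id, version_id)
        if key in written_by_sync:
            written_by_sync.discard(key)  # our own update echoing back - stop here
            return
        target = "B" if source == "A" else "A"
        new_version = push_to(target, resource)  # write to the other repository
        written_by_sync.add((target, resource_id, new_version))  # ignore the echo later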

For codes, an “interlingua” like SNOMED CT or LOINC is helpful if one or both of the systems uses local codes. The participants may implement translations (lookups) to map to the other participant or to the interlingua (which acts as the mediating correlator); the interface engine can call services, or perform the needed lookups. “Semantic” mapping incorporates extra logic for mapping concepts that are divided into their aspects (as in LOINC: body system, substance, property, units, etc.). Naturally, if all participants actually support the interlingua natively, the problem goes away.

For identifiers, a correlating EMPI at each end can find-or-register patients based on matching rules. If a simplistic matching rule is sufficient and the receiving repository is just a database, then the integration engine alone could map the incoming demographic profile to a query against the patients table and look up the target patient – and add one if it’s new.
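
For that last, simple case – if the receiving repository is (or fronts) a FHIR server and a deterministic matching rule is acceptable – the find-or-register step might look like this sketch (Python against a hypothetical DSTU2 endpoint; a real EMPI’s matching rules would be far richer):

    import requests

    BASE = "http://repo.example.org/fhir"  # hypothetical receiving repository

    def find_or_register(family, given, birthdate):
        """Return the id of a matching Patient, creating one if none exists."""
        r = requests.get(BASE + "/Patient",
                         params={"family": family, "given": given, "birthdate": birthdate},
                         headers={"Accept": "application/json+fhir"})
        bundle = r.json()
        if bundle.get("total", 0) > 0:
            return bundle["entry"][0]["resource"]["id"]
        patient = {
            "resourceType": "Patient",
            "name": [{"family": [family], "given": [given]}],  # DSTU2: family repeats
            "birthDate": birthdate,
        }
        r = requests.post(BASE + "/Patient", json=patient,
                          headers={"Content-Type": "application/json+fhir"})
        # Location header is [base]/Patient/[id]/_history/[vid]
        return r.headers["Location"].split("/")[-3]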

But if the target repository has numerous patients, with probabilistic matching rules (to maximize the rate of unattended matches, i.e. not bringing a human registrar into the loop to do merges), then the receiving system should implement a service of some kind (using the HL7/OMG IXS standard, OMG PIDS, or FHIR), and the integration engine can translate the incoming demographics into a find-or-register call to that service. Such a project will of course require some analysis and configuration, but with most interface engines there will be no need for conventional programming. Rather, you have (or make) trees that describe the message segments, tables, or service calls, and then you map (drag/drop) the corresponding elements from sources to targets.

An MDM or EMPI product worth its salt will implement a probabilistic matching engine and expose a web-callable interface (SOAP or REST) as described. If the participants are organizationally inside the same larger entity (a provider health system), then the larger organization may implement a mediating correlator, just like the interlingua for terminology. The “correlating” EMPI assigns master identifiers in response to incoming feeds (carrying local ids) from source systems; then that EMPI can service “get corresponding ids” requests to support the scenario you describe. An even tighter integration results if one or both participants actually uses that “master” id domain for its patient identifiers.

Here’s some example projects along these lines:

  • dbMotion created a solution that would allow a clinical workstation to access information about a common patient from multiple independent EMRs. It accomplished this by placing an adapter on top of each EHR that exposed its data content in a common format (based upon the RIM) that their workstation application was able to query, merging the patient data from all the EMRs into a single desktop view. The actual data in the source EHRs were never modified in any way. This was implemented in Israel and then replicated in the US one RHIO at a time. (Note: dbMotion has since been acquired by Allscripts.)
  • California State Immunization created a solution that facilitated synchronization of patient immunization history across the nine different immunization registries operating within the state. The solution was based upon a family of HL7 v2 messages that enabled each registry to request patient detail from another and use the query result to update its own record. This solution was eventually replaced by converting all the registries to a common technical platform and then creating a central instance of the system that served all of the regional registries in common (so synchronization was no longer an issue now that there was a single database of record, which is much simpler to maintain).
  • LA County IDR is an architecture put in place in Los Angeles County to integrate data from the 19+ public health information systems, both as a means of creating a master database that could be used for synchronization and as a single source to feed data analytics. The Integrated Data Repository was built using a design that was first envisioned as part of the CDC PHIN project. The IDR is a component of the CDC’s National Electronic Disease Surveillance System (NEDSS), implemented in at least 16 state health departments.

The following people helped with this answer: Dave Shaver, Abdul Malik Shakir, Jon Farmer

Profiles and Exceptions to the Rules

One of the key constructs in FHIR is a “profile”. A profile is a statement of how FHIR resources are used for a particular solution – or, how they should be used. The FHIR resources are a general-purpose construct, and you can do general-purpose kinds of things with them, such as store the data in a PHR, or do generally useful display of a clinical record, etc.

But if you’re going to do something more specific, then you need to be specific about the contents. Perhaps, for instance, you’re going to write a decision support module that takes in ongoing glucose and HbA1c measurements, and keeps the patient informed about how well they are controlling their diabetes. In order for a patient or an institution to use that decision support module well, the author of the module is going to have to be clear about what the acceptable input measurements are – and it’s very likely, unfortunately, that the answer is ‘not all of them’. Conversely, if the clinical record system is going to allow its users to hook up decision support modules like this, it’s going to have to be clear about what kind of glucose measurements it might feed to the decision support system.

If both the decision support system and the clinical record system produce profiles, a system administrator might even be able to get an automated comparison to see whether they’re compatible. At least, that’s where we’d like to end up.

For now, however, let’s just consider the rules themselves. A clinical record system might find itself in this situation (a conforming example follows the list):

  • We can provide a stream of glucose measurements to the decision support system
  • They’ll come from several sources – labs, point of care testing devices, inpatient monitoring systems, and wearables
  • There’s usually one or more intermediary systems between the actual glucose measurement, and the clinical record system (diagnostic systems, bedside care systems, home health systems – this is a rapidly changing space)
  • Each measurement will have one of a few LOINC codes (say, 39480-9: Glucose [Moles/volume] in Venous blood; 41652-9: Glucose [Mass/volume] in Venous blood; 14743-9: Glucose [Moles/volume] in Capillary blood by Glucometer)
  • The units of measure will be mg/dL or mmol/L
  • There’ll be a numerical value, perhaps with a greater-than or less-than comparator (e.g. >45 mmol/L)
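
Putting those rules together, a conforming measurement would look something like this minimal DSTU2 sketch (the hedged value shows the comparator from the last bullet; references are illustrative):

    {
      "resourceType": "Observation",
      "status": "final",
      "code": {
        "coding": [{ "system": "http://loinc.org", "code": "39480-9",
                     "display": "Glucose [Moles/volume] in Venous blood" }]
      },
      "subject": { "reference": "Patient/example" },
      "valueQuantity": { "comparator": ">", "value": 45, "unit": "mmol/L",
                         "system": "http://unitsofmeasure.org", "code": "mmol/L" }
    }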

So you can prepare a FHIR profile that says this one way or another. And then a decision support engine can have a feel for what kind of data it might get, and make sure it can handle it all appropriately.

So that’s all fine. But…

Eventually, the integration engineers that actually bring the data into the system discover – usually by looking at rejected messages – that 1 in a million inbound glucose measurements from the lab contains a text message instead of a numerical value. The message might be “Glucose value too high to determine”. Now what? From a clinical safety perspective, it’s almost certain that the integration engineers won’t replace “too high to determine” with “>N”, where N is some arbitrarily chosen number – there’s no number they can choose that isn’t wrong. And they won’t be able to get the source system to change its interface either – that would have knock-on effects for other customers / partners of the source system. Nor can they drop the data from the clinical record – it’s the actual test result. So they’ll find a way to inject that value into the system.
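
Concretely, the injected result ends up as a valueString where the profile promised a valueQuantity – something like this (sketch):

    {
      "resourceType": "Observation",
      "status": "final",
      "code": { "coding": [{ "system": "http://loinc.org", "code": "39480-9" }] },
      "valueString": "Glucose value too high to determine"
    }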

By the way – an aside – some of the things that go in this string value could go in Observation.dataAbsentReason, but they’re not coded, and it’s not possible to confidently decide which are missing reasons and which are ‘text values’. So dataAbsentReason isn’t a solution for this case, though it’s always relevant.

Now the system contains data that doesn’t conform to the profile it claimed to use. What should happen?

  1. The system hides the data and doesn’t let the decision support system see it
  2. The system changes its profile to say that it might also send text instead of a number
  3. The system exposes the non-conformant data to the decision support system, but flags that it’s not valid according to its own declarations

None of these is palatable. I assume that #1 isn’t possible, at least not as a blanket policy. There’s going to be some clinical safety reason why the value has to be passed on, just the same as the integration engineers passed it on in the first place, so that they’re not liable.

Option #2 is a good system/programmer choice – just tell me what you’re going to do, and don’t beat around the bush. And the system can do this – it can revise the statement ‘there’ll be a numerical value’ to something like ‘there’ll be a numerical value, or some text’. At least this is clear.
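
In StructureDefinition terms, option #2 amounts to loosening the type of the value element – roughly this kind of element definition (a sketch, not a worked profile):

    {
      "path": "Observation.value[x]",
      "type": [
        { "code": "Quantity" },
        { "code": "string" }
      ]
    }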

Only it creates a problem – now the consumer of the data knows that they might get a number, or a string. But why might they get a string? What does it mean? Someone does know, somewhere, that the string option is used 1 time in a million, but there’s no way (currently, at least) to say this in the profile – it just says what’s possible, not what’s good, or ideal, or common. If you start considering the impact of data quality on every element – which you’re going to have to do – then you’re going to end up with a profile that’s technically correct but quite uncommunicative about what the data will typically be, and that provides no guidance as to what it should be, so that implementers know what to do. (And observationally, if you say that it can be a string, then, hey, that’s what the integration engineers will do too, because it’s quicker….)

That’s what leads to the question about option #3: maybe the best thing to do is to leave the profile saying what’s ideal, what’s intended, and let systems flag non-conforming resources with a tag, or wrong elements with an extension? Then the consumer of the information can always check, and ignore it if they want to.

That is, if they know about the flag, and remember. Which means we’d need to define it globally, and the standard itself would have to tell people to check for data that isn’t consistent with its claims… and then we’d have to add overrides to say that some rules actually mean what they say, as opposed to not actually meaning that…. It all sounds really messy to me.

Perhaps the right way to handle this is to have ideal and actual profiles? That would mean an extension to the Conformance resource so you could specify both – but already the interplay between system and use-case profiles is not well understood.

I think this area needs further research.

P.S. There’s more than a passing similarity between this case and the game of ‘hot potato’ I used to play as a kid: who’s going to have to do something about this bad data?

#FHIR Report from the Paris Working Meeting

I’m on the way home from HL7’s 2015 May Working Group Meeting. This meeting was held in Paris. Well, not quite Paris – at the Hyatt Regency at Charles De Gaulle Airport.


A sad and quite unexpected event occurred at this meeting – Helmut Koenig passed away. Helmut Koenig was a friend who had attended HL7 and DICOM meetings for many years. Recently, he had contributed to the DICOM related resources, including ImagingStudy and ImagingObjectSelection resources.

Helmut actually passed away at the meeting itself, and we worked on resolving his ballot comments the next day.

Ballot Summary

The FHIR community continues to grow in leaps and bounds. That was reflected in the FHIR ballot: we had strong participation and many detailed comments about the specification itself. Once all the ballot comments had been processed and duplicates removed, and line items traded amongst the various FHIR related specifications, the core specification had 1137 line items for committees to handle. You can see them for yourself on HL7’s gForge.

This is a huge task and will be the main focus of the FHIR community for the next couple of months as we grind towards publication of the second DSTU. At the meeting itself, we disposed of around 100 line items; I thought this was excellent work since we focused on the hardest and most controversial ones.


We had about 70 participants for the connectathon. Implementers focused on the main streams of the connectathon: basic Patient handling, HL7 v2 to FHIR conversion, Terminology Services, and claiming. For me, the key outcomes of the connectathon were:

  • We got further feedback about the quality of the specification, with ideas for improvement
  • Many of the connectathon participants stayed on and contributed to ballot reconciliation through the week.

The connectathons are a key foundation of the FHIR Community – they keep us focused on making FHIR something that is practical and implementer focused.

We have many connectathons planned through the rest of this year (at least 6, and more are being considered). I’ll announce them here as the opportunity arises.


Another pillar of the FHIR community is our collaborations with other health data exchange communities. In addition to our many existing collaborations, at this meeting the FHIR core team met with Continua, the oneM2M alliance, and the IHE test tools team. (We already have a strong collaboration with IHE generally, so this is just an extension of that in a specific area of focus.)

With IHE, we plan to have a ‘conformance test tools’ stream at the Atlanta connectathon, which will test the proposed (though not yet approved) TestScript resource, which is a joint development effort between Mitre, Aegis, and the core team. We expect that the collaboration with Continua will lead to a joint connectathon testing draft FHIR based Continua specifications later this year. Working with oneM2M will involve architectural and infrastructural development, and this will take longer to come to fruition.

FHIR Infrastructure

At this meeting, the HL7 internal processes approved the creation of a “FHIR Infrastructure” Work group. This work group will be responsible for the core FHIR infrastructure – base documentation, the API, the data types, and a number of the infrastructure resources. The FHIR infrastructure group has a long list of collaborations with other HL7 work groups such as Implementation Technology, Conformance and Implementation, Structured Documents, Modelling and Methodology, and many more. This just regularises the existing processes in HL7; it doesn’t signal anything new in terms of development of FHIR.

FHIR Maturity model

One of the very evident features of the FHIR specification as it stands today is that the content in it has a range of levels of readiness for implementation. Implementers often ask about this – how ready is the content for use?

We have a range – Patient, for instance, has been widely tested, including several production implementations. While the content might still change further in response to implementer experience, we know that what’s there is suitable for production implementation. On the other hand, other resources are relatively newly defined, and haven’t been tested at all. This will continue to be true, as we introduce new functionality into the specification; some – a gradually increasing amount – will be ready for production implementation, while new things will take a number of cycles to mature.

In response to this, we are going to introduce a FHIR Maturity Model: a grading based on the well-known CMM index. All resources and profiles that are published as part of the FHIR specification will have an FMM grade to help implementers understand how mature each part of the content is.

FHIR & Semantic Exchange

I still get comments from some parts of the HL7 community about FHIR and the fact that it is not properly based on real semantic exchange. I think this is largely a misunderstanding, which arises for two main reasons:

  • The RIM mappings are largely in the background
  • We do not impose requirements to handle data properly

It’s true that we don’t force applications to handle data properly. I’d certainly like them to, but we can’t force them to, and one of the big lessons from V3 development was that we can’t, one way or another, achieve that. Implementers do generally want to improve their data handling, but they’re heavily constrained by real world constraints, including cost of development, legacy data, and that the paying users (often) don’t care.

And it’s true that the RIM mappings have proven largely of theoretical value; we’ve only had one ballot comment about RIM mappings, and very few people have contributed to them.

What we do instead, is insist that the infrastructure is computable; of all HL7 specifications, only FHIR consistently has all the value sets defined and published. Anyone who’s done CCDA implementation will know how significant this is.

Still, we have a long way to go yet. A key part of our work in this area is the development of RDF representations for FHIR resources, and the underlying definitions, including the reference models, and we’ll be putting a lot of work into binding to terminologies such as LOINC, SNOMED CT and others.

There’s some confusion about this: we’re not defining RDF representations of resources because we think they’re relevant to typical operational exchange of healthcare data; XML and JSON cover that area perfectly well. Rather, RDF representations will be the interface between operational healthcare data exchange and analysis and reasoning tools. Such tools will have applications in primary healthcare and secondary data usage.