Category Archives: FHIR

#FHIR Paradigms

Many of the FHIR tutorials show this diagram:

The goal of this diagram is to show that FHIR defines 3 different ways to exchange FHIR resources:

  • Using the RESTful interface – the most high profile way to exchange data
  • Packaging the resources in messages, and pushing them between systems (similar architecture as HL7 v2)
  • Packaging the resources in documents, with a managed presentation (similar architecture as CDA)

Also, what this diagram is intending to show is that in addition to the 3 ways defined by the FHIR specification itself, there are other ways to exchange resources. Since the most common alternative method is to use some enterprise web services framework (say, some kind of SOAPy thing), we called it ‘services’.

But that’s misleading; a ‘service’ is some orchestration of exchange of FHIR resources, and most implementation projects build their services using some combination of RESTful interfaces, messages, and documents, along with other methods of transfer. And that’s a combination that the FHIR community thinks is perfectly valid. So calling the 4th ‘other’ category “services” is… misleading… to say the least.

However, there’s something beyond that missing from this diagram. In the last year or so, the FHIR community has gradually become more focused on what is emerging as a distinct 4th paradigm: using persistent data stores of some kind or other to exchange data between systems. Classically, this is associated with analytics – but it doesn’t actually need to be. The typical pattern is:

  • Create some kind of persistent store (it can be a SQL database, a NoSQL database, a big data repository, an RDF triple store)
  • Applications generating data commit resources to the store
  • Applications using the data find and retrieve it from the store at a later time
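The pattern above can be sketched in a few lines. This is a minimal illustration only: the class and method names are my own, resources are plain dicts, and an in-memory dict stands in for whatever real persistent store (SQL, NoSQL, big data, RDF) is used.

```python
# A minimal sketch of the storage paradigm: producers commit resources
# to a shared store, consumers find and retrieve them later.

class ResourceStore:
    def __init__(self):
        self._resources = {}  # keyed by (resourceType, id)

    def commit(self, resource):
        # An application generating data commits a resource to the store
        key = (resource["resourceType"], resource["id"])
        self._resources[key] = resource

    def find(self, resource_type, **criteria):
        # A consuming application finds and retrieves data at a later time
        return [r for (rt, _), r in self._resources.items()
                if rt == resource_type
                and all(r.get(k) == v for k, v in criteria.items())]

store = ResourceStore()
store.commit({"resourceType": "Observation", "id": "obs1",
              "status": "final", "subject": {"reference": "Patient/p1"}})
matches = store.find("Observation", status="final")
```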

We haven’t really acknowledged this paradigm of exchange in the specification – but it’s what the RDF serialization is really about. And all the uses I’ve seen have one uniting characteristic: there’s a need to reconcile the data when it is committed, or later, to help with subsequent data analysis. There are 2 kinds of reconciliation that matter:

  • detecting, preventing or repairing duplicate records
  • matching references (e.g. resolving URLs to their target identity in the database, and storing the native link)
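Both kinds of reconciliation can be sketched on plain-dict resources. Matching purely on a shared business identifier is a simplifying assumption here; real systems use richer (often probabilistic, demographic) matching.

```python
# Sketches of the two reconciliation tasks described above.

def is_duplicate(incoming, existing_resources):
    # Detect a duplicate record via a shared business identifier
    ids = {(i["system"], i["value"]) for i in incoming.get("identifier", [])}
    return any((i["system"], i["value"]) in ids
               for existing in existing_resources
               for i in existing.get("identifier", []))

def resolve_reference(reference, index):
    # Resolve a literal reference like 'Patient/p1' to the native
    # database key recorded in the index (here, just an integer)
    return index.get(reference)

index = {"Patient/p1": 42}  # reference -> native db key
patient_a = {"identifier": [{"system": "urn:mrn", "value": "123"}]}
patient_b = {"identifier": [{"system": "urn:mrn", "value": "123"}]}
dup = is_duplicate(patient_b, [patient_a])
native_key = resolve_reference("Patient/p1", index)
```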

One of the subjects I’d like to discuss in New Orleans next month is gathering the various disparate strands of work in the community around a ‘storage paradigm’ into a single coherent whole – if that’s possible. It’s something that we’ve been slow to take up, mainly because HL7 classically agrees to focus on the ‘interface’ and keep away from vendor design. But what’s happening in the community now has moved past this.

In particular, there’s really interesting and energetic work using new databases (or new features of old databases) to store FHIR resources directly in the database, and performing the analysis directly on the resources. Google presented some really interesting work around this at DevDays in Amsterdam a couple of weeks ago, and we’re working towards hosting a publicly usable example of this.

Clearly, we’ll need a new diagram to keep up with all these changes.

Question: LOINC code for smoking start date

Question:

My team is currently working with FHIR DSTU2, and part of the project that we are working on requires smoking information, particularly the date the patient started smoking. Part of this work is also mapping this particular information to LOINC, and basically our issue is that we are unable to find anything in LOINC that refers to a patient’s smoking start date. What would be your suggested workaround for the above?

We did find quit date, which is good – just not start date.

Answer:

Well, there’s no LOINC code for a start date to match 74010-0 (Date quit tobacco smoking). So the best option is to make up your own local code, propose that LOINC add a matching code, and then change over to the official code once that’s done.

You can propose new LOINC codes here: https://loinc.org/submissions/
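In the meantime, a made-up local code might be carried like this. The system URL, code value, and display text below are all my own illustrative assumptions – you would use your own local code system.

```python
# Illustrative only: an Observation carrying a local 'smoking start
# date' code until LOINC publishes an official one.

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://example.org/fhir/local-codes",  # your local code system
            "code": "smoking-start-date",                     # made-up local code
            "display": "Date started tobacco smoking"
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueDateTime": "1999-05-01"
}
```

Once LOINC assigns a real code, the local coding can be replaced (or supplemented, since `coding` is a list) with the official one.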

Sharing Information with a Patient Case Manager


This post (which is co-authored with Josh Mandel, thanks) describes a general pattern for a clinician to initiate sharing of a clinical record between an EHR (or more generally, a clinical records system – it can be more than just an EHR) and some other clinical information service. These might be one of the following:

  • Patient assistance organizations like Medic Alert
  • Patient networks such as ‘PatientsLikeMe’
  • Voluntary/professional Clinical case registries (e.g. Vermont Oxford network)
  • Social support services
  • Government and Private Preventative/Management Programs(?)

During the clinical workflow, the clinician, with consent from the patient, agrees to ‘refer’ the patient’s case to some kind of case manager, and to continue to share information about the patient with the case manager. At some stage, either the clinician or the patient may wish to stop further sharing.

Notes:

  • This is not the same as a formal clinical referral, though there are some similarities – that’ll be a separate post.
  • There’s a spread of balance between clinical and patient interest here – some of these arrangements are ‘what the clinician wants, with consent of the patient’, while others are ‘helpful suggestions from the clinician that the patient might want to run with’. The balance between these may influence which parts of the interaction pattern are more applicable.
  • There are other kinds of case manager registries where the patient doesn’t get consulted. Typically, this is for mandatory public health reporting; that is a different problem for which this particular pattern is not applicable, though parts of it would no doubt be useful.

Interaction pattern

Up till now, this kind of sharing has been very difficult to organize in practice, because it has needed vendor support one way or another, and vendors just don’t have time for the never-ending stream of requests for this. In practice, that means each individual project cannot afford the development. But FHIR, SMART on FHIR, and CDS Hooks change that.

This blog describes a general pattern for using the Argonaut profile and current EHR development to organize this sharing without needing any special arrangements. I’m publishing this because the overall pattern isn’t evident to many participants.

This pattern describes:

  1. How the case manager service can suggest to the clinician that they should initiate the referral
  2. How the case manager service can provide an application that initiates the sharing
  3. How the sharing can be maintained in an ongoing fashion

Pre-requisites

In order to support this pattern, the clinician must be using an EHR that supports the following:

  • Required: Registering and Running SMART on FHIR applications
  • Recommended: An internet facing provider portal (preferred) or patient portal
  • Optional: Registering and using CDS Hooks service for patient-view hooks

Setting up the pattern includes registering the case manager service with the EHR, and vice versa, and probably exchanging public keys. This step is not standardized by the FHIR/SMART on FHIR/CDS Hooks specifications, but it needs to be done.

Interaction

#1 Prompting the clinician to register the patient

The EHR notifies the case manager service that a patient is being viewed. The case manager service uses the local patient id, and patient information – if provided – to determine whether the patient is already registered, or could or should be registered, and, if it wants, can prompt the user to run the registration app.

This part of the pattern is optional: the clinician can always choose to run the registration app (step #2) without being prompted – this just makes the process easier.

Details:

  • EHR calls CDS Hooks Service. Information provided includes something like:
    • hook :  “patient-view”
    • hookInstance : random UUID,
    • patient: {local patient id}
    • context: none
    • prefetch: patient resource, list of patient’s medications, allergies etc (all optional, and case manager service specific)
  • CDS Hooks Service returns a card that says:
    • whether the patient is, is not, or might be registered with the case manager service (based on recognising the local id, or on matching the patient details, if provided)
    • if prefetch was passed, the patient is not registered with the case manager service, and the patient meets the service’s criteria, a link to the SMART app that registers the patient with the case manager service (see #2)
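The call the EHR makes might be assembled like this. This is a sketch only: the field names follow the post’s bullets above (which reflect early CDS Hooks drafts), and the prefetch contents and endpoint are assumptions, not a normative payload.

```python
import json
import uuid

def build_patient_view_request(patient_id, prefetch=None):
    # The EHR notifies the CDS Hooks service that a patient is being viewed
    request = {
        "hook": "patient-view",
        "hookInstance": str(uuid.uuid4()),  # random UUID per invocation
        "patient": patient_id,              # the EHR's local patient id
    }
    if prefetch:
        # e.g. Patient resource, medication list, allergies - all optional,
        # and case manager service specific
        request["prefetch"] = prefetch
    return request

payload = build_patient_view_request("local-123")
body = json.dumps(payload)  # POSTed to the case manager's CDS Hooks endpoint
```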

#2 Registering a patient with the case manager service

The user runs the SMART on FHIR application for the case manager, which gathers information from the EHR (as authorized by the user/institution), prompts the user for any additional information needed by the case manager service, and then submits all the information to the case manager service.

Note that each case manager service provides their own application that does what they want, though there are likely to be one (or several) general open source frameworks that do most of what is entailed here.

Details:

  1. User runs the registration app, either as prompted by the CDS Hooks Service above, or by a manual app launch
  2. SMART app loads patient details
  3. SMART app loads medications, problems, allergies, and a subset of lab results, as appropriate for the case service
  4. SMART app connects to the case manager service, and prompts the user for additional information as required (typically, this would include patient consent information and the reason for referral at least)
  5. User fills out the additional information and submits it (as an optional best practice, the registration information can be submitted as a FHIR Bundle, including a ServiceRequest, a Practitioner resource, a Provenance resource, and the gathered data, as appropriate)
  6. Case Management Service stores all required information within its own database.
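Step 5’s optional best practice might look something like this. A hypothetical sketch only: the resource contents are placeholders, not a profile, and real bundles would carry full resource details.

```python
# Hypothetical registration Bundle: the referral request, the referring
# practitioner, provenance, and the gathered patient data, submitted together.

registration_bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [
        {"resource": {"resourceType": "ServiceRequest",  # reason for referral
                      "status": "active", "intent": "order",
                      "subject": {"reference": "Patient/p1"}}},
        {"resource": {"resourceType": "Practitioner", "id": "dr1"}},
        {"resource": {"resourceType": "Provenance",
                      "target": [{"reference": "Patient/p1"}]}},
        {"resource": {"resourceType": "Patient", "id": "p1"}},
        # ... plus gathered medications, problems, allergies, lab results
    ],
}
```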

Note that FHIR provides the Questionnaire/QuestionnaireResponse resources. In this case, the app would retrieve a blank Questionnaire from the case manager service, prompt the user to fill out any information that cannot be automatically determined, and then save a QuestionnaireResponse as part of its registration Bundle (see the SDC implementation guide for details on this).

#3 Ongoing sharing of the clinical record

This part of the pattern is optional – there might not be any utility in ongoing sharing, or it might not be appropriate to ask (either clinician or patient), or it might not be consented to. Note that there’s a marked difference between consenting to share ‘this set of information’ and ‘whatever arises in the future’.

How ongoing sharing works depends on whether the provider API end-point (used by the application in #2) is publicly available, and whether the ongoing sharing should be patient or provider managed. If the provider end-point is not public, the patient API end-point can be used. If the sharing should be managed by the patient, the patient API should be used. If there’s no patient or provider API publicly available, then there’s no way to orchestrate ongoing sharing; records can only be shared as a one-time process in step #2 when the clinician chooses to (usually during a referral). (Note: whether the provider portal is publicly accessible, and the URL of the patient portal, need to be shared with the application at configuration time.)

Ongoing sharing uses the SMART on FHIR protocol as profiled in Sync for Science (S4S) until access is revoked. Access to the S4S interface requires a bearer token – this pattern relates to acquiring the token.

Provider Portal:

If the provider portal is publicly available, then the application (during step #2) shares its bearer token with the case manager service. How this works:

  • The application asks for an access token with offline scope
  • It shares the token with the case manager service
  • The case manager service uses the token to update the patient’s record periodically
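The periodic pull in the last bullet might be built like this. A sketch under stated assumptions: the base URL is invented, and `$everything` is one illustrative way to fetch the record – S4S actually profiles per-resource queries.

```python
import urllib.request

def build_sync_request(fhir_base, patient_id, bearer_token):
    # Build the periodic pull request using the token that the
    # application shared with the case manager service
    url = f"{fhir_base}/Patient/{patient_id}/$everything"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {bearer_token}",
                      "Accept": "application/fhir+json"})

req = build_sync_request("https://ehr.example.org/fhir", "p1", "token-abc")
# urllib.request.urlopen(req) would then perform the actual pull
```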

Note that it might be useful for the case manager service to have a different token from the case manager application that runs locally and interactively. The application could use the draft IETF token exchange spec for that, though this is not part of the ecosystem now.

Patient Portal:

If there is no provider portal, or if ongoing sharing should be managed by the patient, the patient portal can be used instead. The way this works generally is:

  • The case manager service emails the patient a link (email gathered during registration)
  • patient follows the link and logs in or registers etc
  • patient gets redirected to OAuth login on patient’s portal
  • Patient chooses to authorise the case manager service to access patient data. Scopes could include a broad permission like “patient/*.read”, or granular permissions like “patient/AllergyIntolerance.read”, “patient/MedicationStatement.read”, “patient/MedicationOrder.read”, “patient/Medication.read”, “patient/Condition.read”, “patient/Observation.read”, etc.
  • Case manager service uses the bearer token as described above

If there is no Internet-accessible FHIR API endpoint for the EHR’s data, then there’s another solution: to run a service within the hospital’s internal network that hosts SMART on FHIR apps that can have access to the provider portal. That’s, well, possible, but there’s a series of challenges around administration.

Note:

This pattern is for managing this task using the FHIR/SMART on FHIR/CDS Hooks framework alone. There are other patterns – using the Consent resource – that come into play if the EHRs implement a patient managed consent framework (directly or using UMA), or if the EHR vendor is able/willing to do integration work.

#FHIR and R (Stats / graphing language)

I’ve spent the last 2 days at the 2017 Australian R Unconference working on an R client for FHIR.

For my FHIR readers, R is a language and environment for statistical computing and graphics (and I’ve spent the last couple of days explaining what FHIR is to R people). My goal for the 2 days was to implement a FHIR client in R so that anyone wishing to perform statistical analysis of information available via FHIR could quickly and easily do so. I was invited to the R Unconference by Prof Rob Hyndman (a family friend), as it would be the best environment to work on that.

My work was made a lot easier when Sander Laverman from Furore released an R package earlier this week that does just what I intended to do. We started by learning the package, and testing it. Here’s a graph generated by R using data from test.fhir.org:

Once we had the R Package (and the server) all working, I added a few additional methods to it (the other read methods). For sample code and additional graphs, see my rOpenSci write up.

I think it’s important to make FHIR data easily available in R because it opens up a connection between two different communities – and that’s good for both. Many of the participants at the Unconference were involved in health, and aware of how hard it is to make data available for analysis.

Restructuring the data

FHIR data is nested, hierarchical, and focused on operational use. I’ve written about the difference between operational and analytical use before. Once we had the data being imported from a FHIR server into a set of R data frames, Rob and I looked at the data and agreed that the most important area to focus on was tools to help reshape the data down to a usable form. The thing about this is that it’s not a single problem – the ‘usable form’ will depend entirely on the question that is being asked of the data.

So I spent most of the time at the Unconference extending my graphQL implementation to allow reshaping of the data (in addition to its existing function in assembling and filtering the data). I defined 4 new directives:

I’ve documented the details of this over on the rOpenSci write up, along with examples of how they work. They don’t solve all data restructuring problems by a very long shot, but they do represent a very efficient and reusable way to shift the restructuring work to the server.

There was some interest at the unconference in taking my graphQL code and building it into an R package to allow graphQL query of graphs of data frames in R, to assemble, filter, and restructure them – it’s an idea that’s useful anytime you want to do analysis of a graph of data. But we were all too busy for that – perhaps another time.

Where to now?

I think we should add R support to the AMIA FHIR datathon series, and maybe it’s time to encourage the research track at the main FHIR connectathons to try using R – I think it’s a very powerful tool to add to our FHIR toolkits.

Thanks to Adam Gruer from Melbourne’s Royal Children’s Hospital for helping – those graphs are his. Thanks also to the organisers – particularly Nick Tierney (gitmeister extraordinaire). I picked up some ideas from the Unconference that will improve the FHIR connectathons.

Argonaut in Australia, and the MyHR

Project Argonaut is coming to Australia. That is, at least one major US EHR vendor is planning to make their SMART-on-FHIR EHR extension interface available in Australia during 2018 (discussion about this at Cerner Health Conference where I was last week). HL7 Australia will work with them (and any other vendor) to describe what the Argonaut interface looks like in Australia (short answer: not much different: some different patient extensions, a few terminology changes (not RxNorm), maybe a couple of extensions on prescriptions for Reg24 & CTG). Also, HL7 Australia will be planning to engage with Australian customers of the US EHR vendors to help build a community that can leverage the capabilities of the SMART on FHIR interface.

This is a big deal for Australian EHR vendors that compete with the US vendors – they better start offering the same capabilities based on the same standards, or one of their key market advantages will be consigned to the dust of history. They’ll also find themselves competing with established SMART on FHIR Application vendors too. So I look forward to good engagement with Australian companies as we flesh this out (I know at least one will be deeply involved).

This also offers us an opportunity to consider what to do with the MyHR. The government has spent a great deal of money on this, and results have been disappointing. (Yes, the government publishes regular usage stats which show continuous increase, but these are not the important usage metrics, and they’re not the kind of stats that were hoped for back when we were designing the system). And it’s hardly popular amongst the typical candidate users (see, for example, AMA comments, or for more color, Jeremy Knibb’s comments or even David More’s blog).

But I’m not surprised at this. Back when it was the pcEHR, the intentions were solid, and though the original timeline was impossibly tight, it came together in an amazingly quick time (kudos to the implementers). But as it came together, I knew it was doomed. This is inevitable given its architecture:

Salient points about this architecture:

  • The providers push CDA documents to the central document repository
  • Patients can view documents about them
  • Patients can write their own documents, but providers won’t see them
  • Patients can exert their control only by ‘hiding’ documents – that is, they can only break the flow of information (reminder: the internet age treats censorship as damage and routes around it)
  • Clinicians can find and read documents
  • Each document is its own little snapshot. There’s no continuity between them, no way to reconcile information between them
  • There are no notifications associated with the system

You can’t build any process on that system. You can’t even build any reliable analysis on it (stakeholders worried about the government using it for secondary data analysis shouldn’t, in general, worry about this: it’s too hard to get good data out of most of the CDA documents). These limitations are baked into the design. That’s why I went and developed FHIR – so that when the full reality of the system became evident, we’d have a better option than a document repository.

Well, 10 years later, we’re still trying to load ever more use onto the same broken design, and the government sponsors are still wondering why it’s not ‘working’. (At least we stopped calling it the ‘personally controlled’ EHR, since it’s the government controlled EHR.) And as long as it exists and is the focus of all government efforts to make digital health happen, it will continue to hold up innovation in this country – a fact which is terribly evident as I travel and see what’s happening elsewhere.

But it doesn’t have to be like this.

The MyHR is built on a bunch of useful infrastructure. There are good ideas in here, and it can do good things. It’s just that everything is locked up in a single broken solution. But we can break it open, and reuse the infrastructure. And the easiest way I can see to do this is to flip the push over. That is, instead of the source information providers pushing CDA documents to a single repository, we should get them to put up an Argonaut interface that provides a read/write API to the patient’s data. Then, you change the MyHR so that it takes that information and generates CDA documents to go into the MyHR – so no change is needed to the core MyHR.

What this does is open up the system to all sorts of innovation, the most important of which is that the patient can authorise their care providers to exchange information directly, and build working, clinically functional systems (e.g. GP/local hospital, or coordinated care planning), all without the government bureaucrats having to decide in advance that they can’t be liable for anything like that. That is, an actually personally controlled health record system, not a government controlled one. And there’s still a MyHR for all the purposes it does exist for.

This alternative looks like this:

The salient features of this architecture:

  • All healthcare providers make healthcare information services available using the common Argonaut based interface (including write services)
  • Patients can control the flow at the source – or authorise flows globally through myGov (needs new work on myGov)
  • Systems can read and write data between them without central control
  • The MyHR can pull data (as authorised) from the sources and populate the MyHR as it does now
  • Vendors and providers can leverage the same infrastructure to provide additional services (notifications, say)

The patient can exert control (either directly at the provider level, or through myGov as an OAuth provider) and control the flow of information at the source – they can opt in or out of the MyHR as appropriate, but they can also share their information with other providers of healthcare services directly. Say, their phone’s very personal health store. Or research projects (e.g. AllofUs). Or, most importantly and usefully, their actual healthcare providers, who can, as authorised by the patient, set up bi-directional flows of information on top of which they can build better coordinated care processes.

These features lead to some important and different outcomes:

  • Healthcare providers and/or system vendors can innovate to build distributed care models that provide a good balance between risk and reward for different populations (instead of the one-size-fits-all that suits bureaucrats, which we have now)
  • Patients can control the system by enabling the care flows that they want
  • Clinicians can engage in better care processes and improve their processes and outcomes (though the US experience shows clearly that things get worse before they get better, and you have to plan for that)

This isn’t a small change – but it’s the smallest change I know of that we can make that preserves the MyHR and associated investment, and gives us a healthcare system that can innovate and build better care models. But I don’t know how we’ll think about getting there, given that we’re still focused on “make everyone use the MyHR”.

Note: Adapted from my presentation yesterday at the HL7 Australia Meeting

 

Gender Participation in the #FHIR Community

This is post #3 in my series about why to participate in the FHIR standards process.

A few weeks ago, I attended the AIIA awards night at the kind invitation of the eHealth team from CSIRO. One of the speakers was the Victorian Minister for Small Business, the Hon Philip Dalidakis. The presentation was the day after the sad passing away of another Victorian minister, Fiona Richardson, and in her memory, he made an inspired plea for us all to actively consider whether there’s anything that we can or should do to improve the gender imbalance that’s typical in IT.

HL7 – and the FHIR community – does have the gender imbalance that’s characteristic of IT communities – though it’s also a health community, and so the gender divide is not as stark as it is in some communities. But it’s not anywhere close to 50:50, and his words made me wonder whether we/I are in a position to do anything more about that. After the presentation, I spoke to Minister Dalidakis about what levers we might have in an open standards community to encourage more balanced participation – they’re different to those you can/should use in a commercial setting.  He graciously gave me several pieces of advice, and since then I’ve been discussing this with the FHIR team, and particularly with our female leaders in the community.

FHIR and HL7 are great communities to be involved with, and that’s particularly true if you’re a woman – that’s what our female leaders tell me.

They say this is because:

  • We have a strong governance model that is good at managing contention (we have a fair bit of it to manage!)
  • Everyone is treated equally, and mostly treated well (the issues mentioned here are gender neutral)
  • Our discussions are thoughtful and respectful
  • The healthcare vertical is inherently non-confrontational, non-violent

And FHIR is a great place to contribute. Paula Braun says:

Many of the important indicators about our health…e.g., blood pressure, abnormal lab results, etc…are invisible to us. Without access to this data, we and the professionals we entrust to take care of us, are operating in the dark. The older, outdated ways of accessing and exchanging health information have an “I know better than you” feel to them. It was the equivalent of somebody saying, “Hey there girl, don’t worry your pretty little head about how this all works. It’s much too complicated for you.” FHIR is different. FHIR is a community where motivated people self-select to un-break healthcare…at least the IT part of healthcare. I don’t consider myself a “techie” but I choose to participate in the FHIR community because of the possibilities that FHIR enables, the professionalism that is maintained, and, most importantly, because it’s fun to be part of a movement that is transforming the dominant assumptions and expectations about healthcare

Michelle Miller (who is rightfully one of the Health Data Management’s 2016 Most Powerful Women in Health Care IT) says:

I participate in the FHIR community because:

  • Even with my bright pink “First-Time Attendee” ribbon, I quickly learned that my input was valued.
  • HL7 FHIR has a focus on those who adopt and implement the standard specification, such that implementer involvement and input is very much respected and encouraged.
  • After getting energized by the fantastic collaboration that occurred during the HL7 work group meetings, I started attending weekly work group conference calls to continue the discussion
  • I feel strongly that all changes, big and small, help make the FHIR specification that much better for the next implementer or developer to adopt.  Adoption is what makes a specification a standard because without adoption, we haven’t really achieved interoperability
  • I have been so impressed with the knowledge, collaboration and overall friendliness of the HL7 FHIR community. The discussion is always thoughtful and respectful, such that I have high confidence in FHIR’s ability to maximize interoperability.

In sum, it is energizing for me to collaborate with such knowledgeable experts on a subject (healthcare) that is so meaningful and impactful (bigger than just me, my job, or even my country).  Despite the diversity in our perspectives (country, vendor, government, technical vs clinical etc.), the FHIR community is genuinely interested in reaching the best conclusion because adoption is what makes a specification a standard and achieves interoperability

Michelle has a full write up about her involvement on the Cerner Blog.

So the FHIR community is a great place for women who want to make a difference to contribute. If you’re thinking about it – we’d love to have you involved; please get in contact with me, or one of:

(though there are many other valued contributors as well).

Still, there’s plenty we can do to improve:

  • One particularly pertinent issue around gender participation is about time requirements. HL7 is both good and bad here – most participation is remote, and really flexible in terms of hours and location – that’s a big deal. But there’s also face to face meetings that require travel – that can be a real problem, and HL7 has struggled to find a practical solution around remote participation (it’s an inherently hard problem).
  • There’s general agreement that we could do a lot better with regard to welcoming, induction, and initial training procedures – these are actually issues for both genders – so that’s something that we’re going to work on
  • We need to communicate better that the FHIR community is not just engineers and hackers (who lean male) – it’s about health, and clinicians and nurses (and business managers) are just as much implementers with valuable contributions to make. Of course, the FHIR community is comprised of both genders across all these roles
  • Good consultative leadership is hard to find, and we need/want more of that
  • We have good leaders – we need to recognize the ones we have.
  • We could keep an eye on statistics around assignment of administrative duties (“housework”) at HL7 – but we don’t

Note that all these are really about maximizing our human capital. So, we have plenty of potential, but we aren’t really capitalizing on it. Increasingly, we are investing in human capital as our community grows, so watch this space.

Btw, this image from the Madrid meeting shows that we can do better on balance (though, in fact, we are on the whole more balanced than this particular photo):

Contributors recognized for contributions way beyond expectations in getting FHIR R3 published – featuring both Melva and Michelle

p.s. A note about the FHIR and HL7 communities: these are tightly related communities with a good % overlap, but they are also different in nature, processes, and composition, so we had to consider them both.

#FHIR and Bulk Data Access Proposal

ONC have asked the FHIR community to add new capabilities to the FHIR specification to increase support for API-based access and push of data for large numbers of patients, in support of provider-based exchange, analytics and other value-based services.

The background to this requirement is that while FHIR allows for access to data from multiple patients at a time, the Argonaut implementations are generally constrained to single-patient access, and require human-mediated login on a regular basis. This is mainly because the use case on which the Argonaut community focused was patient portal access. If this work is going to be extended to provide support for API-based access to a large number of patients in support of provider-based exchange, the following questions (among others) need to be answered:

  • how does a business client (a backend service, not a human) get access to the service? How are authorizations handled?
  • How do the client and server agree about which patients are being accessed, and which data is available?
  • What format is the data made available in?
  • How is the request made on a RESTful API?
  • How would the client and server most efficiently ensure the client gets the data it asks for, without sending all data every time?

The last few questions are important because the data could be pretty large – potentially more than 100,000,000 resources – and we’ve been focused on highly granular exchanges so far. Our existing solutions don’t scale well.

In response to some of these problems, the SMART team had drafted an initial strawman proposal, which a group of us (FHIR editors, ONC staff, EHR & other vendors) met to discuss further late one night at the San Diego WGM last week. Discussion was – as expected – vigorous. Between us, we hammered out the following refined proposal:


Summary

This proposal describes a way of granting an application access to data on a set of patients. The application can request a copy of all pertinent (clinical) data on those patients in a single download. Note: we expect that this data will be pretty large.

High-level Use Case Description – FHIR-enabled Population Services (this section provided by ONC)

  • Ecosystem outcome expected to enable many specific use case/business needs: Providers and organizations accountable for managing the health of populations can efficiently access large volumes of information on a specified group of individuals without having to access one record at a time. This population-level access would enable these stakeholders to: assess the value of the care provided, conduct population analyses, identify at-risk populations, and track progress on quality improvement.
  • Technical Expectations: There would be a standardized method built into the FHIR standard to support access to and transfer of a large amount of data on a specified group of patients and that such method could be reused for any number of specific business purposes.
  • Policy Expectations: All existing legal requirements for accessing identifiable patient information via other bulk methods (e.g., ETL) used today would continue to apply (e.g., through HIPAA BAAs/contracts, Data Use Agreements, etc.).

Authorizing Access

Access to the data is granted by using the SMART backend services spec.

Note: We discussed this at length, but we didn’t see a need for Group/* or Launch/* kinds of scopes – System/*.read will do fine (or User/*.*, for interactive processes, though interactive processes are out of scope for this work). This means that a user cannot restrict authorization down to just a group, but in this context, users will trust their agents.

Accessing Data

The application can do either of the following queries:

 GET [base]/Patient/$everything?start=[date-time]&_type=[type,type]
 GET [base]/Group/[id]/$everything?start=[date-time]&_type=[type,type]

Notes:

  • The first query returns all data on all patients that the client’s account has access to, since the starting date time provided.
  • The second query provides access to all data on all patients in the nominated group. The point of this is that applications can request data on a subset of all their patients without needing a new access account provisioned (exactly how the Group resource is created/identified/defined/managed is out of scope for now – the question of whether we need to sort this out has been referred to ONC for consideration).
  • The start date/time means only records since the nominated time. In the absence of the parameter, it means all data ever.
  • The _type parameter is used to specify which resource types are part of the focal query – i.e. what kind of resources are returned in the main set. The _type parameter has no impact on which related resources are included (e.g. practitioner details for clinical resources). In the absence of this parameter, all types are included.
  • The data that is available for return includes at least the CCDS (we referred the question of exactly what the data should cover back to the ONC)
  • The FHIR specification will be modified to allow Patient/$everything to cross patients, and to add $everything to Group
  • Group will be added as a compartment type in the base Specification
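As a rough sketch, a client might assemble these query URLs like this (the function name, example base URL, and defaulting behaviour here are illustrative assumptions, not part of the proposal):

```python
from urllib.parse import urlencode

def bulk_everything_url(base, group_id=None, start=None, types=None):
    # Choose the endpoint: all accessible patients, or a nominated group
    if group_id:
        path = f"{base}/Group/{group_id}/$everything"
    else:
        path = f"{base}/Patient/$everything"
    params = {}
    if start:
        params["start"] = start            # only records since this instant
    if types:
        params["_type"] = ",".join(types)  # focal resource types
    return path + ("?" + urlencode(params, safe=",:") if params else "")
```

Calling `bulk_everything_url("http://example.org/fhir", types=["Patient", "Observation"])` yields the first form of the query; passing a `group_id` yields the second.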

Asynchronous Query

Generally, this is expected to result in quite a lot of data. The client is expected to request this asynchronously, per RFC 7240. To do this, the client uses the Prefer header:

Prefer: respond-async

When the server sees this header, instead of generating the response and then returning it, the server returns a 202 Accepted status, with a Content-Location header that the client can use to access the response.

The client then queries this content location using GET content-location (no prefixing). The response can be one of 3 outcomes:

  • a 202 Accepted that indicates that processing is still happening. This may have an X-Progress header that provides some indication of progress to the user (displayed as is to the user – no format restrictions, but it should be <100 characters in length). The client repeats this request periodically until it gets either a 200 or a 5xx
  • a 5xx Error that indicates that preparing the response has failed. The body is an OperationOutcome describing the error
  • a 200 OK with the response for the original request. This response has one or more Link: headers (see RFC 5988) that list the files that are available for download as a result of servicing the request. The response can also carry an X-Available-Until header to indicate when the response will no longer be available

Notes:

  • This asynchronous protocol will be added as a general feature to the FHIR spec for all calls. It will be up to server discretion when to support it.
  • The client can cancel a task or advise the server it’s ok to delete the outcome using DELETE [content-location]
  • Other than the 5xx response, these responses have no body, except when the accept content type is ‘text/html’, in which case the responses should have an HTML representation of the content in the body (e.g. a redirect, an error, or a list of files to download) (it’s up to server discretion whether to support text/html – typically, the reference/test servers do, and the production servers don’t)
  • Link headers can have one or more links in them, per RFC 5988
  • Todo: decide whether to add ‘supports asynchronous’ flag to the CapabilityStatement resource
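To make the three outcomes concrete, here is a minimal sketch of how a polling client might interpret each response (the function is illustrative, and the naive comma split of the Link header is a shortcut – a production client would use a proper RFC 5988 parser):

```python
def next_action(status, headers):
    if status == 202:
        # Still in progress: surface X-Progress (if any) and poll again later
        return ("wait", headers.get("X-Progress", ""))
    if status == 200:
        # Done: Link headers list the files to download.
        # Note: splitting on "," is naive - commas may occur inside link params.
        links = [l.strip() for l in headers.get("Link", "").split(",") if l.strip()]
        return ("download", links)
    if 500 <= status < 600:
        # Failed: the body carries an OperationOutcome describing the error
        return ("error", None)
    return ("unexpected", None)
```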

Format of returned data

If the client uses an Accept type of application/fhir+json or application/fhir+xml, the response will be a bundle in the specified format. Alternatively, the client can use the type application/fhir+ndjson. In this case:

  • The response is a set of files in ndjson format (see http://ndjson.org/).
  • Each file contains only resources of a single type.
  • There can be more than one file for each resource type.
  • Bundles are broken up at Bundle.entry.resource – i.e. a bundle is split on its entries, so that the bundle json file will contain the bundle without the entry resources, and the resources are found (by id) in the type-specific resource files (todo: how does that work for history?)

The nd-json files are split up by resource type to facilitate processing by generic software that reads nd-json into storage services such as Hadoop.

Notes:

  • the content type application/fhir+ndjson will be documented in the base spec
  • We may need to do some registration work to make +ndjson legal
  • We spent some time discussing formats such as Apache Avro and Parquet – these have considerable performance benefits over nd-json but are much harder to produce and consume. Clients and servers are welcome to do content type negotiation to support Parquet/ORC/etc, but for now, only nd-json is required. We’ll monitor implementation experience to see how it goes
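For illustration, reading an ndjson payload back into memory is about as simple as this (a sketch with invented resource content; the proposal itself says nothing about client-side storage):

```python
import json
from collections import defaultdict

def read_ndjson(text):
    # One resource per line; group by resourceType. In the proposal each
    # file holds a single type, so a real client would see one key per file.
    by_type = defaultdict(list)
    for line in text.splitlines():
        if line.strip():
            res = json.loads(line)
            by_type[res["resourceType"]].append(res)
    return dict(by_type)
```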

Follow up Requests

Having made the initial request, applications should store and retain the data, and then only retrieve subsequent changes. This is done by providing a _start time on the request.

Notes:

  • Todo: Is _start the right parameter (probably need _lastUpdated, or a new one)?
  • Todo: where does the marker time (to go into the start/date of the next follow up request) go?
  • Clients should be prepared to receive resources that change on the boundary more than once (still todo)
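While the parameter question is still open, one plausible client-side approach is to derive the next marker from the latest meta.lastUpdated seen in the download – a sketch of that idea, not something the proposal has settled:

```python
def next_start_marker(resources, previous=None):
    # ISO 8601 timestamps in the same timezone compare correctly as strings,
    # so the latest meta.lastUpdated seen becomes the next start marker.
    marker = previous
    for res in resources:
        updated = res.get("meta", {}).get("lastUpdated")
        if updated and (marker is None or updated > marker):
            marker = updated
    return marker
```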

Subscriptions

The ONC request included “push of data”. It became clear, when discussing this, that server-side push is hard for servers. Given that applications can perform these queries regularly/as needed, we didn’t believe that push (e.g. based on subscriptions) was needed, and we have not described how to orchestrate a push based on these semantics at this time


Prototyping

It’s time to test out this proposal and see how it actually works. With that in mind, we’ll work to prototype this API on the reference servers, and then we’ll hold a connectathon on this API at the New Orleans HL7 meeting in January. Given this is an ONC request, we’ll be working with ONC to find participants in the connectathon, but we’ll be asking the Argonaut vendors to have a prototype available for this connectathon.

Also, I’ll be creating FHIR change proposals for community review for all the changes anticipated in this write up. I guess we’ll also be talking about an updated Argonaut implementation guide at some stage.


Question: #FHIR Documents and Composition

Question

What is the relationship between FHIR Documents and Composition resources? Meaning, what is the best way to capture/store in FHIR documents that are not necessarily related to clinical info (files, images, file reference links to external content systems, etc.)?

I have several documents that need to be associated with a patient and organizations, and I was looking for your thoughts on best practices you have seen.

Answer

Documents like this – files, images, reference links to content systems – are best handled using DocumentReference. Composition is used for documents that have tightly controlled content: a mix of narrative and data.
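For example, a minimal DocumentReference pointing at an externally stored PDF might look like this (shown here as a Python dict; the ids, URL, and title are invented for illustration):

```python
# A minimal DocumentReference (STU3-style fields, invented values) that
# links a patient and an organization to an externally stored PDF.
document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "subject": {"reference": "Patient/example"},        # the patient
    "author": [{"reference": "Organization/example"}],  # associated organization
    "content": [{
        "attachment": {
            "contentType": "application/pdf",
            "url": "http://example.org/store/doc-123.pdf",  # external content system
            "title": "Referral letter",
        }
    }],
}
```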