Lloyd McKenzie on Woody Beeler

Guest post: My close friend Lloyd wanted to share his thoughts on hearing the news about Woody.

My recollections of Woody are similar to Grahame’s.

I started my HL7 international journey in 2000.  In my case, it was in an attempt to understand how I could design my v2 profiles so they would be well aligned with v3.  I quickly learned the foolishness of that notion, but became enamored of the v3 effort.

HL7 was an extremely welcoming organization and Woody played a big part in that welcome.  I was a wet-behind-the-ears techy from Canada and he was an eminent physician, former organization chair and respected elder of the organization.  Yet he always treated me as an equal.  Over the years, we collaborated on tooling, data models, methodology, processes and the challenges of getting things done in an organization with many diverse viewpoints.  In addition to his sense of humour and willingness to get his hands dirty, I remember Woody for his passion.  He really cared about making interoperability work.  He was willing to listen to anyone who was trying to “solve the problem”, but he had little patience for those who he didn’t sense had similar motivations.

His openness to new ideas is perhaps best exemplified by his reaction to the advent of FHIR.  Woody was one of the founding fathers of v3 and certainly one of its most passionate advocates.  Over his time with HL7, he invested years of his life advocating, developing tools, providing support, educating, guiding the development of methodology and doing whatever else needed to be done.  Given his incredible investment in the v3 standard, it would not be surprising for him to be reluctant to embrace the new up-and-comer that was threatening to upset the applecart.  But he responded to the new development in typical Woody fashion.  He asked probing questions, he evaluated the intended outcomes and considered whether the proposed path was a feasible and efficient way to satisfy those outcomes.  Once he had satisfied himself with the answers, he embraced the new platform.  Woody took an active role in forming the FHIR governance structures and served as one of the first co-chairs of the FHIR governance board.  To Woody, it was the outcome that mattered, not his ego.

Woody embraced life.  He loved traveling with his wife Selby (and his kids or grandkids when he could).  He loved new challenges.  He loved his work, but he wasn’t afraid to play either.  He was an active participant in after-hours WGM poker games.

It was with reluctance that Woody stepped back from his HL7 activities after his diagnosis with cancer, but as he expressed it at the time, he had discovered that he only had time for two of three important things – fighting his illness, spending time with his family and doing the work he loved with HL7.  He chose the right two priorities.

While version 3 might not have had the success we thought it would when we were developing it, the community that evolved under HL7 v3 and the knowledge we gleaned in that effort have formed the essential foundation and platform that enabled the building of FHIR.  I am grateful to have had Woody in my life – as a mentor, a co-worker and a friend.  I am grateful too for everything he helped build.  Woody’s priority was to focus on really making a difference.  In that he has set the bar very high for the rest of us.

Thank you for everything you’ve done, Woody.  We miss you.

Woody Beeler has passed away

Today, Woody Beeler passed away after battling cancer for a few years. Woody was a friend, an inspiration, and my mentor in health care standards, and I’m going to miss him.

I first met Woody in 2001 at my first HL7 meeting. It was Woody who drew me into the HL7 community, and who educated me about the impact that standards could have. Many people at HL7 have told me the same thing – it was Woody that inspired them to become part of the community.

When I remember Woody, I think of his humour, his passion for developing the best standards possible, and his commitment to building the community out of which standards arise. And I remember the way Woody was prepared to roll up his sleeves and get his hands dirty to get the job done. To the point of learning significant new technical skills long after retirement age had come and gone. Truly, a Jedi master at healthcare standards.

For many years, Woody was the v3 project lead for HL7. Woody wasn’t blind to the issues with v3, but it was the best option available to the community at the time – so he gave everything he had to bring v3 to completion.  And it was Woody who mentored me through the early stages of establishing the FHIR community.

It’s my goal that in the FHIR project we’ll keep Woody’s commitment to community and healthcare outcomes – and doing whatever is needed – alive.

Vale Woody

(pic h/t Rene Spronk, who maintains a history of HL7 v3 – see http://ringholm.com/docs/04500_en_History_of_the_HL7_RIM.htm)

FHIR Product Priorities for Release 4

Now that we’ve published Release 3 of FHIR, it’s time for us to consider our main priorities for the next FHIR release. This is my draft list of product priorities that we’ll be discussing – and trying to execute – at the Madrid meeting next week:

  • Normative: push to normative for
    • Foundation / API / XML / JSON / Bundle / OperationOutcome
    • Terminology Service (ValueSet / CodeSystem / ExpansionProfile)
    • StructureDefinition / CapabilityStatement
    • Patient / RelatedPerson / Practitioner / Organization / ?Endpoint
  • Position a core set of clinical resources (‘health base’?) for normative in R5 (or Observation | AllergyIntolerance | MedicationStatement normative for R4?)
  • JSON: ? use manifest for extensions, parameters resource (see blog post) (note that discussion on this didn’t go very well – probably will be dropped)
  • RDF: more ontology bindings + resolve status of JSON-LD
  • Data Analytics: support for a bulk data analysis bridge format (Apache Parquet?)
  • API: better control over retrieving graphs, and value added query support (tabular format?)
  • Patterns: change the W5 framework to a pattern (logical model), tie the patterns to ontology, and use patterns to drive more consistency (working out how to do this without decreasing quality)
  • Services: more services. Candidates: conformance, registry, personal health summary?, etc?
  • Deployment: get a clear standards path for SMART on FHIR / CDS Hooks (and alignment with UMA/HEART)
  • FM: work on alignment between FM resources and the rest of FHIR

Note that this list is written anticipating that the normal standards development process will occur, and that the content as a whole will be maintained. I’d expect that this will amount to thousands of tasks. So this list is not a list of ‘what will change in R4’, but an indication of where the FHIR leadership will apply particular focus (so don’t be concerned if a particular issue of yours is not on this list, as long as it’s in gForge).

Proposal for #FHIR JSON format change: @manifest

There’s a long-running discussion in the FHIR community about the way the JSON format handles extensions, and about the Parameters resource used for operation invocations.  Various implementers keep proposing changes to the JSON format around extensions, but the last time we attempted such a change, it was roundly quashed at ballot.

The underlying problem is that there are two different (though overlapping) communities that use the JSON format for FHIR:

  • the interoperability community, who value consistency and robustness
  • the app writing community who value conciseness much more

From the perspective of the second community, the current JSON format doesn’t work well for representing either extensions, or the parameters of an operation. With this in mind, and drawing on the practices of the JSON-LD community, I’d like to advance a proposal for a manifest approach to extensions and parameters in the FHIR JSON format.

The way this would work is that we start with the existing format and add a “@manifest” property, which contains information about how extensions and parameters have been represented in the JSON format. Applications reading the JSON can either read the properties directly, based on their implicit knowledge of the manifest, or read the manifest and process accordingly.

As an example, consider this Patient resource:

{
  "resourceType": "Patient",
  "id": "ex1",
  "extension": [
    {
      "url": "http://example.org/StructureDefinition/trials",
      "valueCode": "renal"
    }
  ],
  "active": true
}

This uses an extension as specified in FHIR Release 3. The same resource rendered using a manifest might look like this:

{
  "resourceType": "Patient",
  "id": "ex1",
  "@manifest" : {
    "trials" : {
      "extension" : "http://example.org/StructureDefinition/trials",
      "type" : "code",
      "list" : false
    }
  },
  "trials": "renal",
  "active": true
}

Note: It’s important to note that processing the JSON directly and ignoring the manifest is a convenient but fragile approach; changes in naming or type would be transparent to an application that processed via the manifest, but would likely break an application that processed using the ‘trials’ name directly. That doesn’t mean that applications should not do this; just that it should only be used where the client and server are fairly closely linked and managed.
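
To make “read the manifest and process accordingly” concrete, here is a minimal sketch of that conversion, written in Python since the logic can’t be expressed in JSON. The function and its behaviour are illustrative only – they are not part of the proposal – but the manifest keys (“extension”, “type”, “list”) are the ones used in the examples above:

def expand_manifest(resource):
    """Rewrite manifest-declared properties as standard FHIR extensions."""
    manifest = resource.pop("@manifest", None)
    if not isinstance(manifest, dict):
        return resource  # no manifest, or an out-of-band URL (see below)
    for name, entry in manifest.items():
        if name not in resource:
            continue  # declared in the manifest, but absent from this instance
        values = resource.pop(name)
        if not entry.get("list", False):
            values = [values]
        # the declared type drives the value[x] property name: 'code' -> 'valueCode'
        value_key = "value" + entry["type"][0].upper() + entry["type"][1:]
        for v in values:
            resource.setdefault("extension", []).append(
                {"url": entry["extension"], value_key: v})
    return resource

Run against the manifest example above, this reproduces the Release 3 form shown first; an application that instead reads the ‘trials’ property directly is taking the fragile path described in the note.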

Aside: I think of this as ‘interoperability’ vs ‘operability’. At heart, FHIR is a specification for an API between disparate systems with different designs and life cycles (and customers – see ‘drive-by interoperability’). But lots of people are using it as a client/server format for singly maintained applications (often because there’s no strong technical boundary between the internal and external use) – and it’s in that tightly managed context that the manifest approach brings the most benefit with a manageable risk.

It’s also possible to take the manifest and move it out of band:

{
  "resourceType": "Patient",
  "id": "ex1",
  "@manifest" : "http://healthintersections.com.au/patient.manifest.json",
  "trials": "renal",
  "active": true
}

And then, at http://healthintersections.com.au/patient.manifest.json:

{
  "@manifest" : {
    "trials" : {
      "extension" : "http://example.org/StructureDefinition/trials",
      "type" : "code",
      "list" : false
    }
  }
}

Of course, if the manifest is not available at the nominated address, applications that use the manifest will not be able to process the instance correctly – if at all. So that’s an obvious risk that needs to be managed.
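
Because a missing manifest is a hard failure for manifest-aware readers, resolution needs to handle that case explicitly. Here is a minimal sketch (Python, standard library only, with a naive in-memory cache); the wrapper object and URL follow the hypothetical example above:

import json
import urllib.request

_manifest_cache = {}

def resolve_manifest(resource):
    """Return the manifest for a resource, fetching it if it is out of band."""
    manifest = resource.get("@manifest")
    if isinstance(manifest, dict):
        return manifest  # in-line manifest
    if isinstance(manifest, str):  # out of band: a URL
        if manifest not in _manifest_cache:
            try:
                with urllib.request.urlopen(manifest, timeout=5) as response:
                    _manifest_cache[manifest] = json.load(response)["@manifest"]
            except (OSError, ValueError, KeyError) as e:
                # without the manifest, the compact properties in the
                # instance cannot be safely interpreted
                raise ValueError("manifest unavailable: " + manifest) from e
        return _manifest_cache[manifest]
    return {}  # no manifest at all: a plain Release 3 style instance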

Readers familiar with JSON-LD will have seen the obvious similarities with JSON-LD’s @context. We’re not actually using ‘@context’, though, because while what we are doing is structurally similar, we’re using it for a different purpose.

You could use the same technique for the parameters of an operation. Take, for example, this input to the $validate-code operation:

{
  "resourceType" : "Parameters",
  "parameter" : [
    {
      "name" : "coding",
      "valueCoding" : {
        "system" : "http://loinc.org",
        "code" : "1963-8",
        "display" : "test"
      }
    },
    {
      "name" : "valueSet",
      "resource" : {
        "resourceType" : "ValueSet",
        [etc]
      }
    }
  ]
}

With an in-line manifest, this might look like this:

{
  "resourceType" : "Parameters",
  "@manifest" : {
    "code" : {
      "parameter" : "http://hl7.org/fhir/OperationDefinition/ValueSet-validate-code#coding",
      "type" : "Coding",
      "list" : false
    },
    "vs" : {
      "parameter" : "http://hl7.org/fhir/OperationDefinition/ValueSet-validate-code#valueSet",
      "type" : "Resource",
      "list" : false
    }
  },
  "code" : {
    "system" : "http://loinc.org",
    "code" : "1963-8",
    "display" : "test"
  },
  "vs" : {
    "resourceType" : "ValueSet",
    [etc]
  }
}

Or, we could refer to a manifest defined in the specification itself:

{
  "resourceType" : "Parameters",
  "@manifest" : "http://hl7.org/fhir/r4/validation.manifest.json",
  "code" : {
    "system" : "http://loinc.org",
    "code" : "1963-8",
    "display" : "test"
  },
  "vs" : {
    "resourceType" : "ValueSet",
    [etc]
  }
}

Several questions I’ve had from the few people who’ve already looked at this idea:

  • Why not do this in XML too? Well, we could. But I don’t think it has value, because people using FHIR in tightly bound client/server environments (where the @manifest approach is most beneficial) are almost exclusively using JSON. So the cost/benefit is not there for XML. Also, in XML, schema validation matters more.
  • What about JSON schema then? It’s possible to generate a JSON schema for this, if the generation tooling knows what the manifest is going to say. No such tooling exists right now, but it could be written. Or else someone could easily write a converter from the @manifest form to the existing normal form (along the lines of the reading sketch above).
  • What about the reference implementations? They’d be enhanced to support this transparently on read, and there would be some kind of configuration to allow the user to control the manifest; they would then write according to that manifest.
  • Would this be instead of the existing approach? I propose that it’s an additional approach: the existing extension and parameter format is still valid, and can still be used, but implementations can use the @manifest if they want – and can mix and match, e.g. some extensions represented using @manifest, and others (not known in the manifest) represented the existing way (see the sketch below).
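
To illustrate those last two points, here is a minimal sketch of the write side: a user-supplied manifest controls which extensions are compacted, and any extension not named in it is written the existing way. As with the earlier sketch, the function is illustrative, not part of the proposal:

def compact_with_manifest(resource, manifest):
    """Write manifest-declared extensions as direct properties; leave the rest."""
    by_url = {entry["extension"]: name for name, entry in manifest.items()}
    out = dict(resource)
    remaining = []
    for ext in out.pop("extension", []):
        name = by_url.get(ext.get("url"))
        if name is None:
            remaining.append(ext)  # not in the manifest: keep the existing form
            continue
        entry = manifest[name]
        value_key = "value" + entry["type"][0].upper() + entry["type"][1:]
        if entry.get("list", False):
            out.setdefault(name, []).append(ext[value_key])
        else:
            out[name] = ext[value_key]
    if remaining:
        out["extension"] = remaining
    out["@manifest"] = manifest
    return out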

For follow up / discussion, see chat.fhir.org, though comments here are also welcome.

#FHIR Testing is Coming

The FHIR Team has been working with the HL7 Education Work Group to introduce FHIR certification testing so that members of the FHIR community can demonstrate their knowledge of the specification. There are going to be two levels of certification test.

FHIR Proficiency Test

This test ascertains whether a candidate has basic knowledge of the FHIR specification – what areas it covers; what resources, data types, and profiles are; and a basic overview of the way the RESTful interface works. This test is open to anyone, and it works very much like the existing V2 and CDA tests – though it’s a little easier than they are.

Anyone can sit – and pass – this closed-book test.

FHIR Professional Credentials 

This is a much harder test – it explores the functionality of the FHIR specification deeply, and to pass it requires considerable experience working with the specification. The idea of this test is that if you pass it, you’ve met our expectations for being an expert and providing advice to other implementers about how to implement the specification properly.

This is an open-book test – you have a copy of the specification when sitting it – and it has prerequisites, including demonstrated practical experience in healthcare IT and ongoing exposure to the FHIR community. The credentialing process will itself be approved by the appropriate authorities, so that if you have met the credentialing criteria (including passing the test), you’ll be allowed to put letters after your name. The Professional Credentials will require ongoing maintenance.

Introducing the tests

There’s not a lot of detail here – we’re still working to resolve the process and requirements for the tests. So I can’t tell you, for instance, how much the tests will cost. At least, not at this stage. These details will be released as final decisions are made. The education committee plans to announce the proficiency test at the Madrid HL7 meeting in a few weeks, and then have it available by the September meeting. The Professional Credentials will follow.

At this point, I just wanted to give everyone a heads-up about what is coming.

Note: Some HL7 insiders have already worked with us prototyping the tests – we thank you for your support, and we’re planning to grandfather you in when the time comes.

Question: CCDA and milliseconds in timestamp

Question:

We are exchanging CCDs over an HIE. While consuming a CCD from a particular partner, we are having issues parsing the dates provided in the CCD. Most often, we cannot process the effectiveTime values in various sections of the CCD.

We have been told that our partner is using a C-CDA conformant CCD. The CCD parser on our side cannot handle any of the effectiveTime values which contain milliseconds, such as:
<effectiveTime value="20170217194506.075"/>

Our vendor informed us that:

“The schema you referenced (for a CCD) is the general data type specification for CDA. There are additional implementation documents that further constrain the data that should be reported. For example, the sections shown below are from the HL7 Implementation Guide for Consolidated CDA Release 1.1. As you can see, a US Realm date/time field (which includes the ClinicalDocument/effectiveTime element) allows for precision to the second.
The guides for other CCD implementations – C32, C-CDA R2.0, etc. – include identical constraints for /ClinicalDocument/effectiveTime. These constraints are the basis for our assertion that milliseconds are not acceptable in ClinicalDocument/effectiveTime/@value.”

The schemas provided for C-CDA and CDA do allow for a milliseconds value, according to their RegEx pattern. The CCDA schematrons have notes that specify:

The content of time SHALL be a conformant US Realm Date and Time (DTM.US.FIELDED) (2.16.840.1.113883.10.20.22.5.3).

Even though the schematrons may not include checks that flag a millisecond value, they do make that statement, which does not allow for milliseconds.

Please provide guidance on whether milliseconds are acceptable in C-CDAs and/or CCDs.  If milliseconds are not allowed, then why don’t the schemas/schematrons trigger errors when millisecond values are found?

Answer:

Firstly, some background: the CDA schema is a general schema that applies to every use of the CDA specification anywhere, so the schema allows milliseconds. C-CDA is a specification that builds on CDA to make further restrictions. Most of these cannot be stated in schema (because schema does not allow for different content models for elements with the same name), so the extra constraints are expressed as schematron, which allows them to be tested for C-CDA documents.

The CCDA specification says, concerning date times, that they SHALL conform to DTM.US.FIELDED, and about dates like that, it says:

1. SHALL be precise to the day (CONF:81-10078).

2. SHOULD be precise to the minute (CONF:81-10079).

3. MAY be precise to the second (CONF:81-10080).

4. If more precise than day, SHOULD include time-zone offset (CONF:81-10081).

It sure sounds like you can’t use milliseconds. But, unfortunately, that interpretation is wrong. The DTM.US.FIELDED template on TS is an ‘open template’, which means that anything not explicitly prohibited is allowed.

Since milliseconds are not explicitly prohibited, they are allowed.
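
Given that, the practical fix is on the consuming side: parse TS values leniently, accepting anything from a bare year down to fractional seconds, with an optional time-zone offset. Here is a minimal illustrative sketch in Python (my own, not taken from any CDA tooling):

import re
from datetime import datetime, timedelta, timezone

TS = re.compile(r"^(\d{4})(\d{2})?(\d{2})?(\d{2})?(\d{2})?(\d{2})?"
                r"(?:\.(\d+))?([+-]\d{4})?$")

def parse_hl7_ts(value):
    """Parse an HL7 TS value at whatever precision it was sent with."""
    m = TS.match(value)
    if not m:
        raise ValueError("not a valid TS value: " + value)
    y, mo, d, h, mi, s, frac, tz = m.groups()
    micros = int(round(float("0." + frac) * 1_000_000)) if frac else 0
    tzinfo = None
    if tz:
        offset = timedelta(hours=int(tz[1:3]), minutes=int(tz[3:5]))
        tzinfo = timezone(offset if tz[0] == "+" else -offset)
    return datetime(int(y), int(mo or 1), int(d or 1), int(h or 0),
                    int(mi or 0), int(s or 0), micros, tzinfo)

print(parse_hl7_ts("20170217194506.075"))  # 2017-02-17 19:45:06.075000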

(Yes, you can then ask: why say “MAY be precise to the second”? This is careless language for a specification, and it creates exactly this kind of trouble.)

Note: this answer comes from Calvin Beebe (incoming HL7 chair) on the HL7 Structured Documents email list; I’m just archiving it here so it can be found from search engines.

#FHIR, CDS-Hooks, and Clinical Decision Support

This is a guest post written by Kevin Shekleton from Cerner, and first posted to the HL7 CDS email list. Reproduced here, by agreement with Kevin, for wider availability.


TL;DR: CDS Hooks will be working with the HL7 Clinical Reasoning team to make sure our approaches are complementary, and to ensure that CDS Hooks is on a path to standardization. The CDS Hooks implementation community should expect no changes to our open, community-based development process (but should expect to see increased interest and engagement from the community).

As briefly mentioned a few days ago on an earlier thread, there is some news to share from the HL7 WGM a couple of weeks ago. I didn’t share this immediately because, frankly, I wasn’t sure of the details yet. Rather than write something vague, I waited until we had a few more discussions before communicating this. 🙂

During the WGM, I attended a joint meeting between the CDS, CQI, and FHIR-I working groups. During this meeting, one of the topics of discussion was a new project scope statement (PSS) to semantically align Clinical Reasoning to CDS Hooks. There was an acknowledgement by the HL7 working group of the support and interest CDS Hooks has within the EHR and CDS vendor communities, so ensuring Clinical Reasoning aligns (where/when possible) with CDS Hooks is beneficial to those planning to support both projects.

The CDS working group has been working on a model for clinical decision support within FHIR called Clinical Reasoning (formerly known as CDS on FHIR). I’ve fielded many questions from folks all asking the same thing: “What is the difference between Clinical Reasoning and CDS Hooks?”

At the end of the joint meeting, several of us stuck around afterwards to discuss the two projects in further detail. Specifically, we began to directly address that question. We all agreed that none of us had ever had a very clear response to it, mainly because each of us had been focused on our respective projects and had not sat down recently to compare/contrast the specifics of our approaches and goals.

Bryn Rhodes (primary architect of Clinical Reasoning), Isaac Vetter (Epic), and I proceeded to work for the next six hours or so on educating each other about the architecture specifics, project goals, and design constraints of each project. In doing so, we came away with the following high-level takeaways:
  • CDS Hooks is designed solely around the execution of external clinical decision support (see the sketch after this list).
  • Clinical Reasoning was designed primarily around the definition, sharing, and execution of local clinical decision support. However, the project also defines capabilities for external decision support that are based on older specifications, resulting in the overlap with CDS Hooks.
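
For a concrete sense of the external model: at some point in the clinician’s workflow, the EHR calls out to a CDS Hooks service, and the service returns its advice as “cards”. The sketch below (shown as a Python literal) gives the rough shape of a card response; the field names reflect the draft CDS Hooks specification as it stood at the time, and the service and its advice are invented for illustration:

# a hypothetical response from an external CDS Hooks service
response = {
    "cards": [{
        "summary": "Patient is enrolled in a renal trial",  # short plain-text advice
        "indicator": "info",  # how urgently the EHR should present the card
        "source": {"label": "Example Trials Registry"},  # who is giving the advice
        "links": [{
            "label": "View trial protocol",
            "url": "http://example.org/trials/renal",
            "type": "absolute",
        }],
    }]
}

Clinical Reasoning, by contrast, is broadly concerned with representing the decision-support logic itself (for example, as PlanDefinition and Library resources) so that it can be shared and executed locally.
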
Based upon our initial work that afternoon/night, we all agreed on several things:
  • Continuing our conversations was in the best interest of both projects.
  • Both projects should be complementary.
  • The sweet spot of Clinical Reasoning is in the space of local CDS.
  • The sweet spot of CDS Hooks is in the space of external CDS.
To reflect this, we modified the aforementioned project scope statement to commit to continuing these discussions in 2017, with the goal of more directly aligning the projects. Specifically, we agreed to explore bringing CDS Hooks into the HL7 CDS community as an independent entity to solve the problem of external CDS, leaving the current Clinical Reasoning project to focus on the problem of local CDS.
What does this mean to all of you who are implementing CDS Hooks?
Not much, actually. We’re not changing our focus or design. The simplicity and focus of CDS Hooks have been among its greatest strengths, as is evident in the broad support, interest, and ease of development within the community. We will not compromise that.
What does this mean for how the project is managed?
Again, not much. CDS Hooks will remain a consensus-driven open source project, using modern development practices and tooling and following its own independent release process. I think this has worked well for other projects like SMART. I am working on a more formal project governance (more on this later) that should allow us to continue operating as-is while simultaneously satisfying HL7’s requirements.
Additionally, all of the conversations and work we’re just now starting will be done in full view of the community. Any proposed changes to CDS Hooks will continue to be logged as Github Issues and discussed with those interested, we’ll still run Connectathon tracks to get implementer feedback, and we’ll continue to use this mailing list and Zulip for discussions.
How does this benefit CDS Hooks, Clinical Reasoning, and the community?
First, the entire CDS community will have a clear understanding of Clinical Reasoning and CDS Hooks, as well as of when it’s appropriate to use each approach.
Second, both projects are going to be strengthened by our continued joint work to align on shared needs, identify gaps, and work in complementary directions rather than potentially pulling in opposing directions.
Finally, having CDS Hooks published under HL7 will benefit organizations that are constrained to recommending or implementing HL7 standards.
I’m excited for the work we’re all going to do within the CDS standards communities and, specifically, CDS Hooks. The community support around CDS Hooks has been outstanding, and I’m looking forward to working towards a 1.0 release of the CDS Hooks spec this year, as well as our first production implementations.