Category Archives: CDA

Argonaut in Australia, and the MyHR

Project Argonaut is coming to Australia. That is, at least one major US EHR vendor is planning to make their SMART-on-FHIR EHR extension interface available in Australia during 2018 (this was discussed at the Cerner Health Conference, where I was last week). HL7 Australia will work with them (and any other vendor) to describe what the Argonaut interface looks like in Australia (short answer: not much different – some different patient extensions, a few terminology changes (not RxNorm), and maybe a couple of extensions on prescriptions for Reg24 & CTG). HL7 Australia also plans to engage with Australian customers of the US EHR vendors to help build a community that can leverage the capabilities of the SMART on FHIR interface.

This is a big deal for Australian EHR vendors that compete with the US vendors – they’d better start offering the same capabilities based on the same standards, or one of their key market advantages will be consigned to the dust of history. They’ll also find themselves competing with established SMART on FHIR application vendors. So I look forward to good engagement with Australian companies as we flesh this out (I know at least one will be deeply involved).

This also offers us an opportunity to consider what to do with the MyHR. The government has spent a great deal of money on this, and results have been disappointing. (Yes, the government publishes regular usage stats which show continuous increase, but these are not the important usage metrics, and they’re not the kind of stats that were hoped for back when we were designing the system). And it’s hardly popular amongst the typical candidate users (see, for example, AMA comments, or for more color, Jeremy Knibb’s comments or even David More’s blog).

But I’m not surprised at this. Back when it was the pcEHR, the intentions were solid, and though the original timeline was impossibly tight, it came together in an amazingly quick time (kudos to the implementers). But as it came together, I knew it was doomed. This is inevitable given its architecture:

Salient points about this architecture:

  • The providers push CDA documents to the central document repository
  • Patients can view documents about them
  • Patients can write their own documents, but providers won’t see them
  • Patients can exert control only by ‘hiding’ documents – i.e. they can only break the flow of information (reminder: the internet age treats censorship as damage, and routes around it)
  • Clinicians can find and read documents
  • Each document is its own little snapshot. There’s no continuity between them, and no way to reconcile information between them
  • There are no notifications associated with the system

You can’t build any process on that system. You can’t even build any reliable analysis on it (stakeholders worried about the government using it for secondary data analysis shouldn’t, in general, worry: it’s too hard to get good data out of most of the CDA documents). These limitations are baked into the design. That’s why I went and developed FHIR – so that when the full reality of the system became evident, we’d have a better option than a document repository.

Well, 10 years later, we’re still trying to load ever more use onto the same broken design, and the government sponsors are still wondering why it’s not ‘working’. (At least we stopped calling it the ‘personally controlled’ EHR, since it’s really the government-controlled EHR.) And as long as it exists and is the focus of all government efforts to make digital health happen, it will continue to hold up innovation in this country – a fact which is terribly evident as I travel and see what’s happening elsewhere.

But it doesn’t have to be like this.

The MyHR is built on a bunch of useful infrastructure. There are good ideas in here, and it can do good things. It’s just that everything is locked up in a single broken solution. But we can break it open, and reuse the infrastructure. And the easiest way I can see to do this is to flip the push over. That is, instead of the source information providers pushing CDA documents to a single repository, we should get them to put up an Argonaut interface that provides a read/write API to the patient’s data. Then, you change the MyHR so that it takes that information and generates CDA documents to go into the MyHR – so no change is needed to the core MyHR.

What this does is open up the system to all sorts of innovation, the most important of which is that the patient can authorise their care providers to exchange information directly, and build working, clinically functional systems (e.g. GP/local hospital, or coordinated care planning), all without the government bureaucrats having to decide in advance that they can’t be liable for anything like that. That is, an actually personally controlled health record system, not a government controlled one. And there’s still a MyHR for all the purposes it does exist for.

This alternative looks like this:

The salient features of this architecture:

  • All healthcare providers make healthcare information services available using the common Argonaut based interface (including write services)
  • Patients can control the flow at the source – or authorise flows globally through myGov (needs new work on myGov)
  • Systems can read and write data between them without central control
  • The MyHR can pull data (as authorised) from the sources and populate the MyHR as it does now
  • Vendors and providers can leverage the same infrastructure to provide additional services (notifications, say)

The patient can exert control (either directly at the provider level, or through myGov as an OAuth provider) over the flow of information at the source – they can opt in or out of the MyHR as appropriate, but they can also share their information with other providers of healthcare services directly. Say, their phone’s very personal health store. Or research projects (e.g. AllofUs). Or, most importantly and usefully, their actual healthcare providers, who can, as authorised by the patient, set up bi-directional flows of information on top of which they can build better coordinated care processes.

These features lead to some important and different outcomes:

  • Healthcare providers and/or system vendors can innovate to build distributed care models that provide a good balance between risk and reward for different populations (instead of the one-size-fits-all model that suits bureaucrats, which is what we have now)
  • Patients can control the system by enabling the care flows that they want
  • Clinicians can engage in better care processes and improve their processes and outcomes (though the US experience shows clearly that things get worse before they get better, and you have to plan for that)

This isn’t a small change – but it’s the smallest change I know of that we can make that preserves the MyHR and associated investment, and gives us a healthcare system that can innovate and build better care models. But I don’t know how we’ll think about getting there, given that we’re still focused on “make everyone use the MyHR”.

Note: Adapted from my presentation yesterday at the HL7 Australia Meeting


Question: CCDA and milliseconds in timestamp

Question:

We are exchanging CCDs over an HIE. While consuming a CCD from a particular partner, we are having issues parsing the dates provided in the CCD. Most often, we cannot process the effectiveTime in various sections of the CCD.

We have been told that our partner is using a C-CDA conformant CCD. The CCD parser on our side cannot handle any of the effectiveTime values which contain milliseconds, such as:
<effectiveTime value="20170217194506.075"/>

Our vendor informed us that:

“The schema you referenced (for a CCD) is the general data type specification for CDA. There are additional implementation documents that further constrain the data that should be reported. For example, the sections shown below are from the HL7 Implementation Guide for Consolidated CDA Release 1.1. As you can see, a US Realm date/time field (which includes the ClinicalDocument/effectiveTime element) allows for precision to the second.
The guides for other CCD implementations – C32, C-CDA R2.0, etc. – include identical constraints for /ClinicalDocument/effectiveTime. These constraints are the basis for our assertion that milliseconds are not acceptable in ClinicalDocument/effectiveTime/@value.”

The schemas provided for C-CDA and CDA do allow for a milliseconds value, according to their RegEx pattern. The CCDA schematrons have notes that specify:

The content of time SHALL be a conformant US Realm Date and Time (DTM.US.FIELDED) (2.16.840.1.113883.10.20.22.5.3).

Even though the schematrons may not have checks that flag a millisecond value, they do make that statement, which does not appear to allow for milliseconds.

Please provide guidance on whether milliseconds are acceptable in C-CDAs and/or CCDs.  If milliseconds are not allowed, then why don’t the schemas/schematrons trigger errors when millisecond values are found?

Answer:

Firstly, some background: the CDA schema is a general schema that applies to all uses of the CDA specification anywhere, so the schema allows milliseconds. CCDA is a specification that builds on CDA to make further restrictions. Most of these can’t be stated in schema (because schema does not allow for different content models for elements with the same name), so these extra constraints are expressed as schematron rules, which can then be tested against CCDA documents.

The CCDA specification says, concerning date times, that they SHALL conform to DTM.US.FIELDED, and about dates like that, it says:

1. SHALL be precise to the day (CONF:81-10078).

2. SHOULD be precise to the minute (CONF:81-10079).

3. MAY be precise to the second (CONF:81-10080).

4. If more precise than day, SHOULD include time-zone offset (CONF:81-10081).

It sure sounds like you can’t use milliseconds. But, unfortunately, that’s a wrong interpretation. The DTM.US.FIELDED template on TS is an ‘open template’ which means that anything not explicitly prohibited is allowed.

Since milliseconds are not explicitly prohibited, they are allowed.

(Yes, you can then ask: why say “MAY be precise to the second”? This is careless language for a specification, and creates exactly this kind of trouble.)

Note: this answer comes from Calvin Beebe (incoming HL7 chair) on the HL7 Structured Documents email list; I’m just archiving the answer here so it can be found from search engines.
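As a practical aside (this part is me, not Calvin): receivers are best off parsing TS values liberally, whatever the templates say. Here’s a minimal Java sketch that accepts an optional fractional-seconds part; it assumes no time-zone offset is present, which a production parser would also need to handle:

  import java.time.LocalDateTime;
  import java.time.format.DateTimeFormatter;
  import java.time.format.DateTimeFormatterBuilder;
  import java.time.temporal.ChronoField;

  public class CdaTimestamp {
    // HL7 TS: YYYYMMDDHHMMSS, with an optional fractional-seconds part
    private static final DateTimeFormatter TS = new DateTimeFormatterBuilder()
        .appendPattern("yyyyMMddHHmmss")
        .optionalStart()
        .appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, true) // accepts ".075"
        .optionalEnd()
        .toFormatter();

    public static LocalDateTime parse(String value) {
      return LocalDateTime.parse(value, TS);
    }

    public static void main(String[] args) {
      System.out.println(parse("20170217194506.075")); // 2017-02-17T19:45:06.075
      System.out.println(parse("20170217194506"));     // 2017-02-17T19:45:06
    }
  }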

Mapping between CCDA and FHIR

Across the board, many FHIR implementers have the same challenge: mapping between some other format, and a set of FHIR resources. Right now, the pressing issue is mapping between CCDA and FHIR (in both directions). At the HL7 Orlando meeting in January 2016, we held a Birds of a Feather session about mapping. At this meeting, there was general consensus that we – the FHIR community – would like to have a mapping framework that

  • allows us to exchange maps that express detailed transforms from one format to another (in terms of a previous post, executional maps)
  • means that the transforms are portable – e.g. the same transform can be run on multiple different systems
  • can be shared through a FHIR repository
  • allows data to be moved into and out of FHIR resources (or between them)

While we were at it, we noted that it would be pretty cool if the language could be used for transforms that don’t involve FHIR at all. And we decided that we’d focus on CCDA <-> FHIR as the primary use case, since that’s an operational issue for so many people. Also, we noted that there’s no existing standard that meets these requirements, or that can easily meet them. MDMI was brought forward as a candidate, but it’s not clear that MDMI solves the problems we usually encounter.

After the meeting, I sat down with Keith Duddy, one of the editors of the other leading candidate specification, QVT. After a long discussion about what we were trying to do, and a review of the possible candidates, Keith and I designed a mapping language for FHIR that is very much based on QVT, but leverages a number of design features and philosophies from the existing FHIR work. The work includes an abstract syntax (the StructureMap resource), a concrete syntax, and a transform host API that delegates implementation-specific transformation actions to the host engine. In addition, we’ve prototyped it in the context of CCDA -> FHIR mapping. Btw, this arrangement means that Keith can take all the credit for the good parts, and I can get the blame for all the other parts.
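To give a flavour of the concrete syntax, here’s a trivial sketch that copies a single element between two hypothetical structures (TLeft/TRight and the URLs are made up for illustration, and the draft grammar may well evolve):

  map "http://hl7.org/fhir/StructureMap/tutorial" = tutorial

  uses "http://hl7.org/fhir/StructureDefinition/TLeft" as source
  uses "http://hl7.org/fhir/StructureDefinition/TRight" as target

  group tutorial(source src : TLeft, target tgt : TRight) {
    // copy element 'a' on the source to element 'a' on the target
    src.a as a -> tgt.a = a "rule_a";
  }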

So this mapping language is published, and open for community review. I expect that there will be robust discussion: is it rich enough? Too complex? Why do we need to do this? Can’t we just use [insert name here]? And maybe there’s some candidate out there we haven’t found… so to help with the evaluation, we’re working on the transform from CCDA to FHIR for the Montreal connectathon.

So here’s an initial contribution to get discussion going:

Download all these, and then you can execute the validator jar with the following parameters:

 ccd.xml -transform -defn validation-min.xml.zip 
   -txserver http://fhir2.healthintersections.com.au/open 
   -folder [cda structure definitions folder] -folder [map files folder] 
   -map http://hl7.org/fhir/StructureMap/cda 
   -output bundle.xml

The transform will produce bundle.xml from the CCDA input files. Alternatively, if you want the Java source, see org.hl7.fhir.dstu3.utils.Transformer in the FHIR svn.

Transform Pseudo Code

To help other implementations along, here’s a bunch of pseudo code for the transform engine (it’s actually pretty simple!). Note that to use this, you need two things:

Object model – both the source object model and the target object model expose the same meta-level API, which has the following features (both interfaces are sketched in Java after the lists):

  • list children(name) returns list of Value: given a name, return all the children that have the given name
  • make child(name) returns Value: create the appropriate object, and return it (can’t be used if the object is polymorphic)
  • create(typename) returns Value: create an object of the specified type name (the type name comes from the mapping language)
  • set property(name, Value): set the property name to the value; if the property has cardinality 0..*, add to the list

And then you need a FluentPath engine that compiles and evaluates FluentPath expressions:

  • parse(string) returns expression: parse the string expression
  • evaluate(expression, Value) returns boolean: apply the expression to a value, and see whether it is true or not
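Here’s the promised sketch of those two interfaces in Java – the names are purely hypothetical, not the actual reference implementation classes:

  import java.util.List;

  // the meta-level object API that both source and target models expose
  interface Value {
    List<Value> listChildren(String name);      // all children with the given name
    Value makeChild(String name);               // create and attach a child element
    void setProperty(String name, Value value); // set, or add if cardinality is 0..*
  }

  interface ValueFactory {
    Value create(String typeName);              // the type name comes from the map
  }

  // the FluentPath engine used for conditions and checks
  interface FluentPathEngine {
    Object parse(String expression);                  // compile the expression
    boolean evaluate(Object expression, Value focus); // does it hold for this value?
  }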

Here’s the logic for the structure map transform:

 transform(Value src, StructureMap map, Value tgt)
   create new variables 
   add src as new variable "src" in mode source to variables 
   add tgt as new variable "tgt" in mode target to variables 
   run transform by group (first group in map)
 
 transform by group(group, variables)
   transform for each rule in the group
 
 transform by rule(rule, variables)
   clone the variables (make a new copy so changes don't propagate backwards)
   check there's only one source (we don't handle multiple sources yet)
   for each matching source element (see logic below - a list of variables)
     - for each target, apply the targets
     - for each nested rule, transform by rule using the variables
     - for each dependent rule, apply the dependent rule
 
 processing dependent rules
   look through the current map for a matching rule (by name)
   look through all the other known maps (in the library) for a matching group (by name)
   if match count != 1, it's an error (no match, or more than one match)
   check the variable count provided matches the variable count expected by the group (and check whether they are source or target)
   make a new variables list
   for each parameter, get the variable given the name, and add it to the new variables (in order)
   transform by the group using the new variables
 
 Finding Matching Source Elements(source, variables)
   get the source variable with the name "source.context"
     - check it's not null (error if it is)
   if there's a condition, get the fluent path engine to check it against the source variable
     - if it fails, return an empty list
   if there's a check, get the fluent path engine to check it against the source variable
     - if it fails, blow up with some error message

   if there's a source.element
     get a list of the children with the name source.element from the source variable
   else
     turn the source variable into a list
 
   for each element in the list from the previous step, 
     clone source variables
     if there's a source.variable, add the element to the cloned variable list under the source.variable name
     add the cloned variable list to the results
   return the results
 
 Apply a target(target, variables)
   get the target variable with the name "target.context"
     - check it's not null (error if it is)
   check there's an element (todo: support this)
   if there's a transform 
     value = run the transform
     set the given name on the target variable to the value
   else
     value = make the given name on the target variable
 
 Run the transform(parameters, vars)
   depends on the transform:
   create: ask the object API to create an object of the type in parameter 1
   copy: return the value of parameter 1 (see below)
   evaluate: use the FluentPath engine to evaluate the string value of 
     parameter 2 as an expression against the value of parameter 1
   pointer: return the correct URL to reference the object that is the value of parameter 1 
     - which URL is right is a deployment decision

 The value of a parameter
   if it's an id (e.g. not a constant, just a word)
     then it's a reference to the named variable
   else 
     it's some kind of constant 

Question: Entering the #FHIR Bandwagon

Question:

HL7 v1 came first, followed by V2, then CDA and V3. For newer entrants into standards and interoperability, is the central dogma of S&I something like: learn V2 first, then CDA, then V3, then FHIR, then SMART on FHIR, and then whatever comes next? Or can a person just straight away buy a FHIR textbook, or collect all the FHIR blogs in one place, and start reading from scratch?

Answer:

V2, CDA, and FHIR+SMART on FHIR are all different approaches to solving various healthcare interoperability problems. If you’re deeply invested in the standards process, then learning all of V2, CDA, and FHIR will give you a deeper perspective about what has and hasn’t worked, and which ideas last across the various specifications. But if you’re just solving a particular problem, then you just pick one, learn it, and get on with the job.

Which one to learn? 

  • Actually, this is simple. If you have to use one, because of legal requirements, or because your trading partners have already chosen one – and that’s by far the most likely situation – then you don’t have to decide. Otherwise, you’d use FHIR.

How to learn? From a previous post, these are the books I recommend:

None of these cover FHIR. But, in fact, Tim invited me to join him for the forthcoming 3rd edition of his book, to cover FHIR, and this should be available soon. In the meantime, as you say, there’s the FHIR blogs.

Question: #CDA – how to handle Observation Component code

Question:

I’m unpacking a "Summarization of Episode Note" CDA document. The section "Relevant diagnostic test and/or laboratory data" has an organizer with classCode="BATTERY", followed by many component.observation elements. All of these have valid values (e.g., type=PQ value=86 unit=mg/dl), but *all* have the observation.code element containing nullFlavor="UNK".

The <text>…</text> data does have both the test descriptions and results for display formatting, however.

Here’s an example snip:

  <component>
   <observation classCode="OBS" moodCode="EVN">
    <templateId root="2.16.840.1.113883.10.20.22.4.2" />
    <code nullFlavor="UNK" />
    <text>
      <reference value="#ID0EFTDIABA" />
    </text>
    <statusCode code="completed" />
    <effectiveTime value="20151208091900" />
    <value xsi:type="PQ" value="86" unit="mg/dL" />

Are these component.observation segments constructed correctly?  If so, can you direct me to some literature to make sense of this apparent list of values with no codes?

Answer:

Well, technically, I think they are conformant. What the template identified by 2.16.840.1.113883.10.20.22.4.2 says is:

SHALL contain exactly one [1..1] code (CONF:13908), where the @code SHOULD be selected from (CodeSystem: 2.16.840.1.113883.6.1 LOINC)

Well, it does contain a code element – just not a code taken from LOINC, and that’s only a SHOULD. So it’s ‘correct’ in that it meets that rule.

But it’s not really correct. As you’ve found, it’s not possible to interpret an observation without a code, unless the context makes the code crystal clear – which it isn’t here, because the code could only be made clear in the template definition (and that would be bad practice anyway). I certainly think that the committee that wrote the rule above never intended for an implementer to use a nullFlavor on the code, though there are always edge cases where maybe it’s valid.

So, no, not correct. But my experience is that you’ll have to live with it, and ignore the data, or infer meaning manually, or something. See Rick’s ongoing series on “CDA in the Wild” – you’re lucky!
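For contrast, here’s roughly what a correctly coded version of that component would look like. Note that the LOINC code is my assumption for a serum glucose result – the sending system has to pick the code that actually matches the test performed:

  <component>
   <observation classCode="OBS" moodCode="EVN">
    <templateId root="2.16.840.1.113883.10.20.22.4.2" />
    <code code="2345-7" codeSystem="2.16.840.1.113883.6.1"
          codeSystemName="LOINC" displayName="Glucose [Mass/volume] in Serum or Plasma" />
    <text>
      <reference value="#ID0EFTDIABA" />
    </text>
    <statusCode code="completed" />
    <effectiveTime value="20151208091900" />
    <value xsi:type="PQ" value="86" unit="mg/dL" />
   </observation>
  </component>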

Use of CDA Observation.value

CDA has an all important element Observation.value:

(Figure: snapshot of the CDA Observation class definition, showing the value element.)

Observation.value is a really important element because it contains the really key clinical data – laboratory and clinical measurement. It can be any type – any of the CDA types, that is – and it can repeat. So, two questions:

  • What types actually get used in Observation.value?
  • What about repeats – does that happen?

What types actually get used in Observation.value?

We’ve seen the following types get used in CDA Observation.value:

  • PQ + IVL_PQ
  • ST / SC
  • BL
  • INT
  • RTO (RTO_INT_INT for Titres and Trisomy / Down Syndrome risk assessment results)
  • CE / CD / CV
  • TS + IVL_TS (see LOINC for IVL_TS usage)
  • CO
  • ED
  • SLIST_PQ

It’s not that other types aren’t allowed to be used, but these are the ones we’ve seen used; this list may help implementers to prioritise their work.
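For illustration, here’s what some of the most common patterns look like on the wire (the values and codes here are made up):

  <value xsi:type="PQ" value="86" unit="mg/dL" />
  <value xsi:type="IVL_PQ">
    <low value="60" unit="mg/dL" />
    <high value="100" unit="mg/dL" />
  </value>
  <value xsi:type="CD" code="260385009" codeSystem="2.16.840.1.113883.6.96"
         codeSystemName="SNOMED CT" displayName="Negative" />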

This type cannot be used: CS (CS is only for codes whose code system is fixed by context, which is never the case for Observation.value).

Do repeats get used?

We’ve seen repeats used a few times, for things like pulse oximetry. We’ve never seen repeats where the values have different types (to me, that sounds like a really bad idea – I think you should not do that).

Thanks to Rick Geimer, Alexander Henket, Diego Kaminker and Keith Boone for input to this one.

#FHIR and mapping approaches

Across the board, many FHIR implementers have the same challenge: mapping between some other format, and a set of FHIR resources. The other format might be a CIMI model, an openEHR model, a CCDA document, or a series of other things. At the HL7 Orlando meeting in the second week of January 2016, we’ll be holding a BOF (Birds of a Feather) on Tuesday evening. In this meeting, we’ll be:

  • seeing some mapping tool demos
  • considering how to flesh out an interoperable format for mappings, and create an eco-system around that
  • gathering a set of requirements to drive the format / eco-system
  • identifying significant applications where exchanging mappings will be useful (e.g. CCDA…)

One problem when it comes to mappings is that there’s a wide variety of things that are called ‘mappings’. I find it useful to classify mappings into 4 categories:

  • Skeletal. There is just enough information to indicate to readers where the different models align structurally. Anything else is left to the reader’s imagination
  • Notional. The mappings are done at the class and attribute level (or equivalent), and indicate that the definitions of the elements establish that these are about the same thing. However the types of the attributes might not match up, and there might be all sorts of special conditions and assumptions that need to be true for the mappings to be valid
  • Conceptual. The mappings are done at the class and attribute level, down to the level of primitive data types. However not all the value domains of all attributes are fully mapped, and there may be special cases not accounted for
  • Executional. The mappings account for the full value domain of all the source attributes, and all the special cases are handled. The mapping notation is precise enough that all the construction issues such as when to create new instances of repeating elements is fully described. These mappings can be fed into conversion engines that are then capable of transforming instances correctly from one format to another

The mapping notations become more complex as you move through this classification, and it becomes less easy to read their intent. In fact, the fully executional mapping case is proper 3GL code, and it’s not unusual for implementers to maintain both executional mappings in code, and notional (usually) mappings as documentation of the code. But fully executional mappings are the most useful for real world use.
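To make the distinction concrete, consider mapping the HL7 v2 administrative sex field (PID-8) to Patient.gender in FHIR. A notional mapping just records that the two elements are about the same thing; an executional mapping has to pin down the whole value domain, something like this sketch:

  PID-8 = 'M'  -> Patient.gender = 'male'
  PID-8 = 'F'  -> Patient.gender = 'female'
  PID-8 = 'U'  -> Patient.gender = 'unknown'
  otherwise    -> omit Patient.gender (and flag the record for review)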

FHIR’s existing mappings (on the mappings tab throughout the specification) are mostly Notional or Conceptual, though there’s a smattering of skeletal and executional ones as well. We’re going to ramp that up in the FHIR specification itself, and in a raft of implementation guides of various forms. As we work through our use cases, I suspect we’ll find a need for a mix of these levels of mappings, and we need to keep this classification in mind – unless someone comes up with (or knows) a better one.


What is the state of CDA R3?

Question

We have seen references to new extension methodologies being proposed in CDA R3; however, I can’t seem to find what the current state of CDA R3 is. Most searches return old results. The most recent document related to CDA R3 proposed using FHIR instead of the RIM. What is the current state of CDA R3, and where can I find more information? The HL7 pages seem to be pretty old.

Answer

The Structured Documents work group at HL7 (the community that maintains the CDA standard) is currently focused on publishing a backwards compatible update to CDA R2, called CDA R2.1. CDA R3 work has been deferred to the future, both to allow the community to focus on R2.1, and to allow FHIR time to mature.

There is general informal agreement that CDA R3 will be FHIR-based, but I wouldn’t regard that as formal or final; FHIR has to demonstrate value in some areas where it hasn’t yet done so before a final decision can be made. I expect that we’ll have more discussions about this at the HL7 working meeting in Atlanta in October.

Question about storing/processing Coded values in CDA document

Question
If a CDA code is represented with both a code and a translation, which of the following should be imported as the stored code into a CDR:
  1. a normalised version of the Translation element (using the clinical terminology service)
  2. the Code element exactly as it is in the CDA document
The argument for option 1 is that, since the Clinical Terminology Service is the authoritative source of code translations, we do not need to pay any attention to the ‘code’ field, even though the authoring system has performed the translation itself (which may or may not be correct). The argument for option 2 is that clinicians sign off on the code and translation fields provided in a document; ignoring the code field could potentially modify the intended meaning of the data being provided.
Answer
First, to clarify several assumptions:
“Clinicians sign off on the code and translation fields provided in a document”
Clinicians actually sign off on the narrative, not the data. Exactly how a code is represented in the data – translations or whatever – is not as important as maintaining the narrative. Obviously there should be some relationship between them in the document, but exactly what that relationship is is not obvious. And exactly what needs to be done with the document afterwards is even less obvious.
The second assumption concerns which is the ‘original’ code, and which is the ‘translation’. There’s actually a number of options:
  • The user picked code A, and some terminology server performed a translation to code B
    • The user picked code A, and some terminology server in a middle ware engine performed a translation to code B
  • The user picked code X, and some terminology server performed translations to both code A and B
  • The user was in some workflow, and this was manually associated with codes A and B in the source system configuration
  • The user used some words, and a language processor determined codes A and B
    • The user used some words, and two different language processors determined codes A and B

OK, the last is getting a bit fanciful – I doubt there’s one workable language processor out there, though there are definitely a bunch being evaluated. Anyway, the point is, the relationship between code A and code B isn’t automatically that one is translated from the other. The language in the data types specification is a little loose:

A CD represents any kind of concept usually by giving a code defined in a code system. A CD can contain the original text or phrase that served as the basis of the coding and one or more translations into different coding systems

It’s loose because it’s not exactly clear what the translations are of:

  • “a code defined in a code system”
  • “the original text”
  • the concept

The correct answer is the last – each code, and the text, are all representations of the concept. So the different codes may capture different nuances, and it may not be possible to prove that the translation between the two codes is valid.

Finally, either code A or code B might be the root, and the other the translation. The specification says two different things about which is the root: the original one (if you know which it is), or the one that meets the applicable conformance rule (e.g. if the IG says you have to use SNOMED CT, then you put that in the root, and put the other in the translation, irrespective of the relationship between them).
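As a concrete example (the codes here are illustrative), here’s a CD where a conformance rule has put SNOMED CT in the root, with the source system’s LOINC code carried as a translation:

  <code code="271649006" codeSystem="2.16.840.1.113883.6.96"
        codeSystemName="SNOMED CT" displayName="Systolic blood pressure">
    <originalText>
      <reference value="#obs1" />
    </originalText>
    <translation code="8480-6" codeSystem="2.16.840.1.113883.6.1"
                 codeSystemName="LOINC" displayName="Systolic blood pressure" />
  </code>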

In practice, which of these people do depends on what their trading partner does. One major system that runs several important CDRs ignores the translations altogether…

Turning to the actual question: what should a CDR do?

I think that depends on who’s going to be consuming / processing the data. If the CDR is an analysis end point – e.g. data comes in, and analysis reports come out – and the use cases are closed, then it could be safe to mine the CD looking for the code your terminology server understands, and just store that as a reference.

But if the use cases aren’t closed, so that it turns out that a particular analysis would be better performed against a different code system, then storing just the one understood reference would turn out to be rather costly. A great example is lab data that is coded in both LOINC and SNOMED CT – each of those serves different purposes.

The same applies if other systems are expected to access the data to do their own analysis – they’ll be hamstrung without the full source codes from the original document.

So unless your CDR is a closed and sealed box – and I don’t believe such a thing exists at design time – it’s really rather a good idea to store the Code element exactly as it is in the CDA document (and if it references the narrative text for the originalText, make sure you store that too).


Question: Clinical Documents in FHIR

Question:

Nice to see so many articles on FHIR for understanding this new technology for interoperability. I have a very basic and silly question.
I want to transfer a whole clinical document from one client to another using FHIR. Since in FHIR everything is referred to as a Resource, I am not able to work out the relationships among them, so that I can store the document as a single record. And what is the mechanism to update individual Resources inside the record? If possible, please share a sample and case study.
Thanks a ton in advance.

Answer:

There are multiple things that “transfer” could mean. If you mean “transfer” in the XDS sense, of establishing a common registry/repository of documents, then you want DocumentReference (see the XDS Profile). This is the technical basis of the forthcoming MHD profile from IHE.
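Schematically, the consumer side of that looks something like this (the base URL, patient id, and LOINC document-type code are illustrative):

  GET [base]/DocumentReference?subject=Patient/123&type=http://loinc.org|34133-9
  -- returns DocumentReference resources; each one points to the document itself
  -- via content.attachment.url, which is then fetched with a second GET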

If you mean some kind of “message” in a v2 push-based sense, there isn’t a predefined approach, since no one has been asking for this.