Category Archives: NEHTA

Question: NEHTA CDA & GP referrals

Question

Is there any example of NEHTA compliant CDA document that I can look at from a perspective of a GP referral form ( http://nhv.org.au/uploads/cms/files/VIC%20-%20GP%20Referral%20(March%202013).rtf )? Is there a tool that can be used to navigate and generate the CDA from a HL7 v2 message?

Answer

There’s been a lot of discussion over the last decade or so about creating a CDA document for these kinds of referral forms. I saw a pretty near complete set of functional requirements at one point. But for various reasons, the project to build this has never got any funding, either as a NEHTA project or a Standards Australia project (it’s actually been on the IT-14-6-6 project list for a number of years, mostly with my name on it).

So right now, there’s no NEHTA compliant document. As a matter of fact, I don’t know of anything quite like that from any of the national programs, though no doubt one of my readers will – please comment if you do. There is a project investigating this in the US national program (S&I Framework), but they’re not using CDA.

On the other part of the question, no, unfortunately not. NEHTA provides both C# and Java libraries that implement the NEHTA described CDA documents, but it’s an exercise left to the implementer to convert from a v2 message to a CDA document. That’s primarily because there’s so much variability between v2 messages that there’s no safe way to write a single converter.

I tried to do that with just AS 4700.2 messages, which are much more constrained than the general case, and it wasn’t really successful; the PITUS project is working on the fundamental alignment needed to get it right in the future.
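To give a flavour of what that hand-written mapping involves, here’s a minimal sketch in Java (the message fragment is illustrative, and only the simplest case is handled) that pulls the patient name out of a v2 PID segment and emits the corresponding CDA name element. A real converter has to make hundreds of such decisions, which is where the variability bites:

public class PidToCdaName {
    public static void main(String[] args) {
        // illustrative v2 PID segment; PID-5 (patient name, XPN type) is family^given
        String pid = "PID|1||123456^^^AUSHIC^MC||Smith^John||19700101|M";
        String[] fields = pid.split("\\|");     // field separator
        String[] xpn = fields[5].split("\\^");  // component separator
        // escaping, repeats and the other XPN components are ignored here -
        // exactly the kind of detail that makes a general converter unsafe
        System.out.printf("<name><family>%s</family><given>%s</given></name>%n",
            xpn[0], xpn[1]);
    }
}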

The PCEHR Review, and the “Minimum Composite of Records” #2

This post is a follow-up to a previous post about the PCEHR review, where I promised to talk about medications coding. The PCEHR review quotes Deloitte on this:

The existing Australian Medications Terminologies (AMT) should be expanded to include a set of over the counter (OTC) medicines, and the Systematised Nomenclature of Medicine for Australia (SNOMED-CT-AU) should become universal to promote the use of a nationally consistent language when recording and exchanging health information.

Then it says:

Currently there are multiple sources of medication lists available to the PCEHR with varying levels of clinical utility and functionality. From some sources there is an image of the current medication list, from some sources the current medication list is available as text, from some sources the information is coded and if the functionality existed would allow for import and export into and out of clinical systems as well as transmission by secure messaging from health care provider to health care provider.

Note: I’m really not sure what “there is an image of the current medication list” means. As a separate choice to “list is available as text”, I guess that implies it’s actually made available as a bitmap (or equivalent). I’ve never seen anything like that, so I have no idea what they mean.

And further:

The NPDR should be expanded to include a set of over the counter (OTC) medicines to improve its utility.

Over the counter medication is essential to detect such issues as poor compliance with Asthma treatment, to show up significant potential side effects with prescription only medicines and to allow for monitoring and support for drug dependent persons. The two main sources of data are complementary and neither can do the job of the other. The curated current medications list together with adverse events, could be sourced from the GP, Specialist, Hospital or Aged Care Facility clinical information system, the discharge summary public or private is immediately clinically useful and will save time for the clinician on the receiving end.

It is imperative that further work be done on software systems to make the process of import and export and medication curation as seamless as possible to fit in to and streamline current workflow.

My summary:

  • Extend AMT and NPDR to include over-the-counter medicines
  • Work with all the providers to get consistently encoded medicines and medication lists so the medications can be aggregated, either in the PCEHR or in individual systems

Over-the-counter medicines

Well, I guess this means pharmacist-only products (such as Ventolin, which they mention) and doesn’t include supermarket-type things like aspirin or hay-fever medications. I don’t know how realistic this is from a pharmacist workflow perspective (e.g. getting consent, signing the NPDR submission), but let’s (for the sake of argument) assume that it is realistic. That will mean that each OTC product they sell will need to be coded in AMT (once AMT is extended to cover them). I don’t know how realistic this is – can it be built into the supply chain? Let’s assume that it can be, so that pharmacists can make this work, and that we’ll then be able to add this to the NPDR.

However there’s a problem – this recommendation appears to assume that the NPDR is already based on AMT. I’ve got a feeling that it’s not. Unfortunately, good information is not really publicly available. By repute, AMT adoption isn’t going well.

Is that true? What would really be helpful in resolving this question would be to fill this table out:

Coding System    | SHS/ES | eRef/SL | DS | NPDR
-----------------|--------|---------|----|-----
AMT v2           |        |         |    |
ICPC 2+          |        |         |    |
MIMS             |        |         |    |
PBS              |        |         |    |
Vendor Specific  |        |         |    |
Snomed-CT        |        |         |    |
First Data Bank  |        |         |    |
Text only        |        |         |    |

Where each cell contains 3 numbers:

  • Number of systems certified for PCEHR connection
  • Number of documents uploaded that contain codes as specified
  • Number of medications in PCEHR coded accordingly
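To make the second and third counts concrete, here’s a rough sketch in Java (the XPath, the file layout, and the decision to ignore namespaces are all simplifying assumptions) of the kind of scan you’d run over a set of CDA documents to tally which code systems their medication codes come from. As the first note below says, the real job is much bigger than this:

import java.nio.file.*;
import java.util.*;
import javax.xml.parsers.*;
import javax.xml.xpath.*;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class CodeSystemTally {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> tally = new HashMap<>();
        // crude but convenient: parse without namespace awareness so the
        // XPath below doesn't need a prefix for urn:hl7-org:v3
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        XPath xpath = XPathFactory.newInstance().newXPath();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(Paths.get(args[0]), "*.xml")) {
            for (Path file : files) {
                Document doc = dbf.newDocumentBuilder().parse(file.toFile());
                // medication codes live under substanceAdministration in CDA
                NodeList codes = (NodeList) xpath.evaluate(
                    "//substanceAdministration//code/@codeSystem", doc, XPathConstants.NODESET);
                for (int i = 0; i < codes.getLength(); i++)
                    tally.merge(codes.item(i).getNodeValue(), 1, Integer::sum);
            }
        }
        tally.forEach((oid, n) -> System.out.println(oid + ": " + n));
    }
}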

Some notes about this table:

  • Even just gathering data from the PCEHR to fill this table out would be a significant amount of work – it’s not just running a few SQL queries. And even then, it wouldn’t cover non-PCEHR exchange – if it did, it would be even more interesting
  • Although a system may encode medications in one of these coding systems, some medications might not be coded at all since they don’t appear on the list of codes. And some systems encode immunizations differently from medications and some do it the same (ICPC 2+ is used for immunizations by a few systems)
  • Although most medications use codes in production systems, for a variety of reasons many medications in the PCEHR are not coded – they’re just plain text (in fact, other than the NPDR, I think plain text would be the biggest number). I know of several reasons for this:
    • There was no financial penalty for not coding medications at all
    • The PCEHR system returns warnings and/or errors if the medications are not coded the way it supports
    • The PCEHR system is extremely limited in what it does support
    • There’s no systematic way to find out what it does support
    • Trouble shooting failed documents is extremely hard for a variety of reasons
    • There’s a lack of clarity around safety and where liability falls when aggregating
    • note: I don’t claim that this list is complete nor have I assigned priorities here

If we saw the counts for this table, we’d have a pretty good feel for where medication coding is in regard to the PCEHR. And I’m pretty sure that it would show that we have a long way to go before we can get consistent encoding.

Consistently Encoding Medications

Getting consistently encoded medication lists is far more difficult than merely getting alignment around which technical coding system to use. Beyond that, different systems take different approaches to tracking medications as they change. For instance, the standard Discharge Summary specification says that there are two medication lists:

  • Current Medications on Discharge: Medications that the subject of care will continue or commence on discharge
  • Ceased Medications: Medications that the subject of care was taking at the start of the healthcare encounter (e.g. on admission), that have been stopped during the encounter or on discharge, and that are not expected to be recommenced

There’s a hole between those two definitions: medications that the patient took during the admission that have residual effect. But many discharge summary systems can’t produce these two lists. Some have 3 lists:

  • Medications at admission
  • Medications during inpatient stay
  • Medications at discharge

Others only track medications prescribed by the hospital, and not those the patient was already taking – or if they do track those, they do so as text. Some systems don’t know which inpatient medications will continue after discharge. Even if the system can track these medications in fully coded detail, there’s no guarantee that the clinical staff will actually code them up if they didn’t prescribe them. Some institutions have medication management systems that track every administration, while others don’t.

Finally, there’s the question of the degree to which GPs and specialists record medications that are not relevant to the problem(s) they are interested in (particularly specialists). Humans (or clinicians – I’m not convinced they’re the same things 😉 ) are good at dealing with degeneracy in these lists, and it saves them a lot of time (mention a medication generically, but no relevant details). Computers are not good at dealing with degeneracy, so in the future, the clinical staff will all have to be unerringly precise in their records.

Note that I’ve moved into pure clinical practice at the end there; in order to meet the PCEHR goal, we need to align:

  • Medications coding
  • Medication tracking system functionality
  • Clinical practice

And while we’re at it, we need to jettison the existing data, which will become legacy and no longer useful to all the existing systems.

I’m not holding my breath. The PCEHR review says:

The PCEHR Value Model suggests that of the total gross annual theoretical benefit potential of eHealth in Australia, medication management is the greatest individual driver of benefits ($3.2 billion or 39% of gross benefits).

I’m not convinced that will be a net profit 🙁

p.s. does anyone know where the PCEHR Value Model is published?


The PCEHR Review, and the “Minimum Composite of Records”

So the PCEHR review has finally been released, and I’ve been reading with considerable interest. I’m going to stick to analysing the technical recommendations that they make, starting with a group of recommendations they call the “Minimum Composite of Records”:

19. Expand the existing Australian Medications Terminologies (AMT) data set to include a set of over the counter (OTC) medicines.
20. Widen the existing National Prescribing and Dispensing Repository (NPDR) to include the expanded set of over the counter (OTC) medicines.
21. Implement a minimum composite of records to allow transition to an opt-out model by a target date of 1st January 2015, in line with recommendation 13. This will dramatically improve the value proposition for clinicians to regularly turn to the MyHR, which must initially include:

  • Demographics
  • Current Medications and Adverse Events
  • Discharge summaries
  • Clinical Measurements

The section that explains these starts with the following paragraph:

A common theme in the consultation process was the need for a minimum data set to make up a viable clinical record. Many of the submissions also pointed out that it was imperative for the data standards to be widely and universally adopted to allow the MyHR to function. The more clinically relevant material that was present within the MyHR the faster the rate of adoption and therefore the faster the return on investment will be.

I’m really pleased to see the question of wide and universal adoption of standards mentioned – that’s what I would have said to the panel if I’d made my own submission. From these general comments in the introduction, the review seems to get rather distracted by medications coding issues, before suddenly coming back to the question of “minimum composite of records”. So, what does that mean?

  • Demographics – I cannot imagine what this means beyond what is already in place. The documents include demographics – my consistent feedback from clinicians is that they contain too many of them, and I couldn’t figure out from the text what they thought this meant.
  • Current Medications and Adverse Events – well, that’s consistently been a focus of what we’ve already done, but the section indicates that this is about the medications coding. So more on that in the next post
  • Discharge summaries – again, this is something that has already been prioritised, but the section points out that this doesn’t apply to private hospitals. And, in fact, private hospitals aren’t really a good fit for the current discharge summary because of the way their business works, so the basic discharge specification may need to be reviewed to make it a better fit for that use
  • Clinical Measurements – the section says “capture vital signs to prevent avoidable hospitalisation and demonstrate meaningful use of PCEHR.” – uh? How will capturing vital signs – data of relevance today during an admission – “prevent avoidable hospitalisation”? That was a submission from the Aged Care industry, so perhaps they’re saying that if the PCEHR contained a record of the patient’s baseline vital signs, then we can know whether they’re actually significantly impaired if they have an emergency admission outside their normal facility? – it seems like a super narrow use to me

They say: “All other functionality … should be deprioritised while these data sets are configured” – but what other functionality is that? It’s not obvious to me.

So, the Minimum composite of records actually means:

  • Improve medications coding
  • Cater for discharge summaries from private hospitals
  • Add support for vital signs
  • Continue to focus on implementation activities in support of these things

Have I read that right? Comments welcome…

I’ll take up the question of medications coding (recommendations #19 and #20) in the next post.

CDA Use in the PCEHR: Lessons learned

I wrote an article for the latest edition of Pulse IT (page 53) called “CDA Use in the PCEHR: Lessons learned”:

One of the key foundations of the PCEHR is that the CDA (Clinical Document Architecture) is used for all the clinical documents that are part of the PCEHR. This article describes the lessons learned from using CDA for the PCEHR.

Here’s a summary of my lessons learned:

  • When using CDA (or anything else) make the documentation easy to read and navigate, do not assume prior knowledge, and make it as short and flat as possible
  • CDA is both too simple, and too complex. Adoption requires expertise, and policies and tools to leverage that expertise as much as possible
  • The presence of both Narrative and Data means that you can do much better checking of the implementations. However it also means that you need to check that the narrative and the data agree with each other
  • CDA specifications need to be specific about how the clinical narrative should be presented, as this is the most important part of the document
  • The CDA Narrative/Data pattern allows for interoperability in the presence of poor agreement about the underlying data; whether this is a good thing depends on your perspective
  • The existence of the narrative/data pattern means that a thorough conformance testing framework is required to ensure quality
  • The implementation community in Australia still has a long way to go before we have mastered the exchange of codes from terminologies
  • Syntax is less important than content. Interoperability can only fully meet business goals when there is substantial business alignment

The conclusion is pretty simple: we’ve got a long way to go yet, at lots of levels. I suspect that some of the issues are going to burn other programs too.

I’m posting this here to serve as the place for discussion about what I wrote. Btw, if you’re making comments, please take note of this disclaimer in the article:

This article is not evaluating the PCEHR program itself, nor even how the PCEHR program used CDA, but just about what was learned about CDA.

btw, I am always happy to contribute to Pulse IT. It’s a wonderful resource for our community here in Australia – Simon’s done a great job.

The importance of examples

On an HL7 mailing list, there’s a rather active discussion that is happening about a particular feature in CCDA (allergy lists). It turns out that one of the bits of the CCDA specification is somewhat obtuse (surprise!), and there’s more than one opinion on what it means and how it’s supposed to be used. I’ll probably end up posting on the specific subject when (if) there’s closure to the subject.

But it’s not the first time; in fact, it’s happened in the Australian PCEHR too – something that seems self-evident to the authors of the specification actually turns out to have more than one interpretation when the CDA document is being populated (actually, it’s not about CDA specifically; this can happen with any specification). So when you discover that, the natural question is: well, what have the existing implemented systems been doing about this? And very often the answer is… we don’t know, and we have no way to find out either.

Any big program that is integrating multiple systems should include the following in their integration testing / approval process:

  1. Integrating systems should have to produce several different documents, each corresponding to a pre-defined clinical case
  2. The documents should be manually compared against the defined case
  3. The instance examples should be posted in a repository available to all the implementers, along with the assessment notes (because approval is never a binary thing)
    • if the project is a national project, that means a public repository

It’s hard to overstate how important this is once you are in the close-out stages of the project. Want to know if some proposed operation is safe, and it depends on what the feeder systems are doing? You can just go look. Want to know if someone has already done something wrong? You can go look (supposing, of course, that the test cases provide coverage, but often they do).

Examples: they’re really important for a specification, and they’re just as important for a project. If I was allowed to change only one thing about the PCEHR implementation project, I’d pick #3 (we already do 1 and 2).

UUID in CDA/SPL? – uppercase or lowercase?

UUIDs may be represented either in uppercase or lowercase. Lowercase is common on Unix, and uppercase is common on Windows (COM).

HL7 never said, in the context of data types R1, whether UUIDs should be uppercase, lowercase, or either. Though the literal form implies that it should be uppercase:

   INT hexDigit : "0"     { $.equal(0); }
                | "1"     { $.equal(1); }
                | "2"     { $.equal(2); }
                | "3"     { $.equal(3); }
                | "4"     { $.equal(4); }
                | "5"     { $.equal(5); }
                | "6"     { $.equal(6); }
                | "7"     { $.equal(7); }
                | "8"     { $.equal(8); }
                | "9"     { $.equal(9); }
                | "A"     { $.equal(10); }
                | "B"     { $.equal(11); }
                | "C"     { $.equal(12); }
                | "D"     { $.equal(13); }
                | "E"     { $.equal(14); }
                | "F"     { $.equal(15); }

But no one ever read or understood the literal form anyway ;-). More importantly, the schema allows both. Here’s the regex:

 [0-9a-zA-Z]{8}-[0-9a-zA-Z]{4}-[0-9a-zA-Z]{4}-[0-9a-zA-Z]{4}-[0-9a-zA-Z]{12}

Actually, there were earlier versions where it only allowed uppercase, consistent with the abstract literal form. In practice, though, people used a mix of either, and by the time it was brought to committee, it was too late to close the door. We did publish a technical clarification that uppercase only UUIDs were required, but too many implementers had existing CDA and SPL documents with lowercase GUIDs, and so we have to accept either.

In the Australian PCEHR, about 95% of UUIDs are lowercase, and 5% are uppercase. Implementers should be aware that comparison of UUIDs is case insensitive – don’t get caught out doing a case sensitive comparison, because eventually a bug may happen (though when is unsure, since the only cases of re-use of the same UUID at the moment are from the same authoring system, and so far, systems have been consistent. So it’s unlikely, but not impossible).

What we won’t allow in the PCEHR is to use mixed case within the one UUID – all uppercase, or all lowercase.
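A safe comparison just normalises the case before comparing – here’s a minimal sketch in Java (the UUID values are illustrative):

import java.util.Locale;

public class UuidCompare {
    // UUID comparison is case insensitive, so normalise before comparing
    static boolean sameUuid(String a, String b) {
        return a.toLowerCase(Locale.ROOT).equals(b.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        String fromWindowsSystem = "21ECE967-4E1E-4FD5-9B32-B5A2B3F0E5B1"; // uppercase (COM style)
        String fromUnixSystem    = "21ece967-4e1e-4fd5-9b32-b5a2b3f0e5b1"; // lowercase
        System.out.println(sameUuid(fromWindowsSystem, fromUnixSystem));   // true
    }
}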

p.s. The subject came up in conformance, so I thought I’d clarify here, since many PCEHR implementers follow my blog

Updated note about MIMS codes in CDA documents

When representing MIMS codes (from their Integrated Data Solutions Product) in CDA documents, you use the OID 1.2.36.1.2001.1005.11.1:

<code code="83510101" codeSystem="1.2.36.1.2001.1005.11.1"
  codeSystemName="MIMS Standard Code set" codeSystemVersion="20110900" 
  displayName="Ganfort 0.3/5 Eye drops 3 mL [1] (Restricted - PBS/RPBS) rpt: 5"> 
 <originalText><!--insert originalText here--></originalText> 
 <translation code="78835011000036104" codeSystem="1.2.36.1.2001.1004.100" 
   codeSystemName="Australian Medicines Terminology (AMT)" codeSystemVersion="2.25" 
  displayName="GANFORT 0.03% / 0.5% eye drops: solution, 3 mL"/> 
</code>

What’s not clear from the documentation is quite what goes in the code. After discussion with MIMS, the only acceptable code to be used in association with the OID 1.2.36.1.2001.1005.11.1 is a full triple comprising Product : Form : Pack. This would be a number with at least 5 digits, such as the 8 digit one shown above. The last 4 digits are the form and the pack codes (2 digits each).
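As an illustration (a sketch only – the split rule is the one just described, and the code value is the example from the fragment above), the triple can be recovered like this:

public class MimsCode {
    public static void main(String[] args) {
        String code = "83510101";  // the 8 digit example shown above
        // a full Product:Form:Pack code has at least 5 digits; the last
        // four are the form and pack codes (2 digits each)
        if (code.length() < 5 || !code.matches("\\d+"))
            throw new IllegalArgumentException("not a full Product:Form:Pack code");
        String product = code.substring(0, code.length() - 4);
        String form    = code.substring(code.length() - 4, code.length() - 2);
        String pack    = code.substring(code.length() - 2);
        System.out.println(product + " : " + form + " : " + pack); // 8351 : 01 : 01
    }
}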

This is just a heads up to anyone working with MIMS codes in CDA documents. The OID registration should be updated shortly to be specific about this too.

Update: the OID registry has been updated


Process for Conformance Checking a CDA Document

One of the things I’ve done a lot of this year is conformance checking CDA documents in several different contexts. Since someone asked, here’s my basic methodology for conformance checking a CDA document:

1. Read it by hand

In this step, I read the XML directly in an XML editor. I’m doing the following things:

  • Getting an overall sense of the content of the document
  • Checking for gross structural errors that might prevent automated checks
  • Checking that the document metadata (realm, templateId, id, code) makes basic sense

2. Validate the document

In this step I do the following things:

  • Check that the document conforms to the base CDA schema
  • Use appropriate schematrons (if available) to check the document against the applicable CDA IG (if there’s no schematron, then I’ll have to do a manual comparison)

For each error reported, the first thing to investigate is whether the error is a true error or not. There are valid CDA documents that don’t conform to the schema, and whether that matters or not depends on the context. There are always areas where the schematrons themselves may falsely report errors, so everything has to be checked.

If there are schematrons, I always double-check the document anyway, and check that it’s valid against the specification, since the schematrons cannot check everything. I particularly keep my eyes open for co-occurrence constraints, since these are often missed in schematrons.
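For the schema half of this step, here’s a minimal sketch in Java (it assumes a local copy of the base CDA schema saved as CDA.xsd; schematron checking, usually done by applying the compiled schematron as an XSLT transform, is omitted):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.*;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class CdaSchemaCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(new File("CDA.xsd")); // local schema path is an assumption
        Validator v = schema.newValidator();
        // report every error rather than stopping at the first one
        v.setErrorHandler(new DefaultHandler() {
            @Override public void error(SAXParseException e) {
                System.out.println("line " + e.getLineNumber() + ": " + e.getMessage());
            }
        });
        v.validate(new StreamSource(new File(args[0])));
    }
}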

3. Check the data 

The next step is to manually review a number of specific types of data in the document:

  • Dates – are the document and event dates internally coherent? Do intervals finish after they start? Are the timezones coherent (they often aren’t)? Do the precisions make sense? (I particularly investigate any date/times with 000000 for the time portion – see the sketch after this list)
  • Codes – are the code systems registered? Are the codes valid? (Private codes can’t be checked, but public ones are often wrong – check code & display name). Check for display names with no codes, mismatches between codes and originalText. Check version information if provided. Some of these checks can be automated, but most can’t
  • Identifiers – do the root values make sense? Are the OIDs registered? Are UUIDs used properly? are any of the identifiers re-used in the document? should they be? (often the same participant gets different identifiers in different places in the document when they shouldn’t, or vice versa)
  • Quantities – are the UCUM units valid? (If they have to be)
  • RIM structural codes – are these correct?
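Here’s the sketch referred to above for the date checks, in Java (the TS values are illustrative, and as the comment says, a real check must also normalise timezones before comparing):

import java.util.regex.*;

public class DateChecks {
    // HL7 TS literal: yyyy[MM[dd[HH[mm[ss]]]]][.uuuu][+/-zzzz]
    static final Pattern TS = Pattern.compile("(\\d{4,14})(?:\\.\\d+)?([+-]\\d{4})?");

    public static void main(String[] args) {
        String low = "20140305083000+1000", high = "20140305170000+1000";
        Matcher l = TS.matcher(low), h = TS.matcher(high);
        if (l.matches() && h.matches()) {
            // interval coherence: lexical comparison works because TS is
            // big-endian (a real check must first normalise the timezones)
            if (l.group(1).compareTo(h.group(1)) > 0)
                System.out.println("interval finishes before it starts");
            // suspicious precision: a time portion of exactly 000000
            if (l.group(1).length() >= 14 && l.group(1).endsWith("000000"))
                System.out.println("suspicious 000000 time portion: " + low);
        }
    }
}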

I nearly always find errors in these areas – it’s really hard to get this stuff correct. This is useful: https://hl7connect.healthintersections.com.au/svc/ids

4. Extensions

Check for extensions. What extensions have been added? Are they valid against the rules laid down in the relevant IG? (there are pretty much no rules in the CDA standard itself)


5. Check narrative

I render the document using an appropriate/applicable stylesheet. I check it for basic coherence – one thing to particularly look for is information that is likely to come from pre-formatted ASCII that hasn’t been appropriately styled. Test data is often short, whereas real clinical data is longer; this is easy for the developers to miss. Then I systematically check the narrative twice:

  • I read the narrative, and ensure that the information in the narrative doesn’t disagree with the data in the entries
  • I read the entries, and check that the narrative agrees with the data

While I’m doing this, I make a list of information that’s in the narrative and not the data, or vice versa. It will depend on the IG and the rules it makes as to whether anything on this list is a problem or not

6. Review the Clinical Picture

Finally, I review the clinical picture described by the data in the document. Does it make sense at all? Very often, actually, it doesn’t, because the document is filled with test data that doesn’t match a real clinical picture. But in spite of that, this is a very useful step – I’ve caught some fundamental errors in implementation or even understanding by querying things in the document that don’t make sense.


Clinical Safety and sharing data

An institution has a health record eco-system that is distributed and poorly connected. Due to technical, procedural and policy issues, the data is divided into a series of different silos, and there’s not a lot of inter-connection between them. Though the systems presumably have connection points through the patient clinical process, different information is collected because of differences in perspective and purpose, and the data drifts a little out of sync because of various system design approaches and various lapses in system continuity (the fog of war).

This is hardly unusual in healthcare.

Then, you create a new integration point between two of the silos, and present newly available information at the point of care. Mostly, this is good, because the clinical decision makers have more information on which to base their decision. It’s more likely that they’ll pick up some fact that might otherwise have been ignored.

Only there’s a problem: the new information is occasionally wrong. The clinical users aren’t fully aware of the various error vectors in the new data. Processing the new data takes extra time. And then the refrain starts – This is Clinically Unsafe!!! It Must Stop!!!

This is a regular problem in integration – it’s almost guaranteed to happen when you connect up a new silo. Here’s yet another example of exactly this: “PCEHR EXPOSES MORE EXAMPLES OF PBS ERRORS“, and, of course, most people will report this as a problem for the PCEHR.

Only, I’m not so sure. The problem is in the underlying data, and it really exists. The PCEHR just exposes the problems. It’s no longer hidden, and so now it will start being dealt with. Isn’t that the point?

Generically, exposing a new data silo will cause these issues to become visible. How do you decide whether this is less or more clinically safe?

  • To what degree is the information wrong?
  • How much would the clinical decision makers rely on it in principle?
  • How much would the clinical decision makers understand the systemic issues with it?
  • How is the underlying purpose of the data affected by the errors that were previously undetected?
  • What remediation is available when errors are detected – can anything be done to fix them, or to reduce their occurrence?

On the subject of less or more clinically safe, many people do not understand one aspect of the clinical safety systems approach. Here’s an illustration:

[Figure: safety process – a change eliminates some adverse events, leaves others unaddressed, and introduces new ones]

You change a clinical/workflow practice. The change eliminates a set of adverse events, doesn’t eliminate others, and introduces some new ones (fewer than it eliminates, hopefully). Ideally, the numbers of the latter two approach zero, but in practice, if the number of the last (new events) is lower than the number of the first (eliminated events), then the change is worth making anyway – probably. There are two problems with that:

  • The new events are usually *worse* – more systemic, at the least – than the old ones. That makes for better stories, particularly in the news
  • Distribution of blame to the maker of the change is unequal (introducing new errors is more costly than not eliminating existing errors)

The debate over data quality issues in the PCEHR – and in other contexts – will continue, but it most likely won’t be informed by any insight into how to think about clinical systems change and safety.

But for integration experts: the list I provided above is a useful place to start for issues to consider when doing a clinical safety check of a new integration (’cause you do do that, don’t you!). However, many real data quality issues only become visible after the integration has been put in place – they don’t even arise during the testing phase. So you need to put systems in place to allow users to report them, at least.

Technical Error in CDA Implementation Guides for ETP, PCEHR

There’s a technical error that runs through most of the CDA implementation guides that we have, including ETP and other PCEHR-related ones.

Background

The problem relates to the codes used for HealthcareFacility.code:

[Figure: the CDA model fragment for HealthcareFacility, with its code element bound to the ServiceDeliveryLocationRoleType value set]

Here, ServiceDeliveryLocationRoleType is a value set which contains codes taken from the RoleCode code system. The difference between a value set and a code system seems to easily confuse people.

A code system defines a set of codes and gives them an assigned meaning. Sometimes, it creates relationships between them that provide additional information about how to use the codes and/or understand their meaning.

A value set is a list of codes defined by other code systems. A value set doesn’t define its own codes, and doesn’t create its own meaning. Value sets are used to define a set of codes that can be used in some context. (Aside: “value set” is a terrible name for this idea – so utterly misleading. I very much wish we could change it.)

Part of the confusion is that very often people leave the value set out of the picture – they say, “use this code system for this element” (i.e. LOINC for test result name). But what this actually means is “use the (set of all codes) defined by [the code system]” (i.e. use (all) the [LOINC codes] for test result name) – where the value set is “(set of all codes)”. The distinction is useful because most of the time what you actually need to say is “use (this subset) of the codes defined [by the code system]”. Sticking with LOINC, that’s because LOINC defines all sorts of codes, not just codes for test result names. So there’s always a value set in the picture, but sometimes it’s implicit.

So the diagram above is saying:

“Use the (set of codes that represent service delivery location types) from the [RoleCode code system]”

When this is represented in the CDA document, we do this:

<code code="HOSP" codeSystem="2.16.840.1.113883.5.111"
  codeSystemName="RoleCode" displayName="Hospital"/>

Here, the code “HOSP” comes from the code system “2.16.840.1.113883.5.111”, which is RoleCode. There’s no need to represent the value set “ServiceDeliveryLocationRoleType” because that has nothing to do with the meaning of the code – the meaning of “HOSP” is defined by the RoleCode system, and so that’s what we represent.

Advanced note for people who like hard stuff: it’s not quite 100% true to say that the value set has nothing to do with the meaning. See here and note that a later version of the data types (not usable in CDA) caters for also representing the value set in order to support advanced edge case processing.

The error

Unfortunately the CDA implementation guides commonly have an error, which manifests in ETP (the released Australian standards), and in several PCEHR specifications, notably the NPDR Prescribe and Dispense documents, and the discharge summary specification. The error is that the OID assigned to the value set HL7ServiceDeliveryLocationRoleType has been used instead of the OID assigned to the RoleCode code system, so that the code looks like this in the CDA document:

<code code="HOSP" codeSystem="2.16.840.1.113883.1.11.17660"
  codeSystemName="HL7ServiceDeliveryLocationRoleType" displayName="Hospital"/>

In effect, this code claims to be something different to what it is: “HOSP” as defined by the RoleCode code system.

This has the potential to cause a great deal of confusion: there’s no real difference in meaning, but computationally it’s easy to mistakenly treat these as different codes. Particularly given that the OIDs are so opaque and share such a long common root. Testimony to how easy they are to confuse is the fact that this error has survived multiple repeated reviews at many stages, including by me, and has already been implemented by many implementers.

Worse, some CDA IGs are using the correct code system “2.16.840.1.113883.5.111” for the same codes.

The solution

Given that we now have CDA documents in production that use both the wrong and the right codes, the only solution available to us is to note that, for the purposes of CDA usage in Australia, codes with the OID 2.16.840.1.113883.1.11.17660 are the same as codes with the OID 2.16.840.1.113883.5.111 when those codes are in the ServiceDeliveryLocationRoleType value set. Formal advice will be issued through NEHTA channels for the PCEHR documents in due course. This blog is just a heads up in the hope that it helps implementers not get caught out later.
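For implementers processing these documents in the meantime, the advice boils down to something like this minimal sketch in Java (the method names are mine, not from any specification):

public class FacilityRoleCode {
    static final String ROLE_CODE     = "2.16.840.1.113883.5.111";      // the correct code system
    static final String SDL_VALUE_SET = "2.16.840.1.113883.1.11.17660"; // the value set OID, wrongly used

    // only safe for codes in the ServiceDeliveryLocationRoleType value set
    static String normaliseCodeSystem(String oid) {
        return SDL_VALUE_SET.equals(oid) ? ROLE_CODE : oid;
    }

    static boolean sameCode(String codeA, String systemA, String codeB, String systemB) {
        return codeA.equals(codeB)
            && normaliseCodeSystem(systemA).equals(normaliseCodeSystem(systemB));
    }

    public static void main(String[] args) {
        // "HOSP" under either OID should be treated as the same code
        System.out.println(sameCode("HOSP", ROLE_CODE, "HOSP", SDL_VALUE_SET)); // true
    }
}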