Category Archives: Australia

Question: where did the v2 messages and events go in FHIR?


I’m relatively new to the HL7 scene, having implemented a few V2 messaging solutions (A08, A19, ORU) and the V3/CDA work on the PCEHR. I am trying to get my head around FHIR. So far I am stumped as to how I would go about, for example, implementing the various triggers/messages I have done in V2. Is there any guidance? I can’t find much. Is that because the objective of FHIR is that implementers are free to do it any way they like? If you could send me some links that would be a good starting point, that would be great.


Most implementers of FHIR use the RESTful interface. In this approach, there are no messaging events: you just post the Patient, Encounter, etc. resources directly to the server. The resources contain the new state of the Patient/Encounter, and the server (or any system subscribed to the server) infers the events as needed.
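
For example (a sketch in Python, against a hypothetical server base URL), updating a patient in the RESTful style is just a PUT of the resource’s new state:

```python
import json
import urllib.request

FHIR_BASE = "http://example.org/fhir"  # hypothetical server base URL

def build_patient(patient_id, family, given):
    """A minimal Patient resource as a Python dict (JSON form)."""
    return {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": [given]}],
    }

def build_update_request(patient):
    """Build (but don't send) the PUT that pushes the new state of the
    Patient to the server; subscribers infer any 'events' from the change."""
    url = "{}/Patient/{}".format(FHIR_BASE, patient["id"])
    body = json.dumps(patient).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Content-Type": "application/fhir+json"},
    )

req = build_update_request(build_patient("example", "Chalmers", "Peter"))
print(req.get_method(), req.full_url)
```

Nothing in the request says “A08” or “A31” – the receiving side works out what changed from the resource itself.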

A few implementers use the messaging approach. In this case, the architecture works like v2 messaging, with triggers and events. However, because the resources are inherently richer than the equivalent v2 segments, we didn’t need to define a whole set of A** events like in v2. Instead, there’s just “admin-notify” in the event list.
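
For the messaging style, here’s a minimal sketch (Python, with the bundle and MessageHeader details simplified – the exact structures vary across FHIR versions) of an admin-notify message wrapping a Patient:

```python
import json
import uuid

def admin_notify_message(patient):
    """Wrap a resource in a message bundle with an 'admin-notify' event -
    the FHIR analogue of a whole family of v2 ADT triggers. Sketch only:
    exact bundle/MessageHeader details vary across FHIR versions."""
    return {
        "resourceType": "Bundle",
        "type": "message",
        "entry": [
            {
                # the first entry in a message bundle is the MessageHeader
                "resource": {
                    "resourceType": "MessageHeader",
                    "id": str(uuid.uuid4()),
                    "event": {"code": "admin-notify"},
                }
            },
            {"resource": patient},
        ],
    }

msg = admin_notify_message({"resourceType": "Patient", "id": "example"})
print(json.dumps(msg, indent=2)[:80])
```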

For XDS users (HIE or the PCEHR in Australia), see IHE’s Mobile Health Document Specification.


Terminology Services Connectathon in Australia

This is out today:

Adoption and use of clinical terminology in Australia has received a major boost with the signing of a licensing agreement between the CSIRO and the National E-Health Transition Authority (NEHTA) to grant users within Australia free access to a comprehensive suite of tools to support browsing, authoring, mapping, maintaining, and querying terminology.

These tools will be invaluable for implementers of clinical terminology to move towards unified clinical coding and improved patient safety.

  • NEHTA’s LINGO™ enables users to author local extensions to SNOMED CT-AU using the same robust browser-based authoring tool used by the National Clinical Terminology Service.
  • CSIRO’s Ontoserver is a terminology server that provides a sophisticated means of querying, searching, filtering and ranking SNOMED CT-AU and other standard clinical terminologies, including an application programming interface (API) that provides a quick and easy way for implementers to add SNOMED CT based data capture fields to their systems.
  • CSIRO’s Snapper is backed by Ontoserver and enables users to create local data sub-sets and maps.

“This licensing agreement between NEHTA and CSIRO enables both the private and public health sectors in Australia to access these tools to support the use and maintenance of terminology products. This will significantly improve the implementation and management of clinical data for enhanced patient outcomes,” said NEHTA CEO Peter Fleming.

The licensing agreement and national implementation will enable NEHTA to establish a fully syndicated terminology service providing national support for the re-use of locally-built reference sets, simple portal-based access to terminology products, and simplified maintenance processes to cascade SNOMED CT-AU updates into other products and support improved vendor testing processes.

I think that this is a great step forward for healthcare applications in Australia; laying down a solid terminology infrastructure is a real opportunity for us to improve healthcare applications around the country, though it will take some time for the application providers to figure out how to use it well, and then to start to make use of the powerful possibilities it offers.

The press release goes on to say:

NEHTA invites all interested to participate in a series of three Connectathons, with the first scheduled for February 2016.

Here are some additional provisional details about the first connectathon:

  • It’s planned to be in Brisbane Feb 10/11
  • It’ll be held in association with HL7 Australia, and in addition to the CSIRO Ontoserver, the HL7 Australia Terminology Server will be part of the connectathon. Other terminology services may also be represented
  • Attendance is open to any software development team that produces healthcare applications that run in Australia (ISVs, jurisdictions, etc)
  • The technical focus will be the ValueSet and Concept Map resources, and the Value Set Expansion, Validation, and Translation operations
  • I don’t think there’ll be any charge for attending the connectathon
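
To give a feel for the technical focus, here’s a rough sketch of how a client might construct requests for value set expansion and code validation. The server base URL is hypothetical, and the operation names follow the current FHIR draft conventions:

```python
from urllib.parse import urlencode

TX_BASE = "http://tx.example.org/fhir"  # hypothetical terminology server

def expand_url(valueset_id, filter_text=None):
    """URL for a ValueSet expansion - with an optional text filter, this is
    the building block for type-ahead coded data entry."""
    qs = "?" + urlencode({"filter": filter_text}) if filter_text else ""
    return "{}/ValueSet/{}/$expand{}".format(TX_BASE, valueset_id, qs)

def validate_url(valueset_id, system, code):
    """URL to check whether a given code is in a value set."""
    qs = urlencode({"system": system, "code": code})
    return "{}/ValueSet/{}/$validate-code?{}".format(TX_BASE, valueset_id, qs)

print(expand_url("example-vs", "asthma"))
print(validate_url("example-vs", "http://snomed.info/sct", "195967001"))
```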

The connectathons are a key opportunity for vendors – large and small – to learn

  • what terminology services can do
  • what deployable terminology service solutions exist, including open source ones
  • why making use of them will be a key strategic requirement to make their customers happy and keep up with the market

Note that the connectathon details are still subject to change.

Preparing for the Australia #FHIR Connectathon

It’s 10 days or so until the Australian FHIR Connectathon, which is on the Friday. This post is to help people who are preparing for that connectathon. There are 3 tracks at the Australian Connectathon:

Track 1: Patient resource (Introductory)

This track serves as the simple introductory task for anyone who hasn’t been to a connectathon before, though previous attendees will find it useful for extending their experience and knowledge. The patient scenario is to write a client that can follow this simple sequence:

  • Search for a patient
  • Get a selected patient’s full details
  • Edit and Update the patient details
  • Create a new patient record

Or, alternatively, to write a server that is able to support some or all of these functions.

This is useful because the same patterns and problems apply to all the other resources, and very nearly everyone has to implement the patient resource.
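
A sketch of the four interactions as (HTTP method, URL) pairs, against a hypothetical server base URL:

```python
BASE = "http://server.example.org/fhir"  # hypothetical server base URL

def track1_requests(family_name, patient_id):
    """The four Track 1 interactions as (HTTP method, URL) pairs - enough
    to plan a client before the day."""
    return [
        ("GET",  f"{BASE}/Patient?family={family_name}"),  # 1. search
        ("GET",  f"{BASE}/Patient/{patient_id}"),          # 2. read full details
        ("PUT",  f"{BASE}/Patient/{patient_id}"),          # 3. update (send edited resource)
        ("POST", f"{BASE}/Patient"),                       # 4. create (server assigns the id)
    ]

for method, url in track1_requests("Smith", "123"):
    print(method, url)
```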

If you’re writing a client, our experience is that your minimum preparation is to start the day with a functioning development environment of your choice – to be able to develop and execute. If you don’t have that set up, you can lose most of the day just getting that done. If you’re writing a server, then the minimum functionality is to have a working web server that you know how to extend.

Beyond the ability to develop and execute, any additional work you do beforehand on these scenarios means that on the day you’ll get further into the scenario, and you’ll get that much more out of it.

Track technical lead: Grahame Grieve

Track 2: Smart on FHIR

This track is more advanced; it makes deeper use of the functionality that FHIR provides, and adds additional context- and security-related content. For further information about this track, see the Chicago Connectathon details.

Track technical lead: Josh Mandel

Track 3: Clinical Connectathon

The first 2 tracks are distinctively developer tracks – to participate, you really need to be able to develop product (which is not the same as “being a developer”). Still, there are many users of interoperability specifications who are interested in how FHIR works, and these participants have as much to gain from a hands on experience learning FHIR as developers do – and the FHIR specification and community have just as much to learn from them too. With this in mind, the FHIR core team is working towards a connectathon experience that is focused on the end-user experience with FHIR. We held our first “Clinical Connectathon” in Chicago – you can read the summary report from it.

The 3rd track will be a follow up to the Chicago connectathon. Participants in this track will use tools provided by the core team to match the capabilities of the FHIR specification against the requirements and tasks of routine clinical workflow. There’s no preparation needed, except to turn up with a working laptop (or even tablet) that has the latest version of your favourite web browser installed (no support for old versions of IE).

Participants should note that this whole clinical track is a work in progress – it needs mature tooling from the core team, and we are still working towards that goal. This connectathon will be exercising the tooling that supports it as much as it’s going to be exercising clinical scenarios against the FHIR specification.

Stream clinical lead: Stephen Chu. Stream technical lead: David Hay

#FHIR Updates

Several FHIR related updates:

Just a note in response to a question from an implementer: we are going through a period of making many substantial changes to the specification in response to user feedback. Right now, the test server is far behind – that’s work for next month. This doesn’t affect the DSTU test server.

p.s. someone asked why I put the hashtag (#FHIR) in my blog post headings – that’s because I can’t see how to get my blog post auto-tweeter to add the # all by itself (and I don’t want to write my own)

HL7 Australia #FHIR Forum and Connectathon

On Thursday & Friday 6-7 November 2014, HL7 Australia is holding a FHIR Forum and Connectathon in Melbourne.

Day 1 is focused on education:

Keynote: FHIR in context … a step forward for patients – Andrew Yap, Alfred Hospital Melbourne
FHIR – A US perspective – David McCallie, CMIO, Cerner
Implementing FHIR for Australian GP Systems – Brett Esler, Oridashi
FHIR / openEHR collaboration – Heather Leslie, Ocean
FHIR & the Telstra eHAAS design – Terry Roach, Capsicum / Telstra
Introduction to SMART on FHIR – Josh Mandel, SMART / Boston Children’s
Using Terminologies with FHIR – Michael Lawley, CSIRO
Using FHIR in new and unexpected ways – actually including the Patient in the system – Brian Postlethwaite, Telstra (DCA)
Clinical records using FHIR – David Hay, Orion Healthcare
Panel: What are the prospects for FHIR Adoption in Australia?

  • Grahame Grieve, Health Intersections (FHIR Project)
  • Richard Dixon Hughes, DH4 (Standards)
  • Tim Blake, Semantic Consulting / DOH (Government)
  • Peter Young, Telstra  – DCA (Industry)
  • Malcom Pradhan – Alcidion (Clinical)


I’m really pleased about this program: it’s a great line-up of speakers from Australia and New Zealand talking about what they’re actually doing with FHIR. Also, I’m really pleased to welcome David McCallie, the CMIO for Cerner, who’ll be joining us from the USA by video to discuss Cerner’s plans for FHIR and the broader prospects for the adoption of FHIR in the USA. Finally, we’re really lucky and extremely pleased to have Josh Mandel from the Boston Children’s Hospital Informatics Program present. Josh will be talking about SMART on FHIR, and describing how that works as an EHR extensibility framework.

On Day 2, we’ll be holding a connectathon. We’ll have 3 streams of activity:

  • Basic Patient Stream – this is suitable for any developer with no prior experience of FHIR necessary – all you need is a working development environment of your choice
  • Smart on FHIR – this is for EHR vendors who want to experiment with using SMART on FHIR as a plug-in framework for their system, or for anyone who’s interested in writing an EHR plug-in – as many clinical departments will be
  • Clinical Connectathon – this is for non-developers who still want hands on experience with FHIR – use the clinical connectathon tools to learn how real world clinical cases are represented in FHIR resources

I hope to see all of you there. You can register now, or see the formal program announcement.

p.s. it doesn’t say so on the program, but there’ll be a conference dinner on the Thursday night.

Question: NEHTA CDA & GP referrals


Is there any example of a NEHTA compliant CDA document that I can look at from the perspective of a GP referral form? Is there a tool that can be used to navigate and generate the CDA from an HL7 v2 message?


There’s been a lot of discussion over the last decade or so about creating a CDA document for these kinds of referral forms. I saw a pretty near complete set of functional requirements at one point. But for various reasons, the project to build this has never got any funding, either as a NEHTA project or a Standards Australia project (it’s actually been on the IT-14-6-6 project list for a number of years, mostly with my name on it).

So right now, there’s no NEHTA compliant document. As a matter of fact, I don’t know of anything quite like that from any of the national programs, though no doubt one of my readers will – please comment if you do. There is a project investigating this in the US national program (the S&I Framework), but they’re not using CDA.

On the other part of the question, no, unfortunately not. NEHTA provides both C# and Java libraries that implement the NEHTA-described CDA documents, but it’s an exercise left to the implementer to convert from a v2 message to a CDA document. That’s primarily because there’s so much variability between v2 messages that there’s no safe way to write a single converter.

I tried to do that with just AS 4700.2 messages, which are much more constrained than the general case, and it wasn’t really successful; the PITUS project is working on the fundamental alignment needed to get it right in the future.

The PCEHR Review, and the “Minimum Composite of Records” #2

This post is a follow up to a previous post about the PCEHR review, where I promised to talk about medications coding. The PCEHR review quotes Deloittes on this:

The existing Australian Medications Terminologies (AMT) should be expanded to include a set of over the counter (OTC) medicines, and the Systematised Nomenclature of Medicine for Australia (SNOMED-CT-AU) should become universal to promote the use of a nationally consistent language when recording and exchanging health information.

Then it says:

Currently there are multiple sources of medication lists available to the PCEHR with varying levels of clinical utility and functionality. From some sources there is an image of the current medication list, from some sources the current medication list is available as text, from some sources the information is coded and if the functionality existed would allow for import and export into and out of clinical systems as well as transmission by secure messaging from health care provider to health care provider.

Note: I’m really not sure what “there is an image of the current medication list” means. As a separate choice to “list is available as text”, I guess that implies it’s actually made available as a bitmap (or equivalent). I’ve never seen anything like that, so I have no idea what they mean.

And further:

The NPDR should be expanded to include a set of over the counter (OTC) medicines to improve its utility.

Over the counter medication is essential to detect such issues as poor compliance with Asthma treatment, to show up significant potential side effects with prescription only medicines and to allow for monitoring and support for drug dependent persons. The two main data sources of data are complementary and neither can do the job of the other. The curated current medications list together with adverse events, could be sourced from the GP, Specialist, Hospital or Aged Care Facility clinical information system, the discharge summary public or private is immediately clinically useful and will save time for the clinician on the receiving end.

It is imperative that further work be done on software systems to make the process of import and export and medication curation as seamless as possible to fit in to and streamline current workflow.

My summary:

  • Extend AMT and NPDR to include over-the-counter medicines
  • Work with all the providers to get consistently encoded medicines and medication lists so the medications can be aggregated, either in the PCEHR or in individual systems

Over-the-counter medicines

Well, I guess this means pharmacist-only items (such as Ventolin, which they mention) and doesn’t include supermarket-type items like aspirin or hay-fever medications. I don’t know how realistic this is from a pharmacist workflow perspective (e.g. getting consent, signing the NPDR submission), but let’s (for the sake of argument) assume that it is realistic. That will mean that each OTC product they sell will need to be coded in AMT (once AMT is extended to cover them). I don’t know how realistic this is – can it be built into the supply chain? Let’s assume that it can be, so that pharmacists can make this work, and that we’ll then be able to add this to the NPDR.

However there’s a problem – this recommendation appears to assume that the NPDR is already based on AMT. I’ve got a feeling that it’s not. Unfortunately, good information in public is not really available. By repute, AMT adoption isn’t going well.

Is that true? What would really be helpful in resolving this question would be to fill this table out:

Coding System    | SHS/ES | eRef/SL | DS | NPDR
AMT v2           |        |         |    |
Vendor Specific  |        |         |    |
First Data Bank  |        |         |    |
Text only        |        |         |    |

Where each cell contains 3 numbers:

  • Number of systems certified for PCEHR connection
  • Number of documents uploaded that contain codes as specified
  • Number of medications in PCEHR coded accordingly

Some notes about this table:

  • Even just gathering data from the PCEHR to fill this table out would be a significant amount of work – it’s not just running a few SQL queries. And even then, it wouldn’t cover non-PCEHR exchange – if it did, it would be even more interesting
  • Although a system may encode medications in one of these coding systems, some medications might not be coded at all since they don’t appear on the list of codes. And some systems encode immunizations differently from medications and some do it the same (ICPC 2+ is used for immunizations by a few systems)
  • Although most medications use codes in production systems, for a variety of reasons many medications in the PCEHR are not coded, they’re just plain text (in fact, other than the NPDR, I think plain text would be the biggest number). I know of several reasons for this:
    • There was no financial penalty for not coding medications at all
    • The PCEHR system returns warnings and/or errors if the medications are not coded the way it supports
    • The PCEHR system is extremely limited in what it does support
    • There’s no systematic way to find out what it does support
    • Trouble shooting failed documents is extremely hard for a variety of reasons
    • There’s a lack of clarity around safety, and around where liability falls when aggregating
    • note: I don’t claim that this list is complete nor have I assigned priorities here

If we saw the counts for this table, we’d have a pretty good feel for where medication coding is in regard to the PCEHR. And I’m pretty sure that it would show that we have a long way to go before we can get consistent encoding.

Consistently Encoding Medications

Getting a consistently encoded medications list is far more difficult than merely getting alignment on which technical coding system to use. Beyond that, different systems take different approaches to tracking medications as they change. For instance, the standard Discharge Summary specification says that there are two medication lists:

  • Current Medications On discharge: Medications that the subject of care will continue or commence on discharge
  • Ceased Medications: Medications that the subject of care was taking at the start of the healthcare encounter (e.g. on admission), that have been stopped during the encounter or on discharge, and that are not expected to be recommenced

There’s a hole between those two definitions: medications that the patient took during the admission that have residual effect. But many discharge summary systems can’t produce these two lists. Some have 3 lists:

  • Medications at admission
  • Medications during inpatient stay
  • Medications at discharge

Others only track medications prescribed by the hospital, and not those already taken by the patient – or, if they track those, they do so as text. Some systems don’t know which inpatient medications are continuing after discharge or not. Even if the system can track these medications in fully coded detail, there’s no guarantee that the clinical staff will actually code them up if they didn’t prescribe them. Some institutions have medication management systems that track every administration, while others don’t.
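
To make the mismatch concrete, here’s a sketch (treating each list as a set of medication codes; the names are illustrative) of mapping the three-list model onto the two lists the specification asks for:

```python
def to_discharge_summary(admission, inpatient, discharge):
    """Map the three-list model onto the two lists the Discharge Summary
    specification asks for, treating each list as a set of medication
    codes. Inpatient-only medications land in the gap described above:
    they fit neither list cleanly."""
    current_on_discharge = set(discharge)
    ceased = set(admission) - set(discharge)                # stopped, not recommenced
    gap = set(inpatient) - set(admission) - set(discharge)  # e.g. residual-effect meds
    return current_on_discharge, ceased, gap

current, ceased, gap = to_discharge_summary(
    admission={"warfarin", "aspirin"},
    inpatient={"heparin"},
    discharge={"warfarin", "apixaban"},
)
print(sorted(current), sorted(ceased), sorted(gap))
```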

Finally, there’s the question of to what degree GP’s and specialists record medications that are not relevant to the problem(s) they are interested in (particularly specialists). Humans (or clinicians – I’m not convinced they’re the same things 😉 ) are good at dealing with degeneracy in these lists, and it saves them a lot of time (mention a medication generically, but no relevant details). Computers are not good at dealing with degeneracy, so in the future, the clinical staff will all have to be unerringly precise in their records.

Note that I’ve moved into pure clinical practice at the end there; in order to meet the PCEHR goal, we need to align:

  • Medications coding
  • Medication tracking system functionality
  • Clinical practice

And while we’re at it, we need to jettison the existing data, which will become legacy and no longer useful to all the existing systems.

I’m not holding my breath. The PCEHR review says:

The PCEHR Value Model suggests that of the total gross annual theoretical benefit potential of eHealth in Australia, medication management is the greatest individual driver of benefits ($3.2 billion or 39% of gross benefits).

I’m not convinced that will be a net profit 🙁

p.s. does anyone know where the PCEHR Value Model is published?


The PCEHR Review, and the “Minimum Composite of Records”

So the PCEHR review has finally been released, and I’ve been reading with considerable interest. I’m going to stick to analysing the technical recommendations that they make, starting with a group of recommendations they call the “Minimum Composite of Records”:

19. Expand the existing Australian Medications Terminologies (AMT) data set to include a set of over the counter (OTC) medicines.
20. Widen the existing National Prescribing and Dispensing Repository (NPDR) to include the expanded set of over the counter (OTC) medicines.
21. Implement a minimum composite of records to allow transition to an opt-out model by a target date of 1st January 2015 inline with recommendation 13. This will dramatically improve the value proposition for clinicians to regularly turn to the MyHR, which must initially include:

  • Demographics
  • Current Medications and Adverse Events
  • Discharge summaries
  • Clinical Measurements

The section that explains these starts with the following paragraph:

A common theme in the consultation process was the need for a minimum data set to make up a viable clinical record. Many of the submissions also pointed out that it was imperative for the data standards to be widely and universally adopted to allow the MyHR to function. The more clinically relevant material that was present within the MyHR the faster the rate of adoption and therefore the faster the return on investment will be.

I’m really pleased to see the question of wide and universal adoption of standards mentioned – that’s what I would have said to the panel if I’d made my own submission. From these general comments at the introduction, the review seems to get rather distracted by medications coding issues, before suddenly coming back to the question of “minimum composite of records”. So, what does that mean?

  • Demographics – I cannot imagine what this means beyond what is already in place? The documents include demographics – my consistent feedback from clinicians is that they contain too much of them, and I couldn’t figure out from the text what they thought this meant.
  • Current Medications and Adverse Events – well, that’s consistently been a focus of what we’ve already done, but the section indicates that this is about the medications coding. So more on that in the next post
  • Discharge summaries – again, this is something that has already been prioritised, but the section points out that this doesn’t apply to private hospitals. And, in fact, private hospitals aren’t really a good fit for the current discharge summary because of the way their business works, so the basic discharge specification may need to be reviewed to make it a better fit for that use
  • Clinical Measurements – the section says “capture vital signs to prevent avoidable hospitalisation and demonstrate meaningful use of PCEHR.” – uh? How will capturing vital signs – data of relevance today, during an admission – “prevent avoidable hospitalisation”? That was a submission from the Aged Care industry, so perhaps they’re saying: if the PCEHR contained a record of the patient’s baseline vital signs, then we can know whether they’re actually significantly impaired if they have an emergency admission outside their normal facility? It seems like a super narrow use to me

They say: “All other functionality … should be deprioritised while these data sets are configured” – but what other functionality is that? It’s not obvious to me.

So, the Minimum composite of records actually means:

  • Improve medications coding
  • Cater for discharge summaries from private hospitals
  • Add support for vital signs
  • Continue to focus on implementation activities in support of these things

Have I read that right? Comments welcome…

I’ll take up the question of medications coding (recommendations #19 and #20) in the next post.

Question: use of HL7 v2 for specialist Letter


When sending a Specialist Letter HL7 V2 REF_I12 back to a GP, should the Referring Provider in the PRD segment point to the GP (the originator of the Referral) or the Specialist (the originator of the Specialist Letter), please?

Would the Originating referral identifier RF1.6 for the Specialist Letter be the identifier of the original referral (I believe it would be this) or that of the specialist letter please? In the specialist letter HL7, would OBR-16 (ordering provider), refer to the GP?


Well, I’m going to take this as an Australian question, because that’s the only context in which I’ve heard of a “Specialist Letter”.

The roles in the PRD are a repeating field. In the Australian context, we have a role IR = Intended Recipient, which should be used to indicate the intended recipient of the message, and there should only be one of these in the message. This removes any ambiguity about where the message is destined. The other roles are roles in the context of the scenario, not related to the messaging – so the “Referring Provider in the PRD segment points to the GP (the originator of the Referral)”.

The definition for RF1-6:

The first component is a string of up to 15 characters that identifies an individual referral. It is assigned by the originating application, and it identifies a referral, and the subsequent referral transactions, uniquely among all such referrals from a particular processing application.

And for RF1-11:

The first component is a string of up to 15 characters that identifies an individual referral. It is typically assigned by the referred-to provider application responding to a referral originating from a referring provider application, and it identifies a referral, and the subsequent referral transactions, uniquely among all such referrals for a particular referred-to provider processing application. For example, when a primary care provider (referring provider) sends a referral to a specialist (referred-to provider), the specialist’s application system may accept the referral and assign it a new referral identifier which uniquely identifies that particular referral within the specialist’s application system. This new referral identifier would be placed in the external referral identifier field when the specialist responds to the primary care physician.

So the purpose of RF1 is to identify the referral (not the message or the documents within it, i.e. OBR-3).

GP -> Specialist referral:

RF1||||||123^GP Practice^1CA696CA-C91D-466E-BC11-B5C9B7B99ACA^GUID

Specialist letter -> GP:

RF1||||||123^GP Practice^1CA696CA-C91D-466E-BC11-B5C9B7B99ACA^GUID|||||AB354^Specialist Practice^1BC63E55-3FEC-4B2E-8A61-0DE1796C3410^GUID

RF1-6 is a required field, so a problem may arise when the specialist sends a reply to the GP and the originating referral identifier is not known – say, if the original referral was not received via REF^I12, but rather by paper/fax. In this case, all you could use would be a unique dummy value, such as a GUID, in RF1-6. Leaving it blank may not be acceptable to some receivers.
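
A sketch of that fallback logic in Python (the helper name and the defaults are illustrative):

```python
import uuid

def rf1_originating_referral_id(referral_id=None, org_name="GP Practice",
                                org_guid=None):
    """Build the RF1-6 component string (id ^ namespace ^ universal id ^
    universal id type, as in the examples above). When the original
    referral arrived on paper/fax and no identifier is known, fall back
    to a freshly generated GUID rather than leaving the required field
    blank. Names and defaults here are illustrative."""
    if referral_id is None:
        referral_id = str(uuid.uuid4()).upper()  # dummy, but unique
    if org_guid is None:
        org_guid = str(uuid.uuid4()).upper()
    return "^".join([referral_id, org_name, org_guid, "GUID"])

segment = "RF1||||||" + rf1_originating_referral_id(
    referral_id="123", org_guid="1CA696CA-C91D-466E-BC11-B5C9B7B99ACA")
print(segment)
```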

And so, yes, in the specialist letter HL7, OBR-16 (ordering provider), would refer to the GP.

Note: Thanks to Jared Davison from Medical Objects for this answer.

Further Analysis of CDA vulnerabilities

This is a follow up to my previous post about the CDA associated vulnerabilities, based on what’s been learnt and what questions have been asked.

Vulnerability #1: Unsanitized nonXMLBody/text/reference/@value can execute JavaScript

PCEHR status: current approved software doesn’t do this. Current schematrons don’t allow this. CDA documents that systems receive from the PCEHR will not include nonXmlBody.

General status: CDA documents that systems get from anywhere else might. In fact, you might get HTML to display from multiple sources, and how sure are you that the source and the channel can’t be hacked? This doesn’t have to involve CDA either – AS 4700.1 includes a way to put XHTML in a v2 segment, and there’s any number of other trading arrangements I’ve seen that exchange HTML.

So what can you do?

  • Scan incoming html to prevent active content in the HTML. (schematrons, or use a modified schema)
  • don’t view incoming html in the clinical system – use the user’s standard sandbox (e.g. the browsers)
  • change the protocol to not exchange raw html directly

Yeah, I know that this advice is wildly impractical. The privilege of the system architect is to balance between what has to be done, and what can’t be done 😉

FHIR note: FHIR exchanges raw HTML like this. We said – for this reason exactly – that no active content is allowed. We’re going to release tightened up schema and schematron, and the public test servers are tightening up on this.
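
As a rough illustration of the scanning option, here’s a sketch that looks for the obvious active content in an XHTML fragment. This is the shape of the approach, not a complete filter – a real scanner needs a proper whitelist:

```python
import xml.etree.ElementTree as ET

def find_active_content(xhtml):
    """Scan an XHTML fragment for obviously active content: script-ish
    elements and on* event-handler attributes (the onmouseover class of
    exploit). A sketch of the approach - a real filter needs a proper
    whitelist, not this short blacklist."""
    problems = []
    for elem in ET.fromstring(xhtml).iter():
        tag = elem.tag.split("}")[-1].lower()        # strip any namespace
        if tag in ("script", "object", "embed", "iframe"):
            problems.append("element: " + tag)
        for attr in elem.attrib:
            name = attr.split("}")[-1].lower()
            if name.startswith("on"):                # onmouseover, onclick, ...
                problems.append("attribute: " + name)
    return problems

print(find_active_content(
    '<div><table onmouseover="steal()"><tr><td>ok</td></tr></table></div>'))
```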

Vulnerability #2: Unsanitized table/@onmouseover can execute JavaScript

PCEHR status: these documents cannot be uploaded to the PCEHR, are not contained in the PCEHR, and the usual PCEHR stylesheet is unaffected anyway.

General status: CDA documents that you get from anywhere else might include these attributes. If the system isn’t using the PCEHR stylesheet, then it might be susceptible.  Note: this also need not involve CDA. Anytime you get an XML format that will be transformed to HTML for display, there might be ways to craft the input XML to produce active content – though I don’t know of any other Australian standard that works this way in the healthcare space

So what can you do? The same mitigations as for vulnerability #1 apply: scan incoming content for event-handler attributes, or only render it inside a sandbox.

Vulnerability #3: External references, including images

PCEHR status: There’s no approved usage of external references in linkHtml or referenceMultimedia, but use in hand written narrative can’t be ruled out. Displaying systems must ensure that this is safe. There will be further policy decisions with regard to the traceability that external references provide.

General status: any content that you get from other systems may include images or references to content held on external servers, whether CDA, HTML, HL7 v2, or even PDF. If you are careless with the way you present the view, significant information might leak to the external server, up to and including the user’s password, a system level authorization token, or the user’s identity. And no matter how careful you are, you cannot prevent some information leaking to the server – the user’s network address, and the URL of the image/reference. A malicious system could use this to track the use of the document it authored – but on the other hand, this may also be appropriate behaviour.

So what can you do?

  • Never put authentication information in the URL that is used to initiate display of clinical information (e.g. internally, as an application library)
  • Never let the display library make the network call automatically – make the request in the application directly, and control the information that goes onto the wire explicitly
  • If the user clicks an external link, make it come up in their standard browser (using the ShellExec on windows or equivalent), so whatever happens doesn’t happen where the code has access to the clinical systems
  • The user should be warned about the difference between known safe and unknown content – but be careful, don’t nag them (as the legal people will want, every time; but soon the users click the warning by reflex, and won’t even know what it says)
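
The first points amount to classifying each reference before anything touches the network. A sketch (the whitelisted host is hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical whitelist of hosts the application trusts for direct fetches.
TRUSTED_HOSTS = {"static.goodclinical.example"}

def classify_reference(url):
    """Decide how the display layer should treat an external reference
    found in the narrative: fetch it ourselves with a stripped-down,
    credential-free request, or hand it off to the user's own browser
    sandbox (ShellExec or equivalent) with a warning."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_HOSTS:
        return "fetch-directly"       # app controls exactly what goes on the wire
    return "open-in-user-browser"     # outside the clinical system's context

print(classify_reference("https://evil.example/track.png"))
```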

Final note: this is being marketed as a CDA exploit. But it’s an exploit related to the ubiquity of HTML and controls, and it’s going to be more and more common…

Update – General mitigating / risk minimization approaches

John Moehrke points out that there’s a series of general foundations that all applications should be using, which mitigate the likelihood of problems and/or the damage that can be caused.


Authentication

Always know which other party (or parties) your code is communicating with, establish their identity well, and ensure communications with them are secure. If the party is a user using the application directly, then securing communications to them isn’t hard – the focus is on login. But my experience is that systems overlook authenticating the other systems they communicate with, even if they encrypt the communications – which makes the encryption largely wasted (see “man in the middle“). Authenticating your trading partners properly makes it much harder to introduce malicious content (and is the foundation on which the PCEHR responses above rest). Note, though, the steady drum of media complaints about the administrative consequences of properly authenticating the systems the PCEHR interacts with – properly authenticating systems is an administrative burden, which is why it’s often not done.


Authorization

A fundamental part of application design is properly managed authorization, enforced throughout the application. For instance, don’t assume that you can enforce all proper access control by managing which widgets are visible or enabled in the UI; eventually additional paths to execute functionality will need to be provided, in order to support some kind of workflow integration/management. Making the privileges explicit in operational code is much safer, and means that rogue code running in the UI context doesn’t have unrestricted access to the system (though a hard break like that between client/server is required to really make that work).

Audit logging

Log everything, with good metadata. Then, when there is a belief that the system has been penetrated, you can actually know whether it has or not. Make sure that the security on the system logs is particularly strong (there’s no point keeping them if it’s easy for the malicious code to delete them). If nothing else, this will help trace an attacker, and prevent them from making the same attack again because no one could figure out what they did.
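
A minimal sketch of structured audit logging – the field names are illustrative only, and the in-memory stream stands in for a properly protected log store:

```python
import io
import json
import logging

# Structured audit events with enough metadata to reconstruct who did what
# to whose record. Field names here are illustrative only.
stream = io.StringIO()                  # stands in for a write-protected log store
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler(stream))

def log_access(user, action, patient_id, source_ip):
    audit.info(json.dumps({
        "user": user,
        "action": action,
        "patient": patient_id,
        "ip": source_ip,
    }))

log_access("dr.jones", "read", "patient/123", "10.1.2.3")
entry = json.loads(stream.getvalue())
print(entry)
```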

Note: none of this is healthcare specific. It’s all just standard application design, but my experience is that a lot of systems in healthcare aren’t particularly armored against assault because it just doesn’t happen much. But it’s still a real concern.