Category Archives: Australia

Question: HL7 v2 referrals in Australia

Question:

It seems there are lots of different ways to format a letter type message in Australian Health IT

  1. Pretend it's a path report and use ORU with the appropriate LOINC code. This seems the most common
  2. Do it “properly”: use REF
  3. Use PIT, still around I understand
  4. MyEHR CDA type

And within that

  1. use the FT plain-text format
  2. embed as an RTF document
  3. embed as PDF

So that’s 12 permutations at least, and I’m sure I’m missing some. Which options have the best chance of getting parsed/rendered correctly by the current crop of GP software (MDW, Best Practice, Zedmed, etc)?

Answer:

Well, I think that right now there is no great answer to this question. I consulted Peter Young from Telstra Health, who is leading a vendor consortium that is working with Australian Digital Health Agency (ADHA) on this particular issue. Peter says:

There is work underway in collaboration between HL7 Australia, ADHA and a number of industry players to create a standard mechanism for sharing messages between clinical applications, capable of being transmitted via SMD, based on an HL7 REF I12. It leverages the recently published pathology messaging profile and is part of a broader program led by ADHA to improve interoperability between applications. Initially there is a baseline specification that provides essential structured data with a clinical document that can be in any of RTF, PDF, HTML or text. The idea is that receiving applications will support any document format; if not natively, at least using a viewer. The work plan also includes extensions to add more structured data. Most of the major clinical and messaging vendors are involved in the work program.

The link to the simplified profile is https://confluence.hl7australia.com/display/OO/Appendix+8+Simplified+REF+profile

I think that the best bet right now is the REF I12 message, with PDF for the letter. But this is an area of active work.
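For illustration, here’s a hedged sketch of how a base64-encoded PDF letter might travel in an HL7 v2 OBX segment using the ED (encapsulated data) type, as the REF I12 + PDF option implies. The observation identifier, code system and field layout below are assumptions, not taken from the published profile – the link above is the real authority.

```python
import base64

# Hypothetical sketch only: the observation identifier (PDF^...^AUSPDI)
# and field positions are assumptions, not from the published profile.

def build_pdf_obx(pdf_bytes, set_id=1):
    """Return an OBX segment carrying a base64-encoded PDF, ED data type."""
    b64 = base64.b64encode(pdf_bytes).decode("ascii")
    return "|".join([
        "OBX",
        str(set_id),
        "ED",                                 # value type: encapsulated data
        "PDF^Display format in PDF^AUSPDI",   # assumed observation identifier
        "",                                   # observation sub-id (empty)
        "^application^pdf^Base64^" + b64,     # ED: type, subtype, encoding, data
    ])

print(build_pdf_obx(b"%PDF-1.4 ...")[:10])  # -> OBX|1|ED|P
```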

Argonaut in Australia, and the MyHR

Project Argonaut is coming to Australia. That is, at least one major US EHR vendor is planning to make their SMART-on-FHIR EHR extension interface available in Australia during 2018 (this was discussed at the Cerner Health Conference, where I was last week). HL7 Australia will work with them (and any other vendor) to describe what the Argonaut interface looks like in Australia (short answer: not much different – some different patient extensions, a few terminology changes (not RxNorm), maybe a couple of extensions on prescriptions for Reg24 & CTG). Also, HL7 Australia plans to engage with Australian customers of the US EHR vendors to help build a community that can leverage the capabilities of the SMART on FHIR interface.

This is a big deal for Australian EHR vendors that compete with the US vendors – they had better start offering the same capabilities based on the same standards, or one of their key market advantages will be consigned to the dust of history. They’ll also find themselves competing with established SMART on FHIR application vendors. So I look forward to good engagement with Australian companies as we flesh this out (I know at least one will be deeply involved).

This also offers us an opportunity to consider what to do with the MyHR. The government has spent a great deal of money on this, and results have been disappointing. (Yes, the government publishes regular usage stats which show continuous increase, but these are not the important usage metrics, and they’re not the kind of stats that were hoped for back when we were designing the system). And it’s hardly popular amongst the typical candidate users (see, for example, AMA comments, or for more color, Jeremy Knibb’s comments or even David More’s blog).

But I’m not surprised at this. Back when it was the pcEHR, the intentions were solid, and though the original timeline was impossibly tight, it came together in an amazingly quick time (kudos to the implementers). But as it came together, I knew it was doomed. This is inevitable given its architecture:

Salient points about this architecture:

  • The providers push CDA documents to the central document repository
  • Patients can view documents about them
  • Patients can write their own documents, but providers won’t see them
  • Patients can exert their control only by ‘hiding’ documents – i.e. they can only break the flow of information (reminder: the internet age treats censorship as damage and routes around it)
  • Clinicians can find and read documents
  • Each document is its own little snapshot. There’s no continuity between them, no way to reconcile information between them
  • There are no notifications associated with the system

You can’t build any process on that system. You can’t even build any reliable analysis on it (stakeholders worried about the government using it for secondary data analysis shouldn’t, in general, worry: it’s too hard to get good data out of most of the CDA documents). These limitations are baked into the design. That’s why I went and developed FHIR – so that when the full reality of the system became evident, we’d have a better option than a document repository.

Well, 10 years later, we’re still trying to load ever more use onto the same broken design, and the government sponsors are still wondering why it’s not ‘working’. (At least we stopped calling it the ‘personally controlled’ EHR, since it’s the government controlled EHR.) And as long as it exists and is the focus of all government efforts to make digital health happen, it will continue to hold up innovation in this country – a fact which is terribly evident as I travel and see what’s happening elsewhere.

But it doesn’t have to be like this.

The MyHR is built on a bunch of useful infrastructure. There are good ideas in here, and it can do good things. It’s just that everything is locked up in a single broken solution. But we can break it open, and reuse the infrastructure. And the easiest way I can see to do this is to flip the push over. That is, instead of the source information providers pushing CDA documents to a single repository, we should get them to put up an Argonaut interface that provides a read/write API to the patient’s data. Then, you change the MyHR so that it takes that information and generates CDA documents to go into the MyHR – so no change is needed to the core MyHR.

What this does is open up the system to all sorts of innovation, the most important of which is that the patient can authorise their care providers to exchange information directly, and build working, clinically functional systems (e.g. GP/local hospital, or coordinated care planning), all without the government bureaucrats having to decide in advance that they can’t be liable for anything like that. That is, an actually personally controlled health record system, not a government controlled one. And there’s still a MyHR for all the purposes it does exist for.

This alternative looks like this:

The salient features of this architecture:

  • All healthcare providers make healthcare information services available using the common Argonaut based interface (including write services)
  • Patients can control the flow at the source – or authorise flows globally through myGov (needs new work on myGov)
  • Systems can read and write data between them without central control
  • The MyHR can pull data (as authorised) from the sources and populate the MyHR as it does now
  • Vendors and providers can leverage the same infrastructure to provide additional services (notifications, say)

The patient can exert control (either directly at the provider level, or through myGov as an OAuth provider) over the flow of information at the source – they can opt in or out of the MyHR as appropriate, but they can also share their information with other providers of healthcare services directly. Say, their phone’s very personal health store. Or research projects (e.g. All of Us). Or, most importantly and usefully, their actual healthcare providers, who can, as authorised by the patient, set up bi-directional flows of information on top of which they can build better coordinated care processes.

These features lead to some important and different outcomes:

  • Healthcare providers and/or system vendors can innovate to build distributed care models that provide a good balance between risk and reward for different populations (instead of the one-size-fits-all model we have now)
  • Patients can control the system by enabling the care flows that they want
  • Clinicians can engage in better care processes and improve their processes and outcomes (though the US experience shows clearly that things get worse before they get better, and you have to plan for that)

This isn’t a small change – but it’s the smallest change I know of that we can make that preserves the MyHR and associated investment, and gives us a healthcare system that can innovate and build better care models. But I don’t know how we’ll think about getting there, given that we’re still focused on “make everyone use the MyHR”.

Note: Adapted from my presentation yesterday at the HL7 Australia Meeting


Cultural Factors affecting Interoperability

One of the under-appreciated factors that affects how successful you’ll be at ‘interoperability’ (for all the various things that it might mean) is your underlying culture of working with other people – your and their underlying expectations about whether and when you’ll compromise with other people in order to pursue a shared goal.

Culture varies from organization to organization, and from person to person. And even more, it varies from country to country. As I work with different countries, it’s clear that in some countries, it’s harder to get people to sacrifice their short-term personal interests for shared long-term communal interests. There’s plenty of research about this – mostly phrased in terms of micro-economics. And it very often comes down to trust (or lack thereof). Trust is a critical success factor for compromise and collaboration. And it’s pretty widely observed that the level of trust that people have in institutions of various kinds is declining at the moment (e.g. 1 2 3).

Plenty has been written about this subject, and I’m not going to add to it. Instead, I’m going to make a couple of related observations that I think are under-appreciated when it comes to interoperability:

The first is that smaller countries with a bigger dominant country that can easily overpower them next door (my go-to examples: Estonia, Denmark, New Zealand) have populations that are much more motivated to collaborate and compromise with each other than countries that are economically (and/or politically) without peer in their geographic area.

And so you might think that these countries are better at interoperability than others…? well, sort of:

Countries that have a cultural disadvantage with regard to interoperability are the countries that produce the great interoperability technologies and methodologies (they have to!), but countries that have a cultural advantage for interoperability are much better at taking those technologies and methodologies and driving them home so the task is complete.

If my theory is right, then when you look at what countries are doing, you should look for different lessons from them, depending upon their cultural situation with regard to interoperability.

p.s. If my theory is right, one really has to wonder how bad the cultural headwinds against interoperability are here in Australia…

p.p.s. I found very little about this on the web. References in comments would be great.


Question: where did the v2 messages and events go in FHIR?

Question:

I’m relatively new to the HL7 scene, having implemented a few v2 messaging solutions (A08, A19, ORU) and the V3/CDA work on the PCEHR. I am trying to get my head around FHIR. So far I am stumped on how I would go about, for example, implementing the various triggers/messages I have done in v2. Is there any guidance? I can’t find much. Is that because the objective of FHIR is that implementers are free to do it any way they like? If you could send me some links that would be a good starting point, that would be great.

Answer:

Most implementers of FHIR use the RESTful interface. In this approach, there’s no messaging events: you just post the Patient, Encounter etc resources directly to the server. The resources contain the new state of the Patient/Encounter etc, and the server (or any system subscribed to the server) infers the events as needed.
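As a sketch of that pattern (server URL and patient details are made up; the exact name structure varies between FHIR versions, and a real client would add authentication and error handling):

```python
import json

BASE = "http://example.org/fhir"  # hypothetical server address

def create_patient(family, given):
    """Build the (method, url, body) for creating a new Patient."""
    patient = {
        "resourceType": "Patient",
        "name": [{"family": family, "given": [given]}],
    }
    return ("POST", BASE + "/Patient", json.dumps(patient))

def update_patient(patient):
    """Build the (method, url, body) for sending a Patient's new state."""
    return ("PUT", BASE + "/Patient/" + patient["id"], json.dumps(patient))

method, url, body = create_patient("Smith", "Jane")
print(method, url)  # POST http://example.org/fhir/Patient
```

The key point is that the body carries the full new state of the resource; the server (or its subscribers) works out what event happened.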

A few implementers use the messaging approach. In this case, the architecture works like v2 messaging, with triggers and events. However, because the resources are inherently richer than the equivalent v2 segments (e.g. see Patient.link), we didn’t need to define a whole set of A** messages like in v2. Instead, there’s just “admin-notify” in the event list.
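A hedged sketch of what such a message might look like as a FHIR Bundle – the MessageHeader field names changed shape across FHIR versions, so treat them as illustrative:

```python
# Field names are illustrative only; the point is the single generic
# admin event rather than a per-trigger message catalogue.

def admin_notify_message(patient):
    """Wrap a Patient in a message Bundle with the admin-notify event."""
    header = {
        "resourceType": "MessageHeader",
        "event": {"code": "admin-notify"},            # one generic admin event
        "focus": [{"reference": "Patient/" + patient["id"]}],
    }
    return {
        "resourceType": "Bundle",
        "type": "message",
        "entry": [{"resource": header}, {"resource": patient}],
    }

bundle = admin_notify_message({"resourceType": "Patient", "id": "example"})
print(bundle["entry"][0]["resource"]["event"]["code"])  # -> admin-notify
```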

For XDS users (HIE or the PCEHR in Australia), see IHE’s Mobile Health Document Specification.


Terminology Services Connectathon in Australia

This is out today:

Adoption and use of clinical terminology in Australia has received a major boost with the signing of a licensing agreement between the CSIRO and the National E-Health Transition Authority (NEHTA) to grant users within Australia free access to a comprehensive suite of tools to support browsing, authoring, mapping, maintaining, and querying terminology.

These tools will be invaluable for implementers of clinical terminology to move towards unified clinical coding and improved patient safety.

  • NEHTA’s LINGO™ enables users to author local extensions to SNOMED CT-AU using the same robust browser-based authoring tool used by the National Clinical Terminology Service.
  • CSIRO’s Ontoserver is a terminology server that provides a sophisticated means of querying, searching, filtering and ranking SNOMED CT-AU and other standard clinical terminologies, including an application programming interface (API) that provides a quick and easy way for implementers to add SNOMED CT based data capture fields to their systems.
  • CSIRO’s Snapper is backed by Ontoserver and enables users to create local data sub-sets and maps.

“This licensing agreement between NEHTA and CSIRO enables both the private and public health sectors in Australia to access these tools to support the use and maintenance of terminology products. This will significantly improve the implementation and management of clinical data for enhanced patient outcomes,” said NEHTA CEO Peter Fleming.

The licensing agreement and national implementation will enable NEHTA to establish a fully syndicated terminology service providing national support for the re-use of locally-built reference sets, simple portal-based access to terminology products, and simplified maintenance processes to cascade SNOMED CT-AU updates into other products and support improved vendor testing processes.

I think that this is a great step forward for healthcare applications in Australia; laying down a solid terminology infrastructure is a real opportunity for us to improve healthcare applications around the country, though it will take some time for the application providers to figure out how to use it well, and then to start to make use of the powerful possibilities it offers.

The press release goes on to say:

NEHTA invites all interested to participate in a series of three Connectathons, with the first scheduled for February 2016.

Here’s some additional provisional details about the first connectathon:

  • It’s planned to be in Brisbane Feb 10/11
  • It’ll be held in association with HL7 Australia, and in addition to the CSIRO Ontoserver, the HL7 Australia Terminology Server will be part of the connectathon. Other terminology services may also be represented
  • Attendance is open to any software development team that produces healthcare applications that run in Australia (ISVs, jurisdictions, etc)
  • The technical focus will be the ValueSet and Concept Map resources, and the Value Set Expansion, Validation, and Translation operations
  • I don’t think there’ll be any charge for attending the connectathon
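As a taste of the ValueSet expansion operation mentioned above, here’s a minimal sketch of building an $expand request with a text filter (the kind of call behind type-ahead data capture). The server URL and value set id are hypothetical:

```python
from urllib.parse import urlencode

BASE = "http://example.org/fhir"  # hypothetical terminology server

def expansion_url(valueset_id, filter_text=None):
    """Build the URL for the ValueSet $expand operation."""
    query = "?" + urlencode({"filter": filter_text}) if filter_text else ""
    return BASE + "/ValueSet/" + valueset_id + "/$expand" + query

print(expansion_url("clinical-findings", "asthma"))
# http://example.org/fhir/ValueSet/clinical-findings/$expand?filter=asthma
```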

The connectathons are a key opportunity for vendors – large and small – to learn

  • what terminology services can do
  • what deployable terminology service solutions exist, including open source ones
  • why making use of them will be a key strategic requirement to make their customers happy and keep up with the market

Note that the connectathon details are still subject to change.

Preparing for the Australia #FHIR Connectathon

It’s 10 days or so until the Australian FHIR Connectathon, which is on Friday. This post is to help people who are preparing for that connectathon. There are 3 tracks at the Australian Connectathon:

Track 1: Patient resource (Introductory)

This track serves as the simple introductory task for anyone who hasn’t been to a connectathon before, though previous attendees will find it useful for extending their experience and knowledge. The patient scenario is to write a client that can follow this simple sequence:

  • Search for a patient
  • Get a selected patient’s full details
  • Edit and Update the patient details
  • Create a new patient record

Or, alternatively, to write a server that is able to support some or all of these functions.

This is useful because the same patterns and problems apply to all the other resources, and very nearly everyone has to implement the patient resource.
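The four steps above map directly onto REST calls. This sketch just builds the requests (against a hypothetical base URL) without sending them:

```python
from urllib.parse import urlencode

BASE = "http://example.org/fhir"  # hypothetical connectathon server

def search_patients(family):
    """Step 1: search - GET [base]/Patient?family=..."""
    return BASE + "/Patient?" + urlencode({"family": family})

def read_patient(pid):
    """Step 2: full details - GET [base]/Patient/[id]"""
    return BASE + "/Patient/" + pid

def update_request(pid):
    """Step 3: update - PUT the edited resource back to [base]/Patient/[id]"""
    return ("PUT", BASE + "/Patient/" + pid)

def create_request():
    """Step 4: create - POST the new resource to [base]/Patient"""
    return ("POST", BASE + "/Patient")

print(search_patients("Smith"))  # http://example.org/fhir/Patient?family=Smith
```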

If you’re writing a client, our experience is that your minimum preparation is to start the day with a functioning development environment of your choice – to be able to develop and execute. If you don’t have that set up, you can lose most of the day just getting that done. If you’re writing a server, then the minimum functionality is to have a working web server that you know how to extend.

Beyond the ability to develop and execute, any additional work you do beforehand on these scenarios means that on the day you’ll get further into the scenario, and you’ll get that much more out of it.

Track technical lead: Grahame Grieve

Track 2: Smart on FHIR

This track is more advanced; it uses more functionality and makes deeper use of the functionality that FHIR provides, and adds to this additional context and security related content. For further information about this track, see the Chicago Connectathon Details

Track technical lead: Josh Mandel

Track 3: Clinical Connectathon

The first 2 tracks are distinctively developer tracks – to participate, you really need to be able to develop product (which is not the same as “being a developer”). Still, there are many users of interoperability specifications who are interested in how FHIR works, and these participants have as much to gain from a hands on experience learning FHIR as developers do – and the FHIR specification and community have just as much to learn from them too. With this in mind, the FHIR core team is working towards a connectathon experience that is focused on the end-user experience with FHIR. We held our first “Clinical Connectathon” in Chicago – you can read the summary report from it.

The 3rd track will be a follow up to the Chicago connectathon. Participants in this track will use tools provided by the core team to match the capabilities of the FHIR specification against the requirements and tasks of routine clinical workflow. There’s no preparation needed, except to turn up with a working laptop (or even tablet) that has the latest version of your favourite web browser installed (no support for old versions of IE).

Participants should note that this whole clinical track is a work in progress – it needs mature tooling from the core team, and we are still working towards that goal. This connectathon will be exercising the tooling that supports it as much as it’s going to be exercising clinical scenarios against the FHIR specification.

Stream clinical lead: Stephen Chu. Stream technical lead: David Hay

#FHIR Updates

Several FHIR related updates:

Just a note in response to a question from an implementer: we are going through a period of making many substantial changes to the specification in response to user feedback. Right now, the test server (http://fhir-dev.healthintersections.com.au) is far behind – that’s work for next month. This doesn’t affect the DSTU test server (http://fhir.healthintersections.com.au)

p.s. someone asked why I put the Hash tag (#FHIR) in my blog post headings – that’s because I can’t see how to get my blog post auto-tweeter to add the # all by itself (and I don’t want to write my own)

HL7 Australia #FHIR Forum and Connectathon

On Thursday & Friday 6-7 November 2014, HL7 Australia is holding a FHIR Forum and Connectathon in Melbourne.

Day 1 is focused on education:

Keynote: FHIR in context … a step forward for patients Andrew Yap, Alfred Hospital Melbourne
FHIR – A US perspective David McCallie, CMIO, Cerner
Implementing FHIR for Australian GP Systems Brett Esler, Oridashi
FHIR / openEHR collaboration Heather Leslie, Ocean
FHIR & the Telstra eHAAS design Terry Roach, Capsicum / Telstra
Introduction to SMART on FHIR Josh Mandel, Smart / Boston Childrens
Using Terminologies with FHIR Michael Lawley, CSIRO
Using FHIR in new and unexpected ways – actually including the Patient in the system Brian Postlethwaite, Telstra (DCA)
Clinical records using FHIR David Hay, Orion Healthcare
Panel: What are the prospects for FHIR Adoption in Australia?

  • Grahame Grieve, Health Intersections (FHIR Project)
  • Richard Dixon Hughes, DH4 (Standards)
  • Tim Blake, Semantic Consulting / DOH (Government)
  • Peter Young, Telstra  – DCA (Industry)
  • Malcolm Pradhan, Alcidion (Clinical)


I’m really pleased about this program: it’s a great line-up of speakers from Australia and New Zealand talking about what they’re actually doing with FHIR. Also, I’m really pleased to welcome David McCallie, the CMIO of Cerner, who’ll be joining us from the USA by video to discuss Cerner’s plans for FHIR and the broader prospects for the adoption of FHIR in the USA. Finally, we’re really lucky and extremely pleased to have Josh Mandel from the Boston Children’s Hospital Informatics Program present. Josh will be talking about SMART on FHIR, and describing how it works as an EHR extensibility framework.

On Day 2, we’ll be holding a connectathon. We’ll have 3 streams of activity:

  • Basic Patient Stream – this is suitable for any developer with no prior experience of FHIR necessary – all you need is a working development environment of your choice
  • SMART on FHIR – this is for EHR vendors who want to experiment with using SMART on FHIR as a plug-in framework for their system, or for anyone who’s interested in writing an EHR plug-in – as many clinical departments will be
  • Clinical Connectathon – this is for non-developers who still want hands on experience with FHIR – use the clinical connectathon tools to learn how real world clinical cases are represented in FHIR resources

I hope to see all of you there. To register, go to www.hl7.org.au, or you can see the formal program announcement.

p.s. it doesn’t say so on the program, but there’ll be a conference dinner on the Thursday night.

Question: NEHTA CDA & GP referrals

Question

Is there any example of NEHTA compliant CDA document that I can look at from a perspective of a GP referral form ( http://nhv.org.au/uploads/cms/files/VIC%20-%20GP%20Referral%20(March%202013).rtf )? Is there a tool that can be used to navigate and generate the CDA from a HL7 v2 message?

Answer

There’s been a lot of discussion over the last decade or so about creating a CDA document for these kinds of referral forms. I saw a pretty near complete set of functional requirements at one point. But for various reasons, the project to build this has never got any funding, either as a NEHTA project or a Standards Australia project (it’s actually been on the IT-14-6-6 project list for a number of years, mostly with my name on it).

So right now, there’s no NEHTA compliant document. As a matter of fact, I don’t know of anything quite like that from any of the national programs, though no doubt one of my readers will – please comment if you do. There is a project investigating this in the US national program (the S&I Framework), but they’re not using CDA.

On the other part of the question, no, unfortunately not. NEHTA provides both C# and Java libraries that implement the NEHTA described CDA documents, but converting from a v2 message to a CDA document is an exercise left to the implementer. That’s primarily because there’s so much variability between v2 messages that there’s no safe way to write a single converter.

I tried to do that with just AS 4700.2 messages, which are much more constrained than the general case, and it wasn’t really successful; the PITUS project is working on the fundamental alignment needed to get it right in the future.

The PCEHR Review, and the “Minimum Composite of Records” #2

This post is a follow up to a previous post about the PCEHR review, where I promised to talk about medications coding. The PCEHR review quotes Deloittes on this:

The existing Australian Medications Terminologies (AMT) should be expanded to include a set of over the counter (OTC) medicines, and the Systematised Nomenclature of Medicine for Australia (SNOMED CT-AU) should become universal to promote the use of a nationally consistent language when recording and exchanging health information.

Then it says:

Currently there are multiple sources of medication lists available to the PCEHR with varying levels of clinical utility and functionality. From some sources there is an image of the current medication list, from some sources the current medication list is available as text, from some sources the information is coded and if the functionality existed would allow for import and export into and out of clinical systems as well as transmission by secure messaging from health care provider to health care provider.

Note: I’m really not sure what “there is an image of the current medication list” means. As a separate choice to “list is available as text”, I guess that implies it’s actually made available as a bitmap (or equivalent). I’ve never seen anything like that, so I have no idea what they mean.

And further:

The NPDR should be expanded to include a set of over the counter (OTC) medicines to improve its utility.

Over the counter medication is essential to detect such issues as poor compliance with Asthma treatment, to show up significant potential side effects with prescription only medicines and to allow for monitoring and support for drug dependent persons. The two main data sources of data are complementary and neither can do the job of the other. The curated current medications list together with adverse events, could be sourced from the GP, Specialist, Hospital or Aged Care Facility clinical information system, the discharge summary public or private is immediately clinically useful and will save time for the clinician on the receiving end.

It is imperative that further work be done on software systems to make the process of import and export and medication curation as seamless as possible to fit in to and streamline current workflow.

My summary:

  • Extend AMT and NPDR to include over-the-counter medicines
  • Work with all the providers to get consistently encoded medicines and medication lists so the medications can be aggregated, either in the PCEHR or in individual systems

Over-the-counter medicines

Well, I guess this means pharmacist-only items (such as Ventolin, which they mention) and doesn’t include supermarket-type things like aspirin or hay-fever medications. I don’t know how realistic this is from a pharmacist workflow perspective (e.g. getting consent, signing the NPDR submission), but let’s (for the sake of argument) assume that it is. That will mean that each OTC product they sell will need to be coded in AMT (once AMT is extended to cover them). I don’t know how realistic this is either – can it be built into the supply chain? Let’s assume that it can be, so that pharmacists can make this work, and that we’ll then be able to add this to the NPDR.

However, there’s a problem – this recommendation appears to assume that the NPDR is already based on AMT. I’ve got a feeling that it’s not. Unfortunately, good information isn’t publicly available. By repute, AMT adoption isn’t going well.

Is that true? What would really be helpful in resolving this question would be to fill this table out:

Coding System   | SHS/ES | eRef/SL | DS | NPDR
----------------|--------|---------|----|-----
AMT v2          |        |         |    |
ICPC 2+         |        |         |    |
MIMS            |        |         |    |
PBS             |        |         |    |
Vendor Specific |        |         |    |
SNOMED CT       |        |         |    |
First Data Bank |        |         |    |
Text only       |        |         |    |

Where each cell contains 3 numbers:

  • Number of systems certified for PCEHR connection
  • Number of documents uploaded that contain codes as specified
  • Number of medications in PCEHR coded accordingly

Some notes about this table:

  • Even just gathering data from the PCEHR to fill this table out would be a significant amount of work – it’s not just running a few sql queries. And even then, it wouldn’t cover non-PCEHR exchange – if it did, it would be even more interesting
  • Although a system may encode medications in one of these coding systems, some medications might not be coded at all since they don’t appear on the list of codes. And some systems encode immunizations differently from medications and some do it the same (ICPC 2+ is used for immunizations by a few systems)
  • Although most medications use codes in production systems, for a variety of reasons many medications in the PCEHR are not coded, they’re just plain text (in fact, other than the NPDR, I think plain text would be the biggest number). I know of several reasons for this:
    • There was no financial penalty for not coding medications at all
    • The PCEHR system returns warnings and/or errors if the medications are not coded the way it supports
    • The PCEHR system is extremely limited in what it does support
    • There’s no systematic way to find out what it does support
    • Trouble shooting failed documents is extremely hard for a variety of reasons
    • There’s a lack of clarity around safety and where liability falls when aggregating
    • note: I don’t claim that this list is complete nor have I assigned priorities here

If we saw the counts for this table, we’d have a pretty good feel for where medication coding is in regard to the PCEHR. And I’m pretty sure that it would show that we have a long way to go before we can get consistent encoding.

Consistently Encoding Medications

Getting consistently encoded medication lists is far more difficult than merely getting alignment on which technical coding system to use. Beyond that, different systems take different approaches to tracking medications as they change. For instance, the standard Discharge Summary specification says that there are two medication lists:

  • Current Medications On discharge: Medications that the subject of care will continue or commence on discharge
  • Ceased Medications: Medications that the subject of care was taking at the start of the healthcare encounter (e.g. on admission), that have been stopped during the encounter or on discharge, and that are not expected to be recommenced

There’s a hole between those two definitions: medications that the patient took during the admission that have residual effect. But many discharge summary systems can’t produce these two lists. Some have 3 lists:

  • Medications at admission
  • Medications during inpatient stay
  • Medications at discharge

Others only track medications prescribed by the hospital, and not those already taken by the patient – or, if they track those, they do so as text. Some systems don’t know which inpatient medications are continuing after discharge or not. Even if the system can track these medications in fully coded detail, there’s no guarantee that the clinical staff will actually code them up if they didn’t prescribe them. Some institutions have medication management systems that track every administration, while others don’t.
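To make the gap concrete, here’s a small sketch (with invented medication names) of mapping a three-list system onto the two lists the Discharge Summary specification defines; the inpatient-only medications fall into the hole between the two definitions:

```python
# Invented medication names; the sets stand in for a hospital system's
# three lists. 'in_the_gap' is the hole described above: inpatient-only
# medications (possibly with residual effect) that fit neither definition.

def to_discharge_summary(admission, inpatient, discharge):
    current_on_discharge = discharge            # continuing or commencing
    ceased = admission - discharge              # taken at start, now stopped
    in_the_gap = inpatient - admission - discharge
    return current_on_discharge, ceased, in_the_gap

current, ceased, gap = to_discharge_summary(
    {"warfarin", "metformin"},       # at admission
    {"metformin", "gentamicin"},     # during the inpatient stay
    {"metformin", "apixaban"},       # at discharge
)
print(sorted(gap))  # ['gentamicin']
```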

Finally, there’s the question of to what degree GPs and specialists record medications that are not relevant to the problem(s) they are interested in (particularly specialists). Humans (or clinicians – I’m not convinced they’re the same thing 😉 ) are good at dealing with degeneracy in these lists, and it saves them a lot of time (mention a medication generically, with no relevant details). Computers are not good at dealing with degeneracy, so in the future, clinical staff will all have to be unerringly precise in their records.

Note that I’ve moved into pure clinical practice at the end there; in order to meet the PCEHR goal, we need to align:

  • Medications coding
  • Medication tracking system functionality
  • Clinical practice

And while we’re at it, we need to jettison the existing data, which will become legacy and no longer useful to all the existing systems.

I’m not holding my breath. The PCEHR review says:

The PCEHR Value Model suggests that of the total gross annual theoretical benefit potential of eHealth in Australia, medication management is the greatest individual driver of benefits ($3.2 billion or 39% of gross benefits).

I’m not convinced that will be a net profit 🙁

p.s. does anyone know where the PCEHR Value Model is published?