It seems there’s a meme running around Australian clinical circles: the coming of e-health, and particularly the deluge of mobile apps just around the corner, means that there needs to be good clinical safety governance. Here’s a good example:
Dr Fernando called for clinical software for personal mobile devices to be regulated now rather than “waiting and letting the courts decide”. “I don’t want to constrain the use of what I think will be a potentially wonderful medical device, but I do think there is some guidance required and litigation is not the way to provide guidance,” she told Australian Doctor. Dr Fernando called on the TGA and the Australian Communications and Media Authority to work together with professional organisations to “list, evaluate and classify” software.
The thing is, this whole clinical safety governance thing is pretty arcane in the context of software. What does clinical safety actually mean? What would regulation actually achieve? I don’t think there are any easy answers, and I thought I’d illustrate this using a safety case that we can all get our heads around: email.
The problem is actually easy to describe, and I’m sure most of us have seen it in one way or another: when you hit send, where is your email going? Here are two cases:
- When I look at an email that I have sent in Gmail and reply to it, the new message goes to the person(s) I sent the original to. When I do the same in Outlook, the new message goes to me.
- When I respond to an email from a mailing list, some lists configure the mail to respond to the list by default, and some to respond to the sender by default.
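To make the second case concrete, here’s a minimal Python sketch – standard library only, with hypothetical addresses – of how a reply target gets resolved once a list server has rewritten the Reply-To: header:

```python
from email.message import EmailMessage

# A hypothetical message as a list server might rewrite it:
# the list sets Reply-To to itself, overriding the sender.
msg = EmailMessage()
msg["From"] = "bob@example.com"
msg["To"] = "list@example.org"
msg["Reply-To"] = "list@example.org"

# Per the email standard, a plain "reply" goes to Reply-To if
# present, and only falls back to From when it isn't.
reply_target = msg.get("Reply-To", msg["From"])
print(reply_target)  # list@example.org -- NOT the person who wrote it
```

Whether your reply goes to one person or the whole list is decided by a header most users never see.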
What are the safety risks here?
In the first case, I may think that I’ve sent a response to someone when actually I haven’t. Of course, I’ll notice almost immediately – unless I’m in a hurry and rush off before checking for new email. The second case, on the other hand, is both common and serious. It leads to one of two problems:
- You respond to a message, thinking that you’re sending it to everyone, but you accidentally send it only to the person you are replying directly to. Sometimes you realise later, but only once the consequences of mis-sending the email become clear.
- You respond to a message, thinking that you’re sending it just to the person you are responding to, as a side note, but you accidentally send it to everyone.
The second case can be quite serious – in fact, I’m sure it’s cost lives in the wrong place. It seems to be a regular occurrence on email lists where the default reply is to the list: a passionate argument gets going, and someone sends a message worded along the lines of “gee, Bob, I don’t know why you bother, Alice is a total moron”. No amount of “Eve desperately wishes to withdraw that last email, which was sent, written and conceived entirely in error, and Eve would like to assure you that she never meant a word of it” emails is going to undo the damage that’s been done.
So we have three problems with escalating levels of seriousness, but quite different manifestations of risk. But how do you quantify these risks and compare them to each other? And there’s a more insidious level of risk that’s even more intangible: the cumulative effect of these problems is to leave people wary of email, and to not use/rely on it too much (and, of course, I’ve only barely begun to scratch the surface of that problem, but it’s enough just to deal with these specific things for now).
I think that this is a good case for considering clinical safety not because I particularly think that clinicians should use email – though it is widely used – but because any e-health solution is going to involve messaging tools, and email is an instructive case. You can rewrite the use cases above around clinical messaging – I’ll leave that as an exercise for the reader.
As far as I am concerned, there are three root causes underlying these problems:
- inconsistent behaviour between software products (the second case is complicated by inconsistent behaviour between applications, though all the ones I use are now consistent on this case)
- (in the second case) the email standard allows emails to specify a reply-to address – usually From: and Reply-To: are the same, but not always
- people send lots of emails (I send on average >100 emails per day)
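The second root cause – From: and Reply-To: diverging – is at least mechanically detectable. Here’s a minimal sketch (standard library only, hypothetical addresses, hypothetical function name) of a check a mail client could use to flag it:

```python
from email.message import EmailMessage
from email.utils import parseaddr

def reply_to_diverges(msg):
    """True when a plain reply would go somewhere other than the
    apparent sender -- the situation worth flagging in the UI."""
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))
    return reply_addr.lower() != from_addr.lower()

munged = EmailMessage()
munged["From"] = "Bob <bob@example.com>"
munged["Reply-To"] = "list@example.org"   # a list server rewrote it

plain = EmailMessage()
plain["From"] = "Bob <bob@example.com>"

print(reply_to_diverges(munged))  # True
print(reply_to_diverges(plain))   # False
```

The check is trivial; the hard part – as the controls below show – is getting the user to notice the flag.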
If we’re going to do something useful about email safety through governance and regulation, then clearly this is an important area to address: it costs us efficiency and lost opportunities, produces the occasional social disaster, and creates a background loss of trust in the overall usability and reliability of the system.
What controls can we put in place to mitigate the danger here?
- The most obvious thing to try is to nag the user, asking if they’re sure. But this won’t help. Users quickly come to click “yes” as a pure reflex action, and even if they realise it’s wrong, they’ll still probably hit “yes” (a problem I have in different contexts where the software nags me about things I do all the time). Making the “yes” and “no” buttons appear in random places will annoy a user no end, but won’t make them think.
- A better approach is to implement an automatic 5-minute delay between hitting send and the actual sending. During this period, the message can be genuinely recalled. For various workflow and psychological reasons, users are more likely to realise that there’s a problem in those first 5 minutes than later (at least before the evidence of the mistake arises). Obviously this will only catch a portion of errors. (Note that this solution is implemented in several clinical reporting packages – reports sit in a holding tray for a configurable period of time before being distributed.)
- It’d be nice to show the destination address in a different colour depending on whether it’s yourself, a single other person, or a whole list. Unfortunately, while this seems innately obvious to a person, it’s not at all obvious to software – addresses all look the same, and how would it know? But you could reliably show when the From: and Reply-To: addresses differ – that’d be something.
- Gee, the rules so far haven’t achieved much – let’s just impose firewalls across all the email system ports and shut the entire system down. Not such a good idea? Alright, let’s try something else:
- It would be good to insist that all messaging systems work the same way, so that people only have to get used to one way of doing things, not more
- Similarly it would be good to insist that all email lists work the same way
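The holding-tray control is simple enough to sketch. Here’s a minimal Python illustration – all names are hypothetical, and a real implementation would persist the queue and run the flush on a timer rather than on demand:

```python
import time
from dataclasses import dataclass, field

HOLD_SECONDS = 5 * 60  # the configurable holding period

@dataclass
class OutboxItem:
    to: str
    body: str
    queued_at: float = field(default_factory=time.monotonic)
    recalled: bool = False

class HoldingTray:
    """Messages sit here until the delay expires; until then they
    can be genuinely recalled."""

    def __init__(self, delay=HOLD_SECONDS):
        self.delay = delay
        self.items = []

    def send(self, to, body):
        item = OutboxItem(to, body)
        self.items.append(item)
        return item

    def recall(self, item):
        # Succeeds only while the message is still in the tray.
        if not item.recalled and time.monotonic() - item.queued_at < self.delay:
            item.recalled = True
            return True
        return False

    def flush(self, transport):
        """Actually deliver anything whose hold has expired."""
        now = time.monotonic()
        remaining = []
        for item in self.items:
            if item.recalled:
                continue
            if now - item.queued_at >= self.delay:
                transport(item)  # hand over for real delivery
            else:
                remaining.append(item)
        self.items = remaining
```

The design choice worth noting: recall is a genuine delete from a local queue, not the after-the-fact “recall request” some mail systems send, which the recipient can simply ignore.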
So that’s my list of “controls” or mitigations: one that doesn’t work, one that mitigates a little on a percentage basis but has an obvious price when you’re in a hurry, one that leads to a useful but still limited idea that hasn’t been implemented, and two regulation ideas. If you have other ideas, feel free to make them in the comments. (I didn’t bother listing safety training – real users don’t show up for training, or don’t bother paying attention, and anyway, training would never successfully address such a complex issue.)
Guess we’ll just have to regulate. We clearly can’t trust the software developers to have any idea what they are doing; they sure never think things through or do anything like usability and safety assessments. So let’s just go ahead and make two new regulations that all email systems have to follow:
- When a user chooses to reply to an email that they themselves sent, the reply should go to the destination address of the email
- All email lists must henceforth fix the Reply-To: header to the Reply-To: header of the original email
Sounds good… no. I’m sure I just made anyone who understands email and/or software development blow up. These rules are utterly ridiculous. There are so many problems with them – they mean well, but they just won’t work. And by the time you’ve finished scoping and qualifying them enough that they’re actually realistic, they won’t mean anything anymore. Just a short list of problems with them:
- What’s email? Do you want to include facebook messaging? Internal system email? Or external email? (Outcome: you can’t define the scope)
- What’s your governance scope? Do you have clean-room governance where all the systems inside the sandbox are under your governance, and all the systems outside your governance are outside the box? (Answer: no)
- Software packages actually work differently for reasons – both commercial and utility-related. You can legislate against these factors, but you can’t make them actually go away
- The actual way that email addressing and routing works is far more complex than those rules describe. There are all sorts of legitimate reasons for the complexity, and the rules as written would accidentally prevent many valid, safe uses of email
- They still don’t actually fix the problem – just make it less likely
That’ll do – I could go on, but I think that’s enough to make the point. Regulation won’t work. But even worse, as well as not working, regulations are always written – have to be written – for today’s problems. And they require specific solutions for today’s problems that end up preventing tomorrow’s solutions. (Email too open to abuse? Build a social network and move your messaging in there…)
So if regulation isn’t the answer, then what? Some say, trust the developers
Trust the developers
Naturally, I’m drawn to this. I’m one myself, and I know how hard all the developers I’ve worked with work to develop safe useful software. We think about it (obsess about it). And most of us get judged on our success in this regard in a particularly harsh environment – the market.
On the other hand, I talk to users too. And so I know the gap between acceptable behaviour, and the things vendors/developers actually do. One story: one vendor had a system that only had one first name field. Users routinely entered middle names in the first name field. The vendor was required to write an interface that only accepted the actual given name in the first name field of the message. Solution: write a script to just delete all the middle names from the production clinical database. Enough said….
The safety risks are genuine, but we can’t fully mitigate them, we can’t legislate them away, and we can’t trust the software developers. What to do…?
Ideas are welcome in the comments as well, but what I think we should do is:
- Set up an e-health adverse clinical event reporting authority (like AusCERT)
- Require every vendor selling clinical software in Australia to sign up for event notifications from the authority
- create a new sub-committee of IT-14 (the Australian Health Informatics standards committee) that tracks the notifications and develops handbooks and/or standards in response to the reported clinical events (yes, I know this is closing the door after the horse has bolted, but this is how airlines work – and look, I’ve finally succumbed and made an airline comparison)