Monthly Archives: August 2011

Question: how to use ISO 21090 with ISO 11179


We are trying to understand what metadata about the type needs to be stored in an ISO 11179 metadata repository, beyond simply naming or asserting that a data element is supposed to use a particular 21090 type when being exchanged.

Well, I have an implementation of ISO 21090, and an implementation of an ISO 11179 based registry – though as it’s an end-user focused registry, it didn’t try to be formally compliant. This table summarises the extra information I added to the registry for each data type:

  • ST: min and max length, regular expression
  • ED: media type list
  • INT: min and max values
  • PQ: min and max values and precision, fixed unit or dimension, whether uncertainty is allowed
  • TS: timezone? time part (n/a, optional or mandatory), min/max allowed times relative to now
  • EN: type – person, string, organisation (corresponds to the inbuilt flavors)
  • AD: if I had implemented the Australia Post interface, I’d add a flag for whether the address had to be valid according to Australia Post
  • TEL: email or tel? (I don’t insist on a formally correct tel: url, btw)

And then, of course, there’s CD.

I don’t represent CD directly in the registry. It’s… too powerful… for my users. Instead, they pick a value set. The value set specifies the binding to a code system (as I add support for them), whether the binding is closed or open, and whether translations are to be done where possible. CD is involved behind the scenes.
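To illustrate how this kind of constraint metadata gets used, here’s a minimal Python sketch of validating a value against an ST registry entry. The field names and constraint values are hypothetical, not taken from my registry:

```python
import re

# Hypothetical registry record for an ST-typed data element,
# carrying the extra metadata from the table above.
st_constraints = {
    "min_length": 1,
    "max_length": 40,
    "regex": r"^[A-Za-z ,.'-]+$",
}

def check_st(value: str, constraints: dict) -> bool:
    """Validate a string against the ST constraints recorded in the registry."""
    return (constraints["min_length"] <= len(value) <= constraints["max_length"]
            and re.match(constraints["regex"], value) is not None)
```

The other types in the table follow the same pattern: the registry holds the constraint values, and a generic checker per data type applies them.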


Question: How to code Substance Administration in CDA?


I have a question about storing Substance Administration information using the CDA R-MIM.
I found that there is only one element under Substance Administration to store dosage, called doseQuantity, which is not enough to store the full information for a prescribed medication. For example, in the clinical statement “Neomycin 500 mg 1 tablet b.i.d.”, the dosage should be 500 mg and 1 tablet. How can I deal with both? Or do I only record “1 tablet”? And what if the clinical statement only looks like “Neomycin 500 mg b.i.d.”? Should “500 mg” be recorded?

The answer to this question has a guest author: Lloyd McKenzie, all-round v3 expert and past co-chair of the Pharmacy committee at HL7 (thanks Lloyd).

Dosage in v3 is a complex combination of several attributes:

  • SubstanceAdministration.effectiveTime conveys information such as start and end date, as well as frequency and things like “Institution Specified”.  (GTS hides all sorts of wonderful things)
  • LabeledDrug.code and Material.code identify what drug is being used.  Which you populate depends on what you have (UPC and similar manufacturer-assigned ids, NDC codes and other drug codes in LabeledDrug.code; Material.code for other things like radiation, food, etc.)
  • SubstanceAdministration.administrationUnitCode allows you to convey special dosages like “scoop”, “puff”, etc. which are not the form of the drug, nor discrete measurements
  • doseQuantity allows you to convey the quantity, and when the quantity is a measured amount, the unit (mg, ml, etc.)
  • rateQuantity allows you to convey the period of time over which a single dose is administered for IVs, etc.

Unfortunately there are a few key RIM structures that CDA doesn’t expose:

  • LabeledDrug.formCode.  That’s where something like “tablet” would appear.
  • Contained ingredients with a “quantity” attribute.  That’s where strength would go.

Given that you don’t have either, you’ve got two choices:

  • Adopt one or both as RIM extensions to CDA
  • Encode them as part of the drug code/drug name

So in your examples, you’d have the following:

 Neomycin 500mg tablet:
 bid: SubstanceAdministration.effectiveTime[PIVL_TS].period = 12 h
 1: SubstanceAdministration.doseQuantity

If you really want to have “500mg” and “tablet” as separate attributes, you’ll have to create the necessary RIM structures.  (Refer to the pharmacy model for indications of how.)

If you don’t have the tablet part, you also have the option of the following:

 bid: SubstanceAdministration.effectiveTime[PIVL_TS].period = 12 h
 500 mg: SubstanceAdministration.doseQuantity
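Put together, the first mapping might look something like this in CDA XML. This is a sketch only – the moodCode, the elided code values, and the xsi namespace binding are assumptions that depend on the implementation guide in use:

```xml
<substanceAdministration classCode="SBADM" moodCode="INT">
  <!-- b.i.d.: a dose every 12 hours -->
  <effectiveTime xsi:type="PIVL_TS">
    <period value="12" unit="h"/>
  </effectiveTime>
  <!-- "1": one administration unit (a tablet, per the drug code) -->
  <doseQuantity value="1"/>
  <consumable>
    <manufacturedProduct>
      <manufacturedLabeledDrug>
        <!-- "Neomycin 500 mg tablet" encoded as part of the drug code -->
        <code code="..." codeSystem="..." displayName="Neomycin 500 mg tablet"/>
      </manufacturedLabeledDrug>
    </manufacturedProduct>
  </consumable>
</substanceAdministration>
```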

The pharmacy committee has put together a really handy document that explains how to encode a variety of dosage instructions, including more complex tapering doses, alternating doses, etc.  There will be a few situations where CDA isn’t fully expressive enough, but it will at least give you a starting point.  You can find the current draft of the document here:

Question: where does IVL_TS.value come from?

The abstract IVL&lt;TS&gt; type has the interval properties “low”, “high”, “width” and “center”. So why does the IVL_TS type (the XML rendition of the abstract, in-principle IVL&lt;TS&gt; data type) have a value attribute in it – where does that come from?


The IVL<T> type is defined as “A set of consecutive values of an ordered base data type”. As such, it has the following properties:

 low  the low limit
 high  the high limit
 width  the difference between the high and low boundaries
 center  the arithmetic mean of the IVL ((low + high) / 2)
 lowClosed  specifies whether low is included in the IVL (closed) or excluded from it (open)
 highClosed  specifies whether high is included in the IVL (closed) or excluded from it (open)

The low, high, width and center properties are related to each other: if you know any two of them, then you know all of them. But you might only know one of them. When this in-principle definition is represented in XML in IVL_TS, the format is:

<!-- type IVL -->
   nullFlavor = (...)
   operator = (I | E | A | H | P) : I
   Content: ( low, high, center, width )

Operator relates to some other aspect that we’ll ignore here. low and high carry an “inclusive” attribute for lowClosed and highClosed. So in the XML, an IVL&lt;TS&gt; looks like this:

   <low value='200012041000'/>
   <high value='200012041030'/>
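The mutual dependency between low, high, width and center can be sketched in Python. This is a toy using plain numbers in place of time values – an illustration of the arithmetic, not part of any HL7 specification:

```python
def complete_interval(low=None, high=None, width=None, center=None):
    """Derive the remaining IVL properties from any two of
    low, high, width, center (numeric stand-ins for times)."""
    # repeatedly apply the defining relations until everything known is filled in
    for _ in range(3):
        if low is None and high is not None and width is not None:
            low = high - width
        if high is None and low is not None and width is not None:
            high = low + width
        if width is None and low is not None and high is not None:
            width = high - low
        if low is None and center is not None and width is not None:
            low = center - width / 2
        if high is None and center is not None and width is not None:
            high = center + width / 2
        if center is None and low is not None and high is not None:
            center = (low + high) / 2
    return low, high, width, center
```

Knowing low and width yields the same interval as knowing center and width, or low and high – any two properties determine the other two.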

 The value attribute

In an instance, this form is also allowed:

<effectiveTime value="20001204"/>

Where did this come from? Well, the data types define the notion of “promotion”. The concept is simple – you can “promote” 2000-12-04 into an interval that runs from the start of the day (midnight) through to the end of the day (every instant up to but not including the next midnight). So if you converted 2000-12-04 to an interval, it would look like this:

   <low value='200012040000' inclusive="true"/>
   <high value='200012050000' inclusive="false"/>

And since you can do that, why not let that go in the instance? Anyone encountering a simple time value in place of an interval can just automatically expand it into an interval.
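Here’s a sketch of that promotion in Python, for a day-precision TS value. This is a simplified illustration – real TS values can carry other precisions and timezones, which this ignores:

```python
from datetime import datetime, timedelta

def promote_day_to_interval(ts: str) -> dict:
    """Promote a day-precision TS value (YYYYMMDD) to the interval
    [midnight, next midnight) - low inclusive, high exclusive."""
    day = datetime.strptime(ts, "%Y%m%d")
    low = day.strftime("%Y%m%d%H%M")                           # inclusive lower bound
    high = (day + timedelta(days=1)).strftime("%Y%m%d%H%M")    # exclusive upper bound
    return {"low": (low, True), "high": (high, False)}
```

Promoting "20001204" gives exactly the low/high pair shown above.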

The XML definition of IVL_TS doesn’t mention the value attribute, because it’s sort of not really the IVL_TS type – it’s the TS type being used in its place, which is allowed.

However, because you have to describe this in schema, IVL_TS extends SXCM_TS (defined to support the full mighty and terrifying GTS), which in turn extends TS. So in the XML R1 data types used in CDA, IVL_TS extends TS, and you can put in either a proper interval, or just a value.


How much should we engage with a standard?

This morning I was on a conference call with Nicholas Oughtibridge from the NHS, when he briefly outlined a series of levels at which an organisation can engage with standards. I thought his overview was well worth passing on, and with Nicholas’ permission, I share his document below. The document is shared under the UK Open Government Licence. Note that this requires attribution with “Contains public sector information licensed under the UK Open Government Licence v1.0.” (consider that attributed).

Here is Nicholas’s document (thanks very much). I’ll comment a little below.


Levels of engagement framework

For each standard the contributors to the healthcare system in an enterprise or jurisdiction (referred to as a contributor in the remainder of this document) can choose their level of engagement in its design, development and maintenance.  Often standards come with associated products, where level of engagement is also a choice.  The level of engagement continuum ranges from avoidance through to control of all aspects of design, development and maintenance.  For simplicity, the continuum is broken down into the five tranches of control, leadership, involvement, acceptance and avoidance.  These five tranches are described in a framework below with illustrative examples.

There is a risk that multiple contributors will seek to control a particular area of standardisation.  This risk needs to be managed with a single governance model with representation for all contributors.


Control

The highest level of engagement is control.  For this level, the contributor values the standard sufficiently to want to control the standard.  Typically a controlling contributor will fund and manage the design, development and maintenance of the standard so that it meets all of their requirements within the context of the healthcare system in an enterprise or jurisdiction.  This will often extend to funding the implementation of the standard throughout the healthcare system.  Compliance is often a combination of contractual, financial, regulatory and technical.

Adopting a control level of engagement risks being unable or unwilling to recognise and accommodate the requirements of other contributors to healthcare systems in the enterprise, jurisdiction or internationally.


Leadership

Where it is not possible, practical or desirable to have a controlling level of engagement it is often desirable to lead other organisations so that standards are designed, developed and maintained in the interests of the contributor.  A contributor leading that design, development and maintenance will often fund their contribution, but probably not the whole cost of the standard.

Leadership ensures that the requirements of the enterprise or jurisdiction are accommodated by the standard or product so there is minimal need to tailor the standard or its products for deployment in the enterprise or jurisdiction’s healthcare system.  Leadership enables a broader range of stakeholders to contribute to the design, development and maintenance of the standard or product, sharing the burden without compromising utility.

Co-production with other parties reduces the cost of development to the contributor and broadens the pool of expertise to exploit.  The leadership level of engagement controls the quality of the standard or product.  Simply engaging without leadership risks the standard or product becoming unsuitable for use in the enterprise or jurisdiction.


Involvement

For some standards or products, leadership or control is available from other trusted contributors, organisations or jurisdictions.  Where the contributor has confidence in that leading or controlling party but wishes to exert a level of influence, either as a body of expertise or to achieve a national objective, it is appropriate to be actively engaged without leading.

Involvement can take many forms, including contributing to telephone conference calls, participation in workshops, undertaking specific work packages, developing software and hosting a secretariat service.

Involvement without leadership is also appropriate where the contributor has no ambition to improve the standard or product so that it better fits the enterprise or jurisdiction’s Healthcare System.

As with leadership, co-production with other parties reduces the cost of development to the contributor and broadens the pool of expertise to exploit.  Engaging without leadership risks the standard or product becoming unsuitable for use in the enterprise or jurisdiction.

Passive acceptance

For many standards the enterprise or jurisdiction has had no direct influence, despite depending on the standard for operations.  For many standards such as Simple Mail Transfer Protocol for e-mail or Hypertext Transfer Protocol this is not an issue.  For some however it is and it is appropriate to seek influence and possibly to strive to lead on-going development.


Avoidance

There are some standards which it is appropriate for the enterprise or jurisdiction not to adopt, for various reasons: incompatibility with legislation within a market or jurisdiction, incompatibility with architectural choices, or fulfilling a use case which is not appropriate in the enterprise or jurisdiction.

I think that’s a very governmental perspective – but it’s still very clearly laid out, and I’m very happy to be able to share it here.

What about vendors? I think that vendors get to go for control or leadership a lot less often than governments (though perhaps they achieve it more often when they do). Most of us choose between involvement and passive acceptance (or reluctant dread – don’t forget that option). But a vendor’s choice is about more than control – it’s also about the people:

  • establishing bona-fides to prospective purchasers
  • being informed
  • upskilling & retaining key staff
  • keeping track of industry trends

Academics and consultants have their own model – as I’m finding out (being a consultant now). What about you? why do you participate (and which model do you follow with HL7?)

Question: CD data type and value sets


How do value set bindings get validated in an HL7 message? I notice that the ISO 21090 CD data type does not have attributes to support value set binding.

The differentiation between code systems and value sets is confusing to many people. The official HL7 definition is

code system a managed collection of concept identifiers, usually codes, but sometimes more complex sets of rules and references, optionally including additional representations (which may or may not be identifiers of the concepts)
value set a uniquely identifiable set of valid concept identifiers, where any concept identifier in a coded element can be tested to determine whether it is a member of the Value Set.

It’s no wonder people are confused – this demonstrates a problem with getting standards communities to do definitions: the definition will be successively honed for correctness again and again, at the price of increasing lack of clarity.

The easiest way to understand the difference is that both code systems and value sets define lists of codes. The difference is that code systems define the codes themselves, while value sets refer to codes defined elsewhere.

CD data type

The CD data type includes two attributes for these two concepts:

codeSystem what defines the syntax and meaning of the code value. This value matters – you look at it if you want to know what the code means
valueSet a reference to a set of rules over what codes are allowed to be chosen here. This value doesn’t really matter – it doesn’t change what the code means

Actually it turns out that the claim that the value set doesn’t change what the code means is rather less true in practice than in theory. Take this (crude) example, based on Snomed-CT, where you have two different value sets:

Value Set A

  • 163497009: Obstetric examination
  • 271992004: Obstetric investigation
  • 108108009: Obstetrics manipulation

Value Set B

  • 163497009: Obstetric examination
  • 271992004: Obstetric investigation
  • 108106008: Obstetrics destructive procedure

Although the meaning of 271992004 is theoretically the same in both lists, in practice what the user means when they pick 271992004 differs between the two lists. The valueSet attribute is defined on CD to allow really, really sophisticated users to indulge in a fantasy that they can reverse the damage that bad value sets cause (because those value sets above are certainly bad ones). I always strongly recommend that people not use valueSet.

Case in Question

OrganizationA has a gender value set with the values (M – male, F – female, U – unknown). ApplicationA within organizationA has a constrained gender value set with only the values (M, F). In this scenario, how will a terminology server be able to validate if applicationA sends a value like U, which is supposed to be invalid for applicationA?

So the first – (M – male, F – female, U – unknown) – is actually a code system. Since it’s a local code system, the site allocates its own OID, which we will call 1.2.[x].1. When application A represents a code for female, it looks like this:

<code code="F" codeSystem="1.2.[x].1"/>

This doesn’t capture the fact that application A can only use M or F. If you really want to do that – in spite of my comments above – then you’d have to assign an OID to the value set. We’ll use 1.2.[y].2, which would mean the code value would look like this:

<code code="F" codeSystem="1.2.[x].1" valueSet="1.2.[y].2"/>

A terminology server could leverage the value set to determine whether application A is conforming to its own rules. More useful is for a terminology server to know that something inbound for application A is valid – but for that it can’t rely on the instance; it has to be configured elsewhere.
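In essence, the server-side check is just a membership test against a registry of value set expansions keyed by OID. A minimal Python sketch – the registry structure here is hypothetical:

```python
# Hypothetical registry of value set expansions, keyed by OID.
VALUE_SETS = {
    "1.2.[y].2": {"M", "F"},  # application A's constrained gender value set
}

def code_in_value_set(code: str, value_set_oid: str) -> bool:
    """Return True if the code is a member of the registered value set."""
    members = VALUE_SETS.get(value_set_oid)
    return members is not None and code in members
```

So checking "U" against 1.2.[y].2 fails, flagging that application A has broken its own rules – but only if the server has been configured with the value set in the first place.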

p.s. note that valueSet was only added in data types R2 / ISO 21090, though I’ve seen a few sites pre-adopt it in CDA R2.



Question: Using PV1-3 (“PL: Person Location”)

Question (originally submitted to Keith Boone, but we swap when appropriate):

How is PV1-3 mapped at a granular level for Room, bed, facility, status, type, building and floor? An example would be great. ~ Thanks

Here’s the data structure definition for PL (Person Location):

ID Name Version Data Type Table
1 Point of Care  2.1 IS Point of care
2 Room  2.3 IS Room
3 Bed  2.3 IS Bed
4 Facility  2.3 HD
5 Location Status  2.3 IS Location status
6 Person Location Type  2.3 IS Person location type
7 Building  2.3 IS Building
8 Floor  2.3 IS Floor
9 Location Description  2.3 ST
10 Comprehensive Location Identifier  2.5 EI
11 Assigning Authority for Location  2.5 HD

The accompanying note says that the location identifiers should be thought of in the following order, from the most general to the most specific: facility, building, floor, point of care, room, bed.

The underlying notion of this type is a hospital campus (identified by a facility). Hospital campuses usually have multiple buildings, and the buildings are identified by some combination of ids, codes, or names (usually of benefactors or significant past faculty). If you don’t have multiple buildings, leave it blank – or if the building code doesn’t matter because the other identifying information is unique anyway.

Buildings are usually multi-story, and rooms are usually numbered by the floor (Building A, 4th floor, Ward 2, Room 1, bed 25 – there’s a Room 1 on every floor). If the floor doesn’t matter (because, say, the rooms are uniquely numbered), then leave the floor blank. The facility generally has the physical buildings and floors divided into wards and then rooms. Wards go in the “point of care” (“the nursing station for inpatient locations, or clinic or department, for locations other than inpatient”). Rooms are numbered relative to the ward/floor/building/institution – put this in Room. And rooms usually contain multiple beds.

So the mapping of facility/building/floor/point of care/room/bed is simple in concept, though in practice it can be difficult if the location management scheme in use has extra components, or doesn’t use “ward” or anything that matches it.

Type & Status

The location type:

Code ID Description
C 1 Clinic
D 2 Department
H 3 Home
N 4 Nursing Unit
O 5 Provider’s Office
P 6 Phone

SNF appears to be a US-specific code, because I have no idea what it means.

The location status has no values present, just a definition of “Location (e.g., Bed) status”. I’m guessing it means whether the bed is in use, empty, needing repair, etc. And for PV1-3, that means “in use”. But it’s usually left blank.


The standard provides these examples:

  • Nursing Unit – A nursing unit at Community Hospital: 4 East, room 136, bed B: 4E^136^B^CommunityHospital^^N^^^
  • Home – The patient was treated at his home: ^^^^^H^^^
  • Clinic – A clinic at University Hospitals: Internal Medicine Clinic located in the Briones building, 3rd floor: InternalMedicine^^^UniversityHospitals^^C^Briones^3^
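Mechanically, pulling the components out of PV1-3 is just a split on the component separator. A Python sketch – the field names are my own labels for the components in the table above, and a real parser would also honour the escape and subcomponent separators declared in MSH-2:

```python
# Component names for PL, in positional order (labels are mine).
PL_FIELDS = [
    "point_of_care", "room", "bed", "facility", "location_status",
    "person_location_type", "building", "floor", "location_description",
    "comprehensive_location_identifier", "assigning_authority",
]

def parse_pl(value: str) -> dict:
    """Split a PV1-3 PL value on '^' into named components."""
    parts = value.split("^")
    parts += [""] * (len(PL_FIELDS) - len(parts))  # pad missing trailing components
    return dict(zip(PL_FIELDS, parts))
```

For the nursing unit example, parsing 4E^136^B^CommunityHospital^^N^^^ gives point of care “4E”, room “136”, bed “B”, facility “CommunityHospital” and person location type “N”.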

Announcing a new training course: v2 -> CDA mapping

With the various NEHTA related implementations, and the coming implementation of the pcEHR, there’s a real swell of CDA implementations around the country.

For a variety of reasons, most implementations are producing CDA on the perimeter rather than at the source. From a technical perspective, it’s better and easier to produce CDA at the source (i.e. database), but for a variety of mainly non-technical reasons, it’s easier to produce CDA on an interface engine. In practice, this means translating from v2 messages to CDA documents.

I’m being run off my feet producing/reviewing v2 to CDA mappings, or advising on them. I can’t keep up. So I’m going to run a 2 day training course looking at v2 to CDA mapping generally, and specifically in regard to the published NEHTA CDA specifications. The course will be small – only 14-15 participants, in order to keep the teaching really hands on.

For further details, see the course notes. Also, there’s a Course flyer and application form as PDFs.

The course will be suitable for analysts, programmers, and interface engine operators (whatever you call them!), and also suitable for vendors, jurisdictions, consultants etc. If you fit into one of those categories, I’d love to see you there.


v3 has failed? Summary of comments

My post “HL7 needs a fresh look because V3 has failed” generated a huge number of page hits (already more than any other page), and lots of comments, both private and public.

In this post, I’m going to try and summarize the comments. But before I do that – I made two posts in a point/counterpoint style. The first post (v3 has failed) generated 5 times as many page views as the second (v3 has succeeded), and 10x as many comments. And the second post is the only one I sent anyone a link about (several HL7 mailing lists). Such is life, I guess.

I think that generally, people in the comments were defensive of HL7 because they felt that:

  1. v3 hadn’t failed
  2. v3 was trying to do something impossibly hard anyway
  3. HL7 should be congratulated for having a fresh look task force

With regard to #3 – yes, and that’s a tough ask. With regard to #2 – yes, I thought that too, until I wrote RFH. Then I looked afresh at the v3 specifications, and realized that they make something hard even harder.

As far as v3 failing goes, I said that v3 was a failure because it wasn’t a suitable vehicle, as it is, to take the organization forward. No one really disagreed with that – but there were a number of passionate defenses of the achievements of v3, including:

  • CDA
  • SPL
  • several national programs
  • RIM based EHRs (there are several) or CDRs (clinical data repositories)

Apologies if I missed anything. Of course, a number of people pointed out that the definition of failure is very much in the eye of the beholder. Indeed it is. You can do good things with v3 as it is (as I said myself).

A number of people suggested that HL7 should invest in tooling in order to make the standard successful. I’m very skeptical of that theory. Imagine if OMG started writing software and giving it away to try and make CORBA a success – how could that make any difference? I think that investing in tooling in order to present a better face on an interoperability standard is doubling down. (Please, this is not a criticism of national health programs for their efforts in this direction. Tooling is not a bad thing, but thinking tooling solves the underlying problem is. It’s HL7 that needs to beware of doubling down like this.)

Also, people questioned whether repositioning v3 in terms of technology adoption would help. JSON? HTML 5? IDL? Google Protocol? I don’t know whether that would help or not. It’s an area fraught with difficulty.

Some people think that XML simplification methods (such as greenCDA) will make a big difference – I already talked about that.

There was one particular thread in the comments by Andrew McIntyre / Gunther Schadow that I’ll pick up in a later post: the relationship between v2, v3 and archetypes.



A comparison of v3 and RFH

Klaus Veil asked me for a comparison of v3 and my alternative proposal (RFH), to help people understand the differences. This is a list of some of the differences.

  • the v3 standard is primarily focused on explaining the standard – the process. And with reason – you have to understand the process, from the RIM down, to be able to implement it well. RFH turns that on its head: the focus is on explaining what you have to do to conform to the standard. It’s not that there’s actually less to understand – it’s just a different focus on the order in which you encounter things, and why. (My favorite example of the problem the v3 approach causes is the A_SpatialCoordinate CMET. It contains a coherent description of how the parts map to the RIM, but the parts themselves are not coherent – and it has passed ballot.)
  • the v3 process starts from the RIM – the definitions are in the RIM, you constrain the RIM down by removing optionality and capability to focus on a particular use case, and the instances that are exchanged are RIM instances. In this way, the RIM stands between the business analysis and the implementation model. RFH turns this on its head; the implementation model is the business analysis model – if there’s a difference between the way business analysts think about the model and the implementation, this difference is explained directly in the implementation model. The implementation model is also secondarily mapped to the RIM – this is required to ensure correctness, but not for interoperability.
  • v3 is exploring various forms of simplified xml. The mantra has become, “interoperability is a transform away from the RIM”. RFH embraces this – interoperability is the focus, and the RIM is a transform away. And compromise is fought out in the implementer space, not the definitional space
  • in v3, the RIM provides the formalism by defining the base types. Typing is innately an exclusive process, so it doesn’t make sense to also define the concepts against different models/perspectives at the same time. The consequence of this is interminable debates about the ontology of the RIM itself, and the perspectives it enshrines. RFH pushes the RIM into the ontology space, where the definitions are open, not closed. This enables additional definitional perspectives to be introduced where appropriate. It would also allow different parts/patterns/concerns of the RIM to be teased apart (where now they are bound together by being contained in a “class” with all that entails)
  • in v3, we were forced to model the universe in order to avoid subsequent breaking changes. RFH doesn’t know how to solve this problem, but can leverage the fact that so much of it has already been done. Without this preexisting work, RFH would have a problem….
  • in v3, the definitions are “abstract” so that they can be technology independent. This is wonderful in theory, but requires custom tooling to map between the models and the reality (especially in conformance) – and hardly anyone actually understands this in the data types, let alone elsewhere. How painful this has been… RFH is focused on XML first – the concrete model that is exchanged. Other definitional forms are secondary. And the exact form is based closely on a recognized best of breed model (highrise API)
  • in v3, there are multiple overlapping static models, some of which are sort of incompatible from some perspectives. These overlapping models represent a failure of governance – the cat slipped out of the bag before we understood what was at stake, and we never got on top of it. It’s a major issue for CDA R3, for instance. RFH can’t solve this – instead, it elevates this problem to a central concern by insisting that resources are modular and separate, and we have to govern it from the start.
  • in v3, we have still not figured out a workable behavioral model. RFH starts with a restful infrastructure – this meets many requirements out of the box. Where it doesn’t, additional service models can be built on top by either HL7 or other adopters as desired, by reusing the resources. RFH defines an aggregation framework for resources to enable this, and this allows documents such as CDA to be natural outcomes of the methodology.
  • V3 requires custom tooling – both to produce the specification, and to adopt the specification. RFH still requires tooling to manage the definitional space, and ensure it is coherent, though more of this tooling arises logically from existing ontology toolkits. RFH doesn’t require custom tooling to adopt the specification, though the definitions are there to be leveraged, and that would require tooling.
  • v3 treats extensions as entirely foreign content. RFH, on the other hand, makes extensions part of the base content and insists that they be related to the existing definitional content. Since most extensions are based on content properly defined elsewhere, this is more natural and palatable to adopters
  • by pinning implementor concerns down, and being open definitionally (rather than the converse, as in v3), RFH should allow the many disconnected potential members of the community to re-engage.

These are the differences I had in my head as I set out to write RFH.

Not all of these differences will be to everyone’s liking. In particular, there’s no necessity for all the parts of the wire format to map to the RIM – and the resources I’ve already put in RFH don’t – they explicitly represent identification and management concerns that the RIM has never engaged with. So it may be that RFH can’t interoperate with v3 across a gateway.

Also, not all the differences I had in mind are properly represented in RFH – or it may be that they are not even properly achieved. RFH isn’t offered up as a final specification, ready to ballot. It’s an idea, an example of what could be achieved if we approached the standards process from this different direction – and maybe it’s also a usable basis for further work.

The Road [Not] Taken

Just like there’s a well recognized grieving process, it seems to me that there’s a process for an organization that faces a strategic challenge.

  • At the beginning, an organization will be in denial that there is any problem. This is just good and proper Kuhnian cognitive dissonance – apparently it’s mandatory.
  • Then, when the organization finally accepts that there is a strategic challenge, its immediate response is to double down and do everything that it’s already doing, harder. This is just natural – after all, the leadership believes in the direction – that’s why the organization is taking it. (And there’s always apparently slackness to remove from an organization, but taking it out introduces more – there’s a lower limit to inefficiency, in the same way there’s a lower limit to unemployment.)
  • Generally, by the time a strategic challenge comes, doubling-down – doing the same stuff harder – means that the problem will get worse, because what the organization is doing is making things worse.
  • Eventually things get to a full blown crisis. By this stage, the politics in the organization are just crazy. At this point management must take a choice, and there’s a fork in the road:


  • The risks of change are too great to survive, and the organization freezes up, goes into tumescence, and will eventually collapse


  • There’s some sort of internal palace revolution (at least at the logical level, even if not at the personnel level). The organization hunkers down and prepares a new strategy in secret – which is really risky, because it will have to be socialized in and outside the organization to get buy-in. If it works, we go back to the beginning again. Otherwise we fall back to the first road.

As far as I can tell, this applies to all sorts of things. Land wars in foreign countries, for instance. Large IT companies (Microsoft used to be pretty adept at the internal palace revolution, but it sure doesn’t look like it any more). And volunteer standards organizations too.

In fact, Ken Rubin and I were talking about this notion of organizations doubling down, and he pointed out that there’s a third option for some organizations: The road not taken.